The document introduces the visual mapping sentence (MS) methodology for multifaceted research design and analysis. The MS is a visual representation that classifies research variables into facets, guiding hypothesis generation and systematic data collection and analysis. By linking the components of a research domain, the MS suggests relationships between variables and facets; it maps all relevant variables and also documents which variables were excluded. Two examples of MSs are included in figures to illustrate their use. The MS methodology is presented as a valuable tool that can address limitations of other research methods by emphasizing the conceptual definition and structure of content areas.
The document discusses designing learning activities focused on computational thinking for children, based on public transportation environments. It presents preliminary results from a survey of parents, which found that most parents explained to their children, step by step, how to reach destinations, and that many children understood the differences between transportation methods. Examples are given of potential activities using the Paris metro system that could develop computational thinking skills such as decomposition and pattern matching. The discussion concludes that more research is needed, but that public transportation could enable children to develop these skills through daily experiences.
Knowledge maps for e-learning. Jae Hwa Lee, Aviv Segev
Maps such as concept maps and knowledge maps are often used as learning materials. These maps consist of nodes and links, with nodes representing key concepts and links representing the relationships between them. From a map, the user can recognize the important concepts and the relationships between them. Building concept or knowledge maps requires domain experts; since such experts are hard to obtain, the cost of map creation is high. In this study, an attempt was made to automatically build a domain knowledge map for e-learning using text mining techniques. From a set of documents about a specific topic, keywords are extracted using the TF/IDF algorithm. A domain knowledge map (K-map) is based on ranking pairs of keywords according to the number of appearances in a sentence and the number of words in a sentence. The experiments analyzed the number of relations required to identify the important ideas in the text. In addition, the experiments compared K-map learning to document learning and found that K-map identifies the more important ideas.
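A minimal sketch of the K-map construction described above: TF-IDF keyword extraction followed by ranking keyword pairs by sentence-level co-occurrence, weighted by sentence length. The function names and the exact weighting are illustrative assumptions, not the authors' implementation.

```python
import math
import re
from collections import Counter
from itertools import combinations

def tfidf_keywords(docs, top_k=10):
    """Return the top_k TF-IDF keywords (as a set) for each document."""
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    keywords = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {w: (c / len(toks)) * math.log(n / df[w]) for w, c in tf.items()}
        keywords.append({w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_k]})
    return keywords

def rank_pairs(doc, keywords):
    """Score keyword pairs by sentence co-occurrence, favouring short sentences."""
    pair_scores = Counter()
    for sent in re.split(r"[.!?]", doc):
        toks = re.findall(r"[a-z]+", sent.lower())
        present = sorted(set(toks) & keywords)
        for a, b in combinations(present, 2):
            pair_scores[(a, b)] += 1.0 / max(len(toks), 1)  # assumed weighting
    return pair_scores.most_common()
```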
The document presents an overview of searching in metric spaces. It discusses how similarity searching is needed for unstructured data like text, images, and audio, where exact matching is not possible. It describes how similarity is modeled using a distance function between objects in a metric space. The document surveys existing solutions from different fields that address proximity searching in metric spaces and vector spaces. It aims to provide a unified framework to analyze and categorize existing algorithms.
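To make the abstraction concrete, here is a toy example of proximity search in a metric space: the database is queried only through a distance function satisfying the metric axioms (here, edit distance over strings), and the range query is a plain linear scan, the baseline the surveyed index structures aim to beat.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, a classic metric over strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def range_query(database, query, radius):
    """Return all objects within `radius` of `query` under the metric."""
    return [x for x in database if edit_distance(query, x) <= radius]

print(range_query(["color", "colour", "sound", "cooler"], "color", 2))
```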
This document analyzes a single student learning episode using two theoretical lenses: the instrumental genesis perspective and the onto-semiotic approach. The instrumental genesis perspective focuses on how students develop techniques for using tools or artifacts to solve mathematical tasks, and the relationships between thinking and gestures. The onto-semiotic approach views mathematical knowledge and learning as involving systems of practices within social and institutional contexts. Analyzing the same episode from both perspectives provides complementary insights and a richer understanding of the phenomena, while also helping to identify the strengths and limitations of each theoretical approach. Networking the two theories in this way contributes to theoretical development in mathematics education.
A Deep Learning Model to Predict Congressional Roll Call Votes from Legislati... mlaij
This document describes a deep learning model called the Predict Text Classification Network (PTCN) that was developed to predict the outcome (pass/fail) of congressional roll call votes based solely on the text of legislation. The PTCN uses a hybrid convolutional and long short-term memory neural network architecture to analyze legislative texts and predict whether a vote will pass or fail. The model was tested on legislative texts from 2000-2019 and achieved an average prediction accuracy of 67.32% using 10-fold cross-validation, suggesting it can recognize patterns in language that correlate with congressional voting behaviors.
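A hedged sketch of what a hybrid convolutional/LSTM text classifier of this kind can look like in PyTorch; the PTCN's actual architecture, vocabulary, and hyperparameters are not reproduced here, so all sizes below are illustrative.

```python
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, conv_channels=64,
                 lstm_hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution extracts local n-gram features from the text
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=5, padding=2)
        # LSTM models longer-range dependencies over the conv features
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.out = nn.Linear(lstm_hidden, 1)  # single pass/fail logit

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1]).squeeze(-1)         # (batch,) logits

logits = ConvLSTMClassifier()(torch.randint(0, 20000, (2, 300)))
```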
Collnet turkey feroz-core_scientific domain. Han Woo PARK
This document discusses mapping and visualizing the core of scientific domains using information systems research as an example. It introduces the concept of a "network of the core" (NC) to represent the theoretical constructs, models, and concepts within a research domain. An NC can be constructed to reveal characteristics like density, centrality, and bridges within a domain. Both causal and non-causal NCs are possible. Causal NCs show theoretical relationships between constructs, while non-causal NCs provide an overall picture. The document demonstrates an NC for an information systems outsourcing model and discusses additional issues like optional/mandatory nodes, directional vs. non-directional NCs, and their potential uses.
This document discusses mapping and visualizing the core of scientific domains using social network analysis techniques. It introduces the concept of a "Network of the Core" (NC) to represent relationships between theoretical constructs, models, and concepts. NCs can be directional, showing causal relationships, or directionless, showing general connections. NCs can reveal hidden characteristics of a research domain like central constructs. The document demonstrates directional and directionless NCs for information systems research domains. NCs help conceptualize domains, identify missing links, and explore research opportunities. Future work should construct more detailed NCs to analyze research domain structures.
Cmaps as intellectual prosthesis (GERAS 34, Paris). Lawrie Hunter
The document describes a case study using concept maps (Cmaps) to help EAP students improve their academic writing skills. The students mapped the introduction section of a research paper under constraints. They then critiqued their maps and created a consensus map. Based only on the consensus map, the students rewrote the introduction section. The students found that cycling between mapping and text analysis helped them better understand the paper's structure and argument. The case study suggests Cmaps are useful instructional tools, especially for identifying rhetorical structure in difficult texts.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document discusses predicting new friendships in social networks using temporal information. It describes research on predicting new links in social networks over time using supervised learning models trained on temporal features from past network interactions. The researchers used anonymized Facebook data over 28 months to train decision tree and neural network classifiers to predict new relationships, finding models using temporal information performed better than those without it.
Graph theory is one of the areas of discrete mathematics. From a pure mathematics viewpoint, graph theory studies pairwise relationships between objects, where the objects are vertices. Because graph theory is frequently applied to analysing relationships between objects, applying it to the evaluation of forensic evidence is a natural extension, and the literature indeed reveals several limited forensic applications. The current paper describes a broader application of graph theory to the problem of evaluating relationships in a forensic investigation. The process takes standard graph theory and identifies entities in the investigation as vertices, with the connections between the various entities as edges. Those entities can be suspects, victims, computer systems, or any entity relevant to the investigation. Regardless of its nature, every entity is represented as a vertex, and the relationships between entities are represented as edges connecting the vertices. This allows mathematical modelling of the events in question and facilitates analysis of the data.
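The modelling step is easy to make concrete. In the sketch below (using networkx, with invented entities and relations), investigation entities become vertices, relationships become edges, and standard graph algorithms then apply directly.

```python
import networkx as nx

g = nx.Graph()
g.add_edge("suspect_A", "victim", relation="phone_call")
g.add_edge("suspect_A", "laptop_1", relation="owner")
g.add_edge("laptop_1", "server_X", relation="login")
g.add_edge("suspect_B", "server_X", relation="login")

# Connectivity questions become shortest-path queries...
print(nx.shortest_path(g, "suspect_B", "victim"))
# ...and influential entities can be ranked by centrality.
print(nx.degree_centrality(g))
```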
A Comprehensive Survey on Comparisons across Contextual Pre-Filtering, Contex... TELKOMNIKA JOURNAL
Recently, there has been growing interest in recommender systems (RS), particularly context-aware RS. Methods for generating context-aware recommendations are classified into pre-filtering, post-filtering, and contextual modelling approaches. In this paper, we present several novel variants of each of these three contextualization paradigms and provide a complete survey of state-of-the-art comparisons across them. We then identify the significant challenges that current RS researchers need to address, which will help academics and practitioners compare these three approaches and select the best alternative for their strategies.
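To illustrate the difference between two of these paradigms, the toy sketch below contrasts contextual pre-filtering (restrict the data to the target context before scoring) with post-filtering (score on all data, then adjust the results by context). The data and the mean-rating "recommender" are deliberately simplistic assumptions.

```python
from collections import defaultdict

ratings = [  # (user, item, rating, context)
    ("u1", "movie_a", 5, "weekend"), ("u2", "movie_a", 2, "weekday"),
    ("u1", "movie_b", 4, "weekend"), ("u2", "movie_b", 4, "weekend"),
]

def mean_scores(rows):
    acc = defaultdict(list)
    for _, item, r, _ in rows:
        acc[item].append(r)
    return {item: sum(v) / len(v) for item, v in acc.items()}

def pre_filtering(target_ctx):
    # Keep only ratings made in the target context, then score.
    return mean_scores([r for r in ratings if r[3] == target_ctx])

def post_filtering(target_ctx):
    # Score on all data, then filter results by the target context.
    base = mean_scores(ratings)
    seen = {r[1] for r in ratings if r[3] == target_ctx}
    return {item: s for item, s in base.items() if item in seen}

print(pre_filtering("weekend"), post_filtering("weekend"))
```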
This document discusses the need for a theoretical framework to interpret results from social science link analysis research. It reviews different approaches to link analysis across various disciplines. Direct approaches to interpreting link counts, such as interviewing link creators or classifying a random sample of links, have limitations for large-scale studies. The document argues that a generalized theoretical framework is needed to guide interpretation of link analysis results, especially for social science research questions concerning social factors underlying link creation.
This summarizes an academic paper that proposes an automatic ontology creation method for classifying research papers. It uses text mining techniques like classification and clustering algorithms. It first builds a research ontology by extracting keywords and patterns from previous papers. It then uses a decision tree algorithm to classify new papers into disciplines defined in the ontology. The classified papers are then clustered based on similarities to group them. The method was tested on a dataset of 100 papers and achieved average precision of 85.7% for term-based and 89.3% for pattern-based keyword extraction.
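A compressed sketch of such a pipeline using scikit-learn, with an invented toy corpus: TF-IDF features, a decision tree that assigns papers to disciplines, then clustering of the classified papers. This illustrates the flow only, not the paper's exact ontology-based method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

train_texts = ["neural networks deep learning", "dirichlet topic inference",
               "supervised learning classifiers", "bayesian topic models"]
train_labels = ["machine_learning", "topic_modeling",
                "machine_learning", "topic_modeling"]
new_texts = ["deep convolutional networks", "latent topic allocation"]

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts)
clf = DecisionTreeClassifier(random_state=0).fit(X_train, train_labels)

X_new = vec.transform(new_texts)
print(clf.predict(X_new))                      # discipline per new paper
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_new)
print(clusters)                                # similarity-based grouping
```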
Interactive Analysis of Word Vector Embeddings. gleicher
Word vector embeddings present challenges for interactive analysis due to their high-dimensional nature and complex relationships between words. The authors conducted a task analysis of common uses of word embeddings which revealed 7 linguistic tasks. They designed 3 visualizations - Buddy Plots, Concept Axis Plots, and Co-occurrence Matrices - to support the tasks of understanding word similarities, co-occurrences, and semantic directions within concept axes. An online system implements the visualizations to enable interactive exploration of word vector embeddings.
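The "concept axis" idea can be illustrated in a few lines: project word vectors onto the direction between two pole words to place each word on a semantic axis. The vectors below are tiny made-up arrays rather than a trained embedding.

```python
import numpy as np

vectors = {
    "good":      np.array([0.9, 0.1]),
    "bad":       np.array([0.1, 0.9]),
    "excellent": np.array([0.95, 0.05]),
    "awful":     np.array([0.05, 0.95]),
}

def axis_position(word, pole_a="bad", pole_b="good"):
    """Scalar position of `word` along the pole_a -> pole_b direction."""
    axis = vectors[pole_b] - vectors[pole_a]
    return float(np.dot(vectors[word] - vectors[pole_a], axis) / np.dot(axis, axis))

for w in vectors:
    print(w, round(axis_position(w), 2))  # near 1 = good pole, near 0 = bad pole
```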
An Efficient Modified Common Neighbor Approach for Link Prediction in Social ... IOSR Journals
This document discusses link prediction in social networks. It analyzes shortcomings of existing leading link prediction methods like common neighbor. It then proposes a modified common neighbor approach that takes into account both topological network structure and node similarities based on features. The approach generates a weight for each link based on the number of common features between nodes, divided by the total number of features. It then calculates a contribution score for each common neighbor by multiplying the weights of that neighbor's links to the two nodes. Experimental results on co-authorship networks show the modified common neighbor approach outperforms existing methods.
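The scoring rule as summarized here is straightforward to write down. In the sketch below (data structures and names are assumptions), each link's weight is the ratio of shared to total features, and each common neighbor contributes the product of its two link weights.

```python
def link_weight(features, a, b):
    """Shared features over total features of the two endpoints."""
    union = features[a] | features[b]
    return len(features[a] & features[b]) / len(union) if union else 0.0

def modified_common_neighbor_score(graph, features, x, y):
    common = graph[x] & graph[y]          # common neighbors of x and y
    return sum(link_weight(features, x, z) * link_weight(features, z, y)
               for z in common)

graph = {"x": {"z", "w"}, "y": {"z"}, "z": {"x", "y"}, "w": {"x"}}
features = {"x": {"nlp", "ml"}, "y": {"ml", "ir"}, "z": {"ml"}, "w": {"db"}}
print(modified_common_neighbor_score(graph, features, "x", "y"))
```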
Kapa conference scientometrics-e-govt_khan & park. Han Woo PARK
This document analyzes international collaboration within the domain of electronic government (e-government) research through scientometric methods. It finds that collaboration occurs at the institutional, country, regional, and university-industry-government levels. Key findings include that developed countries dominate e-government research collaboration networks, and clusters are centered around several major institutions primarily located in the US. University-government relationships are stronger than other relationships. The analysis provides insights into prominent players, network structures, and characteristics within the global e-government research domain.
Unsupervised Word Usage Similarity in Social Media Texts. Spandana Gella
The document presents a methodology for modeling word usage similarity (Usim) in social media texts without supervision. It uses Latent Dirichlet Allocation (LDA) topic modeling to represent tweets containing target words as topic distribution vectors and evaluate models on a dataset of manually annotated Usim scores. LDA outperforms a baseline and benchmark, and expanding documents with hashtags improves performance. The study concludes LDA is suitable for modeling Usim in tweets.
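A minimal sketch of the approach: represent each tweet containing the target word as an LDA topic distribution (here via gensim on a toy corpus) and compare usages by cosine similarity. Corpus, topic count, and preprocessing are illustrative.

```python
from gensim import corpora, models

tweets = [["bank", "river", "water"], ["bank", "loan", "money"],
          ["bank", "account", "money"]]
dictionary = corpora.Dictionary(tweets)
bows = [dictionary.doc2bow(t) for t in tweets]
lda = models.LdaModel(bows, num_topics=2, id2word=dictionary, random_state=0)

def topic_vector(bow, k=2):
    v = [0.0] * k
    for topic, p in lda.get_document_topics(bow, minimum_probability=0.0):
        v[topic] = p
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(x * x for x in b) ** 0.5))

# Higher similarity suggests the target word is used in the same sense.
print(cosine(topic_vector(bows[1]), topic_vector(bows[2])))
```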
Predicting Forced Population Displacement Using News Articles. JaresJournal
The world has witnessed mass forced population displacement across the globe. Population displacement has varied indications, with different social and policy consequences. Mitigating the humanitarian crisis requires tracking and predicting population movements to allocate the necessary resources and inform policymakers. The events that trigger population movements can be traced in news articles. In this paper, we propose the Population Displacement-Signal Extraction Framework (PD-SEF) to explore a large news corpus and extract signals of forced population displacement. PD-SEF measures and evaluates violence signals, a critical driver of forced displacement. Following signal extraction, we propose a displacement prediction model based on the extracted violence scores. Experimental results indicate the effectiveness of our framework in extracting high-quality violence scores and building accurate prediction models.
Developing a meta language in multidisciplinary research projects-the case st... Lucia Lupi
This document discusses the development of a meta-language to enable collaboration across multiple disciplines in the READ-IT project, which aims to study reading experiences. A philosophical analysis was conducted to understand stakeholders' needs and inform the design of an information management system. The analysis involved decomposing theories of reading, synthesizing concepts into a model of the reading experience, and defining practices for studying reading mediated by technology. This resulted in a meta-language with a shared vocabulary, conceptual structure, and pragmatic uses to support interdisciplinary research on reading.
This document summarizes a study analyzing international collaboration within the domain of electronic government (e-government) research through scientometric methods and social network analysis. The study finds that developed countries dominate e-government research collaboration networks, while developing country participation is often solo and marginal. Institution-level analysis shows clusters of collaborating institutions across regions led by key institutions, though U.S. institutions are dominant. Analysis of university-industry-government relationships indicates a lack of strong bilateral and trilateral relationships within the e-government research domain.
Assessment of Programming Language Reliability Utilizing Soft-Computing. ijcsa
The document discusses assessing programming language reliability using soft computing techniques like fuzzy logic and genetic algorithms. It proposes using these methods to model programming language reliability based on linguistic variables like "Reliable", "Moderately Reliable", and "Not Reliable". The key factors examined for determining a programming language's reliability include syntax consistency, semantic consistency, error handling, modularity, and documentation. A soft computing system is simulated to evaluate programming languages based on these reliability criteria.
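The linguistic variables mentioned above are typically realized as fuzzy membership functions. The sketch below uses triangular memberships with assumed breakpoints to map a crisp reliability score in [0, 1] to the three labels.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(score):
    return {
        "Not Reliable":        triangular(score, -0.01, 0.0, 0.5),
        "Moderately Reliable": triangular(score, 0.2, 0.5, 0.8),
        "Reliable":            triangular(score, 0.5, 1.0, 1.01),
    }

print(fuzzify(0.65))  # partial membership in two neighbouring labels
```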
Meta-argumentation Frameworks For Modelling Dialogues with Information from S... Gideonbms
In this research, we propose meta-argumentation frameworks for multi-party dialogues in which participants consider how much they trust each other and the advanced arguments in order to define their preferences over the arguments, given that arguments (or information that supports the arguments) from more trustworthy sources may be preferred to arguments from less trustworthy sources.
Co-word analyses study the co-occurrence of pairs of items (for example, keywords) that are representative of a document, in order to identify relations between the ideas presented in the texts.
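In miniature, a co-word analysis reduces to counting keyword-pair co-occurrences, as in the following sketch with invented keyword sets; frequently co-occurring pairs are taken as evidence of related ideas.

```python
from collections import Counter
from itertools import combinations

doc_keywords = [
    {"concept map", "learning", "assessment"},
    {"concept map", "learning", "text mining"},
    {"text mining", "ontology"},
]
co_occurrence = Counter()
for kws in doc_keywords:
    for pair in combinations(sorted(kws), 2):
        co_occurrence[pair] += 1

print(co_occurrence.most_common(3))
```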
ConNeKTion: A Tool for Exploiting Conceptual Graphs Automatically Learned fro... University of Bari (Italy)
Studying, understanding, and exploiting the content of a digital library, and extracting useful information from it, require automatic techniques that can effectively support users. To this aim, a relevant role can be played by concept taxonomies. Unfortunately, the availability of such resources is limited, and their manual construction and maintenance are costly and error-prone. This work presents ConNeKTion, a tool for conceptual graph learning and exploitation. It learns conceptual graphs from plain text and enriches them by finding concept generalizations. The resulting graph can be used for several purposes: finding relationships between concepts (if any), filtering the concepts from a particular perspective, keyword extraction, and information retrieval. A suitable control panel lets the user comfortably carry out these activities.
Fuzzy formal concept analysis: Approaches, applications and issues. CSITiaesprime
Formal concept analysis (FCA) is today regarded as a significant technique for knowledge extraction, representation, and analysis in a variety of fields. Significant progress has been made in recent years to extend FCA theory to deal with uncertain and imperfect data. The computational complexity associated with the enormous number of formal concepts generated has been identified as an issue in various applications; in general, the sheer complexity and size of the generated concept lattice is one of the most fundamental challenges in FCA. The goal of this work is to provide an overview of research articles that assess and compare the numerous fuzzy formal concept analysis techniques that have been suggested, and to explore the key techniques for reducing concept lattice size. We also review research articles on applying fuzzy formal concept analysis to ontology engineering, knowledge discovery in databases and data mining, and information retrieval.
A semantic framework and software design to enable the transparent integratio... Patricia Tavares Boralli
This document proposes a conceptual framework to unify representations of natural systems knowledge. The framework is based on separating the ontological nature of an object of study from the context of its observation. Each object is associated with a concept defined in an ontology and an observation context describing aspects like location and time. Models and data are treated as generic knowledge sources with a semantic type and observation context. This allows flexible integration and calculation of states across heterogeneous sources by composing their observation contexts and resolving semantic compatibility. The framework aims to simplify knowledge representation by abstracting away complexity related to data format and scale.
This document discusses semantic visualization in design computing. It presents an approach for designing visualization schemes that leverage predefined semantics. The approach is based on a combination of cognitive linguistics models of metaphor and form-semantics-function categorization. It includes metaphor analysis, formalization, and evaluation. Examples are provided of visualizing collaborative design data and virtual worlds to illustrate the approach. The goal is to establish and preserve semantic links between form and function in visualization metaphors.
A Science Mapping Analysis Of Blood Donation Behaviour. Bria Davis
This study analyzed 963 scholarly articles on blood donation behavior published between 1957 and 2017. It used bibliometric methods including keyword co-occurrence analysis and science mapping to identify the major topics, influential authors, journals, and countries contributing to research in this area. The analysis found that research output has significantly increased over time, with the most publications in recent years. The authors publishing the most papers were Christopher France (44 papers), Blaine Ditto (24), and Eamonn Ferguson (23). The most influential journal was Transfusion, which published 12.36% of the papers analyzed. The study provides a comprehensive overview of the structure and evolution of scientific research on blood donation behavior.
This document discusses mapping and visualizing the core of scientific domains using social network analysis techniques. Specifically, it introduces the concept of a "Network of the Core" (NC) to represent the theoretical constructs, models, and concepts within a research domain. It provides examples of causal and non-causal NCs using information systems research. Causal NCs show relationships between constructs, while non-causal NCs provide an overall picture. The document demonstrates how NCs can identify missing links, central constructs, and quantify domains. It also generalizes the approach for flexibility across different research setups. NCs provide a novel way to conceptualize domains and derive new research opportunities not otherwise visible.
Exploiting classical bibliometrics of CSCW: classification, evaluation, limit... António Correia
In Proceedings of the 1st International Conference on Human Factors in Computing & Informatics (SouthCHI '13), Maribor, Slovenia, June 1-3. Berlin, Heidelberg: Springer-Verlag, pp. 137-156.
ONTOLOGICAL MODEL FOR CHARACTER RECOGNITION BASED ON SPATIAL RELATIONS. sipij
In this paper, we present a set of spatial relations between concepts describing an ontological model for a new character recognition process. Our main idea is based on the construction of a domain ontology modelling the Latin script. This ontology is composed of a set of concepts and a set of relations. The concepts represent the graphemes extracted by segmenting the document under analysis, and the relations are of two types: is-a relations and spatial relations. In this paper we focus on describing the second type of relations and their implementation in Java code.
Cooperating Techniques for Extracting Conceptual Taxonomies from Text. Fulvio Rotella
The document proposes a mixed approach using existing natural language processing techniques and novel techniques to automatically construct conceptual taxonomies from text. It identifies relevant concepts from text using keyword extraction, clustering, and computing relevance weights. It then generalizes similar concepts using WordNet to group concepts and disambiguate word senses. Preliminary evaluations show promising initial results.
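The WordNet-based generalization step can be sketched with NLTK's WordNet interface: group two concepts under their lowest common hypernym. The words are toy examples, and nltk.download("wordnet") must have been run once beforehand.

```python
from nltk.corpus import wordnet as wn

def common_generalization(word_a, word_b):
    """Return the lowest common hypernym of the first senses of two words."""
    sa, sb = wn.synsets(word_a)[0], wn.synsets(word_b)[0]
    hypernyms = sa.lowest_common_hypernyms(sb)
    return hypernyms[0].name() if hypernyms else None

print(common_generalization("dog", "cat"))    # e.g. carnivore.n.01
print(common_generalization("dog", "table"))  # a much more general concept
```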
NOVELTY DETECTION VIA TOPIC MODELING IN RESEARCH ARTICLES. cscpconf
In today's world, redundancy is one of the most pressing problems in almost all domains. Novelty detection is the identification of new or unknown data or signals that a machine learning system was not aware of during training. The problem becomes more acute for research articles: a method for identifying novelty in each section of an article is needed to determine the novel idea proposed in the paper. Since research articles are semi-structured, detecting novel information in them requires more accurate systems. Topic models provide a useful and simple means to process and analyze them. This work compares the most widely used topic model, Latent Dirichlet Allocation, with the hierarchical Pachinko Allocation Model. The results favor the hierarchical Pachinko Allocation Model when used for document retrieval.
A Semantic Scoring Rubric For Concept Maps Design And Reliability. Liz Adams
The document describes the development of a semantic scoring rubric for concept maps created by teachers in Panama. It discusses the need for objective measurement tools to assess how well concept maps were helping teachers foster meaningful learning. The rubric was designed to classify maps by increasing levels of semantic complexity and quality of content, using six criteria: concept relevance and completeness, correct propositional structure, presence of misconceptions, dynamic propositions, number and quality of cross-links, and presence of cycles. Testing showed that classifying maps by semantic levels was challenging because learners progressed unevenly across the criteria, so a point-based rubric was developed instead, with score ranges corresponding to overall content quality.
The document describes latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA represents documents as random mixtures over latent topics, characterized by distributions over words. It is a three-level hierarchical Bayesian model where documents are generated by first sampling a per-document topic distribution from a Dirichlet prior, then repeatedly sampling topics and words from these distributions. LDA addresses limitations of previous models by capturing statistical structure within and between documents through the hierarchical Bayesian formulation.
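The three-level generative process reads compactly in the standard notation, with corpus-level parameters alpha and beta, a per-document topic mixture, and per-word topic and word variables:

```latex
\begin{align*}
\theta_d &\sim \mathrm{Dirichlet}(\alpha) && \text{topic mixture for document } d\\
z_{d,n} \mid \theta_d &\sim \mathrm{Multinomial}(\theta_d) && \text{topic of the $n$-th word}\\
w_{d,n} \mid z_{d,n} &\sim \mathrm{Multinomial}(\beta_{z_{d,n}}) && \text{word drawn from that topic's distribution}
\end{align*}
```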
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING. cscpconf
In the last decade, ontologies have played a key technological role in information sharing and agent interoperability in different application domains. In the semantic web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the web to its full power and achieve its objective. However, using ontologies as common, shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is an unavoidable solution. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the various forms of syntactic, semantic, and lexical mismatch. In the contribution presented in this paper, we integrate a semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion, and Glue. To enhance the performance of our algorithm, the mapping discovery stage combines two sub-modules: the former analyzes concept names and the latter analyzes their properties. Each of these sub-modules is itself based on a combination of lexical and semantic similarity measures.
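The two sub-modules can be sketched as follows: a lexical similarity over concept names (here difflib's ratio) combined with a WordNet-based semantic similarity (here the maximum path similarity over senses). The combination weights are assumptions, not the paper's values.

```python
from difflib import SequenceMatcher
from nltk.corpus import wordnet as wn

def lexical_sim(name_a, name_b):
    """String-level similarity between two concept names."""
    return SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()

def semantic_sim(name_a, name_b):
    """Best WordNet path similarity over all sense pairs."""
    syns_a, syns_b = wn.synsets(name_a), wn.synsets(name_b)
    if not syns_a or not syns_b:
        return 0.0
    return max(sa.path_similarity(sb) or 0.0 for sa in syns_a for sb in syns_b)

def concept_match_score(a, b, w_lex=0.5, w_sem=0.5):
    return w_lex * lexical_sim(a, b) + w_sem * semantic_sim(a, b)

print(concept_match_score("car", "automobile"))  # high despite the lexical gap
```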
This document provides an overview of the steps involved in quantitative data analysis and in applying Partial Least Squares Structural Equation Modeling (PLS-SEM). It discusses pretesting questionnaires, preparing raw data through editing and coding, assessing validity through measures like content validity and unidimensionality, and establishing construct validity through convergent and discriminant validity techniques. The goal is to review all the necessary steps for quantitative data analysis using SPSS and applying SEM, from preparing the data to reporting the results.
This document proposes a new similarity measure for comparing spatial MDX queries in a spatial data warehouse to support spatial personalization approaches. The proposed similarity measure takes into account the topology, direction, and distance between the spatial objects referenced in the MDX queries. It defines the topological distance between spatial scenes referenced in queries based on a conceptual neighborhood graph. It also defines the directional distance between queries based on a graph of spatial directions and transformation costs. The similarity measure will be included in a recommendation approach the authors are developing to recommend relevant anticipated queries to users based on their previous queries.
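The topological component can be made concrete with a conceptual neighborhood graph: each topological relation is a node, edges connect conceptually adjacent relations, and the distance between two relations is the shortest-path length. The edge set below is a commonly used neighborhood layout, assumed here for illustration rather than taken from the paper.

```python
import networkx as nx

neighborhood = nx.Graph([
    ("disjoint", "meets"), ("meets", "overlaps"),
    ("overlaps", "covers"), ("overlaps", "covered_by"),
    ("covers", "contains"), ("covered_by", "inside"),
    ("covers", "equals"), ("covered_by", "equals"),
])

def topological_distance(rel_a, rel_b):
    """Shortest-path distance between two topological relations."""
    return nx.shortest_path_length(neighborhood, rel_a, rel_b)

print(topological_distance("disjoint", "inside"))  # relations far apart
print(topological_distance("overlaps", "covers"))  # close neighbors
```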
Searching in high dimensional spaces index structures for improving the perfo... unyil96
This document provides an overview of index structures for improving the performance of multimedia databases. It discusses how multimedia databases require content-based retrieval of similar objects, which is challenging due to the high-dimensional nature of feature spaces used to represent multimedia objects. The document summarizes the problems that arise from processing queries in high-dimensional spaces, known as the "curse of dimensionality", and provides an overview of index structure approaches that have been proposed to overcome these problems to efficiently process similarity queries in multimedia databases.
This document describes a proposed concept-based mining model that aims to improve document clustering and information retrieval by extracting concepts and semantic relationships rather than just keywords. The model uses natural language processing techniques like part-of-speech tagging and parsing to extract concepts from text. It represents concepts and their relationships in a semantic network and clusters documents based on conceptual similarity rather than term frequency. The model is evaluated using singular value decomposition to increase the precision of key term and phrase extraction.