Want to better analyze the geographic and linguistic reach and dynamics of web traffic? We propose a method of geo-linguistic normalization to do so, using multilingual Wikipedia projects as the example.
International Journal on Natural Language Computing (IJNLC) (kevig)
Natural Language Processing is a programmed approach to analyzing text that is based on both a set of theories and a set of technologies. This forum aims to bring together researchers who have designed and built software that analyzes, understands, and generates the languages humans use naturally to address computers.
Slides from a presentation of research in progress to the Social Informatics cluster meeting, 13 June 2014. The presentation outlines the approaches used in identifying and analysing the key patterns of participation and structures of the Twitter discussion events. The descriptive statistical approaches suggested by Bruns (2014) are used to analyse the Twitter events, and the limits of such analysis are discussed with reference to recent debates on the nature and status of ‘data’ in digital research (boyd and Crawford 2012; Baym 2013). The presentation also considers the extent to which this kind of analysis can reveal the power and participation strategies of Twitter users in these events.
Brown Bag: New Models of Scholarly Communication for Digital Scholarship, by ... (Micah Altman)
In his talk for the MIT Libraries Program on Information Science, Steve Griffin discusses how research libraries can play a key and expanded role in enabling digital scholarship and creating the supporting activities that sustain it.
HybridDocs - A Digital Learning Environment based on FlashCards (Christian Heise)
HybridDocs is a software prototype for transforming analog learning materials into a hybrid format. The prototype is based on the pedagogical concept of Sebastian Leitner. We built a tool for restructuring the data and writing learning cards / flashcards. The process allows users to import and convert whole books/manuscripts to datasets and sets of digital flashcards.
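The Leitner concept the prototype builds on can be sketched in a few lines: a card moves up one box after a correct answer and drops back to box 1 after a wrong one, so harder cards come up for review more often. The names (`Flashcard`, `review`) and the five-box limit below are illustrative assumptions, not code from the HybridDocs prototype.

```python
from dataclasses import dataclass

NUM_BOXES = 5  # assumed box count; box 1 is reviewed most frequently


@dataclass
class Flashcard:
    front: str
    back: str
    box: int = 1  # every newly imported card starts in box 1


def review(card: Flashcard, answered_correctly: bool) -> None:
    """Apply the Leitner rule after a single review of the card."""
    if answered_correctly:
        card.box = min(card.box + 1, NUM_BOXES)  # promote, capped at the top box
    else:
        card.box = 1  # any mistake sends the card back to the first box


card = Flashcard("Leitner system", "spaced repetition with numbered boxes")
review(card, answered_correctly=True)   # card moves from box 1 to box 2
review(card, answered_correctly=False)  # card falls back to box 1
```

Review intervals per box (e.g. box 1 daily, box 5 monthly) would sit on top of this rule in any real scheduler.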
Rethinking academic publishing through multimedia scholarship (Cheryl Ball)
Cheryl Ball presented this talk to the Digital Humanities Group at the College of William & Mary. She details how the field of digital writing studies has fostered the scholarly, social, and technical infrastructures that allow for the mentoring of scholars producing digital work. Ball then explains how this infrastructure is the backbone of the journal Kairos and how the Vega academic publishing system will bring that infrastructure to other academic publishers.
Dataset Quality Ontology - An Engineering Experience (jerdeb)
Data quality is commonly defined as fitness for use. Many data consumers face the problem of identifying the quality of data. Data publishers, on the other hand, often do not have the means to identify quality issues in their data. To make the task easier for both stakeholders, we have developed the Dataset Quality Ontology (daQ) [1]. daQ is a core vocabulary for representing the results of quality benchmarking of a linked dataset. It represents quality metadata as multi-dimensional, statistical observations using the Data Cube Vocabulary. Quality metadata are organised as a self-contained graph, which can be embedded into linked datasets to support quality-based retrieval and ranking. This talk discusses the design issues behind the daQ vocabulary, how it helped shape the upcoming W3C Data Quality Vocabulary initiative [2], and some quality issues specific to ontologies and vocabularies.
[1] Debattista, J., Lange, C., & Auer, S. (2014). Representing dataset quality metadata using multi-dimensional views. Proceedings of the 10th International Conference on Semantic Systems, 92-99.
[2] https://www.w3.org/TR/vocab-dqv/
Liao and Petzold, OpenSym Berlin: Wikipedia geolinguistic normalization (Hanteng Liao)
This paper proposes a method of geo-linguistic normalization to advance the existing comparative analysis of open collaborative communities, with multilingual Wikipedia projects as the example. Such normalization requires data regarding the potential users and/or resources of a geolinguistic unit.
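The normalization idea can be sketched as dividing raw activity counts by the potential user base of each geolinguistic unit (here keyed as language-region pairs). The `normalize` function and all figures below are invented placeholders for illustration, not data or code from the paper.

```python
def normalize(raw_counts, potential_users):
    """Return activity per potential user for each geolinguistic unit."""
    return {
        unit: raw_counts[unit] / potential_users[unit]
        for unit in raw_counts
        if potential_users.get(unit)  # skip units lacking population data
    }


# Hypothetical figures: raw Wikipedia activity and potential user bases.
raw_counts = {("de", "DE"): 1_200_000, ("sv", "SE"): 300_000}
potential_users = {("de", "DE"): 60_000_000, ("sv", "SE"): 8_000_000}

per_capita = normalize(raw_counts, potential_users)
# Swedish activity per potential user (0.0375) exceeds German (0.02),
# even though the raw Swedish count is far lower.
```

This is the point of the normalization: raw counts favour large language communities, while per-potential-user figures make small and large geolinguistic units comparable.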
Chinese-language literature about Wikipedia: a meta-analysis of academic searc... (Hanteng Liao)
ABSTRACT
This paper presents a webometric analysis of the academic search engine result pages (SERPs) for the Chinese-language term for “Wikipedia” across the major Chinese-speaking regions of mainland China, Hong Kong and Taiwan. Because the results are academic outputs, the findings can also be read as a meta-analysis, or “research about research”, of Wikipedia research in the Chinese-language literature. The findings cover the results from four major search platforms: CNKI Scholar, Google Scholar China, Google Scholar Hong Kong and Google Scholar Taiwan. Cross-tabulation of the results shows the major institutions (journals and academic departments) and scholarly archives for Chinese-language Wikipedia research. The findings suggest a divide between mainland Chinese academic sources/search results on one hand and Hong Kong/Taiwanese ones on the other. Meta-analysis based on academic SERPs has implications for identifying the gaps and potentials in the internationalization of Wikipedia research.
201309 geo-linguistic dynamics of virtual work, Liao, IS1202 Malta (Hanteng Liao)
How do geographic and linguistic factors encourage or prevent online participation? How can social media websites better serve users with language and regional interfaces and policies that promote "the right to participate in the cultural life of the community" (UDHR, 1948)? To answer these questions, I use the modernization theory of "social mobilization" to better theorize the so-called "cognitive surplus" as a "social mobilization surplus": the new labour force created through digital-network work, literacy practices and technologies. How do we account for and create this "social mobilization surplus"? I argue that this theoretical and practical question has important policy and research implications for better and more critical online participation, because virtual work is “linguistically constituted” and “geographically configured” for social mobilization.
Andrew Chadwick and Simon Collister (2014) "Boundary-Drawing Power and the Re... (andrewchadwick)
Slides for a presentation to the American Political Science Association Political Communication Section Annual Preconference, 2014, George Washington University, Washington DC, August 2014.
Download the published paper at http://j.mp/IJOC-Snowden-2
What do Chinese-language microblog users do with Baidu Baike and Chinese Wiki... (Hanteng Liao)
ABSTRACT
This paper presents a case study of information engagement based on microblog posts gathered from Sina Weibo and Twitter that mentioned the two major Chinese-language user-generated encyclopaedias. The content analysis shows that microblog users not only engaged in public discussions by using and citing both encyclopaedias, but also shared their perceptions and experiences more generally with various online platforms and China’s filtering/censorship regime to which user-generated content and activities are subjected. This exploratory study thus raises several research and practice questions on the links between public discussions and information engagement on user-generated platforms.
Prior empirical and theoretical work has discussed the role dominant search engines play in information gatekeeping on the Web, and there are reports of the high ranking of the Wikipedia website in search engine result pages (SERPs). However, little research has been conducted on non-Google search engines and non-English versions of user-generated encyclopedias. This paper proposes a method to quantify the “display” gatekeeping differences in SERP ranking and presents findings based on Chinese SERP data. Based on 2,500 mainly Chinese-language search queries, the data set includes the SERP outcomes for four Chinese-speaking regions (mainland China, Singapore, Hong Kong and Taiwan) provided by three major search engines (Baidu, Google and Yahoo), covering over 97% of the search engine market in each region. The findings, analysed and visualized using network analysis techniques, demonstrate the following: major user-generated encyclopedias are among the most visible sites, and localization factors matter (certain search engine variants, especially mainland Chinese ones, produce the most divergent outcomes). The indicated strong effects of “network gatekeeping” by search engines also suggest similar dynamics inside user-generated encyclopedias.
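One simple way to quantify "display" differences across SERPs is a rank-weighted visibility score, in which a hit at rank 1 counts more than a hit at rank 10. The reciprocal-rank weighting sketched below is an assumption for illustration, not necessarily the paper's exact metric.

```python
def visibility_score(serp, target_domain, top_n=10):
    """Sum reciprocal-rank weights for hits of target_domain in one SERP.

    serp is an ordered list of result domains; rank 1 weighs 1.0,
    rank 2 weighs 0.5, rank 3 weighs 1/3, and so on (an assumed
    weighting scheme). Comparing the score for the same query across
    engines or regions quantifies display-gatekeeping differences.
    """
    return sum(
        1.0 / rank
        for rank, domain in enumerate(serp[:top_n], start=1)
        if domain == target_domain
    )


# Hypothetical SERP for one query: Wikipedia appears at ranks 2 and 4.
serp = ["baike.baidu.com", "zh.wikipedia.org",
        "zhidao.baidu.com", "zh.wikipedia.org"]
score = visibility_score(serp, "zh.wikipedia.org")
# 1/2 + 1/4 = 0.75
```

Averaging such scores over the 2,500 queries, per engine and per region, would yield a comparable visibility figure for each encyclopedia on each platform.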
Content personalisation is becoming more prevalent. A site, its content and/or its products change dynamically according to the specific needs of the user. SEO needs to ensure we do not fall behind this trend.
Linked Open (Geo)Data and the Distributed Ontology Language – a perfect match (Christoph Lange)
The Distributed Ontology Language (DOL) is a meta-language for integrating ontologies written in different languages. Our notion of “distributed” comprises logical heterogeneity within ontologies, modularity and reuse, and links across ontologies in different places on the Web. Not only can ontologies be distributed across the Web, but DOL's supply of supported ontology languages can also be extended in a decentralized way. For this functionality, DOL builds on the Linked Open Data (LOD) principles. But DOL also contributes to LOD use cases. Many current LOD applications are limited by the weak expressivity of the RDF and RDFS languages commonly used to express data and vocabularies, yet completely switching to a more expressive language would impair scalability to big datasets. DOL addresses both the scalability and the expressivity requirements by allowing each aspect of a dataset to be represented in the most suitable language while keeping these different representations connected. This is particularly useful in geographic information systems, where big datasets (e.g. LinkedGeoData, the LOD version of OpenStreetMap) need to be integrated with formalisations of complex spatial notions (e.g. in the first-order language Common Logic).
LIT (Lexicon of the Italian Television) is a project conceived by the Accademia della Crusca, the leading research institution on the Italian language, in collaboration with CLIEO (Center for theoretical and historical Linguistics: Italian, European and Oriental languages), with the aim of studying the frequencies of the Italian lexicon used in television content; it targets the specific sector of web applications for linguistic research. The corpus of transcriptions consists of approximately 170 hours of random television recordings transmitted by the national broadcaster RAI (Italian Radio Television) during 2006.
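The core lexical-frequency computation such a corpus supports can be sketched in a few lines. The sample transcript lines below are invented, and a real pipeline like LIT's would add proper tokenisation, lemmatisation and normalisation far beyond this.

```python
import re
from collections import Counter


def word_frequencies(transcripts):
    """Count lower-cased word tokens across a list of transcript lines."""
    counter = Counter()
    for line in transcripts:
        # \w+ is a crude tokeniser; it handles accented Italian letters
        # because \w is Unicode-aware in Python 3.
        counter.update(re.findall(r"\w+", line.lower()))
    return counter


# Invented transcript fragments standing in for real RAI recordings.
freqs = word_frequencies(["Buona sera e benvenuti", "buona visione a tutti"])
# freqs["buona"] == 2: case-folded counts aggregate across lines
```

Ranking `freqs.most_common()` over the full 170 hours of transcriptions is what yields the frequency lexicon the project publishes.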
Interlinking Data and Knowledge in Enterprises, Research and Society with Lin... (Christoph Lange)
The Linked Data paradigm has emerged as a powerful enabler for data and knowledge interlinking and exchange using standardised Web technologies.
In this article, we discuss our vision how the Linked Data paradigm can be employed to evolve the intranets of large organisations -- be it enterprises, research organisations or governmental and public administrations -- into networks of internal data and knowledge.
For large enterprises in particular, data integration is still a key challenge, and the Linked Data paradigm seems a promising approach for integrating enterprise data. Like the Web of Data, which now complements the original document-centred Web, data intranets may help to enhance and flexibilise the intranets and service-oriented architectures that exist in large organisations. Furthermore, using Linked Data gives enterprises access to 50+ billion facts from the growing Linked Open Data (LOD) cloud. As a result, a data intranet can help to bridge the gap between structured data management (in ERP, CRM or SCM systems) and semi-structured or unstructured information in documents, wikis or web portals, and make all of these sources searchable in a coherent way.
Keynote at Baltic DB&IS 2014, 9 June 2014, Tallinn, Estonia
The State of the Art of Video Summarization for Mobile Devices:
Review Article
Hesham Farouk *, Kamal ElDahshan**, Amr Abozeid **
* Computers and Systems Dept., Electronics Research Institute, Cairo, Egypt.
** Dept. of Mathematics, Computer Science Division,
Faculty of Science, Al-Azhar University, Cairo, Egypt.
Advanced Community Information Systems Group (ACIS) Annual Report 2013 (Ralf Klamma)
Advanced Community Information Systems (ACIS)
Lehrstuhl Informatik 5 – Information Systems
RWTH Aachen University
Ahornstr. 55 | 52056 Aachen | Germany
EL-7010 Week 1 Assignment: Online Learning for the K-12 Students (eckchela)
This is a North Central University PowerPoint presentation (EL 7010), Week 1 Assignment. It is written in APA format, has been graded by an instructor (A), and includes references. Most higher-education assignments are submitted to Turnitin, so remember to paraphrase. Let us begin.
Unlock TikTok Success with Sociocosmos (SocioCosmos)
Discover how Sociocosmos can boost your TikTok presence with real followers and engagement. Achieve your social media goals today!
https://www.sociocosmos.com/product-category/tiktok/
Easy tutorial of how to use G-Teams (Febless Hernane)
Using Google Teams (G-Teams) is simple. Start by opening the Google Teams app on your phone or visiting the G-Teams website on your computer. Sign in with your Google account. To join a meeting, click on the link shared by the organizer or enter the meeting code in the "Join a Meeting" section. To start a meeting, click on "New Meeting" and share the link with others. You can use the chat feature to send messages and the video button to turn your camera on or off. G-Teams makes it easy to connect and collaborate with others!
Improving Workplace Safety Performance in Malaysian SMEs: The Role of Safety ... (AJHSSR Journal)
ABSTRACT: In the Malaysian context, small and medium enterprises (SMEs) experience a significant burden of workplace accidents. A consensus among scholars attributes a substantial portion of these incidents to human factors, particularly unsafe behaviors. This study, conducted in Malaysia's northern region, specifically targeted Safety and Health/Human Resource professionals within the manufacturing sector of SMEs. We gathered a robust dataset comprising 107 responses through a meticulously designed self-administered questionnaire. Employing partial least squares structural equation modeling (PLS-SEM) techniques with SmartPLS 3.2.9, we analyzed the data to scrutinize the relationship between safety behavior and safety performance. The findings underscore the consequential impact of the safety behavior variables, namely safety compliance and safety participation, on improving safety performance indicators such as accidents, injuries, and property damage. These results strongly validate the research hypotheses. Consequently, this study highlights the significance of cultivating safety behavior among employees, particularly in resource-constrained SME settings, as an essential step toward enhancing workplace safety performance.
KEYWORDS: safety compliance, safety participation, safety performance, SME
Surat Digital Marketing School was created to offer a complete course specifically designed around current industry trends. Years of experience have helped us identify and understand the graduate-employee skills gap in the industry. At our school, we keep up with the pace of the industry and impart a holistic education that encompasses all the latest concepts of the digital world, so that our graduates can effortlessly integrate into their assigned roles.
This is the place where you become a Digital Marketing Expert.
This tutorial presentation provides a step-by-step guide on how to use Facebook, the popular social media platform. In simple and easy-to-understand language, this presentation explains how to create a Facebook account, connect with friends and family, post updates, share photos and videos, join groups, and manage privacy settings. Whether you're new to Facebook or just need a refresher, this presentation will help you navigate the features and make the most of your Facebook experience.
This tutorial presentation offers a beginner-friendly guide to using THREADS, Instagram's messaging app. It covers the basics of account setup, privacy settings, and explores the core features such as close friends lists, photo and video sharing, creative tools, and status updates. With practical tips and instructions, this tutorial will empower you to use THREADS effectively and stay connected with your close friends on Instagram in a private and engaging way.
Project Serenity is an innovative initiative aimed at transforming urban environments into sustainable, self-sufficient communities. By integrating green architecture, renewable energy, smart technology, sustainable transportation, and urban farming, Project Serenity seeks to minimize the ecological footprint of cities while enhancing residents' quality of life. Key components include energy-efficient buildings, IoT-enabled resource management, electric and autonomous transportation options, green spaces, and robust waste management systems. Emphasizing community engagement and social equity, Project Serenity aspires to serve as a global model for creating eco-friendly, livable urban spaces that harmonize modern conveniences with environmental stewardship.
Your Path to YouTube Stardom Starts HereSocioCosmos
Skyrocket your YouTube presence with Sociocosmos' proven methods. Gain real engagement and build a loyal audience. Join us now.
https://www.sociocosmos.com/product-category/youtube/
Grow Your Reddit Community Fast.........SocioCosmos
Sociocosmos helps you gain Reddit followers quickly and easily. Build your community and expand your influence.
https://www.sociocosmos.com/product-category/reddit/
Telegram is a messaging platform that ushers in a new era of communication. Available for Android, Windows, Mac, and Linux, Telegram offers simplicity, privacy, synchronization across devices, speed, and powerful features. It allows users to create their own stickers with a user-friendly editor. With robust encryption, Telegram ensures message security and even offers self-destructing messages. The platform is open, with an API and source code accessible to everyone, making it a secure and social environment where groups can accommodate up to 200,000 members. Customize your messenger experience with Telegram's expressive features.
Buy Pinterest Followers, Reactions & Repins Go Viral on Pinterest with Socio...SocioCosmos
Get more Pinterest followers, reactions, and repins with Sociocosmos, the leading platform to buy all kinds of Pinterest presence. Boost your profile and reach a wider audience.
https://www.sociocosmos.com/product-category/pinterest/
The Evolution of SEO: Insights from a Leading Digital Marketing AgencyDigital Marketing Lab
Explore the latest trends in Search Engine Optimization (SEO) and discover how modern practices are transforming business visibility. This document delves into the shift from keyword optimization to user intent, highlighting key trends such as voice search optimization, artificial intelligence, mobile-first indexing, and the importance of E-A-T principles. Enhance your online presence with expert insights from Digital Marketing Lab, your partner in maximizing SEO performance.
Your LinkedIn Success Starts Here.......SocioCosmos
In order to make a lasting impression on your sector, SocioCosmos provides customized solutions to improve your LinkedIn profile.
https://www.sociocosmos.com/product-category/linkedin/
Exploring The Dimensions and Dynamics of Felt Obligation: A Bibliometric Anal...AJHSSR Journal
ABSTARCT: This study presents, to our knowledge, the first bibliometric analysis focusing on the concept of
"felt obligation," examining 120 articles published between 1986 and 2024. The aim of the study is to deepen our
understanding of the existing knowledge in the field of "felt obligation" and to provide guidance for further
research. The analysis is centered around the authors, countries, institutions, and keywords of the articles. The
findings highlight prominent researchers in this field, leading universities, and influential journals. Particularly,
it is identified that China plays a leading role in "felt obligation" research. The analysis of keywords emphasizes
the thematic focuses of these studies and provides a roadmap for future research. Finally, various
recommendations are presented to deepen the knowledge in this area and promote applied research. This study
serves as a foundation to expand and advance the understanding of "felt obligation" in the field.
KEYWORDS: Felt Obligation, Bibliometric Analysis, Research Trends
Geographic and Linguistic Normalization (OpenSym 2014 poster)
Han-Teng Liao successfully defended his PhD at the Oxford Internet Institute (OII) in July 2014. His research focuses on user-generated content and data, Web analytics (webometrics), Chinese Internet research, and integrated digital research designs (both qualitative and quantitative).
Thomas Petzold is a social technology analyst, TED speaker, and professor of media management at HMKW – University of Applied Sciences for Media, Communication and Management in Berlin, Germany. As a research fellow at the WZB (2011–2013), he led a project on languages and big data in social technology. [photo: David Ausserhofer]
Abstract
What is Data Normalization?
Finer normalization: geolinguistic unit
A language tag:
• often starts with a language code followed by a country code, e.g. "fr‐CA" = the geolinguistic unit of French used in Canada
• has corresponding data points in the Unicode Common Locale Data Repository (CLDR) project, e.g. "fr‐CA" = 7,605,004 [12]
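As a minimal sketch of how such tags can be handled in practice, the snippet below splits a BCP 47-style tag into its language and territory subtags and looks up the population of the resulting geolinguistic unit. The `split_language_tag` helper and the lookup table are illustrative assumptions, not part of the poster's tooling; only the fr-CA figure is quoted from CLDR via the poster [12].

```python
def split_language_tag(tag):
    """Return (language, territory) for tags like 'fr-CA' or 'en'."""
    parts = tag.replace("_", "-").split("-")
    language = parts[0].lower()
    territory = parts[1].upper() if len(parts) > 1 else None
    return language, territory

# Hypothetical lookup table keyed by (language, territory); a real
# analysis would load this from the CLDR language-territory data [12].
GEOLINGUISTIC_POPULATION = {
    ("fr", "CA"): 7_605_004,  # French speakers in Canada, per the poster
}

def population_of(tag):
    """Population of the geolinguistic unit, or None if unknown."""
    return GEOLINGUISTIC_POPULATION.get(split_language_tag(tag))

print(split_language_tag("fr-CA"))  # → ('fr', 'CA')
print(population_of("fr-CA"))       # → 7605004
```

A tag without a territory subtag (e.g. plain "en") yields `(language, None)`, so callers can decide whether to fall back to a country-agnostic population figure.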
Finer geolinguistic data normalization is useful …
• for finer comparisons between, say, Egyptian Arabic and Saudi Arabian Arabic speakers, or between European Spanish and Mexican Spanish speakers
• for analysts or designers to better know and thus support their users by providing appropriate interfaces and content [7]
• for a better understanding of Wikipedia traffic data
References (partial: those mentioned in this poster)
[1] American Planning Association. 2006. Planning and Urban Design Standards. John Wiley & Sons.
[2] Cote, P. Effective Cartography: Mapping with Quantitative Data. Harvard Graduate School of Design.
[3] Crowston, K. et al. 2013. Sustainability of Open Collaborative Communities: Analyzing Recruitment Efficiency. Technology Innovation Management Review, January 2013: Open Source Sustainability.
Acknowledgments
We thank Wikimedia UK for the scholarship that enabled Han‐Teng Liao to present these findings at OpenSym 2014. We also acknowledge the open source tool Scrapy for making the web mining tasks easier.
Data normalization, or geographic normalization, allows data to be compared using a sensible common denominator, thereby producing measurements of intensity or density, such as population density [1, 2].
Data normalization is useful …
• in "factoring out the size" in order to facilitate comparisons across unequal areas or populations [2]
• in dividing one numeric attribute (e.g. GDP) by another (e.g. population) so as to derive a new numeric attribute (e.g. GDP per capita)
• in minimizing the differences caused by the size of a geographic unit
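The division step described above can be sketched in a few lines: one numeric attribute (here, raw page views) is divided by a size attribute (the geolinguistic population) to obtain an intensity measure that "factors out the size". All figures below are made-up illustrations, not data from the poster.

```python
def normalize(numerator, denominator):
    """Divide per-unit totals by per-unit sizes, e.g. views per speaker.

    Units missing from the denominator (or with size 0) are skipped.
    """
    return {unit: numerator[unit] / denominator[unit]
            for unit in numerator if denominator.get(unit)}

# Illustrative placeholder values only.
raw_page_views = {"en-US": 9_000_000, "en-NZ": 400_000}
population     = {"en-US": 300_000_000, "en-NZ": 4_000_000}

views_per_speaker = normalize(raw_page_views, population)
print(views_per_speaker)
```

Note how the smaller unit can come out ahead after normalization: en-NZ's 0.1 views per speaker exceeds en-US's 0.03 even though its raw total is far smaller, which is the kind of reversal the normalized trend lines in the figures are meant to reveal.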
Our approach is similar to that of Crowston, Julien, and Ortega [3] in "factoring out the size", but differs in the choice of size unit.
• Crowston et al.'s work [3] proposed a measurement to compare how efficiently a language version turns potential users into actual contributors.
• They found "a strong (but not perfect) correlation" between the total number of Wikipedia contributors on one side, and the Internet population and total tertiary‐educated population on the other.
Han‐Teng Liao (hanteng@gmail.com) and Thomas Petzold (t.petzold@hmkw.de)
Towards a better understanding of the geolinguistic dynamics of knowledge
Geographic and Linguistic Normalization
OpenSym '14, Aug 27‐29, 2014, Berlin, Germany
ACM 978‐1‐4503‐3016‐9/14/08.
http://dx.doi.org/10.1145/2641580.2641623
We propose a method of geo‐linguistic normalization to advance
the existing comparative analysis of open collaborative
communities, with multilingual Wikipedia projects as the example.
Such normalization requires data regarding the potential users
and/or resources of a geolinguistic unit.
[Figure 1. Viewing traffic trend lines: percent of Arabic Wikipedia page views per month (pgViews_perLang), by country (Egypt, Saudi Arabia, Algeria, Other).]
[Figure 3. Normalized viewing traffic trend lines: Arabic Wikipedia page views per month, normalized by language population (Israel, Kuwait, Saudi Arabia, UAE, Jordan, Bahrain, Qatar, Egypt).]
Comparing results: before and after data normalization
Arabic Wikipedia viewing traffic
Arabic Wikipedia editing traffic? Please refer to the extended abstract or ask the authors for more (Figure 2 and Figure 4).
English Wikipedia viewing traffic
English Wikipedia editing traffic
[Figure EN1. Viewing traffic trend lines: percent of English Wikipedia page views per month (pgViews_perLang), by country (United States, United Kingdom, Canada, Other).]
[Figure EN3. Normalized viewing traffic trend lines: English Wikipedia page views per month, normalized by language population (Canada, United Kingdom, New Zealand, Australia, Ireland, United States, Malaysia, Netherlands, Spain, Italy, France, Germany).]
[Figure EN2. Editing traffic trend lines: percent of English Wikipedia page edits per month (pgEdits_perLang), by country (United States, United Kingdom, Canada, Other).]
[Figure EN4. Normalized editing traffic trend lines: English Wikipedia page edits per month, normalized by language population (United Kingdom, New Zealand, Canada, Ireland, Australia).]
[7] Liao, H.-T. 2013. How does localization influence online visibility of user-generated encyclopedias? A case study on Chinese-language Search Engine Result Pages (SERPs). Proceedings of the 9th International Symposium on Open Collaboration (Hong Kong, Aug. 2013).
[12] Unicode Consortium. 2014. Language-Territory Information, CLDR Version 25.