This is the presentation for our ongoing study "Barriers to Openly Sharing Government Data: Towards an Open Data-adapted Innovation Resistance Theory" (Anastasija Nikiforova, Anneke Zuiderwijk), presented at the ICEGOV2022 conference – 15th International Conference on Theory and Practice of Electronic Governance (nominated for the Best Paper Award).
In short, the study aims to develop an Open Government Data-adapted Innovation Resistance Theory model to empirically identify predictors affecting public agencies’ resistance to openly sharing government data. Here we want to understand:
💡 What are the functional and behavioural factors that facilitate or hamper the opening of government data by public organizations?
💡 Does IRT provide new and more complete insight compared to the more traditional UTAUT and TAM? IRT has not yet been applied in this domain, so we are checking whether it should be considered, or whether the models we are already so familiar with remain the better fit.
💡 And additionally, did the COVID-19 pandemic have a significant effect on public agencies in terms of their readiness or resistance to openly share government data?
Based on a review of the literature on both IRT research and barriers associated with open data sharing by public agencies, we developed an initial version of the model. Once the model is refined in a qualitative study (interviews with public agencies), we will validate it to study the resistance of public authorities to openly sharing government data in a quantitative study.
Read the paper and cite as -> Nikiforova A., Zuiderwijk A. (2022) Barriers to openly sharing government data: towards an open data-adapted innovation resistance theory, In 15th International Conference on Theory and Practice of Electronic Governance (ICEGOV 2022). Association for Computing Machinery, New York, NY, USA, 215–220, https://doi.org/10.1145/3560107.3560143 – best paper award nominee
Big Data can generate new knowledge and perspectives through inference, and the paradigm that results from using Big Data creates new opportunities. Big Data has great influence at the governmental level, positively affecting society, and these systems can be made more efficient by applying transparency and open-governance policies such as Open Data. After developing predictive models of target-audience behavior, Big Data can be used to generate early warnings for various situations. There is thus a positive feedback loop between research and practice, with discoveries rapidly drawn from practice.
DOI: 10.13140/RG.2.2.14677.17120
Why and how does the regulation of emerging technologies occur? by Araz Taeihagh
Why and how the regulation of emerging technologies occurs is not clear in the literature. In this study, we adapt the multiple streams framework – often used for explaining agenda‐setting and policy adoption – to examine the phenomenon. We hypothesize how technological change affects policy‐making and identify conditions under which the streams can be (de‐)coupled. We trace the formulation of the General Data Protection Regulation to show that the regulation occupied the legislative agenda when a policy window was exploited through policy entrepreneurship to frame technological change as a problem for data privacy and legislative harmonization within the European Union. Although constituencies interested in promoting internet technologies made every effort to stall the regulation, various actors, activities, and events helped the streams remain coupled, eventually leading to its adoption. We conclude that the alignment of problem, policy, politics, and technology – through policy entrepreneurship – influences the timing and design of technology regulation.
Big data is prevalent in our daily life. Not surprisingly, big data has become a hot topic discussed in the commercial world, the media, magazines, among the general public and elsewhere. From an academic point of view, is it a research area worth exploring, or is it just another hype? Are only computer- or IS-related scholars suited to big data research due to its nature, or are scholars from other research areas also suited to this subject? This study aims to answer these questions using an informetrics approach with the SSCI Journal database as its data source, leveraging informetrics' robust quantitative power to analyze information in any form on a representative data source. The research shows that big data research is in its growth phase, with an exponential growth pattern since 2012 and great potential for the years to come. Perhaps surprisingly, computer- or IS-related disciplines are not among the top 5 research areas in these results. In fact, the top five research disciplines are more diversified than expected: Business Economics (#1), Government Law (#2), Information Science/Library Science (#3), Social Science (#4) and Computer Science (#5). Scholars from US universities are the most productive on this subject, while Asian countries, including Taiwan, are also visible. The study also finds that big data publications in the SSCI journal database during 2005-2015 fit Lotka's law. It contributes to understanding current big data research trends and shows the way for researchers interested in conducting future research on big data regardless of their research backgrounds.
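For readers unfamiliar with it, Lotka's law (which the publication counts above are said to fit) is an inverse power law relating the number of authors to their publication counts; a standard statement is:

```latex
y(n) = \frac{C}{n^{a}}, \qquad a \approx 2
```

where $y(n)$ is the number of authors contributing $n$ publications and $C$ is a normalizing constant. With the classic exponent $a = 2$, roughly one quarter as many authors contribute two papers as contribute one.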
How important is the “social” in social networking? A perceived value empirical investigation by LizbethQuinonez813
How important is the “social” in social networking? A perceived value empirical investigation
Mihail Cocosila
Faculty of Business, Athabasca University, Athabasca, Canada, and
Andy Igonor
JR Shaw School of Business, Northern Alberta Institute of Technology,
Edmonton, Canada
Abstract
Purpose – The purpose of this paper is to report on a value-based empirical investigation of the adoption of the Twitter social networking application. The unprecedented popularity of social networking applications in a short time period warrants exploring theory-based reasons for their success.
Design/methodology/approach – A cross-sectional survey-based study to elicit user views on Twitter was conducted with participants recruited through the web site of a North American university.
Findings – All facets of perceived value considered in the study (utilitarian, hedonic and social) had a significant and relatively strong influence on consumer intent to use Twitter. Quite surprisingly for a social networking application, though, the social value facet made comparatively the weakest contribution to the use equation.
Research limitations/implications – User value perception might have been influenced by the features of the actual social networking application under scrutiny (i.e. Twitter in this case).
Practical implications – To maximize the chances of success of new social networking applications, developers and marketers of these media should focus on the hedonic and utilitarian sides of their perceived value.
Social implications – Additional efforts are necessary to better understand the reasons and factors leading to a comparatively lower social value perception of a social networking application, compared to its hedonic and utilitarian values.
Originality/value – Overall, the study opens the door to investigating user perceptions of popular social networking applications in an effort to understand the unparalleled success of these services in a short time period.
Keywords Perceptions, Social media, Technology adoption, Social networking, Perceived value, Twitter, User satisfaction
Paper type Research paper
1. Introduction
Social networking applications have recorded unprecedented success in just a few recent years. For instance, people in the USA have been spending 22 per cent of their online time on social media sites, while nine million users in Australia have been spending almost nine hours per month, on average, using top social media applications (Wikipedia, 2012). Despite these astonishing figures, the social networking domain is still little understood. Definitions and borders of the social networking (also called social media) phenomenon are still under debate. However, scholars seem to agree that content generated by users is the key feature of any social networking application. For instance, some conceptualization attempts define social media as “a group of Internet-based applications that build on the ideological and technological
Informat ...
Framework for understanding quantum computing use cases from a multidisciplinary perspective and future research directions by Anastasija Nikiforova
This presentation is supplementary material for the article "Framework for understanding quantum computing use cases from a multidisciplinary perspective and future research directions" (Ukpabi, D.C., Karjaluoto, H., Botticher, A., Nikiforova, A., Petrescu, D.I., Schindler, P., Valtenbergs, V., Lehmann, L., & Yakaryılmaz, A.), available at https://arxiv.org/ftp/arxiv/papers/2212/2212.13909.pdf. The presentation itself, however, was delivered at QWorld Quantum Science Days 2023, May 29-31.
Matching Uses and Protections for Government Data Releases: Presentation at the Simons Institute by Micah Altman
In the work included below, presented at the Simons Institute, we describe work in progress that aims to align emerging methods of data protection with research uses.
Identifying Structures in Social Conversations in NSCLC Patients through the ... by IJERA Editor
The exploration of social conversations to address patients' needs is an important analytical task, and many scholarly publications are contributing to fill the knowledge gap in this area. The main difficulty remains the inability to turn such contributions into pragmatic processes the pharmaceutical industry can leverage to generate insight from social media data, which can be considered one of the most challenging sources of information available today due to its sheer volume and noise. This study is based on the work by Scott Spangler and Jeffrey Kreulen and applies it to identify structure in social media through the extraction of a topical taxonomy able to capture the latent knowledge in social conversations on health-related sites. The mechanism for automatically identifying and generating a taxonomy from social conversations is developed and pressure-tested using public data from media sites focused on the needs of cancer patients and their families. Moreover, a novel method for generating category labels and determining an optimal number of categories is presented, which extends Spangler and Kreulen's research in a meaningful way. We assume the reader is familiar with taxonomies, what they are and how they are used.
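The abstract does not spell out the taxonomy-extraction mechanism, so as a rough illustration of the general idea only (seed flat categories from high-document-frequency terms, then assign each post to the best-matching seed), here is a minimal standard-library sketch. Everything in it, the function names, the stopword list and the toy posts, is hypothetical; it is not the authors' Spangler–Kreulen-based method.

```python
from collections import Counter
import re

# A tiny illustrative stopword list; a real pipeline would use a proper one.
STOPWORDS = {"the", "a", "and", "of", "to", "in", "is", "for", "my", "i", "on", "with"}

def tokenize(text):
    """Lowercase, keep alphabetic tokens, drop stopwords."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

def build_taxonomy(posts, n_categories=3):
    """Derive category seeds from the terms with the highest document
    frequency, then assign each post to the seed it mentions most often.
    Posts mentioning no seed are returned separately as unassigned."""
    df = Counter()
    for post in posts:
        df.update(set(tokenize(post)))          # document frequency, not raw counts
    seeds = [term for term, _ in df.most_common(n_categories)]
    taxonomy = {seed: [] for seed in seeds}
    unassigned = []
    for post in posts:
        tokens = tokenize(post)
        scores = {seed: tokens.count(seed) for seed in seeds}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            taxonomy[best].append(post)
        else:
            unassigned.append(post)
    return taxonomy, unassigned
```

On a handful of toy patient-forum posts, the most frequent terms (e.g. "chemo", "scan") become the category labels and each post lands under one of them; the category-label and optimal-category-count methods the paper contributes would replace the naive `most_common` step here.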
Challenges Faced By Entertainment IndustryIntroductionUn.docx by tidwellveronique
Challenges Faced By Entertainment Industry
Introduction
Under the current competitive and dramatically ever-changing environment, many companies are succeeding while, at the same time, many are failing terribly. Offering the right products for the right markets at the right time is the main reason why some companies succeed, and it points to an important way of forecasting the shape of things to come, analyzing strategic alternatives and developing greater sensitivity to long-term implications.
Analyzing the situation prior to a strategic decision is critical for generating or sustaining competitive advantages, especially when facing dynamic environmental trends, which can affect a corporation's performance positively or negatively. The main task of a situation analysis is to explore the external and internal factors; it requires a corporation to use its relative corporate strengths to better satisfy customer needs and finally achieve maximum positive differentiation over its competitors across a number of internal and external variables. The market situation today is changing rapidly, accompanied by increasingly keen competition; hence the main objective is to maintain competitive advantage.
Background
Today, the Entertainment Industry is a significant part of society. It takes different forms, such as television, print, and film. It also includes smaller segments like radio, music, OOH, animation, gaming and visual effects (VFX) and Internet advertising (Peterson, 1994, p. 78). Nearly everyone uses at least one of these. For instance, it is common to see people taking photos or videos, or even accessing the Internet, using their mobile phones. The Entertainment Industry has become one of the icons of today's life, a symbol of an instant society, connection and organization. Many people find entertainment an exceedingly convenient resource for everyday life, but very few stop to consider the negative impacts the Entertainment Industry's forms could have on society if not handled appropriately or in the right places. Behind the iconic triviality lie grave issues which have a negative impact on members of society. While the different forms are arguably convenient, they can also be rather obnoxious (Dowswell, 2002, p. 57). This report explores the different challenges facing the Entertainment Industry.
Problem statement
The purpose of this research is to investigate the challenges facing the Entertainment Industry. It intends to explore how different entertainment forms can adversely affect the general welfare of society. The entertainment industry is ever-changing and faces challenges mainly from technological change, changes in consumer behavior, and the variety of different forms, which make the industry unstable and create an unpredictable environment (Lüsted, 2011, p. 123). Media and entertainment organizations are obliged to become efficient, proactive and forward-thinking, and to embrace the technological advancements in the industry.
The traditional approach used by most ...
Assessing the regulatory challenges of emerging disruptive technologies by Araz Taeihagh
The past decade has witnessed the emergence of many technologies that have the potential to fundamentally alter our economic, social, and indeed personal lives. The problems they pose are in many ways unprecedented, presenting serious challenges for policymakers. How should governments respond to these challenges, given that the technologies are still evolving along unclear trajectories? Are there general principles that can be developed to design governance arrangements for these technologies? These are questions confronting policymakers around the world, and it is the objective of this special issue to offer insights into answering them, both in general and with respect to specific emerging disruptive technologies. Our objectives are to help better understand the regulatory challenges posed by disruptive technologies and to develop generalizable propositions for governments' responses to them.
Experiments on Crowdsourcing Policy Assessment - Oxford IPP 2014 by Araz Taeihagh
Can non-expert Crowds perform as well as experts in the assessment of policy measures? To what degree does a geographical location relevant to the policy context alter the performance of non-experts in the assessment of policy measures? This research in progress seeks to answer these questions by outlining experiments designed to replicate expert policy assessments with non-expert Crowds. We use a set of ninety-eight policy measures previously evaluated by experts as our control condition and conduct the experiments using two discrete sets of non-expert Crowds recruited from a Virtual Labor Market (VLM). We vary the composition of our non-expert Crowds along two conditions: participants recruited from a geographical location relevant to the policy context, and participants recruited at large. In each case we recruit a sample of 100 participants per Crowd. Each experiment is then repeated at the VLM with completely new participants in each group to assess the reliability of our results. We will present results on the performance of all four groups of non-experts, jointly and severally, in comparison to the expert assessments, and discuss the ramifications of our findings for the use of non-expert Crowds and VLMs for policy design. Our experimental design uses climate change adaptation policy measures.
First: Review these assigned readings; they will serve as your ....docx by clydes2
First:
Review these assigned readings; they will serve as your scientific sources of accurate information:
http://www.closerlookatstemcells.org/Top_10_Stem_Cell_Treatment_Facts.html
http://www.closerlookatstemcells.org/How_Science_Becomes_Medicine.html
http://www.newvision.co.ug/news/649266-fighting-ageing-using-stem-cell-therapy.html
http://www.nature.com/news/stem-cells-in-texas-cowboy-culture-1.12404
http://www.cbc.ca/radio/whitecoat/blog/stem-cell-hype-and-risk-1.3654515
http://stm.sciencemag.org/content/7/278/278ps4.full
Next:
Use a standard Google search for this phrase: “stem cell therapy.” Do not go to Google Scholar. Select one of the websites, blogs, or other locations that offer stem cell therapies.
Save the link for your selected site.
Read the materials provided on your selected site and find out who the authors and sponsors of the site are by going to their “home” or “about us” pages.
Finally, submit your responses to the following in an essay of 500-750 words (2-3 pages of text—use a separate page for a title and for your references):
You are going to prepare a critique of the site you located and compare it to the scientific information available on this therapy.
Give the full title of the website, web blog, or other site that you selected, along with the link.
Describe the therapy that is being offered and what conditions it is designed to treat.
Who are the authors and sponsors of the site you selected?
Compare the claims about the therapy offered to what is said in the assigned readings about this type of therapy. You may have to use our library, as well, to determine what scientists and researchers have to say about the use of stem cells to treat this condition.
Would you say that the therapy you found is a well-established, proven technique for humans, or more of an experimental, unproven approach?
What about the type of language discussed in the Goldman article? Is the therapy you found using sensationalist claims and terminology that are not supported by the scientific research?
Would you recommend that a patient with this condition go ahead and participate in this treatment? Why or why not?
Literature review on how Information Technology has impacted governing bodies’ ability to align public policy with stakeholder needs
Nowadays, governing bodies in both the public and private sectors deal with complex systems in their day-to-day operations. These systems are made up of different components which present varying interactions and interrelationships with and/or among each other, making their management difficult or challenging. Indeed, Ruiz, Zabaleta & Elorza (2016) highlighted that public policymakers have to deal with complex systems involving heterogeneous agents that act in non-linear ways, making their management difficult. Neziraj & Shaqiri (2018) also stated that policymakers are faced with problems which are complex and non-uniform due to many uncertainties and risk situations.
The Workshop on Data4Impact methodology and indicators took place on 24 June 2019 at the premises of the Research Executive Agency in Brussels. The goal of this hands-on, interactive workshop was to gather feedback on the chosen methodology, coverage and latency/timeliness of the developed indicators, to maximise the relevance for all stakeholders involved (particularly for funding agencies and policymakers).
This workshop report summarises the key sessions which took place during the event, including the introduction to Data4Impact, our conceptual framework, and the development of a series of indicators on the performance and societal impact of 40+ research programmes in the health domain. Furthermore, the report summarises the key group/panel discussion outcomes and suggestions for further steps. The list of workshop participants is annexed to the report.
A COMPREHENSIVE STUDY ON POTENTIAL RESEARCH OPPORTUNITIES OF BIG DATA ANALYTI... by ijcseit
Companies, organizations and policy makers are inundated with an ever-flowing volume of transactional data, accumulating trillions of bytes of information about their customers, suppliers and operations. Advanced networked sensors are being implanted in devices such as mobile phones, smart energy meters, automobiles and industrial machines that sense, generate and transfer data to multiple storage devices. In fact, as these devices go about their business and interact with individuals, they produce an incredible amount of digital data exhaust. Social media sites, smart phones, and other consumer devices have allowed billions of individuals around the world to contribute to the amount of data available. In addition, the rapidly increasing size of multimedia data has also played a key role in the rapid growth of data: high-definition video technology creates more than 2,000 times as many bytes to store as normal text data. Moreover, in a digitized world, consumers are leaving enormous amounts of data about their day-to-day communicating, browsing, buying, sharing, searching and so on. As a result, this has evolved into big data and has in turn motivated advances in big data analytics paradigms, a basic motivating factor for present-day researchers.
In the present paper, the authors conduct a comprehensive study to explore the impact of big data analytics in key domains, namely Health Care (HC), Retail Industry (RI), Public Governance (PG), Public Security & Safety (PSS) and Personal Location Tracking (PLT). Initially, the study looks at the data sources, along with their characteristics, in each domain. Later, it presents highly productive and competitive big data applications built with innovative big data technologies. Subsequently, the study showcases the impact of big data on each domain in capturing added value in its services. Finally, the study puts forward many more research opportunities, as these domains differ in their complexity and in the maturity of their usage of big data analytics.
A COMPREHENSIVE STUDY ON POTENTIAL RESEARCH OPPORTUNITIES OF BIG DATA ANALYTI...ijcseit
Introduction to Technology Assessments As tool for Forecasting and evaluation...Premsankar Chakkingal
TA is a scientific, interactive and communicative process, which aims to contribute to the formation of public and political opinion on societal aspects of science and technology
A huge repository of terabytes of data is generated every day by modern information systems and digital technologies such as the Internet of Things and cloud computing. Analysing these massive data requires considerable effort at multiple levels to extract knowledge for decision making; big data analysis is therefore a current area of research and development. The primary goal of this paper is to explore the potential impact of big data challenges and the various tools associated with them. The article thus provides a platform for investigating big data at its various stages, and opens a new horizon for researchers to develop solutions based on the identified challenges and open research issues.
Data Quality for AI or AI for Data quality: advances in Data Quality Manageme...Anastasija Nikiforova
“Data is the new oil” is only partly true: according to Forbes, data is more than oil, and according to Ataccama, “Manual Data Quality Doesn’t Cut It in 2023”. This was the main driver behind my guest lecture entitled “Data Quality for AI or AI for Data quality: advances in Data Quality Management for the success and sustainability of emerging technologies, business and society”, in which we discussed the role of artificial intelligence in data quality management and the role of data quality for AI, concluding that it is not about “data quality for AI” OR “AI for data quality” but rather about AND.
We also looked at the current market offer for AI-driven data quality management, the pros and cons of these solutions, the prerequisites to take into account when using them (e.g., metadata and their quality for solutions that derive DQ rules from metadata analysis), and how a potentially more promising solution could be built.
We also examined the data quality specificities to consider depending on the artifact – a data object (dataset) whose owner is known or unknown (open data), an Information System, Data Warehouse, Data Lake, Data Lakehouse, or Data Mesh: where, when and how does DQ take place in each of them? What are the current trends? And are these indeed trends, or rather hype?
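As a concrete illustration of the metadata-driven approach mentioned above, the sketch below derives a simple validity rule from column metadata, roughly as AI- or metadata-driven DQ tools do. This is not from the lecture; the metadata shape (`type`, `nullable`, `min`, `max`) and the rule form are hypothetical.

```python
# Minimal sketch (hypothetical metadata schema): turn column metadata
# into a data-quality predicate that each value must satisfy.

def derive_rule(column_meta):
    """Build a validity predicate from a column-metadata dictionary."""
    def rule(value):
        # Null handling: reject None only when the column is declared non-nullable.
        if column_meta.get("nullable") is False and value is None:
            return False
        if value is None:
            return True
        # Type check (only "int" is handled in this sketch).
        if column_meta.get("type") == "int" and not isinstance(value, int):
            return False
        # Range checks derived from metadata bounds.
        lo, hi = column_meta.get("min"), column_meta.get("max")
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
        return True
    return rule

# Example: an "age" column with plausible bounds.
age_rule = derive_rule({"type": "int", "nullable": False, "min": 0, "max": 130})
violations = [v for v in [25, None, -3, 200] if not age_rule(v)]
```

The point of the sketch is that the rule is not hand-written: it is generated from whatever metadata the catalogue already holds, which is why metadata quality is itself a prerequisite.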
Towards High-Value Datasets determination for data-driven development: a syst...Anastasija Nikiforova
Slides for the talk delivered as part of the EGOV-CeDEM-ePart 2023 (EGOV2023) conference. The talk examines how HVD determination has been reflected in the literature over the years and what these studies have found to date, including the indicators used, the stakeholders involved, data-related aspects, and frameworks, based on a Systematic Literature Review.
Read the paper here -> https://link.springer.com/chapter/10.1007/978-3-031-41138-0_14
Similar to Barriers to Openly Sharing Government Data: Towards an Open Data-adapted Innovation Resistance Theory
Identifying Structures in Social Conversations in NSCLC Patients through the ...IJERA Editor
The exploration of social conversations for addressing patients' needs is an important analytical task to which many scholarly publications are contributing in order to fill the knowledge gap in this area. The main difficulty remains the inability to turn such contributions into pragmatic processes the pharmaceutical industry can leverage to generate insight from social media data, which can be considered one of the most challenging sources of information available today due to its sheer volume and noise. This study builds on the work of Scott Spangler and Jeffrey Kreulen and applies it to identify structure in social media through the extraction of a topical taxonomy able to capture the latent knowledge in social conversations on health-related sites. A mechanism for automatically identifying and generating a taxonomy from social conversations is developed and pressure-tested using public data from media sites focused on the needs of cancer patients and their families. Moreover, a novel method for generating category labels and determining an optimal number of categories is presented, which extends Spangler and Kreulen's research in a meaningful way. We assume the reader is familiar with taxonomies, what they are and how they are used.
Challenges Faced By Entertainment IndustryIntroductionUn.docxtidwellveronique
Challenges Faced By Entertainment Industry
Introduction
Under the current competitive and dramatically ever-changing environment, many companies are succeeding while at the same time many are failing terribly. Offering the right products for the right markets at the right time is the main reason why some companies succeed, and it provides an important basis for forecasting the shape of things to come, analyzing strategic alternatives and developing greater sensitivity to long-term implications.
Analyzing the situation prior to a strategic decision is critical for generating or sustaining competitive advantages, especially in the face of dynamic environmental trends that can affect a corporation's performance positively or negatively. The main task of a situation analysis is to explore external and internal factors; it requires a corporation to use its relative strengths to better satisfy customer needs and ultimately achieve maximum positive differentiation over its competitors across a number of internal and external variables. The market situation today is changing rapidly and competition is increasingly keen, hence the main objective is to maintain competitive advantage.
Background
Today, the Entertainment Industry is a significant part of society. It takes different forms such as television, print, and film, and also includes smaller segments like radio, music, OOH, animation, gaming, visual effects (VFX) and Internet advertising (Peterson, 1994, p. 78). Nearly everyone uses one of the above; for instance, it is common to see people taking photos or video, or accessing the Internet, using their mobile phones. The Entertainment Industry has become one of the icons of today's life, a symbol of an instant society, connection and organization. Many people find entertainment an exceedingly convenient resource for everyday life, but very few stop to consider the negative impacts its forms could have on society if not handled appropriately or in the right places. Behind the iconic triviality lie grave issues with a negative impact on members of society. While the different forms are arguably convenient, they can also be rather obnoxious (Dowswell, 2002, p. 57). This report explores the different challenges facing the Entertainment Industry.
Problem statement
The purpose of this research is to investigate the challenges facing the Entertainment Industry. It intends to explore how different entertainment forms can adversely affect the general welfare of society. The entertainment industry is ever-changing and faces challenges mainly from technological change, shifts in consumer behavior, and the variety of different forms, which make the industry unstable and create an unpredictable environment (Lüsted, 2011, p. 123). Media and entertainment organizations are obliged to become efficient, proactive and forward-thinking, and to embrace technological advancements in the industry.
The traditional approach used by most ...
Assessing the regulatory challenges of emerging disruptive technologiesAraz Taeihagh
The past decade has witnessed the emergence of many technologies that have the potential to fundamentally alter our economic, social, and indeed personal lives. The problems they pose are in many ways unprecedented, presenting serious challenges for policymakers. How should governments respond to these challenges, given that the technologies are still evolving with unclear trajectories? Are there general principles that can be developed to design governance arrangements for these technologies? These questions confront policymakers around the world, and it is the objective of this special issue to offer insights into answering them, both in general and with respect to specific emerging disruptive technologies. Our objectives are to help better understand the regulatory challenges posed by disruptive technologies and to develop generalizable propositions for governments' responses to them.
Experiments on Crowdsourcing Policy Assessment - Oxford IPP 2014Araz Taeihagh
Can non-expert Crowds perform as well as experts in the assessment of policy measures? To what degree does recruiting from a geographical location relevant to the policy context alter the performance of non-experts in the assessment of policy measures? This research in progress seeks to answer these questions by outlining experiments designed to replicate expert policy assessments with non-expert Crowds. We use a set of ninety-eight policy measures previously evaluated by experts as our control condition, and conduct the experiments using two discrete sets of non-expert Crowds recruited from a Virtual Labor Market (VLM). We vary the composition of our non-expert Crowds along two conditions: participants recruited from a geographical location relevant to the policy context, and participants recruited at large. In each case we recruit a sample of 100 participants per Crowd. Each experiment is then repeated at the VLM with completely new participants in each group to assess the reliability of our results. We will present results on the performance of all four groups of non-experts, jointly and severally, in comparison to the expert assessments, and discuss the ramifications of our findings for the use of non-expert Crowds and VLMs for policy design. Our experimental design uses climate change adaptation policy measures.
FirstReview these assigned readings; they will serve as your .docxclydes2
First:
Review these assigned readings; they will serve as your scientific sources of accurate information:
http://www.closerlookatstemcells.org/Top_10_Stem_Cell_Treatment_Facts.html
http://www.closerlookatstemcells.org/How_Science_Becomes_Medicine.html
http://www.newvision.co.ug/news/649266-fighting-ageing-using-stem-cell-therapy.html
http://www.nature.com/news/stem-cells-in-texas-cowboy-culture-1.12404
http://www.cbc.ca/radio/whitecoat/blog/stem-cell-hype-and-risk-1.3654515
http://stm.sciencemag.org/content/7/278/278ps4.full
Next:
Use a standard Google search for this phrase: “stem cell therapy.” Do not go to Google Scholar. Select one of the websites, blogs, or other locations that offer stem cell therapies.
Save the link for your selected site.
Read the materials provided on your selected site and find out who the authors and sponsors of the site are by going to their “home” or “about us” pages.
Finally, submit your responses to the following in an essay of 500-750 words (2-3 pages of text—use a separate page for a title and for your references):
You are going to prepare a critique of the site you located and compare it to the scientific information available on this therapy.
Give the full title of the website, web blog, or other site that you selected, along with the link.
Describe the therapy that is being offered and what conditions it is designed to treat.
Who are the authors and sponsors of the site you selected?
Compare the claims about the therapy offered to what is said in the assigned readings about this type of therapy. You may have to use our library, as well, to determine what scientists and researchers have to say about the use of stem cells to treat this condition.
Would you say that the therapy you found is a well-established, proven technique for humans, or more of an experimental, unproven approach?
What about the type of language discussed in the Goldman article? Is the therapy you found using sensationalist claims and terminology that are not supported by the scientific research?
Would you recommend that a patient with this condition go ahead and participate in this treatment? Why or why not?
Literature review on how Information Technology has impacted governing bodies’ ability to align public policy with stakeholder needs
Nowadays, governing bodies in both the public and private sectors deal with complex systems in their day-to-day operations. These systems are made up of different components with varying interactions and interrelationships with and/or among each other, which makes their management difficult and challenging. Indeed, Ruiz, Zabaleta & Elorza (2016) highlighted that public policymakers have to deal with complex systems involving heterogeneous agents that act in non-linear ways, making their management difficult. Neziraj & Shaqiri (2018) also stated that policymakers face problems that are complex and non-uniform due to many uncertainties and risk situations.
The Workshop on Data4Impact methodology and indicators took place on 24 June 2019 at the premises of the Research Executive Agency in Brussels. The goal of this hands-on, interactive workshop was to gather feedback on the chosen methodology, coverage and latency/timeliness of the developed indicators, to maximise the relevance for all stakeholders involved (particularly for funding agencies and policymakers).
This workshop report summarises the key sessions which took place during the event, including the introduction to Data4Impact, our conceptual framework, and the development of a series of indicators on the performance and societal impact of 40+ research programmes in the health domain. Furthermore, the report summarises the key group/panel discussion outcomes and suggestions for further steps. The list of workshop participants is annexed to the report.
Public data ecosystems in and for smart cities: how to make open / Big / smar...Anastasija Nikiforova
This is a set of slides used as part of my keynote "Public data ecosystems in and for smart cities: how to make open / Big / smart / geo data ecosystems value-adding for SDG-compliant Smart Living and Society 5.0" delivered at the 5th International Conference on Advanced Research Methods and Analytics (CARMA 2023) -> https://carmaconf2023.wordpress.com/keynote-speakers/. Read more here -> https://anastasijanikiforova.com/2023/06/30/keynote-at-the-5th-international-conference-on-advanced-research-methods-and-analytics-carma-2023/
Artificial Intelligence for open data or open data for artificial intelligence?Anastasija Nikiforova
This is a presentation used to deliver an invited talk for Babu Banarasi Das University (BBDU, Department of Computer Science and Engineering) Development Program «Artificial Intelligence for Sustainable Development» organized by AI Research Centre, Department of Computer Science & Engineering, ShodhGuru Research Labs, Soft Computing Research Society, IEEE UP Section, Computational Intelligence Society Chapter in 2022. Read more here -> https://anastasijanikiforova.com/2022/09/24/ai-for-open-data-or-open-data-for-ai-an-invited-talk-for-bbdu-development-program-artificial-intelligence-for-sustainable-development%f0%9f%8e%a4/
Overlooked aspects of data governance: workflow framework for enterprise data...Anastasija Nikiforova
This presentation is a supplementary material for the article "Overlooked aspects of data governance: workflow framework for enterprise data deduplication" (Azeroual, Nikiforova, Shei) presented at The International Conference on Intelligent Computing, Communication, Networking and Services (ICCNS2023).
Abstract of the paper: Data quality in companies is decisive and critical to the benefits their products and services can provide. However, in heterogeneous IT infrastructures where, e.g., different applications for Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), product management, manufacturing, and marketing are used, duplicates, e.g., multiple entries for the same customer or product in a database or information system, occur. There can be several reasons for this, but the result of non-unique or duplicate records is a degraded data quality. This ultimately leads to poorer, inefficient, and inaccurate data-driven decisions. For this reason, in this paper, we develop a conceptual data governance framework for effective and efficient management of duplicate data, and improvement of data accuracy and consistency in large data ecosystems. We present methods and recommendations for companies to deal with duplicate data in a meaningful way.
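The abstract above describes duplicate records (e.g., multiple entries for the same customer) degrading data quality. The sketch below is not the paper's framework; it is a minimal, hypothetical illustration of the basic mechanism such a workflow rests on: normalising key fields into a blocking key and keeping one record per key.

```python
# Minimal sketch (hypothetical field names): naive duplicate detection
# for customer records via normalised keys.

def normalise(record):
    """Build a matching key from name and email, ignoring case and whitespace."""
    name = "".join(record["name"].lower().split())
    email = record["email"].strip().lower()
    return (name, email)

def deduplicate(records):
    """Keep the first record seen for each normalised key."""
    seen = {}
    for rec in records:
        key = normalise(rec)
        if key not in seen:
            seen[key] = rec
    return list(seen.values())

customers = [
    {"name": "Ada Lovelace", "email": "ada@example.com"},
    {"name": "ada  lovelace", "email": "ADA@example.com "},  # same person, noisy entry
    {"name": "Alan Turing", "email": "alan@example.com"},
]
unique = deduplicate(customers)  # the noisy Ada entry collapses into the first
```

Real deduplication frameworks go well beyond exact normalised matching (fuzzy matching, survivorship rules, stewardship review); the point here is only the key-based collapsing step.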
Data Quality as a prerequisite for you business success: when should I start ...Anastasija Nikiforova
These are slides for my talk "Data Quality as a prerequisite for your business success: when should I start taking care of it?", which I delivered as an invited keynote for the HackCodeX Forum that gathered international experts to share their experience and knowledge on emerging technologies and areas such as Artificial Intelligence, Security, Data Quality, Quantum Computing, Sustainability, Open Data and Privacy.
Data Lake or Data Warehouse? Data Cleaning or Data Wrangling? How to Ensure t...Anastasija Nikiforova
This presentation was delivered as part of the Data Science Seminar titled “When, Why and How? The Importance of Business Intelligence“ organized by the Institute of Computer Science (University of Tartu) in cooperation with Swedbank.
In this presentation I talked about:
*“Data warehouse vs. data lake – what are they and what is the difference between them?” (structured vs unstructured, static vs dynamic (real-time data), schema-on-write vs schema on-read, ETL vs ELT) with further elaboration on What are their goals and purposes? What is their target audience? What are their pros and cons?
*“Is the Data warehouse the only data repository suitable for BI?” – no, (today) data lakes can also be suitable. Even more, both are considered keys to “a single version of the truth”. If descriptive BI is the only purpose, it might still be better to stay within a data warehouse. But if you want predictive BI, want to use your data for ML, or do not yet have a specific idea of how you want to use the data but want to be able to explore it effectively and efficiently, a data warehouse might not be the best option.
*“So, the data lake will save my resources a lot, because I do not have to worry about how to store /allocate the data – just put it in one storage and voila?!” – no, in this case your data lake will turn into a data swamp! And you are forgetting about the data quality you should (must!) be thinking of!
*“But how do you prevent the data lake from becoming a data swamp?” – in short and simple terms, proper data governance & metadata management is the answer (but it is not as easy as it sounds – do not forget about your data engineer and be friendly with them (always… literally always :D)), and also think about the culture in your organization.
*“So, the use of a data warehouse is the key to high-quality data?” – no, it is not! Having ETL does not guarantee the quality of your data (transform & load is not data quality management). Think about data quality regardless of the repository!
*“Are data warehouses and data lakes the only options to consider or are we missing something?“– true! Data lakehouse!
*“If a data lakehouse is a combination of the benefits of a data warehouse and a data lake, is it a silver bullet?“ – no, it is not! It is another (relatively immature) option to consider that may be the best fit for you, but not a panacea. Dealing with data is (still) not easy…
In addition, in this talk I also briefly introduced the ongoing research into the integration of the data lake as a data repository and data wrangling seeking for an increased data quality in IS. In short, this is somewhat like an improved data lakehouse, where we emphasize the need of data governance and data wrangling to be integrated to really get the benefits that the data lakehouses promise (although we still call it a data lake, since a data lakehouse is nut sufficiently mature concept with different definitions of it).
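The schema-on-write vs schema-on-read contrast from the Q&A above can be sketched in a few lines of Python (an illustrative toy of my own, with dicts standing in for records; real warehouses and lakes enforce this via DDL and query engines):

```python
# Schema-on-write (warehouse-style): the schema is enforced at load time,
# so malformed records never enter the store.
SCHEMA = {"id": int, "city": str}

def load_on_write(store, record):
    for field, ftype in SCHEMA.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"rejected at load: bad or missing {field!r}")
    store.append(record)

# Schema-on-read (lake-style): anything is accepted at load time;
# the schema is applied only when the data is queried.
def load_on_read(store, record):
    store.append(record)  # no checks -- this is how swamps are born

def query_on_read(store):
    for record in store:
        if all(isinstance(record.get(f), t) for f, t in SCHEMA.items()):
            yield record  # records that do not fit the schema are skipped

warehouse, lake = [], []
load_on_write(warehouse, {"id": 1, "city": "Riga"})
load_on_read(lake, {"id": 1, "city": "Riga"})
load_on_read(lake, {"id": "oops"})    # accepted into the lake...
clean = list(query_on_read(lake))     # ...but filtered out only at read time
```

The point of the sketch: the lake happily stores the malformed record, and the cost of dealing with it is merely deferred to read time, which is why governance is needed either way.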
Putting FAIR Principles in the Context of Research Information: FAIRness for ... – Anastasija Nikiforova
This presentation is supplementary material for the paper “Putting FAIR Principles in the Context of Research Information: FAIRness for CRIS and CRIS for FAIRness” (Otmane Azeroual, Joachim Schöpfel, Janne Pölönen, and Anastasija Nikiforova), presented at the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K), where it received the Best Paper Award. In this presentation we raise a discussion on this topic, showing that the improvement of FAIRness is a dual, bidirectional process: CRIS promote and contribute to the FAIRness of data and infrastructures, and the FAIR principles push for further improvement in the underlying CRIS data model and format, positively affecting the sustainability of these systems and the underlying artifacts. CRIS are beneficial for FAIR, and FAIR is beneficial for CRIS.
See the text here -> https://www.scitepress.org/Link.aspx?doi=10.5220/0011548700003335
Cite as -> Azeroual, O.; Schöpfel, J.; Pölönen, J. and Nikiforova, A. (2022). Putting FAIR Principles in the Context of Research Information: FAIRness for CRIS and CRIS for FAIRness. In Proceedings of the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management - KMIS, ISBN 978-989-758-614-9; ISSN 2184-3228, pages 63-71. DOI: 10.5220/0011548700003335
Open data hackathon as a tool for increased engagement of Generation Z: to h... – Anastasija Nikiforova
This is the presentation for the paper “Open data hackathon as a tool for increased engagement of Generation Z: to hack or not to hack?” presented at EGETC2022.
A hackathon is known as a form of civic innovation in which participants representing citizens can point out existing problems or social needs and propose a solution. Given the high social, technical, and economic potential of open government data (OGD), the concept of open data hackathons is becoming popular around the world. In Latvia, this concept has become popular through annual hackathons organised for a specific cluster of citizens – Generation Z. This study presents the latest findings on the role of open data hackathons and the benefits they can bring to society, participants, and government. First, a systematic literature review is carried out to establish a knowledge base. Then, empirical research on 4 case studies of open data hackathons for Generation Z participants held between 2018 and 2021 in Latvia is conducted to understand which ideas dominated and what the main results of these events were for the OGD initiative. It demonstrates that, despite the widespread belief that young people are indifferent to current societal and natural problems, the ideas developed correspond to the current situation and are aimed at solving it, revealing aspects for improvement in the provision of data, infrastructure, culture, and government-related areas.
Combining Data Lake and Data Wrangling for Ensuring Data Quality in CRIS – Anastasija Nikiforova
This presentation is supplementary material for the paper “Combining Data Lake and Data Wrangling for Ensuring Data Quality in CRIS”, presented at the 15th International Conference on Current Research Information Systems (CRIS2022) – Linking Research Information across Data Spaces. It provides insight into the ongoing study combining the data lake as a data repository with data wrangling, aiming at increased data quality in CRIS, although the proposed approach is domain-agnostic and can be used beyond CRIS.
Read the article here -> Azeroual, O., Schöpfel, J., Ivanovic, D., & Nikiforova, A. (2022, May). Combining Data Lake and Data Wrangling for Ensuring Data Quality in CRIS. In CRIS2022: 15th International Conference on Current Research Information Systems --> https://hal.archives-ouvertes.fr/hal-03694519/
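As a toy illustration of the general idea (my own sketch, not the approach from the paper), a wrangling step between raw “lake” records and their consumers might first normalize heterogeneous metadata so that any later quality assessment is meaningful:

```python
# Illustrative data-wrangling pass over heterogeneous "lake" records:
# unify field names and types before downstream quality checks.
RAW = [
    {"Author": " Jane Doe ", "year": "2021"},   # inconsistent casing, str year
    {"author": "John Roe", "Year": 2020},       # inconsistent casing, int year
]

def wrangle(record):
    out = {}
    for key, value in record.items():
        key = key.strip().lower()        # unify field names
        if isinstance(value, str):
            value = value.strip()        # trim stray whitespace
        if key == "year":
            value = int(value)           # unify types
        out[key] = value
    return out

clean = [wrangle(r) for r in RAW]
```

Only after such normalization can rules like “every record has an integer `year`” be checked uniformly across the whole repository.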
The role of open data in the development of sustainable smart cities and smar... – Anastasija Nikiforova
This presentation is a supplementary material for the guest lecture "The role of open data in the development of sustainable smart cities and smart society" I delivered for the Federal University of Technology – Paraná (Universidade Tecnológica Federal do Paraná (UTFPR)) (Brazil, May 2022).
Data security as a top priority in the digital world: preserve data value by ... – Anastasija Nikiforova
Today, in the age of information and Industry 4.0, billions of data sources, including but not limited to interconnected devices (sensors, monitoring devices) forming Cyber-Physical Systems (CPS) and the Internet of Things (IoT) ecosystem, continuously generate, collect, process, and exchange data. With the rapid increase in the number of devices and information systems in use, the amount of data is increasing. Moreover, due to digitization and the variety of data continuously produced and processed with a reference to Big Data, their value is also growing, and with it the risk of security breaches and data leaks. The value of data, however, depends on several factors, of which data quality and data security – the latter capable of affecting data quality if the data are accessed and corrupted – are the most vital. Data serve as the basis for decision-making and as input for models, forecasts, simulations etc., which can be of high strategic and commercial / business value. This has become even more relevant during the COVID-19 pandemic, which, in addition to affecting the health, lives, and lifestyle of billions of citizens globally and making life even more digitized, has had a significant impact on business, especially given the challenges companies have faced in maintaining business continuity in this so-called “new normal”. However, in addition to the cybersecurity threats caused by changes directly related to the pandemic and its consequences, many previously known threats have become even more desirable targets for intruders and hackers. Every year millions of personal records become available online. Moreover, the popularity of IoT search engines (IoTSE) has lowered the complexity of searching for connected devices on the internet, giving easy access even to novices thanks to the widespread popularity of step-by-step guides on how to use an IoT search engine to find, and gain access to if insufficiently protected, webcams, routers, databases and other artifacts.
Recent research demonstrated that weak data and, in particular, database protection is one of the key security threats. Various measures can be taken to address the issue. The aim of the study to which this presentation refers is to examine whether “traditional” vulnerability registries provide a sufficiently comprehensive view of DBMS security, or whether DBMSs should be intensively and dynamically inspected by their holders by referring to Internet of Things Search Engines, moving towards a sustainable and resilient digitized environment. The paper brings attention to this problem and makes the reader think about data security before looking for and introducing more advanced security and protection mechanisms, which, in the absence of the above, may bring no value.
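To make the inspection idea concrete, here is a minimal, hypothetical sketch (standard library only; this is not the tooling from the study) of checking whether default database ports answer on a given host. Openly answering, insufficiently protected services of exactly this kind are what IoT search engines index. Run such checks only against systems you own:

```python
import socket

# Default ports of some widely used data stores; an open, unauthenticated
# port on a public IP is the kind of exposure IoT search engines surface.
DB_PORTS = {"mysql": 3306, "postgresql": 5432, "mongodb": 27017,
            "redis": 6379, "elasticsearch": 9200, "memcached": 11211}

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def inspect(host):
    """Report which default database ports answer on the given host."""
    return {name: port_open(host, port) for name, port in DB_PORTS.items()}
```

Real IoTSE-based inspection goes much further (banner grabbing, version fingerprinting, checking for unauthenticated access), but even this reachability check illustrates why “security through obscurity” fails once a port is indexed.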
IoTSE-based Open Database Vulnerability inspection in three Baltic Countries:... – Anastasija Nikiforova
This presentation is devoted to the “IoTSE-based Open Database Vulnerability inspection in three Baltic Countries: ShoBEVODSDT sees you” research paper, developed by Artjoms Daskevics and Anastasija Nikiforova and presented during the International Conference on Internet of Things, Systems, Management and Security (IOTSMS2021), co-located with the 8th International Conference on Social Networks Analysis, Management and Security (SNAMS2021), December 6-9, 2021, Valencia, Spain (online).
Read paper here -> Daskevics, A., & Nikiforova, A. (2021, December). IoTSE-based open database vulnerability inspection in three Baltic countries: ShoBEVODSDT sees you. In 2021 8th International Conference on Internet of Things: Systems, Management and Security (IOTSMS) (pp. 1-8). IEEE -> https://ieeexplore.ieee.org/abstract/document/9704952?casa_token=NfEjYuud0wEAAAAA:6QxucVPuY762I3qzD6D_oWqa0B9eMUFRNMG-E7dyHKohSYIzI0bH1V9bLaAcly_Lp-Ll52ghO5Y
Stakeholder-centred Identification of Data Quality Issues: Knowledge that Can... – Anastasija Nikiforova
This presentation is supplementary material for the “Stakeholder-centred Identification of Data Quality Issues: Knowledge that Can Save Your Business” research paper (authored by Anastasija Nikiforova and Natalija Kozmina), presented during the International Conference on Intelligent Data Science Technologies and Applications (IDSTA2021), November 15-16, 2021, Tartu, Estonia (web-based).
Read paper here -> Nikiforova, A., & Kozmina, N. (2021, November). Stakeholder-centred Identification of Data Quality Issues: Knowledge that Can Save Your Business. In 2021 Second International Conference on Intelligent Data Science Technologies and Applications (IDSTA) (pp. 66-73). IEEE -> https://ieeexplore.ieee.org/abstract/document/9660802?casa_token=LFJa20LrXAwAAAAA:wVwhTcCPWqxdloAvDQ3-l98KkkLx70xzG3zNvIIkJbC6wvJ4VxwX_VGc3mmW_7c1T-QJlOtTiao
ShoBeVODSDT: Shodan and Binary Edge based vulnerable open data sources detect... – Anastasija Nikiforova
This presentation is devoted to the “ShoBeVODSDT: Shodan and Binary Edge based vulnerable open data sources detection tool or what Internet of Things Search Engines know about you” research paper, developed by Artjoms Daskevics and Anastasija Nikiforova and presented during the International Conference on Intelligent Data Science Technologies and Applications (IDSTA2021), November 15-16, 2021, Tartu, Estonia (web-based).
Read paper here -> Daskevics, A., & Nikiforova, A. (2021, November). ShoBeVODSDT: Shodan and Binary Edge based vulnerable open data sources detection tool or what Internet of Things Search Engines know about you. In 2021 Second International Conference on Intelligent Data Science Technologies and Applications (IDSTA) (pp. 38-45). IEEE.
OPEN DATA: ECOSYSTEM, CURRENT AND FUTURE TRENDS, SUCCESS STORIES AND BARRIERS – Anastasija Nikiforova
The "OPEN DATA: ECOSYSTEM, CURRENT AND FUTURE TRENDS, SUCCESS STORIES AND BARRIERS" set of slides was prepared for the guest lecture I delivered to the students of the University of South-Eastern Norway (USN) in October 2021.
Invited talk "Open Data as a driver of Society 5.0: how you and your scientif... – Anastasija Nikiforova
This presentation was prepared as part of my talk on openness (open data and open science) in the context of Society 5.0 during the International Conference and Expo on Nanotechnology and Nanomaterials. It was a pleasure to receive an invitation to deliver a talk on my recently published article “Smarter Open Government Data for Society 5.0: Are Your Open Data Smart Enough?” (Sensors 2021, 21(15), 5204), which I entitled “Open Data as a driver of Society 5.0: how you and your scientific outputs can contribute to the development of the Super Smart Society and transformation into Smart Living?”. The paper was briefly discussed in my previous post, thus just a few words on this talk and the overall experience.
Towards enrichment of the open government data: a stakeholder-centered determ... – Anastasija Nikiforova
This set of slides is part of the presentation prepared and delivered within the scope of the 14th International Conference on Theory and Practice of Electronic Governance (ICEGOV 2021), 6-8 October 2021, “Smart Digital Governance for Global Sustainability”.
It is based on the paper -> Nikiforova, A. (2021, October). Towards enrichment of the open government data: a stakeholder-centered determination of High-Value Data sets for Latvia. In 14th International Conference on Theory and Practice of Electronic Governance (pp. 367-372) -> https://dl.acm.org/doi/abs/10.1145/3494193.3494243?casa_token=bPeuwmFWwQwAAAAA:ls-xXIPK5uXDHyxtBxqsMJOCuV6ud_ip59BX8n78uJnqvql6e8H9urlDG9zzeNklRmGFwI4sCXU06w
The open lecture “The Potential of Open Data” (“Atvērto datu potenciāls”) took place within the master's course “Data Society Management” (“Datu sabiedrības vadība”) at the UL Faculty of Social Sciences, delivered by Dr.sc.comp. Anastasija Ņikiforova, assistant professor and researcher at the UL Faculty of Computing.
Open data are considered a valuable resource whose use can potentially deliver significant economic, technological, and social benefits. However, achieving this requires a number of preconditions to be met, concerning the data, the infrastructure, and the users; that is, the success factor of an open data initiative is the creation and maintenance of a sustainable open government data ecosystem. The aim of the lecture is to provide insight into the popularity of open data and their potential for the development of technological and economic processes, paying attention to their practical applications both in Latvia and beyond, transforming data into (innovative) solutions and services. It is also planned to give insight into the most important aspects that can potentially foster the creation of a sustainable open data ecosystem, giving anyone interested the opportunity to transform open data into value.
PhD, Dr.sc.comp. Anastasija Ņikiforova is an assistant professor at the Faculty of Computing of the University of Latvia and a researcher at its Innovative Information Technologies Laboratory. Dr. Ņikiforova's research interests relate to data management, especially data quality, and open data. In addition to other courses taught at the UL Faculty of Computing, she has developed the special seminar “Open Data and Data Quality” and the master's programme course “Open Government Data in a Data-driven World”. Dr. Ņikiforova is an expert of the Latvian Council of Science in Engineering and Technology (Electrical engineering, electronics, information and communication technologies) and Natural Sciences (Computer science and informatics), as well as an associate member of LATA (the Latvian Open Technologies Association). She is a (co-)author of more than 25 scientific papers, 4 of which have been published in top-ranked Q1 journals.
TIMELINESS OF OPEN DATA IN OPEN GOVERNMENT DATA PORTALS THROUGH PANDEMIC-RELA... – Anastasija Nikiforova
This presentation is a supplementary material for the following article -> Nikiforova, A. (2020, October). Timeliness of open data in open government data portals through pandemic-related data: a long data way from the publisher to the user. In 2020 Fourth International Conference on Multimedia Computing, Networking and Applications (MCNA) (pp. 131-138). IEEE.
The paper addresses the “timeliness” of data in open government data (OGD) portals. Timeliness is one of the primary principles of open data and is considered a success factor, while at the same time it is one of the biggest barriers, able to disrupt users’ trust in the data and even the desire to use the entire open data portal. However, assessing this aspect is a very difficult task that, in most cases, is impossible for open data users. There is therefore a lack of comparative studies on the timeliness of data across national open data portals. Unfortunately, 2020 provided an opportunity to find this out: it became easy enough to compare how long the data path from the data holder to the OGD portal is by analysing the timeliness of COVID-19-related data sets in relation to the first case observed in a country. The study thus fills the gap in comparative studies by addressing 60 countries and their OGD portals with respect to the timeliness of the data, reporting on which countries provide open data as quickly as possible, and to what extent. It makes it possible to understand how quickly OGD portals react to emergencies by opening and updating data for further potential reuse, which is essential in the digital, data-driven world.
Read the paper here -> https://ieeexplore.ieee.org/abstract/document/9264298?casa_token=FtfC_6bqZnsAAAAA:TaSnKrE7ZCxLyq5hvxX-X8O2sK_vZYcodTBtxoWOvaOAIFmMmy65f5dIK-kKYxFAMiC5jyl7Eeg
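The paper's timeliness measure can be illustrated with a small, self-contained sketch; the dates below are invented for illustration, not the study's data:

```python
from datetime import date

# Hypothetical example records: date of the first confirmed COVID-19 case
# vs. the date the first pandemic-related dataset appeared on the national
# OGD portal.
countries = {
    "A": {"first_case": date(2020, 3, 2),  "first_dataset": date(2020, 3, 12)},
    "B": {"first_case": date(2020, 2, 26), "first_dataset": date(2020, 5, 1)},
}

def data_delay_days(entry):
    """Days between the first case and the first published dataset."""
    return (entry["first_dataset"] - entry["first_case"]).days

delays = {name: data_delay_days(entry) for name, entry in countries.items()}
# Country A published within 10 days; country B took 65 -- the "long data way".
```

Comparing such delays across portals is exactly the kind of cross-country comparison the paper performs for 60 countries.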
Barriers to Openly Sharing Government Data: Towards an Open Data-adapted Innovation Resistance Theory
1. Barriers to Openly Sharing Government Data: Towards an Open Data-adapted Innovation Resistance Theory
ANASTASIJA NIKIFOROVA¹ & ANNEKE ZUIDERWIJK²
¹ University of Tartu, Faculty of Science and Technology, Tartu, Estonia & European Open Science Cloud TF
² Delft University of Technology, Faculty of Technology, Policy and Management, Delft, Netherlands
2. Aim: to develop an OGD-adapted IRT model to empirically identify predictors affecting public agencies’ resistance to openly sharing government data.
This paper reports ongoing research; in general, our Research Questions (RQ) are:
(RQ1) What are the functional and behavioural factors that facilitate or hamper the opening of government data by public organizations?
(RQ2) Does Innovation Resistance Theory provide a new and more complete insight into the determinants affecting the resistance of public agencies to openly share government data, compared to the more traditional UTAUT and TAM?
Additionally: did the COVID-19 pandemic have an [obvious/significant] effect on public agencies in terms of their readiness or resistance to openly share government data?
3. Rationale / motivation (1/2)
✓ In the past two decades, research on OGD has started to thrive: many studies on the drivers of and inhibitors to the adoption of OGD have been conducted, from both the data providers’ and the data users’ perspectives.
✓ From the data user perspective, the acceptance of OGD by different user types has been investigated using various theoretical models, e.g.:
✓ the Technology Acceptance Model (TAM), to examine the determinants of OGD use,
✓ the Unified Theory of Acceptance and Use of Technology (UTAUT), to study the behavioural intention to accept and use OGD in different countries,
✓ gamification theory, to examine how playful interfaces can help tailor OGD portals for lay citizens.
✓ Several other studies focused on the perspective of the OGD provider, i.e., public organizations, and their resistance to openly sharing government data, e.g., Technology–Organization–Environment (TOE) to investigate factors influencing the adoption of OGD among government agencies.
✓ Although various studies have applied theoretical models to investigate open data, most focus on the reuse of these data by companies and citizens.
4. Rationale / motivation (2/2)
✓ There is a paucity of research applying theoretical models to study the provision of OGD and, more specifically, the resistance of public organizations to making government data publicly available.
✓ Moreover, most studies on OGD barriers were carried out before the COVID-19 pandemic:
➢ previous research on OGD in relation to COVID-19 suggests that the pandemic affected the mind-set of citizens, researchers, and governments on the role of OGD and the benefits of these data for these stakeholders;
➢ the behavioural patterns of both OGD users and, more importantly, OGD providers may have changed their attitude towards OGD, perhaps moving towards a more open paradigm;
➢ at the same time, several new issues were identified as a result of a more scrupulous analysis of the data being opened by public agencies and their value.
New insights might be gained through an OGD barrier study conducted after the COVID-19 pandemic.
5. Innovation Resistance Theory (IRT) (1/2)
✓ Resistance to change is «any conduct that serves to maintain the status quo in the face of pressure to alter the status quo» (Zaltman and Wallendorf, 1979).
✓ IRT is a special version of ‘resistance to change’, widely discussed in (social) psychology and behavioural science ➔ resistance is a normal response of consumers when faced with an innovation that changes the typical process of obtaining information about, purchasing, using, or disposing of products (proposed by Ram, 1987, 1989).
✓ The main claim: resistance depends on three sets of factors:
(1) perceived innovation characteristics, which may be:
(a) consumer-dependent,
(b) consumer-independent (e.g. trialability, divisibility, communicability, reversibility);
(2) consumer characteristics:
(a) psychological variables,
(b) demographic variables;
(3) propagation mechanisms, divided into:
(a) type, e.g. marketer-controlled vs. non-marketer-controlled, personal vs. impersonal,
(b) characteristics, described by clarity, credibility, source similarity, and informativeness.
The result is one of three outcomes: adoption, rejection, or modification of the innovation (if it is amenable to changes).
6. Innovation Resistance Theory (IRT) (2/2)
Resistance factors | Resistance sub-factors | Definition and source
Functional barriers | Usage barrier | The degree to which an innovation is perceived as requiring changes in consumers’ routines
Functional barriers | Value barrier | The degree to which an innovation’s value-to-price ratio is perceived in relation to other product substitutes
Functional barriers | Risk barrier | The degree of uncertainty in regard to the financial, functional, and social consequences of using an innovation
Psychological barriers | Tradition barrier | The degree to which an innovation forces the consumer to accept cultural changes
Psychological barriers | Image barrier | The degree to which an innovation is perceived as having an unfavorable image
The decision on adoption is reached by considering both functional and psychological barriers (Ram and Sheth, 1989).
Assumption #1: OGD can, to some extent, be considered an «innovation». It is also an essential source for sustainability-oriented and data-driven innovation by citizens, companies, researchers, and public organisations.
Assumption #2: in light of the above, and given the specificity of the OGD nature, we assume that not only end-users but also data publishers can be considered customers to some extent.
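The five IRT barrier categories can be summarized as a small data structure; note that the OGD-flavoured examples attached to each sub-factor below are illustrative placeholders of ours, not the 21 mapped barrier categories from the study:

```python
# Ram & Sheth's five innovation-resistance barriers, with hypothetical
# examples of what each might cover for a public agency facing OGD.
IRT_BARRIERS = {
    "functional": {
        "usage": "publishing data requires new routines (metadata, updates)",
        "value": "perceived cost of opening data vs. unclear returns",
        "risk": "fear of misuse, privacy breaches, or reputational damage",
    },
    "psychological": {
        "tradition": "a closed-by-default administrative culture",
        "image": "open data seen as exposing the agency to criticism",
    },
}

def barrier_factor(sub_factor):
    """Return the top-level resistance factor a given sub-factor belongs to."""
    for factor, subs in IRT_BARRIERS.items():
        if sub_factor in subs:
            return factor
    raise KeyError(sub_factor)
```

The two-level shape mirrors the table: two resistance factors, five sub-factors, and the adoption decision weighs both levels together.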
7. Research design
STEP I: Systematic Literature Review on IRT (54): to extract existing IRT models and the corresponding measurement items
✓ Scopus and Web of Science
✓ ("Innovation Resistance Theory" OR ("IRT" AND "innovation" AND "resistance"))
✓ title, keywords, and abstract
✓ only English peer-reviewed papers and book chapters
✓ 52 articles in Scopus and 34 in WoS ➔ 54 unique studies for further examination
STEP II: Mapping barriers to openly sharing government data to the IRT barrier categories (21)
✓ Extraction (from the previous literature and our own experience) of the types of barriers to openly sharing government data, which we assume can lead to resistance in OGD adoption
✓ Mapping them to the IRT barrier categories: usage, value, risk, tradition, and image barriers
➔ 21 barrier categories mapped to the 5 IRT barrier categories
STEP III: Research model* and hypotheses (5)
✓ Defining the research model and measurement items
✓ Measurement items are drawn from existing IRT models and the corresponding measurement items found in the literature (STEP I), combined with the insights obtained on OGD-specific barriers (STEP II)
➔ Model with 36 measurement items, 5 hypotheses
STEP IV: Interview protocol
✓ Transforming the research model and measurement items into a questionnaire
✓ Adding general questions
➔ 26 questions
…
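STEP I merges 52 Scopus records with 34 Web of Science records into 54 unique studies. A minimal sketch of what such a deduplication step could look like; the record structure, sample data, and DOI-based key below are invented for illustration and are not the authors' actual tooling:

```python
# Illustrative sketch of STEP I's deduplication: merging Scopus and Web
# of Science exports into a list of unique studies. Sample data invented.

def dedup_key(record):
    """Prefer a normalized DOI; fall back to the lowercased title."""
    doi = (record.get("doi") or "").strip().lower()
    return ("doi", doi) if doi else ("title", record["title"].strip().lower())

def deduplicate(*exports):
    seen, unique = set(), []
    for export in exports:
        for record in export:
            key = dedup_key(record)
            if key not in seen:
                seen.add(key)
                unique.append(record)
    return unique

scopus = [{"title": "IRT in mobile banking", "doi": "10.1000/x1"},
          {"title": "Resistance to e-commerce", "doi": "10.1000/x2"}]
wos = [{"title": "IRT in Mobile Banking", "doi": "10.1000/X1"},  # duplicate
       {"title": "Organic food innovation resistance", "doi": None}]

print(len(deduplicate(scopus, wos)))  # 3 unique records remain
```

Keying on the DOI where available makes the merge robust to the two databases capitalizing or formatting titles differently.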
8. Research design
STEP I: Systematic Literature Review on IRT (54): to extract existing IRT models and the corresponding measurement items
STEP II: Mapping barriers to openly sharing government data to the IRT barrier categories (21)
STEP III: Research model* and hypotheses (5)
STEP IV: Interview protocol
➔ Exploratory interviews to refine the model (qualitative study)
➔ Validation of the refined model (quantitative study)
➔ Predictors affecting public agencies’ resistance to openly sharing government data
***Considering the context of this model and the current rise in popularity of B2G data sharing, in light of which the EC is preparing the Data Act to set the rules and conditions, thereby changing the current voluntary model to more mandatory data sharing, we believe that the proposed model can become a reference model for analysing predictors affecting resistance to sharing data in this subdomain.
9. Several insights from the literature review
✓ The vast majority of scholars used IRT as the basis for empirical evaluation of consumer resistance to innovations
✓ Digital financial services (mobile payments, mobile banking) and e-commerce (including mobile social commerce, mobile website shopping, and online shopping) are the two main research contexts for IRT applications
✓ There is also a growing focus on food innovations such as organic food, on the IoT, and on collaborative consumption or the sharing economy, and there is a call to explore innovation resistance in emerging trends, which can be associated with some degree of risk or uncertainty, and in innovations associated with social and environmental benefits
✓ Although IRT is rather domain-agnostic, it allows and even requires adaptation to the topic concerned and its specificities
✓ In some cases, IRT is also used in combination with other theories such as TAM, the UTAUT framework, and Distrust Theory
✓ Most studies adopt a quantitative approach
10. Barriers to openly sharing government data that can lead to resistance
Functional Barriers
Usage barriers:
- OGD often suffer from quality issues
- Openly sharing government data is a complicated process
- OGD portals suffer from low ease of use
- Insufficient user-friendliness of the data
Value barriers:
- OGD do not always provide value to users
- Datasets may be incomplete
- There may be concerns about the quality of open data
- Openly sharing data requires resources, including time and costs
- It is impossible to sell the data once it is openly available
- Data providers are usually the ones who invest the most effort and time in publishing data, while businesses and citizens as data users profit the most
Risk barriers:
- Organizations fear that openly shared government data will be misused
- Organizations fear users drawing false conclusions
- Organizations fear that (privacy-)sensitive data will be shared openly
- Organizations fear making mistakes when preparing data for publication
- Organizations fear being liable for data quality
Psychological Barriers
Tradition barriers:
- The risk-averse culture of governmental organizations discourages openly sharing data
- Organizations are reluctant to change their processes
- Incompatible routines and processes of organizations
- Civil servants may lack the skills required for openly sharing government data
Image barriers:
- Organizations fear that their reputation will be damaged by the publication of low-quality data
- Organizations fear that they will be associated with incorrect conclusions drawn from OGD analysis
11. OGD-adapted Innovation Resistance Theory model
IRT suggests defining five hypotheses, one for each barrier type, and testing and validating them using quantitative research ➔ each of the five hypotheses is formulated as:
“[Construct ∈ {Usage Barrier; Value Barrier; Risk Barrier; Tradition Barrier; Image Barrier}]
has a positive effect on public agencies’ resistance toward openly sharing government data”
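The five hypotheses are intended to be tested in the later quantitative study. As a minimal sketch of what such a test could look like, the snippet below regresses a resistance score on the five construct scores; all data, item counts, and coefficients are simulated for illustration and do not come from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical number of responding agencies

# Construct scores: the mean of an agency's Likert (1-5) answers to the
# measurement items of each barrier (UB, VB, RB, TB, IB). The raw
# answers are simulated; 7 items per construct is a simplification.
constructs = ["UB", "VB", "RB", "TB", "IB"]
X = rng.integers(1, 6, size=(n, 5, 7)).mean(axis=2)

# Simulated resistance score in which every barrier contributes
# positively, mirroring hypotheses H1-H5 (coefficients are invented).
true_beta = np.array([0.4, 0.3, 0.5, 0.2, 0.3])
y = 0.5 + X @ true_beta + rng.normal(0, 0.3, n)

# Ordinary least squares for the intercept plus five slope coefficients.
design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

# A positive estimated slope is consistent with the corresponding
# hypothesis (a real analysis would also report significance levels).
for name, b in zip(constructs, beta[1:]):
    print(f"{name}: estimated effect {b:+.2f}")
```

In practice, IRT studies of this kind typically use structural equation modelling rather than plain regression; the sketch only conveys the direction-of-effect logic behind the hypotheses.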
12. OGD-adapted Innovation Resistance Theory model and its elements (1/3)
✓ 5 categories → 5 hypotheses
✓ 36 measurement items, drawn from existing IRT models and the corresponding measurement items found in the literature, combined with the insights obtained on OGD-specific barriers
Usage Barrier (UB)
UB1: It is difficult to attain the appropriate quality level for open government data to be shared openly
UB2: It is difficult to prepare data for publication so that they comply with OGD principles
UB3: It is difficult to prepare data for publication so that they become appropriate for reuse
UB4: Data are difficult to publish on the OGD portal due to the complexity of the process
UB5: Data are difficult to publish on the OGD portal due to the unclear process
UB6: Data are difficult to publish on the OGD portal due to its limited functionality
UB7: Open government data portals often do not allow for semi-automation of the publishing process
UB8: It is difficult to maintain openly shared government data
Value Barrier (VB)
VB1: My organization believes that openly sharing government data is often not valuable for the public
VB2: Many open government datasets are not appropriate for reuse
VB3: Many open government datasets suffer from data quality issues (completeness, accuracy, uniqueness, consistency, etc.)
VB4: The public gains of openly sharing government data are often lower than the costs
VB5: My organization’s gains of openly sharing government data are often lower than the costs
VB6: Data preparation is too resource-consuming for my organization
VB7: Open government data do not provide any value to my organization
VB8: Open data that my organization can openly share will not provide value to users
VB9: The amount of resources to be spent to prepare, publish, and maintain open government data outweighs the benefit my organization gains from it
Risk Barrier (RB)
RB1: My organization fears the misuse of openly shared government data
RB2: My organization fears the misinterpretation of openly shared government data
RB3: My organization fears that openly shared government data will not be reused
RB4: My organization fears violating data protection legislation when openly sharing government data
RB5: My organization fears that sensitive data will be exposed as a result of opening its data
RB6: My organization fears making mistakes when preparing data for publication
RB7: My organization fears that users will find existing errors in the data
RB8: My organization fears that openly sharing its data will reduce its gains (otherwise the organization could sell the data)
RB9: My organization fears that openly sharing its data will allow its competitors to benefit from this data
Tradition Barrier (TB)
TB1: Freedom of information requests are sufficient for the public to obtain government data
TB2: My organization is reluctant to implement the culture change required for openly sharing government data
TB3: Employees in my organization lack the skills required for openly sharing government data
TB4: Employees in my organization lack the skills required for maintaining openly shared government data
TB5: My organization is reluctant to radically change the organizational processes that would enable openly sharing government data
Image Barrier (IB)
IB1: My organization has a negative image of open government data
IB2: My organization believes that open government data is not valuable for users
IB3: My organization fears that openly sharing government data will damage the reputation of my organization
IB4: My organization fears that the accidental publication of low-quality data will damage the reputation of my organization
IB5: My organization fears that being associated with incorrect conclusions drawn from OGD analysis by OGD users will damage its reputation
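When the measurement items are later administered in the quantitative study, each multi-item construct would typically be checked for internal consistency before hypothesis testing. A sketch of Cronbach's alpha for one construct; the five-agency responses to the TB items below are invented, and the ~0.7 acceptability threshold is a common convention, not a claim from the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency estimate for one construct.

    items: (n_respondents, k_items) array of Likert responses.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented responses of five agencies to the five Tradition Barrier
# items (TB1-TB5), on a 1-5 Likert scale.
tb = [[4, 4, 5, 4, 4],
      [2, 3, 2, 2, 3],
      [5, 4, 4, 5, 5],
      [1, 2, 1, 2, 1],
      [3, 3, 4, 3, 3]]

# Values above ~0.7 are conventionally taken as acceptable reliability.
print(round(cronbach_alpha(tb), 2))
```

An alpha close to 1 would suggest that the TB items indeed measure a single underlying tradition-barrier construct.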
13. OGD-adapted Innovation Resistance Theory model and its elements: Functional barriers
Usage Barrier (UB)
UB1: It is difficult to attain the appropriate quality level for open government data to be shared openly
UB2: It is difficult to prepare data for publication so that they comply with OGD principles
UB3: It is difficult to prepare data for publication so that they become appropriate for reuse
UB4: Data are difficult to publish on the OGD portal due to the complexity of the process
UB5: Data are difficult to publish on the OGD portal due to the unclear process
UB6: Data are difficult to publish on the OGD portal due to its limited functionality
UB7: Open government data portals often do not allow for semi-automation of the publishing process
UB8: It is difficult to maintain openly shared government data
Value Barrier (VB)
VB1: My organization believes that openly sharing government data is often not valuable for the public
VB2: Many open government datasets are not appropriate for reuse
VB3: Many open government datasets suffer from data quality issues (completeness, accuracy, uniqueness, consistency, etc.)
VB4: The public gains of openly sharing government data are often lower than the costs
VB5: My organization’s gains of openly sharing government data are often lower than the costs
VB6: Data preparation is too resource-consuming for my organization
VB7: Open government data do not provide any value to my organization
VB8: Open data that my organization can openly share will not provide value to users
VB9: The amount of resources to be spent to prepare, publish, and maintain open government data outweighs the benefit my organization gains from it
Risk Barrier (RB)
RB1: My organization fears the misuse of openly shared government data
RB2: My organization fears the misinterpretation of openly shared government data
RB3: My organization fears that openly shared government data will not be reused
RB4: My organization fears violating data protection legislation when openly sharing government data
RB5: My organization fears that sensitive data will be exposed as a result of opening its data
RB6: My organization fears making mistakes when preparing data for publication
RB7: My organization fears that users will find existing errors in the data
RB8: My organization fears that openly sharing its data will reduce its gains (otherwise the organization could sell the data)
RB9: My organization fears that openly sharing its data will allow its competitors to benefit from this data
14. OGD-adapted Innovation Resistance Theory model and its elements: Psychological barriers
Tradition Barrier (TB)
TB1: Freedom of information requests are sufficient for the public to obtain government data
TB2: My organization is reluctant to implement the culture change required for openly sharing government data
TB3: Employees in my organization lack the skills required for openly sharing government data
TB4: Employees in my organization lack the skills required for maintaining openly shared government data
TB5: My organization is reluctant to radically change the organizational processes that would enable openly sharing government data
Image Barrier (IB)
IB1: My organization has a negative image of open government data
IB2: My organization believes that open government data is not valuable for users
IB3: My organization fears that openly sharing government data will damage the reputation of my organization
IB4: My organization fears that the accidental publication of low-quality data will damage the reputation of my organization
IB5: My organization fears that being associated with incorrect conclusions drawn from OGD analysis by OGD users will damage its reputation
15. Interview protocol (part for UB only)
Usage Barrier (UB) measurement items:
UB1: It is difficult to attain the appropriate quality level for open government data to be shared openly
UB2: It is difficult to prepare data for publication so that they comply with OGD principles
UB3: It is difficult to prepare data for publication so that they become appropriate for reuse
UB4: Data are difficult to publish on the OGD portal due to the complexity of the process
UB5: Data are difficult to publish on the OGD portal due to the unclear process
UB6: Data are difficult to publish on the OGD portal due to its limited functionality
UB7: Open government data portals often do not allow for semi-automation of the publishing process
UB8: It is difficult to maintain openly shared government data
Barrier-related questions:
Q12. To what extent do the following situations form a barrier to openly sharing your organization’s data:
- an inappropriate quality level of your organization’s data? (UB1)
- a complicated process to prepare data for sharing? (UB2)
- a complicated process to make your organization’s data reusable by others? (UB3)
- a complicated process to publish your organization’s data on an open data portal? (UB4)
Do you think data are difficult to publish on the OGD portal due to
- the unclear process? (UB5)
- the limited functionality of open data portals? (UB6)
- no possibility to semi-automate your organization’s process to openly share its data? (UB7)
Do you think it is difficult to maintain openly shared government data? (UB8)
Q13. Are there any other usage barriers that form a barrier for your organization to openly share its data?
Value Barrier (VB) …
Risk Barrier (RB)
Tradition Barrier (TB)
Image Barrier (IB)
16. Conclusions
✓ This study aims to develop an OGD-adapted IRT model to empirically identify predictors affecting public agencies’ resistance to openly sharing government data.
✓ Based on a literature review covering both IRT research and the barriers associated with open data sharing by public agencies, we developed an initial version of the model.
✓ In line with the IRT literature, our conceptual model consists of five main constructs: usage barriers, value barriers, risk barriers, tradition barriers, and image barriers.
✓ Based on these barriers, we defined five hypotheses to study the resistance of public authorities to openly sharing government data.
✓ For each of these constructs, we defined a list of measurement items specific to the OGD context. This study is conceptual, and we have not yet validated the model.
***Considering the context of this model and the current rise in popularity of Business-to-Government (B2G) data sharing, in light of which the European Commission is taking regulatory action and preparing the Data Act to set the rules and conditions, thereby changing the current voluntary model to more mandatory data sharing, we believe that the proposed model can become a reference model for analysing predictors affecting resistance to sharing data in this subdomain.
17. Future work
✓ To refine the model by conducting exploratory interviews in countries with different maturity levels of OGD initiatives AND to gain insight into possible control variables that need to be included, such as organization size, the existence of OGD legislation and policies, and the available funding:
✓ Latvia
✓ Estonia (???)
✓ Netherlands
✓ Denmark
✓ Italy (???)
✓ Belgium (???)
??? // in case you want to get involved, you are more than welcome to join us!
✓ To validate the refined model in a quantitative study of public agencies’ resistance to OGD provision
✓ To identify predictors affecting public agencies’ resistance to openly sharing government data.