Building an industrial community around Big Data in Europe is the priority of the BIG: Big Data Public Private Forum project. In this workshop we will present the work of the project including analysis of foundational Big Data research technologies, technology and strategy roadmaps to enable businesses to understand the potential of Big Data technologies, and the necessary collaboration and dissemination infrastructure to link technology suppliers, integrators and leading user organizations. BIG is working towards the definition and implementation of a clear strategy that tackles the necessary efforts in terms of Big Data research and innovation, while also providing a major boost for technology adoption and supporting actions for the successful implementation of the Big Data economy.
Open Data Innovation in Smart Cities: Challenges and Trends
Edward Curry
Open Data initiatives are increasingly considered as defining elements of emerging smart cities. However, few studies have attempted to provide a better understanding of the nature of this convergence and the impact on both domains. This talk examines the challenges and trends with open data initiatives using a socio-technical perspective of smart cities. The talk presents findings from a detailed study of 18 open data initiatives across five smart cities to identify emerging best practice. Three distinct waves of open data innovation for smart cities are discussed. The talk details the specific impacts of open data innovation on the different smart cities domains, governance of the cities, and the nature of datasets available in the open data ecosystem within smart cities.
Citizen Actuation For Lightweight Energy Management
Edward Curry
In this work, we aim to utilise the concept of citizen sensors but also introduce the theory of citizen actuation. Citizen sensors observe, report, and collect data; we propose that by supporting these citizen sensors with methods to affect their surroundings, we enable them to become citizen actuators. We outline a use case for citizen actuation in the Energy Management domain, propose an architecture (a Cyber-Physical Social System) built on previous work in Energy Management with Twitter integration and Complex Event Processing (CEP), and perform an experiment to test this theory. We motivate the need for citizen actuation in Building Management Systems due to the high cost of actuation systems. We define the concept of citizen actuation and outline an experiment that shows a reduction in average energy usage of 24%. The experiment supports the use of citizen actuation to improve energy usage within the experimental environment, and we discuss future research directions in this area.
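The detect-and-notify loop described above can be sketched as a toy event rule. The zone names, idle threshold, and message format below are invented for illustration, not the system's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class EnergyEvent:
    zone: str        # building zone the reading comes from
    watts: float     # current power draw
    occupied: bool   # occupancy sensor reading for the zone

def detect_waste(event, idle_threshold=100.0):
    """A toy CEP-style rule: power draw in an unoccupied zone
    above the idle threshold suggests equipment left running."""
    return (not event.occupied) and event.watts > idle_threshold

def notify_citizen_actuator(event):
    """Stand-in for the social-media integration described above:
    ask a nearby occupant to check and power down the zone."""
    return (f"@zone-{event.zone}-volunteers: {event.watts:.0f} W draw "
            f"in unoccupied zone {event.zone} - could someone switch it off?")

event = EnergyEvent(zone="2F-East", watts=450.0, occupied=False)
if detect_waste(event):
    print(notify_citizen_actuator(event))
```

The point of the sketch is the division of labour: the system only detects and asks; the physical actuation is performed by a person, avoiding the cost of dedicated actuation hardware.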
Within the operational phase, buildings now produce more data than ever before: energy usage, utility information, occupancy patterns, weather data, and more. In order to manage a building holistically, it is important to use knowledge from across these information sources. However, many barriers to their interoperability exist, and there is little interaction between these islands of information. As part of moving building data to the cloud, there is a critical need to reflect on how cloud-based data services are designed from an interoperability perspective. If new cloud data services are designed in the same manner as traditional building management systems, they will suffer from the same data interoperability problems. Linked data technology leverages the existing open protocols and W3C standards of the Web architecture for sharing structured data on the web. In this paper we propose the use of linked data as an enabling technology for cloud-based building data services. The objective of linking building data in the cloud is to create an integrated, well-connected graph of relevant information for managing a building. This paper describes the fundamentals of the approach and demonstrates the concept within a Small and Medium-sized Enterprise (SME) with an owner-occupied office building.
Towards Unified and Native Enrichment in Event Processing Systems
Edward Curry
Events are encapsulated pieces of information that flow from one event agent to another. In order to process an event, additional information that is external to the event is often needed. This is achieved using a process called event enrichment. Current approaches to event enrichment are external to event processing engines and are handled by specialized agents. Within large-scale environments with high heterogeneity among events, the enrichment process may become difficult to maintain. This paper examines event enrichment in terms of information completeness and presents a unified model for event enrichment that takes place natively within the event processing engine. The paper describes the requirements of event enrichment and highlights its challenges such as finding enrichment sources, retrieval of information items, finding complementary information and its fusion with events. It then details an instantiation of the model using Semantic Web and Linked Data technologies. Enrichment is realised by dynamically guiding a spreading activation algorithm in a Linked Data graph. Multiple spreading activation strategies have been evaluated on a set of Wikipedia events and experimentation shows the viability of the approach.
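A minimal sketch of how spreading activation over a linked data graph can surface enrichment candidates. The graph, edge weights, decay factor, and threshold below are invented for illustration, not the paper's actual parameters:

```python
# Nodes are resources; edges carry a relatedness weight. Activation
# starts at the entities mentioned in an event and spreads outward;
# highly activated nodes become candidate enrichment sources.

graph = {
    "ev:PowerOutage":   [("dbp:Galway", 0.8), ("dbp:ElectricGrid", 0.6)],
    "dbp:Galway":       [("dbp:Ireland", 0.7), ("dbp:GalwayWeather", 0.9)],
    "dbp:ElectricGrid": [("dbp:GridOperator", 0.5)],
}

def spread(seeds, graph, decay=0.5, threshold=0.1):
    """Propagate activation from seed nodes; stop once the energy
    passed along an edge falls below the threshold."""
    activation = dict(seeds)            # node -> accumulated activation
    frontier = list(seeds)
    while frontier:
        node, energy = frontier.pop()
        for neighbour, weight in graph.get(node, []):
            passed = energy * weight * decay
            if passed > threshold:
                activation[neighbour] = activation.get(neighbour, 0.0) + passed
                frontier.append((neighbour, passed))
    return activation

act = spread([("ev:PowerOutage", 1.0)], graph)
candidates = sorted(act, key=act.get, reverse=True)
```

The "dynamic guiding" of the algorithm described in the abstract would correspond to adjusting weights, decay, and thresholds per enrichment strategy; this sketch shows only the basic propagation step.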
Dealing with Semantic Heterogeneity in Real-Time Information
Edward Curry
Tutorial at the EarthBiAs 2014 Summer School on Dealing with Semantic Heterogeneity in Real-Time Information
Part I: Large Scale Open Environments
Part II: Computational Paradigms
Part III: RDF Event Processing
Part IV: Theory of Event Exchange
Part V: Approaches to Semantic Decoupling
Part VI: Example Application: Linked Energy Intelligence
From Data Platforms to Dataspaces: Enabling Data Ecosystems for Intelligent Systems
Edward Curry
The Real-time Linked Dataspace (RLD) is an enabling platform for data management for intelligent systems within smart environments. It combines the pay-as-you-go paradigm of dataspaces, linked data, and knowledge graphs with entity-centric real-time query capabilities.
The RLD contains all the relevant information within a data ecosystem including things, sensors, and data sources and has the responsibility for managing the relationships among these participants.
It manages sources without presuming a pre-existing semantic integration among them, using specialised dataspace support services for loose administrative proximity and semantic integration for event and stream systems. Support services leverage approximate and best-effort techniques and operate under a 5-star model for “pay-as-you-go” incremental data management.
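The best-effort, entity-centric flavour of such support services can be illustrated with a small sketch. The sources, entity identifiers, and merge policy below are hypothetical, not the RLD's actual API:

```python
# A toy entity-centric view over a dataspace: each source is queried
# independently, and any source that fails or does not know the entity
# is skipped rather than blocking the answer (best-effort spirit).

sensor_source = {"room-101": {"temperature": 21.5}}
asset_source  = {"room-101": {"hvac_unit": "AHU-3"}}

def broken_source(entity_id):
    raise TimeoutError("source unavailable")

def entity_view(entity_id, sources):
    """Merge whatever the reachable sources know about one entity."""
    view = {"entity": entity_id}
    for source in sources:
        try:
            record = source(entity_id) if callable(source) else source.get(entity_id)
        except Exception:
            continue                    # best effort: skip failing sources
        if record:
            view.update(record)
    return view

view = entity_view("room-101", [sensor_source, asset_source, broken_source])
```

A tighter integration (schema mapping, entity resolution) would be added incrementally per source as it pays off, which is the pay-as-you-go idea in miniature.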
Transforming the European Data Economy: A Strategic Research and Innovation Agenda
Edward Curry
Transforming the European Data Economy: A Strategic Research and Innovation Agenda
Keynote at European Data Forum 2016
Prof. Dr. Milan Petković, Vice President BDVA, Philips
Dr. Edward Curry, Vice President BDVA, Insight
Key Technology Trends for Big Data in Europe
Edward Curry
In this presentation we will discuss some of the results of the BIG project including analysis of foundational Big Data research technologies, technology and strategy roadmaps to enable businesses to understand the potential of Big Data technologies across different sectors, and the necessary collaboration and dissemination infrastructure to link technology suppliers, integrators and leading user organizations.
Edward Curry is leading the Technical Working Group of the BIG Project with over 30 committed experts along the big data value chain (Acquisition, Analysis, Curation, Storage, Usage). With the help of the other technical leads, he will elaborate on the key technology trends identified in the BIG Project and how they bring data-driven value to industrial sectors.
Improving Policy Coherence and Accessibility through Semantic Web Technologies
Edward Curry
The complexity, volume and diversity of government policies and regulations places a significant burden on both the complying parties and government itself. On the one hand, businesses, civil organizations and other societal entities are required to simultaneously comply with, and interpret, different and possibly conflicting or inconsistent regulations. On the other hand, government as a whole must ensure policy and regulatory coherence across its various policy domains. While the recent wave of open government initiatives has led to significantly more public access to these documents, features that cross-reference related documents, or that link to less formal documents and commentary on other media to make regulations more understandable and accessible to the public, are uncommon today, if available at all. As a solution to this challenge, we propose an Open Government-wide Policy and Regulation Information Space consisting of documents that are “semantically” annotated and cross-linked to other documents in the information space, as well as to external resources such as interpretations, comments and blogs on the social web.
Our approach is three-fold. First, we identify the requirements for the infrastructure. Second, we elaborate a Reference Architecture identifying the various elements needed within the infrastructure. Third, we show how such an infrastructure may be realised as a linked data portal where policies and regulations are published as linked open data. Finally, we present a case study involving environmental policy and regulations, discuss the potential impact of such an infrastructure on the coherency and accessibility of policies and regulations, and conclude with the challenges associated with provisioning a linked open policy and regulatory information infrastructure.
Crowdsourcing Approaches for Smart City Open Data Management
Edward Curry
A wide-scale bottom-up approach to the creation and management of open data has been demonstrated by projects like Freebase, Wikipedia, and DBpedia. This talk explores how to involve a wide community of users in the collaborative management of open data activities within a Smart City. The talk discusses how crowdsourcing techniques can be applied within a Smart City context using crowdsourcing and human computation platforms such as Amazon Mechanical Turk, MobileWorks, and CrowdFlower.
Building Optimisation using Scenario Modeling and Linked Data
Edward Curry
As buildings become more complex, it becomes more difficult to manage and operate them effectively. The holistic management and maintenance of facilities is a multi-domain problem encompassing financial accounting, building maintenance, human resources, asset management and code compliance, affecting different stakeholders in different ways. One technique, called scenario modelling, customises data-driven decision support for building managers during building operation. However, current implementations of scenario modeling have been limited to data from Building Management Systems with little interaction with other relevant data sources due to interoperability issues. Linked data helps to overcome interoperability challenges to enable data from multiple domains to be merged into holistic scenario models for different stakeholders of the building. The approach is demonstrated using an owner-occupied office building.
Challenges Ahead for Converging Financial Data
Edward Curry
Consumers of financial information come in many guises: from personal investors looking for that value-for-money share, to government regulators investigating corporate fraud, to business executives seeking competitive advantage over their competition. While the particular analysis performed by each of these information consumers will vary, they all have to deal with the explosion of information available from multiple sources including SEC filings, corporate press releases, market press coverage, and expert commentary. Recent economic events have begun to bring sharp focus on the activities and actions of financial markets, institutions and, not least, regulatory authorities. Calls for enhanced scrutiny will bring increased regulation and information transparency.

While extracting information from individual filings is relatively easy to perform when a machine-readable format is utilized (for example, using XBRL, the eXtensible Business Reporting Language), cross-comparison of extracted financial information can be problematic as descriptions and accounting terms vary across companies and jurisdictions. Across multiple sources the problem becomes the classical data integration problem, where a common data abstraction is necessary before functional data use can begin. Within this paper we discuss the challenges in converging financial data from multiple sources. We concentrate on integrating data from multiple sources in terms of the abstraction, linking, and consolidation activities needed to consolidate data before more sophisticated analysis algorithms can examine it for the objectives of particular information consumers (e.g., competitive analysis, regulatory compliance, or investor analysis). We base our discussion on several years researching and deploying data integration systems in both web and enterprise environments.
E. Curry, A. Harth, and S. O’Riain, “Challenges Ahead for Converging Financial Data,” in Proceedings of the XBRL/W3C Workshop on Improving Access to Financial Data on the Web, 2009.
Collaborative Data Management: How Crowdsourcing Can Help To Manage Data
Edward Curry
Data management efforts such as MDM are a popular approach to high quality enterprise data. However, MDM can be heavily centralized and labour intensive, and the cost and effort can become prohibitively high. The concentration of data management and stewardship onto a few highly skilled individuals, like developers and data experts, can be a significant bottleneck. This talk explores how to effectively involve a wider community of users in collaborative data management activities. The bottom-up approach of involving crowds in the creation and management of data has been demonstrated by projects like Freebase, Wikipedia, and DBpedia. The talk discusses how collaborative data management can be applied within an enterprise context using platforms such as Amazon Mechanical Turk, MobileWorks, and internal enterprise human computation platforms.
Topics covered include:
- Introduction to Crowdsourcing and Human Computation for Data Management
- Crowds vs. Communities, When to use them and why
- Push vs. Pull methods of crowdsourcing data management
- Setting up and running a collaborative data management process
- Modelling the expertise of communities
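A workhorse pattern in collaborative data management of this kind is redundant labelling with majority-vote aggregation: several crowd workers answer the same question, and only sufficiently agreed answers bypass the expert steward. A minimal sketch, with an invented agreement threshold:

```python
from collections import Counter

def aggregate_labels(worker_labels, min_agreement=0.6):
    """Majority-vote aggregation of redundant crowd labels: accept a
    value only when enough workers agree; otherwise escalate it to a
    data steward, keeping the scarce experts off the easy cases."""
    counts = Counter(worker_labels)
    value, votes = counts.most_common(1)[0]
    if votes / len(worker_labels) >= min_agreement:
        return value, "accepted"
    return value, "escalate-to-steward"

print(aggregate_labels(["IBM", "IBM", "I.B.M."]))   # ('IBM', 'accepted')
print(aggregate_labels(["IBM", "I.B.M.", "ibm", "IBM Corp."]))
```

The second call shows the bottleneck argument from the talk in miniature: noisy, disagreeing answers are exactly the ones routed to the expert, while the crowd absorbs the rest.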
Linked Water Data For Water Information Management
Edward Curry
The management of water consumption is hindered by low general awareness and absence of precise historical and contextual information. Effective and efficiency management of water resources requires a holistic approach considering all the stages of water usage. A decision support tool for water management services requires access to a number of different data domains and different data providers. The design of next-generation water information management systems poses significant technical challenges in terms of information management, integration of heterogeneous data, and real-time processing of dynamic data. Linked Data is a set of web technologies that enables integration of different data sources. This work investigates the usage of Linked Data technologies in the Water Management domain, describes the fundamental concepts of the approach, details an architecture, and discusses possible water management applications.
Querying Heterogeneous Datasets on the Linked Data Web
Edward Curry
The growing number of datasets published on the Web as linked data brings both opportunities for high data availability and challenges inherent to querying data in a semantically heterogeneous and distributed environment. Approaches used for querying siloed databases fail at Web-scale because users don't have an a priori understanding of all the available datasets. This article investigates the main challenges in constructing a query and search solution for linked data and analyzes existing approaches and trends.
Interactive Water Services: The Waternomics Approach
Edward Curry
WATERNOMICS focuses on the development of ICT as an enabling technology to manage water as a resource, increase end-user conservation awareness and effect behavioral change. Unique aspects of WATERNOMICS include personalized feedback about end-user water consumption, the development of systematic and standards-based water resource management systems, new sensor hardware developments, and the introduction of forecasting and fault detection diagnosis to the analysis of water consumption data. These services will be bundled into the WATERNOMICS Water Information Services Platform. This paper presents the overall architectural approach to WATERNOMICS and details the potential interactive services possible based on this novel platform.
Approximate Semantic Matching of Heterogeneous Events
Edward Curry
Event-based systems have loose coupling within space, time and synchronization, providing a scalable infrastructure for information exchange and distributed workflows. However, event-based systems are tightly coupled, via event subscriptions and patterns, to the semantics of the underlying event schema and values. The high degree of semantic heterogeneity of events in large and open deployments such as smart cities and the sensor web makes it difficult to develop and maintain event-based systems. In order to address semantic coupling within event-based systems, we propose vocabulary-free subscriptions together with the use of approximate semantic matching of events. This paper examines the requirement of event semantic decoupling and discusses approximate semantic event matching and the consequences it implies for event processing systems. We introduce a semantic event matcher and evaluate the suitability of an approximate hybrid matcher based on both thesauri-based and distributional semantics-based similarity and relatedness measures. The matcher is evaluated over a representation of Wikipedia and Freebase events. Initial evaluations show that the approach matches structured events with a maximal combined precision-recall F1 score of 75.89% on average across all experiments with a subscription set of 7 subscriptions. The evaluation shows how a hybrid approach to semantic event matching outperforms a single similarity measure approach.
Hasan S, O'Riain S, Curry E. Approximate Semantic Matching of Heterogeneous Events. In: 6th ACM International Conference on Distributed Event-Based Systems (DEBS 2012).
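To give a feel for vocabulary-free subscription matching, here is a toy approximation. The mini-thesaurus, similarity scores, and threshold are invented stand-ins for the thesauri-based and distributional measures the paper evaluates:

```python
# An event matches a subscription when every subscription term is
# semantically close enough to some term in the event, so publishers
# and subscribers need not share a vocabulary.

synonyms = {"car": {"automobile", "vehicle"}, "fire": {"blaze"}}

def term_similarity(a, b):
    if a == b:
        return 1.0
    if b in synonyms.get(a, set()) or a in synonyms.get(b, set()):
        return 0.9                      # thesaurus-based relatedness
    return 0.0                          # unknown pair: no evidence

def matches(subscription, event_terms, theta=0.8):
    """Approximate match: each subscription term must have a close
    enough counterpart among the event's terms."""
    return all(
        max(term_similarity(s, e) for e in event_terms) >= theta
        for s in subscription
    )

event = ["automobile", "blaze", "dublin"]
assert matches(["car", "fire"], event)     # approximate match succeeds
assert not matches(["flood"], event)       # below threshold: no match
```

A hybrid matcher in the paper's sense would fall back to a distributional (corpus-based) relatedness score where the thesaurus returns no evidence, instead of the flat 0.0 used here.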
Wikipedia (DBpedia): Crowdsourced Data Curation
Edward Curry
Wikipedia is an open-source encyclopedia, built collaboratively by a large community of web editors. The success of Wikipedia as one of the most important sources of information available today still challenges existing models of content creation. Despite the fact that the term ‘curation’ is not commonly addressed by Wikipedia’s contributors, the task of digital curation is the central activity of Wikipedia editors, who have the responsibility for information quality standards.
Wikipedia is already widely used as a collaborative environment inside organizations.
The investigation of the collaboration dynamics behind Wikipedia highlights important features and good practices which can be applied to different organizations. Our analysis focuses on the curation perspective and covers two important dimensions: social organization and artifacts, tools & processes for cooperative work coordination. These are key enablers that support the creation of high quality information products in Wikipedia’s decentralized environment.
An Environmental Chargeback for Data Center and Cloud Computing Consumers
Edward Curry
Government, business, and the general public increasingly agree that the polluter should pay. Carbon dioxide and environmental damage are considered viable chargeable commodities. The net effect of this for data center and cloud computing operators is that they should look to “chargeback” the environmental impacts of their services to the consuming end-users. An environmental chargeback model can have a positive effect on environmental impacts by linking consumers to the indirect impacts of their usage, facilitating clearer understanding of the impact of their actions. In this paper we motivate the need for environmental chargeback mechanisms. The environmental chargeback model is described including requirements, methodology for definition, and environmental impact allocation strategies. The paper details a proof-of-concept within an operational data center together with discussion on experiences gained and future research directions.
Curry, E.; Hasan, S.; White, M.; and Melvin, H. 2012. An Environmental Chargeback for Data Center and Cloud Computing Consumers. In Huusko, J.; de Meer, H.; Klingert, S.; and Somov, A., eds., First International Workshop on Energy-Efficient Data Centers. Madrid, Spain: Springer Berlin / Heidelberg.
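The core of an environmental chargeback is a usage-proportional allocation of measured impacts to consumers. A minimal sketch, with invented consumers and CPU-hours as the (hypothetical) allocation metric:

```python
def chargeback(total_kg_co2e, usage_by_consumer):
    """Allocate a data center's measured emissions to its consumers in
    proportion to their share of a usage metric (here, CPU-hours).
    Other allocation strategies would swap in a different metric."""
    total_usage = sum(usage_by_consumer.values())
    return {
        consumer: total_kg_co2e * usage / total_usage
        for consumer, usage in usage_by_consumer.items()
    }

bill = chargeback(1000.0, {"analytics": 600, "email": 300, "backup": 100})
# analytics bears 60% of the footprint, email 30%, backup 10%
```

The choice of metric is the interesting design decision: CPU-hours, storage, or network volume each shift the incentive the chargeback creates for end users.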
Towards Lightweight Cyber-Physical Energy Systems using Linked Data, the Web of Things, and Social Media
Edward Curry
Cyber-Physical Energy Systems (CPES) exploit the potential of information technology to boost energy efficiency while minimising environmental impacts. CPES can help manage energy more efficiently by providing a functional view of the entire energy system so that energy activities can be understood, changed, and reinvented to better support sustainable practices. CPES can be applied at different scales from Smart Grids and Smart Cities to Smart Enterprises and Smart Buildings. Significant technical challenges exist in terms of information management, leveraging real-time sensor data, and coordination of the various stakeholders to optimize energy usage.
In this talk I describe an approach to overcome these challenges by re-using Web standards to quickly connect the required systems within a CPES. The resulting lightweight architecture leverages Web technologies including Linked Data, the Web of Things, and Social Media. The talk describes the fundamentals of the approach and demonstrates it within an Enterprise Energy Management scenario in a smart building.
SLUA: Towards Semantic Linking of Users with Actions in Crowdsourcing
Edward Curry
Recent advances in web technologies allow people to help solve complex problems by performing online tasks in return for money, learning, or fun. At present, human contribution is limited to the tasks defined on individual crowdsourcing platforms. Furthermore, there is a lack of tools and technologies that support matching of tasks with appropriate users, across multiple systems. A more explicit capture of the semantics of crowdsourcing tasks could enable the design and development of matchmaking services between users and tasks. The paper presents the SLUA ontology that aims to model users and tasks in crowdsourcing systems in terms of the relevant actions, capabilities, and rewards. This model describes different types of human tasks that help in solving complex problems using crowds. The paper provides examples of describing users and tasks in some real world systems, with SLUA ontology.
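The matchmaking such a model could enable can be sketched in a few lines. The user profiles, capability names, and reward values below are illustrative stand-ins, not the SLUA ontology's actual terms:

```python
# Tasks declare required capabilities and an offered reward; users
# declare the capabilities they hold. Matchmaking pairs each task with
# the users whose capabilities cover its requirements.

users = {
    "alice": {"capabilities": {"translation", "french"}},
    "bob":   {"capabilities": {"image-labelling"}},
}

tasks = {
    "t1": {"requires": {"translation", "french"}, "reward": "money"},
    "t2": {"requires": {"image-labelling"},       "reward": "fun"},
}

def match(users, tasks):
    """For each task, list users whose capability set covers it."""
    return {
        task_id: [name for name, profile in users.items()
                  if task["requires"] <= profile["capabilities"]]
        for task_id, task in tasks.items()
    }

assignments = match(users, tasks)   # {'t1': ['alice'], 't2': ['bob']}
```

Expressing the same profiles in a shared ontology is what would let this matching run across crowdsourcing platforms rather than inside a single one, which is the gap the paper identifies.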
System of Systems Information Interoperability using a Linked Dataspace
Edward Curry
Systems of Systems pose significant technical challenges in terms of information interoperability that require overcoming conceptual barriers (both syntactic and semantic) and technological barriers. This paper presents an approach to System of Systems information interoperability based on the Dataspace data management abstraction and the Linked Data approach to sharing information on the web. The paper describes the fundamentals of the approach and demonstrates the concept with a System of Systems for enterprise energy management.
Curry E. System of Systems Information Interoperability using a Linked Dataspace. In: IEEE 7th International Conference on System of Systems Engineering (SOSE 2012)
Further Reading:
http://www.edwardcurry.org/publications/Curry_LinkedDataspaceForSOS_SOSE.pdf
Big Data Analytics: A New Business Opportunity
Edward Curry
This talk introduces Big Data analytics and how they can be used to deliver value within organisations. The talk will cover the transformational potential of creating data value chains between different sectors. Developing a Big Data analytics capability will be discussed in addition to the challenges facing the emerging data economy.
Crowdsourcing Approaches to Big Data Curation - Rio Big Data Meetup
Edward Curry
Data management efforts such as Master Data Management and Data Curation are a popular approach to high quality enterprise data. However, Data Curation can be heavily centralised and labour intensive, and the cost and effort can become prohibitively high. The concentration of data management and stewardship onto a few highly skilled individuals, like developers and data experts, can be a significant bottleneck. This talk explores how to effectively involve a wider community of users in big data management activities. The bottom-up approach of involving crowds in the creation and management of data has been demonstrated by projects like Freebase, Wikipedia, and DBpedia. The talk discusses how crowdsourcing data management techniques can be applied within an enterprise context.
Topics covered include:
- Data Quality And Data Curation
- Crowdsourcing
- Case Studies on Crowdsourced Data Curation
- Setting up a Crowdsourced Data Curation Process
- Linked Open Data Example
- Future Research Challenges
Designing Next Generation Smart City Initiatives: Harnessing Findings and Les...
Edward Curry
The proliferation of “Smart Cities” initiatives around the world is part of the strategic response by governments to the challenges and opportunities of increasing urbanization and the rise of cities as the nexus of societal development. As a framework for urban transformation, Smart City initiatives aim to harness Information and Communication Technologies and Knowledge Infrastructures for economic regeneration, social cohesion, better city administration and infrastructure management. However, experiences from earlier Smart City initiatives have revealed several technical, management and governance challenges arising from the inherent nature of a Smart City as a complex “Socio-technical System of Systems”. While these early lessons are informing modest objectives for planned Smart City programs, no rigorously developed framework based on careful analysis of existing initiatives is available to guide policymakers, practitioners, and other Smart City stakeholders. In response to this need, this paper presents a “Smart City Initiative Design (SCID) Framework” grounded in the findings from the analysis of ten major Smart City programs from the Netherlands, Sweden, Malta, the United Arab Emirates, Portugal, Singapore, Brazil, South Korea, China and Japan. The findings provide a design space for the objectives, implementation options, strategies, and the enabling institutional and governance mechanisms for Smart City initiatives.
Developing a Sustainable IT Capability: Lessons From Intel's Journey - Edward Curry
Intel Corporation set itself a goal to reduce its global-warming greenhouse gas footprint by 20% by 2012 from 2007 levels. Through the use of sustainable IT, the Intel IT organization is recognized as a significant contributor to the company’s sustainability strategy by transforming its IT operations and overall Intel operations. This article describes how Intel has achieved IT sustainability benefits thus far by developing four key capabilities. These capabilities have been incorporated into the Sustainable ICT Capability Maturity Framework (SICT-CMF), a model developed by an industry consortium in which the authors were key participants. The article ends with lessons learned from Intel’s experiences that can be applied by business and IT executives in other enterprises.
Key Technology Trends for Big Data in Europe - Edward Curry
In this presentation we will discuss some of the results of the BIG project including analysis of foundational Big Data research technologies, technology and strategy roadmaps to enable business to understand the potential of Big Data technologies across different sectors, and the necessary collaboration and dissemination infrastructure to link technology suppliers, integrators and leading user organizations.
Edward Curry is leading the Technical Working Group of the BIG Project with over 30 committed experts along the big data value chain (Acquisition, Analysis, Curation, Storage, Usage). With the help of the other technical leads, he will elaborate on the key technology trends identified in the BIG Project and how they bring data-driven value to industrial sectors.
Improving Policy Coherence and Accessibility through Semantic Web Technologie... - Edward Curry
The complexity, volume and diversity of government policies and regulations places a significant burden on both the complying parties and government itself. On the one hand, businesses, civil organizations and other societal entities are required to simultaneously comply with and interpret different and possibly conflicting or inconsistent regulations. On the other hand, government as a whole must ensure policy and regulatory coherence across its various policy domains. While the recent wave of open government initiatives has led to significantly more public access to these documents, features that allow cross-referencing related documents, or linking to less formal documents and comments on other media that make them more understandable and accessible to the public, are rarely, if at all, available today. As a solution to this challenge, we propose an Open Government-wide Policy and Regulation Information Space consisting of documents that are “semantically” annotated and cross-linked to other documents in the information space as well as to external resources such as interpretations, comments and blogs on the social web.
Our approach is three-fold. First, we identify the requirements for the infrastructure. Second, we elaborate a Reference Architecture identifying the various elements needed within the infrastructure. Third, we show how such an infrastructure may be realised as a linked data portal where policies and regulations are published as linked open data. Finally, we present a case study involving environmental policy and regulations, discuss the potential impact of such an infrastructure on the coherency and accessibility of policies and regulations, and conclude with the challenges associated with provisioning a linked open policy and regulatory information infrastructure.
Crowdsourcing Approaches for Smart City Open Data Management - Edward Curry
A wide-scale bottom-up approach to the creation and management of open data has been demonstrated by projects like Freebase, Wikipedia, and DBpedia. This talk explores how to involve a wide community of users in the collaborative management of open data within a Smart City. The talk discusses how crowdsourcing techniques can be applied within a Smart City context using crowdsourcing and human computation platforms such as Amazon Mechanical Turk, MobileWorks, and CrowdFlower.
Building Optimisation using Scenario Modeling and Linked Data - Edward Curry
As buildings become more complex, it becomes more difficult to manage and operate them effectively. The holistic management and maintenance of facilities is a multi-domain problem encompassing financial accounting, building maintenance, human resources, asset management and code compliance, affecting different stakeholders in different ways. One technique, called scenario modelling, customises data-driven decision support for building managers during building operation. However, current implementations of scenario modelling have been limited to data from Building Management Systems with little interaction with other relevant data sources due to interoperability issues. Linked Data helps to overcome interoperability challenges to enable data from multiple domains to be merged into holistic scenario models for different stakeholders of the building. The approach is demonstrated using an owner-occupied office building.
Challenges Ahead for Converging Financial Data - Edward Curry
Consumers of financial information come in many guises, from personal investors looking for that value-for-money share, to government regulators investigating corporate fraud, to business executives seeking competitive advantage over their competition. While the particular analysis performed by each of these information consumers will vary, they all have to deal with the explosion of information available from multiple sources including SEC filings, corporate press releases, market press coverage, and expert commentary. Recent economic events have begun to bring sharp focus on the activities and actions of financial markets, institutions and not least regulatory authorities. Calls for enhanced scrutiny will bring increased regulation and information transparency. While extracting information from individual filings is relatively easy to perform when a machine-readable format is utilized (for example, using XBRL, the eXtensible Business Reporting Language), cross comparison of extracted financial information can be problematic as descriptions and accounting terms vary across companies and jurisdictions. Across multiple sources the problem becomes the classical data integration problem where a common data abstraction is necessary before functional data use can begin. Within this paper we discuss the challenges in converging financial data from multiple sources. We concentrate on integrating data from multiple sources in terms of the abstraction, linking, and consolidation activities needed to consolidate data before more sophisticated analysis algorithms can examine the data for the objectives of particular information consumers (e.g., competitive analysis, regulatory compliance, or investor analysis). We base our discussion on several years researching and deploying data integration systems in both web and enterprise environments.
E. Curry, A. Harth, and S. O’Riain, “Challenges Ahead for Converging Financial Data,” in Proceedings of the XBRL/W3C Workshop on Improving Access to Financial Data on the Web, 2009.
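The abstraction step the paper describes, mapping jurisdiction-specific accounting terms onto one shared vocabulary before cross comparison, can be illustrated with a small sketch. The mapping table, tag names, and figures below are invented for illustration; real XBRL taxonomies are far larger and mappings are rarely this clean.

```python
# Hypothetical sketch: company-specific XBRL-style tags are abstracted onto
# a canonical vocabulary so filings become directly comparable. The mapping
# and all figures are invented for illustration.

CANONICAL = {
    "us-gaap:Revenues": "revenue",
    "ifrs:Turnover": "revenue",
    "us-gaap:NetIncomeLoss": "net_income",
    "ifrs:ProfitLoss": "net_income",
}

def consolidate(filing):
    """Map jurisdiction-specific tags to shared concepts, dropping tags
    the abstraction layer does not yet cover."""
    out = {}
    for tag, value in filing.items():
        concept = CANONICAL.get(tag)
        if concept is not None:
            out[concept] = value
    return out

us_filing = {"us-gaap:Revenues": 120.0, "us-gaap:NetIncomeLoss": 9.5}
eu_filing = {"ifrs:Turnover": 80.0, "ifrs:ProfitLoss": 6.1}

# Only after abstraction can an analysis algorithm compare the two filings.
print(consolidate(us_filing), consolidate(eu_filing))
```

In practice this mapping is where most of the integration effort lies, since term meanings shift subtly across accounting standards; the sketch only shows where the abstraction sits in the pipeline.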
Collaborative Data Management: How Crowdsourcing Can Help To Manage Data - Edward Curry
Data management efforts such as Master Data Management (MDM) are popular approaches to ensuring high-quality enterprise data. However, MDM can be heavily centralized and labour intensive, where the cost and effort can become prohibitively high. The concentration of data management and stewardship onto a few highly skilled individuals, like developers and data experts, can be a significant bottleneck. This talk explores how to effectively involve a wider community of users within collaborative data management activities. The bottom-up approach of involving crowds in the creation and management of data has been demonstrated by projects like Freebase, Wikipedia, and DBpedia. The talk discusses how collaborative data management can be applied within an enterprise context using platforms such as Amazon Mechanical Turk, MobileWorks, and internal enterprise human computation platforms.
Topics covered include:
- Introduction to Crowdsourcing and Human Computation for Data Management
- Crowds vs. Communities, When to use them and why
- Push vs. Pull methods of crowdsourcing data management
- Setting up and running a collaborative data management process
- Modelling the expertise of communities
Linked Water Data For Water Information Management - Edward Curry
The management of water consumption is hindered by low general awareness and the absence of precise historical and contextual information. Effective and efficient management of water resources requires a holistic approach considering all the stages of water usage. A decision support tool for water management services requires access to a number of different data domains and different data providers. The design of next-generation water information management systems poses significant technical challenges in terms of information management, integration of heterogeneous data, and real-time processing of dynamic data. Linked Data is a set of web technologies that enables integration of different data sources. This work investigates the usage of Linked Data technologies in the Water Management domain, describes the fundamental concepts of the approach, details an architecture, and discusses possible water management applications.
Querying Heterogeneous Datasets on the Linked Data Web - Edward Curry
The growing number of datasets published on the Web as linked data brings both opportunities for high data availability and challenges inherent to querying data in a semantically heterogeneous and distributed environment. Approaches used for querying siloed databases fail at Web-scale because users don't have an a priori understanding of all the available datasets. This article investigates the main challenges in constructing a query and search solution for linked data and analyzes existing approaches and trends.
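The core difficulty named above, that users lack a priori knowledge of the available datasets, is usually tackled by source selection: matching the query against lightweight dataset summaries first, then evaluating only over the candidates. The sketch below is a toy illustration of that idea, not the article's method; dataset names, summaries, and facts are all invented.

```python
# Hypothetical sketch of Web-scale querying: the engine does not know in
# advance which datasets hold the answer, so it first selects candidate
# datasets whose summaries overlap the query terms, then evaluates the
# query only over those. All names and contents are invented.

DATASETS = {
    "dbpedia-like": {"summary": {"city", "person", "film"},
                     "facts": [("Galway", "type", "city")]},
    "geo-source":   {"summary": {"city", "river", "mountain"},
                     "facts": [("Galway", "population", "80000")]},
    "bio-source":   {"summary": {"protein", "gene"},
                     "facts": [("BRCA1", "type", "gene")]},
}

def select_datasets(query_terms):
    """Source selection: keep datasets whose summary overlaps the query."""
    return [name for name, ds in DATASETS.items()
            if ds["summary"] & query_terms]

def query(subject, query_terms):
    """Evaluate a simple lookup over the selected datasets only."""
    results = []
    for name in select_datasets(query_terms):
        results += [f for f in DATASETS[name]["facts"] if f[0] == subject]
    return results

# Facts about Galway are gathered from two sources; bio-source is skipped.
print(query("Galway", {"city"}))
```

Summaries keep the selection step cheap relative to querying every dataset, which is what makes the approach plausible at Web scale.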
Interactive Water Services: The Waternomics Approach - Edward Curry
WATERNOMICS focuses on the development of ICT as an enabling technology to manage water as a resource, increase end-user conservation awareness and affect behavioral changes. Unique aspects of WATERNOMICS include personalized feedback about end-user water consumption, the development of systematic and standards-based water resource management systems, new sensor hardware developments, and the introduction of forecasting and fault detection diagnosis to the analysis of water consumption data. These services will be bundled into the WATERNOMICS Water Information Services Platform. This paper presents the overall architectural approach to WATERNOMICS and details the potential interactive services possible based on this novel platform.
Approximate Semantic Matching of Heterogeneous Events - Edward Curry
Event-based systems have loose coupling within space, time and synchronization, providing a scalable infrastructure for information exchange and distributed workflows. However, event-based systems are tightly coupled, via event subscriptions and patterns, to the semantics of the underlying event schema and values. The high degree of semantic heterogeneity of events in large and open deployments such as smart cities and the sensor web makes it difficult to develop and maintain event-based systems. In order to address semantic coupling within event-based systems, we propose vocabulary-free subscriptions together with the use of approximate semantic matching of events. This paper examines the requirement of event semantic decoupling and discusses approximate semantic event matching and the consequences it implies for event processing systems. We introduce a semantic event matcher and evaluate the suitability of an approximate hybrid matcher based on both thesauri-based and distributional semantics-based similarity and relatedness measures. The matcher is evaluated over a representation of Wikipedia and Freebase events. Initial evaluations show that the approach matches events with a maximal combined precision-recall F1 score of 75.89% on average in all experiments with a subscription set of 7 subscriptions. The evaluation shows how a hybrid approach to semantic event matching outperforms a single similarity measure approach.
Hasan S, O'Riain S, Curry E. Approximate Semantic Matching of Heterogeneous Events. In: 6th ACM International Conference on Distributed Event-Based Systems (DEBS 2012).
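The approximate matching idea can be sketched compactly: a subscription matches an event when every subscription term is identical to, or sufficiently related to, some event term. The tiny relatedness table below stands in for the thesauri-based and distributional measures the paper evaluates; its entries, the terms, and the threshold are invented for illustration.

```python
# Hypothetical sketch of approximate semantic event matching. A real
# matcher would derive relatedness from thesauri (e.g. WordNet-style) or
# distributional semantics; this toy table is invented for illustration.

RELATEDNESS = {
    ("blaze", "fire"): 0.8,
    ("car", "automobile"): 0.9,
    ("rain", "fire"): 0.05,
}

def related(a, b):
    """Symmetric relatedness score in [0, 1]; identical terms score 1."""
    if a == b:
        return 1.0
    return max(RELATEDNESS.get((a, b), 0.0), RELATEDNESS.get((b, a), 0.0))

def matches(subscription, event_terms, theta=0.7):
    """Approximate match: every subscription term must have some event
    term related to it with a score of at least theta."""
    return all(any(related(s, e) >= theta for e in event_terms)
               for s in subscription)

print(matches({"fire"}, {"blaze", "downtown"}))  # related via the table
print(matches({"fire"}, {"rain", "downtown"}))   # too weakly related
```

This is what decouples subscribers from the publisher's vocabulary: a subscription for "fire" can match an event reported as "blaze" without either side agreeing on a schema.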
Wikipedia (DBpedia): Crowdsourced Data Curation - Edward Curry
Wikipedia is an open-source encyclopedia, built collaboratively by a large community of web editors. The success of Wikipedia as one of the most important sources of information available today still challenges existing models of content creation. Despite the fact that the term ‘curation’ is not commonly addressed by Wikipedia’s contributors, the task of digital curation is the central activity of Wikipedia editors, who have the responsibility for information quality standards.
Wikipedia is already widely used as a collaborative environment inside organizations.
The investigation of the collaboration dynamics behind Wikipedia highlights important features and good practices which can be applied to different organizations. Our analysis focuses on the curation perspective and covers two important dimensions: social organization and artifacts, tools & processes for cooperative work coordination. These are key enablers that support the creation of high quality information products in Wikipedia’s decentralized environment.
An Environmental Chargeback for Data Center and Cloud Computing Consumers - Edward Curry
Government, business, and the general public increasingly agree that the polluter should pay. Carbon dioxide and environmental damage are considered viable chargeable commodities. The net effect of this for data center and cloud computing operators is that they should look to “chargeback” the environmental impacts of their services to the consuming end-users. An environmental chargeback model can have a positive effect on environmental impacts by linking consumers to the indirect impacts of their usage, facilitating clearer understanding of the impact of their actions. In this paper we motivate the need for environmental chargeback mechanisms. The environmental chargeback model is described including requirements, methodology for definition, and environmental impact allocation strategies. The paper details a proof-of-concept within an operational data center together with discussion on experiences gained and future research directions.
Curry, E.; Hasan, S.; White, M.; and Melvin, H. 2012. An Environmental Chargeback for Data Center and Cloud Computing Consumers. In Huusko, J.; de Meer, H.; Klingert, S.; and Somov, A., eds., First International Workshop on Energy-Efficient Data Centers. Madrid, Spain: Springer Berlin / Heidelberg.
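One simple allocation strategy consistent with the model above is proportional chargeback: the facility's total emissions are attributed to consumers according to their share of a usage metric. The sketch below is an invented illustration of that strategy; the PUE, emission factor, usage metric, and all figures are assumptions, not values from the paper.

```python
# Hypothetical sketch of proportional environmental chargeback. All
# constants below (energy, PUE, emission factor, usage) are invented.

IT_ENERGY_KWH = 10_000.0   # energy drawn by IT equipment in the period
PUE = 1.5                  # power usage effectiveness of the facility
EMISSION_FACTOR = 0.4      # kg CO2e per kWh of grid electricity

facility_kwh = IT_ENERGY_KWH * PUE          # include cooling/overheads
total_co2e = facility_kwh * EMISSION_FACTOR  # 6000 kg CO2e in this example

# Usage metric chosen for allocation: server-hours per consumer.
usage = {"team-a": 600.0, "team-b": 300.0, "team-c": 100.0}

def chargeback(usage, total_co2e):
    """Allocate total emissions to each consumer by usage share."""
    total = sum(usage.values())
    return {who: total_co2e * hours / total for who, hours in usage.items()}

bill = chargeback(usage, total_co2e)
print(bill)
```

The choice of usage metric (server-hours, CPU time, storage, network) is itself a design decision the paper discusses under allocation strategies; the arithmetic stays the same whichever metric is picked.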
Towards Lightweight Cyber-Physical Energy Systems using Linked Data, the Web ... - Edward Curry
Cyber-Physical Energy Systems (CPES) exploit the potential of information technology to boost energy efficiency while minimising environmental impacts. CPES can help manage energy more efficiently by providing a functional view of the entire energy system so that energy activities can be understood, changed, and reinvented to better support sustainable practices. CPES can be applied at different scales from Smart Grids and Smart Cities to Smart Enterprises and Smart Buildings. Significant technical challenges exist in terms of information management, leveraging real-time sensor data, and the coordination of the various stakeholders to optimize energy usage.
In this talk I describe an approach to overcome these challenges by re-using Web standards to quickly connect the required systems within a CPES. The resulting lightweight architecture leverages Web technologies including Linked Data, the Web of Things, and Social Media. The talk describes the fundamentals of the approach and demonstrates it within an Enterprise Energy Management scenario in a smart building.
SLUA: Towards Semantic Linking of Users with Actions in Crowdsourcing - Edward Curry
Recent advances in web technologies allow people to help solve complex problems by performing online tasks in return for money, learning, or fun. At present, human contribution is limited to the tasks defined on individual crowdsourcing platforms. Furthermore, there is a lack of tools and technologies that support matching of tasks with appropriate users across multiple systems. A more explicit capture of the semantics of crowdsourcing tasks could enable the design and development of matchmaking services between users and tasks. The paper presents the SLUA ontology, which aims to model users and tasks in crowdsourcing systems in terms of the relevant actions, capabilities, and rewards. This model describes different types of human tasks that help in solving complex problems using crowds. The paper provides examples of describing users and tasks in some real-world systems with the SLUA ontology.
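The matchmaking service that such an ontology enables can be sketched as follows: users are described by their capabilities and accepted reward types, tasks by required capabilities and an offered reward, and a task is routed to users who satisfy both. The field names and data below are invented for illustration; they are not the actual SLUA ontology terms, which are defined in RDF.

```python
# Hypothetical sketch of SLUA-style matchmaking between users and tasks.
# All names, fields, and values are invented for illustration.

users = [
    {"name": "alice",
     "capabilities": {"translation", "proofreading"},
     "accepts": {"money", "fun"}},
    {"name": "bob",
     "capabilities": {"image-labelling"},
     "accepts": {"learning"}},
]

task = {"requires": {"translation"}, "offers": "money"}

def matchmake(task, users):
    """Return users whose capabilities cover the task's requirements and
    who are willing to accept the reward the task offers."""
    return [u["name"] for u in users
            if task["requires"] <= u["capabilities"]
            and task["offers"] in u["accepts"]]

print(matchmake(task, users))
```

Because both sides are described in shared terms, the same matchmaking logic could in principle route tasks across multiple crowdsourcing platforms, which is the cross-system gap the paper identifies.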
Big Data: Beyond the hype, Delivering value - Edward Curry
Big Data: Beyond the hype, Delivering value explains Big Data technology and how it is transforming industry and society to members of the IDEAL-IST project.
IDEAL-IST is an international ICT (Information and Communication Technologies) network, with more than 65 ICT national partners from EU and Non-EU Countries. It assists ICT companies and research organizations worldwide wishing to find project partners for a participation in the Horizon 2020 program of the European Commission.
Sustainable IT for Energy Management: Approaches, Challenges, and Trends - Edward Curry
An invited talk to the Galway-Mayo Institute of Technology on the current state of the art in Sustainable IT for energy management, the challenges, and the emerging trends.
The Role of Community-Driven Data Curation for Enterprises - Edward Curry
With increased utilization of data within their operational and strategic processes, enterprises need to ensure data quality and accuracy. Data curation is a process that can ensure the quality of data and its fitness for use. Traditional approaches to curation are struggling with increased data volumes, and near real-time demands for curated data. In response, curation teams have turned to community crowd-sourcing and semi-automated metadata tools for assistance. This chapter provides an overview of data curation, discusses the business motivations for curating data and investigates the role of community-based data curation, focusing on internal communities and pre-competitive data collaborations. The chapter is supported by case studies from Wikipedia, The New York Times, Thomson Reuters, Protein Data Bank and ChemSpider upon which best practices for both social and technical aspects of community-driven data curation are described.
E. Curry, A. Freitas, and S. O’Riáin, “The Role of Community-Driven Data Curation for Enterprises,” in Linking Enterprise Data, D. Wood, Ed. Boston, MA: Springer US, 2010, pp. 25-47.
A Capability Maturity Framework for Sustainable ICT - Edward Curry
Researchers estimate that information and communication technology (ICT) is responsible for at least 2 percent of global greenhouse gas (GHG) emissions. Furthermore, in any individual business, ICT is responsible for a much higher percentage of that business's GHG footprint. Yet researchers also estimate that ICT can provide business solutions to reduce its GHG footprint fivefold. However, because the field is new and evolving, few guidelines and best practices are available. To address this issue, a consortium of leading organizations from industry, the nonprofit sector, and academia has developed and tested a framework for systematically assessing and improving SICT capabilities. The Innovation Value Institute (IVI; http://ivi.nuim.ie) consortium used an open-innovation model of collaboration, engaging academia and industry in scholarly work to create the SICT-Capability Maturity Framework (SICT-CMF), which is discussed in this paper.
B. Donnellan, C. Sheridan, and E. Curry, “A Capability Maturity Framework for Sustainable Information and Communication Technology,” IEEE IT Professional, vol. 13, no. 1, pp. 33-40, Jan. 2011.
The New York Times (NYT) is the largest metropolitan newspaper and the third-largest newspaper in the United States. The Times website, nytimes.com, is ranked as the most popular newspaper website in the United States and is an important source of advertising revenue for the company. The NYT has a rich history of curating its articles, and its 100-year-old curated repository has ultimately defined its participation as one of the first players in the emerging Web of Data.
E. Curry, A. Freitas, and S. O’Riáin, “The Role of Community-Driven Data Curation for Enterprises,” in Linking Enterprise Data, D. Wood, Ed. Boston, MA: Springer US, 2010, pp. 25-47.
New Horizons for a Data-Driven Economy – A Roadmap for Big Data in Europe - inside-BigData.com
In this video from the ISC Big Data'14 Conference, Edward Curry from NUI Galway and Nuria de Lama Sanchez from Atos present: New Horizons for a Data-Driven Economy – A Roadmap for Big Data in Europe.
"In this talk we summarize the results of the BIG project including analysis of foundational Big Data research technologies, technology and strategy roadmaps to enable business to understand the potential of Big Data technologies across different sectors, together with the necessary collaboration and dissemination infrastructure to link technology suppliers, integrators and leading user organizations."
Learn more:
http://www.isc-events.com/bigdata14/schedule.html
and
http://big-project.eu/
Watch the video presentation: http://wp.me/p3RLEV-37G
EDF2014: Marta Nagy-Rothengass, Head of Unit Data Value Chain, Directorate Ge... - European Data Forum
PPP on Data & Executive Panel on Big Data, Introduction by Marta Nagy-Rothengass, Head of Unit Data Value Chain, Directorate General for Communications Networks, Content and Technology, at the European Data Forum 2014, 20 March 2014 in Athens, Greece: Towards a Data Value Chain Partnership in Europe.
"Towards Value-Centric Big Data" e-SIDES Workshop - Slide-deck - e-SIDES.eu
This is the slide-deck of the workshop held at the BDV Meet-UP on June 27, 2019 in Riga, titled "Towards Value-Centric Big Data". It includes the presentations given by the speakers.
The event presents real-life examples from European organisations that have used the Rulebook for Fair Data Economy to develop data-driven business. The online event was organised on 3 March 2021 by Sitra.
Presentations:
- Jaana Sinipuro, Sitra
- Olli Pitkänen, 1001 Lakes
- Marko Turpeinen, 1001 Lakes
- Lars Nagel, International Data Spaces Association
- Cátia Pinto, Serviços Partilhados do Ministério da Saúde
- Matthias De Bièvre, aNewGovernance
FinTech and InsuranceTech case studies digitally transforming Europe's future with BigData and AI
The new data-driven industrial revolution highlights the need for big data technologies to unlock the potential in various application domains. The insurance and financial services industry is being rapidly transformed by data-intensive operations and applications. FinTech and InsuranceTech combine very large datasets from legacy banking systems with other data sources such as financial markets data, regulatory datasets, real-time retail transactions, and more, improving financial services and activities for customers.
Europe needs a clear strategy for leveraging Big Data
The goal is a successful Big Data Economy in Europe. Our objectives are to work at the technical, business and policy levels, shaping the future through the positioning of Big Data in Horizon 2020, and bringing the necessary stakeholders into a sustainable industry-led initiative that will greatly contribute to enhancing EU competitiveness by taking full advantage of Big Data technologies.
In the third part of the workshop series Smart Policies for Data, we will focus on two central building blocks – interoperability and balanced data sharing.
The presentations of the event:
- Szymon Lewandowski, DG CONNECT, European Commission
- Marko Turpeinen, CEO, 1001 Lakes
- Lars Nagel, CEO, International Data Spaces Association
Presentation: Data Activities in Austria, Lisbeth Mosnik, BMVIT (AT), at the European Data Economy Workshop, held back-to-back with SEMANTiCS 2015 on 15 September 2015 in Vienna.
Gaia-X and how to accelerate growth – pathway to EU funding webinar 10 March ... - Sitra / Hyvinvointi
The webinar is organised as part of the Finnish Gaia-X Hub coordination. The webinar trainers from Spinverse Oy are experts in the field of EU funding.
If you are interested and want to explore the EU funding schemes, existing opportunities, modalities and hints on applying or just refresh your knowledge, join us for this webinar and learn about:
- European programmes focusing on digital technologies
- How to work with EU calls for proposals
- How to identify EU funding opportunities
- How to find project partners and build a successful consortium
- Practical tips on how to create winning applications.
The webinar is open to anyone interested in the topic of EU funding and will bring benefits to everyone, in particular small and medium-sized enterprises. The focus of the webinar will be the Digital Europe Programme, but we will also explore other opportunities. https://www.sitra.fi/en/events/gaia-x-and-how-to-accelerate-growth-pathway-to-eu-funding/
2. European Data Forum 2014
BIG
Big Data Public Private Forum
NEELIE KROES @ EDF 2014… A CALL FOR ACTION
"A public private partnership… can be a powerful way to work together… Public money is not free money. Before you can unlock it you need a very clear plan, showing how any public investment will work, how it connects to the activities around it, and how it will pay off… we need a Strategic Research and Innovation Agenda… from a broad, inclusive and representative basis, pulling together different priorities, so they make sense… we need to do all this quickly, and to the highest quality"
Neelie Kroes, Vice-President of the European Commission
3. AGENDA
11:20 A Big Data Value Innovation Ecosystem for Europe: A Business Perspective – Harald Schöning, Software AG, NESSI Board
11:35 Towards a Big Data Public Private Partnership – Nuria De Lama, Atos, NESSI, BIG
11:45 Launch of the PPP/Consultation Process – Marta Nagy-Rothengass, Head of Unit Data Value Chain, DG CONNECT
11:50 The Big Data Value Chain – Edward Curry, NUIG, BIG
12:00 Big Data Transformations of Sectors – Helen Lippell, Press Association; Sonja Zillner, Siemens
12:10 Question & Answers Session – All
4. The BIG Project
BIG aims to promote a well-developed EU industrial landscape in Big Data:
▶ Providing a clear picture of existing technology trends and their maturity
▶ Acquiring a sharp understanding of how Big Data can be applied to concrete environments / use cases
▶ Pushing European Big Data research and innovation to contribute to increasing European competitiveness
▶ Building a self-sustainable, industry-led initiative
Overall Objective
Work at technical, business and policy levels, shaping the future through the positioning of IIM and Big Data specifically in Horizon 2020. Bring the necessary stakeholders into a self-sustainable, industry-led initiative, which will greatly contribute to enhancing EU competitiveness by taking full advantage of Big Data technologies.
5. What is NESSI?
"NESSI is the European Technology Platform for the new Digital Information Society and Economy 2.0, powered by software and services and data"
The Mission
▶ Provide thought leadership in Europe on the convergence of the networks of data, things and services
▶ Drive convergence and transformation by strengthening and advancing the software- and service-based economies in Europe
▶ Stimulate the creation of ecosystems around software, services, and data
▶ Provide visionary, comprehensive input to the European Commission, thereby shaping the European research and innovation agenda
The Challenge
▶ To promote the long-term importance of the Software and Services ecosystem and innovations for Europe's competitiveness
The Software & Services Focus Areas
▶ Big Data Value
▶ Cloud Computing
▶ Cyber Physical Systems
▶ All underpinned by Software Engineering
6. The role of BIG & NESSI
• Both BIG and NESSI have aligned interests in fostering Big Data Value creation for Europe through Research & Innovation activities
• BIG and NESSI are facilitators in the process of establishing a PPP
www.bigdatavalue.eu
7. THE BIG DATA VALUE CHAIN
Edward Curry, National University of Ireland Galway
European Data Forum, Athens, 20 February 2014
BIG 318062
8. SECTORIAL FORUMS AND TECHNICAL WORKING GROUPS
Industry-Driven Sectorial Forums (needs): Health; Public Sector; Finance & Insurance; Telco, Media & Entertainment; Manufacturing, Retail, Energy, Transport
Technical Working Groups (offerings) cover the Big Data Value Chain:
▶ Data Acquisition: Structured data, Unstructured data, Event processing, Sensor networks, Protocols, Real-time, Data streams, Multimodality
▶ Data Analysis: Stream mining, Semantic analysis, Machine learning, Information extraction, Linked Data, Data discovery, 'Whole world' semantics, Ecosystems, Community data analysis, Cross-sectorial data analysis
▶ Data Curation: Data Quality, Trust / Provenance, Annotation, Data validation, Human-Data Interaction, Top-down/Bottom-up, Community / Crowd, Human Computation, Curation at scale, Incentivisation, Automation, Interoperability
▶ Data Storage: In-Memory DBs, NoSQL DBs, NewSQL DBs, Cloud storage, Query Interfaces, Scalability and Performance, Data Models, Consistency / Availability / Partition-tolerance, Security and Privacy, Standardization
▶ Data Usage: Decision support, Prediction, In-use analytics, Simulation, Exploration, Visualisation, Modeling, Control, Domain-specific usage
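The acquisition topics above include event processing over real-time data streams. As a purely illustrative sketch (plain Python; the class name, window size, and threshold are invented for this example and are not project code), a sliding window over a stream is the basic building block of such processing:

```python
from collections import deque
from statistics import mean

class SlidingWindowDetector:
    """Minimal event-processing sketch: keep the last `size` readings
    and flag any new value far above the current window mean."""

    def __init__(self, size=4, threshold=2.0):
        self.window = deque(maxlen=size)  # old readings drop off automatically
        self.threshold = threshold

    def push(self, value):
        # Flag the value if it exceeds threshold x the mean of recent readings.
        alert = bool(self.window) and value > self.threshold * mean(self.window)
        self.window.append(value)
        return alert

detector = SlidingWindowDetector(size=4, threshold=2.0)
readings = [10, 11, 9, 10, 30, 10]
alerts = [r for r in readings if detector.push(r)]
print(alerts)  # → [30]
```

Real deployments would layer pattern matching, time-based windows, and distributed execution on top, but the window-then-test structure is the same.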
9. TECHNICAL WORKGROUP APPROACH
Methodology
1. Literature & Technical Survey
2. Subject Matter Expert Interviews
3. Stakeholder Workshops
4. Online Questionnaire (with NESSI)
Target Interviewee
• Early adopters
• Business enablement
• Technical maturity
• Key Opinion Leaders
Interviewee Breakdown
[Charts: interviewees by position in organisation (Senior Academic, Senior Management, Middle Researcher, Middle Management) and by type of organisation (University, MNC, SME, Other)]
10. SUBJECT MATTER EXPERT INTERVIEWS
11. WORKING GROUP RESULTS
Interviews and Technical White Papers available on: http://www.big-project.eu
Expert Interviews and Technical Whitepapers cover:
▶ Executive Overview
▶ Key Insights
▶ Social & Economic Impact
▶ Concise State of the Art
▶ Future Requirements & Emerging Trends
▶ Sector-specific Case Studies
Next versions released in April.
12. KEY INSIGHTS
Key Trends
▶ Lower usability barrier for data tools
▶ Blended human and algorithmic data processing for coping with data quality
▶ Leveraging large communities (crowds)
▶ Need for standardized semantic data representation
▶ Significant increase in the use of new data models, e.g. graph (expressivity and flexibility)
The Data Landscape
▶ Much (Big Data) technology is evolving in an evolutionary way
▶ But business process change must be revolutionary
▶ Data variety and verifiability are key opportunities
▶ The long tail of data variety is a major shift in the data landscape
Biggest Blockers
▶ Lack of business-driven Big Data strategies
▶ Need for format and data storage technology standards
▶ Data exchange between companies, institutions, individuals, etc.
▶ Regulations & markets for data access
▶ Human resources: lack of skilled data scientists
Technical White Papers available on: http://www.big-project.eu
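One trend flagged above is the growing use of graph data models for their expressivity and flexibility. A minimal sketch of why (plain Python dicts, no graph database; the node and edge names are invented for this example): nodes carry arbitrary attributes, edges carry labels, and new relationship types can be added without changing any schema.

```python
# Property-graph sketch: attribute-bearing nodes plus labelled edges.
nodes = {
    "acme":  {"type": "Company", "sector": "Retail"},
    "alice": {"type": "Person"},
    "d1":    {"type": "Dataset", "open": True},
}
edges = [
    ("alice", "works_for", "acme"),
    ("acme",  "publishes", "d1"),
]

def neighbours(node, label=None):
    """Follow outgoing edges from `node`, optionally filtered by edge label."""
    return [dst for src, lbl, dst in edges
            if src == node and (label is None or lbl == label)]

print(neighbours("acme"))                # → ['d1']
print(neighbours("alice", "works_for"))  # → ['acme']
```

Adding a new relationship is a one-line append to `edges`; a relational model would typically need a new join table, which is the flexibility argument in a nutshell.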
13. TECHNICAL WORKING GROUP LEADERS
▶ Data Acquisition: Axel Ngonga, ngonga@informatik.uni-leipzig.de
▶ Data Analysis: John Domingue, john.domingue@open.ac.uk
▶ Data Curation: Edward Curry, ed.curry@deri.org
▶ Data Storage: Martin Strohbach, MStrohbach@agtinternational.com
▶ Data Usage: Tilman Becker, becker@dfki.de
▶ Join the bigdatavalue.eu mailing list
@BIG_FP7
http://big-project.eu/
info@big-project.eu