The goal of the MonetDB/DataCell project is to exploit the power of relational DBMSs (RDBMSs) for efficient processing of continuous queries over streaming data. This presentation first identifies the essential differences between processing one-time queries and continuous queries. It then presents the current architecture of MonetDB/DataCell and some ideas on how to extend an existing RDBMS with just a handful of new components to handle continuous queries.
The presentation was given by Ying Zhang (Centrum Wiskunde & Informatica) at the PlanetData Project Meeting, held February 28 - March 4, 2011, in Innsbruck, Austria.
A DBMS is used to manage stored data in databases, while a DSMS is used to manage continuous, real-time data streams. Key differences are that a DBMS works with stored, persistent data that can be randomly accessed, while a DSMS works with volatile data streams that arrive sequentially and must be processed in limited memory. A DBMS supports one-time queries on stored data, while a DSMS supports continuous queries that must adapt to the unpredictable nature and high update rate of streaming data.
Does Current Advertising Cause Future Sales? (Trieu Nguyen)
This paper presents findings from a large-scale field experiment that allows us to study whether there is a causal relationship between current advertising and future sales. The experimental design overcomes limitations that have affected previous investigations of this issue. We find that current advertising does affect future sales, but the sign of the effect varies depending on the customers targeted. For the firm's best customers, the long-run effect of increases in current advertising is actually negative, while for other customers the effect is positive. We argue that these outcomes reflect two competing effects: brand-switching and inter-temporal substitution. Furthermore, our data suggest a way to distinguish between the informative and persuasive roles of advertising, providing insight into the mechanism by which advertising differentially affects various customer subsets.
This document discusses using RFX (Reactive Function X), a design pattern and collection of open source tools, to solve fast data problems. It presents an example of using RFX for web analytics to count pageviews and unique users and detect DDoS attacks. The RFX approach applies the BEAM methodology for agile data warehousing. It demonstrates RFX concepts like event data actors, agents, collectors, routers, processors, storage and reactors using a pageview analytics demo with source code on GitHub.
Being able to analyze data in real time will certainly be a hot topic in the near future, not only for IoT-related tasks but as a general approach to user-to-machine or machine-to-machine interaction. From product recommendations to fraud detection alarms, a lot of things would be better if they could happen in real time. Now, with Azure Event Hubs and Stream Analytics, it's possible. In this session, Davide will demonstrate how to use Event Hubs to quickly ingest new real-time data and Stream Analytics to query data on the fly, in order to do a real-time analysis of what's happening right now.
In-memory databases (IMDBs) store data primarily in RAM for faster access than disk-based databases. While an older concept, IMDBs have become more practical due to lower RAM costs, multi-core CPUs, and 64-bit systems allowing more memory. IMDBs have different architectures, data representations, indexing, and query processing optimized for memory versus disk. They also face challenges in providing durability without disk and scaling to very large data sizes.
Optimization of Continuous Queries in Federated Database and Stream Processin... (Zbigniew Jerzak)
The constantly increasing number of connected devices and sensors results in increasing volume and velocity of sensor-based streaming data. Traditional approaches for processing high-velocity sensor data rely on stream processing engines. However, the increasing complexity of continuous queries executed on top of high-velocity data has resulted in growing demand for federated systems composed of data stream processing engines and database engines. One of the major challenges for such systems is to devise the optimal query execution plan to maximize the throughput of continuous queries.
In this paper we present a general framework for federated database and stream processing systems, and introduce the design and implementation of a cost-based optimizer for optimizing relational continuous queries in such systems. Our optimizer uses characteristics of continuous queries and source data streams to devise an optimal placement for each operator of a continuous query. This fine level of optimization, combined with the estimation of the feasibility of query plans, allows our optimizer to devise query plans which result in 8 times higher throughput as compared to the baseline approach which uses only stream processing engines. Moreover, our experimental results showed that even for simple queries, a hybrid execution plan can result in 4 times and 1.6 times higher throughput than a pure stream processing engine plan and a pure database engine plan, respectively.
This document describes a Contextualized Knowledge Repository (CKR) framework that allows for representing and reasoning with contextual knowledge on the Semantic Web. The CKR extends the description logic SROIQ-RL to include defeasible axioms in the global context. Defeasible axioms can be overridden by local contexts, allowing exceptions. The CKR is composed of two layers - a global context containing metadata and defeasible axioms, and local contexts containing object knowledge with references. An interpretation of a CKR maps local contexts to description logic interpretations over the object vocabulary, respecting references between contexts.
The document describes a Contextualized Knowledge Repository (CKR) framework for representing and reasoning with contextual knowledge on the Semantic Web. It discusses the need to make context explicit in the Semantic Web in order to represent knowledge that holds in specific contextual spaces like time, location, or topic. The CKR is presented as a formalism based on description logics that defines contexts as first-class objects and allows associating knowledge with contexts. It describes a prototype CKR implementation and outlines how a CKR could be used to represent open data about the Trentino region with contextual metadata.
This document discusses leveraging crowdsourcing techniques and consistency constraints to optimize the reconciliation of schema matching networks. It proposes:
1) Defining consistency constraints within schema matching networks and designing validation questions for crowdsourced workers.
2) Using consistency constraints to reduce reconciliation error rates and the monetary cost of asking additional validation questions.
3) Modeling a crowdsourcing process for schema matching networks that aims to minimize cost while maximizing accuracy through the application of consistency constraints.
This document discusses privacy-preserving schema reuse. It introduces the challenges of defining privacy constraints, generating an anonymized schema from multiple schemas while satisfying privacy constraints, defining a utility function for anonymized schemas, and solving the optimization problem of finding the anonymized schema with the highest utility that satisfies all privacy constraints. Experimental results demonstrate the trade-off between privacy enforcement and utility loss. The solution presents an approach for generating anonymized schemas from multiple schemas in a privacy-preserving manner.
Authors: Nguyen Quoc Viet Hung (1), Nguyen Thanh Tam (1), Zoltán Miklós (2), Karl Aberer (1), Avigdor Gal (3), and Matthias Weidlich (4)
1 École Polytechnique Fédérale de Lausanne
2 Université de Rennes 1
3 Technion – Israel Institute of Technology
4 Imperial College London
This document summarizes a demo of using SPARQLstream and Morphstreams to visualize transport data from Madrid's public transport company (EMT) in a tablet application. Static EMT data like bus stop locations are extracted and mapped to RDF, while live bus waiting time data streams are transformed and queried in real-time. This allows a Map4RDF iOS app to retrieve bus stop information and look up estimated arrival times using SPARQL and SPARQLstream queries. The demo illustrates how standards like SSN and R2RML can integrate static and streaming sensor data for web-based applications.
The document discusses the need for a W3C community group on RDF stream processing. It notes there is currently heterogeneity in RDF stream models, query languages, implementations, and operational semantics. The speaker proposes creating a W3C community group to better understand these differences, requirements, and potentially develop recommendations. The group's mission would be to define common models for producing, transmitting, and continuously querying RDF streams. The presentation provides examples of use cases and outlines a template for describing them to collect more cases to understand requirements.
by Irene Celino, Simone Contessa, Marta Corubolo, Daniele Dell’Aglio, Emanuele Della Valle, Stefano Fumeo and Thorsten Krüger
CEFRIEL – Politecnico di Milano – SIEMENS
This document describes SciQL, a language that bridges the gap between science and relational database management systems (DBMS). SciQL allows for the seamless integration of relational and array paradigms within DBMSs. It defines arrays and tables as first-class citizens and supports named dimensions, flexible structure-based grouping, and the distinction between arrays and tables. SciQL aims to lower the barrier for scientists to use DBMSs for array-based data while revealing new optimization opportunities for databases.
by G. Larkou, J. Metochi, G. Chatzimilioudis and D. Zeinalipour-Yazti
Presented at: 1st IEEE International Workshop on Mobile Data Management Mining and Computing on Social Networks, collocated with IEEE MDM'13
This document summarizes research on implementing defeasible logic, a non-monotonic reasoning method, in a distributed manner using the MapReduce framework. Defeasible logic allows commonsense reasoning over low-quality data and has low computational complexity. However, existing implementations did not scale to huge datasets. The researchers developed a multi-argument MapReduce implementation of defeasible logic that distributes the reasoning process. Experimental evaluation on large datasets showed this approach provides scalable defeasible reasoning over distributed data. Future work will address challenges with non-stratified rulesets and test the approach on additional real-world applications and knowledge representation methods.
This document discusses data and knowledge evolution on the semantic web. It begins by explaining the limitations of the current web in representing semantic content and introduces the semantic web as a way to give data well-defined meaning. It then discusses how ontologies and datasets are used to describe semantic data and how datasets are dynamic and change over time. It also introduces linked open data as a way to interconnect datasets and the challenges this presents. Finally, it outlines the scope of the talk, which is to survey research areas related to managing dynamic linked datasets, including remote change management, repair, and data/knowledge evolution.
This document discusses evolving workflow provenance information in the presence of custom inference rules. It presents three inference rules for provenance data: actors associated with an activity are associated with all of its subactivities, objects and their parts are used together, and information objects are present wherever the physical objects carrying them are. It examines handling updates to provenance knowledge bases under these rules, either by deleting all inferred facts or only those affected, and considers the complexity of the different approaches.
This document discusses access control for RDF graphs using abstract models. It presents an abstract access control model defined using abstract tokens and operators to model the computation of access labels for inferred RDF triples. The model supports dynamic datasets and policies. Experiments show that annotation time increases with the number of implied triples, while evaluation time increases linearly with the total number of triples. The abstract model approach allows different concrete access control policies to be applied to the same dataset.
Here are a few ways SciQL could help with this seismology use case:
1. The mseed array allows storing and querying the large seismic data in an efficient columnar format.
2. Window-based aggregation with dimensional grouping enables filtering signals by STA/LTA ratios over time windows (see the sketch after this list).
3. Views and queries on dimensional groups facilitate removing false positives by comparing signals across nearby stations over time.
4. Further window-based grouping and UDFs can extract signal windows for additional heuristic analysis.
By integrating the array and relational models, SciQL provides a declarative way to analyze large multidimensional scientific datasets like seismic signals interactively.
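As a hedged illustration of item 2 above, here is an approximate SciQL-style sketch (syntax modeled loosely on the SciQL proposal; the array name, size, and window length are invented for this example): a seismic trace stored as a one-dimensional array, with a windowed average computed along the time dimension.

-- a seismic trace as a SciQL array; t is a named dimension (hypothetical schema)
CREATE ARRAY mseed (t INT DIMENSION[1000000], v FLOAT DEFAULT 0.0);
-- average amplitude per window of 100 samples, using structure-based grouping
SELECT t, AVG(v) FROM mseed GROUP BY mseed[t:t+100];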
This talk was given by FORTH, Greece, at the European Data Forum (EDF) 2012, which took place on June 6-7, 2012 in Copenhagen (Denmark) at the Copenhagen Business School (CBS).
Abstract:
Given the increasing amount of sensitive RDF data available on the Web, it becomes increasingly critical to guarantee secure access to this content. Access control is complicated when RDFS inference rules and other dependencies between access permissions of triples need to be considered; this is necessary, e.g., when we want to associate the access permissions of inferred triples with the ones that implied them. In this paper we advocate the use of abstract provenance models that are defined by means of abstract tokens and operators to support fine-grained access control for RDF graphs. The access label of a triple is a complex expression that encodes how said label was produced (i.e., the triples that contributed to its computation). This feature allows us to know exactly the effects of any possible change, thereby avoiding a complete recomputation of the labels when a change occurs. In addition, the same application can choose to enforce different access control policies, or different applications can enforce different policies on the same data, avoiding the recomputation of the labels of the triples. Preliminary experiments have shown the applicability and benefits of our approach.
This talk was given at the 13th International Conference on Principles of Knowledge Representation and Reasoning (KR 2012), held in Rome, Italy, June 10-14, 2012, by Ilias Tahmazidis (FORTH).
Abstract:
We are witnessing an explosion of available data from the Web, government authorities, scientific databases, sensors and more. Such datasets could benefit from the introduction of rule sets encoding commonly accepted rules or facts, application- or domain-specific rules, commonsense knowledge etc. This raises the question of whether, how, and to what extent knowledge representation methods are capable of handling the vast amounts of data for these applications. In this paper, we consider nonmonotonic reasoning, which has traditionally focused on rich knowledge structures. In particular, we consider defeasible logic, and analyze how parallelization, using the MapReduce framework, can be used to reason with defeasible rules over huge data sets. Our experimental results demonstrate that defeasible reasoning with billions of facts is performant, and has the potential to scale to trillions of facts.
The presentation was delivered during the 1st International Conference on Health Information Science (HIS 2012) on April 9th, 2012 in Beijing, China.
Abstract:
In cytomics, bookkeeping of the data generated during lab experiments is crucial. The current approach in cytomics is to conduct High-Throughput Screening (HTS) experiments so that cells can be tested under many different experimental conditions. Given the large amount of different conditions and the readout of the conditions through images, it is clear that the HTS approach requires a proper data management system to reduce the time needed for experiments and the chance of man-made errors. As different types of data exist, the experimental conditions need to be linked to the images produced by the HTS experiments with their metadata and the results of further analysis. Moreover, HTS experiments never stand by themselves; as more experiments are lined up, the amount of data and computations needed to analyze these increases rapidly. To that end, cytomic experiments call for automated and systematic solutions that provide convenient and robust features for scientists to manage and analyze their data. In this paper, we propose a platform for managing and analyzing HTS images resulting from cytomics screens taking the automated HTS workflow as a starting point. This platform seamlessly integrates the whole HTS workflow into a single system. The platform relies on a modern relational database system to store user data and process user requests, while providing a convenient web interface to end-users. By implementing this platform, the overall workload of HTS experiments, from experiment design to data analysis, is reduced significantly. Additionally, the platform provides the potential for data integration to accomplish genotype-to-phenotype modeling studies.
The talk was given at the 15th International Conference on Extending Database Technology (EDBT 2012) on March 29, 2012 in Berlin, Germany.
Abstract:
Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics are typically missing in web-scale settings such as the Linked Open Datasets (LOD). The more profound reason is that, due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that they are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are the slides of the talk given at the IEEE International Conference on Software Testing Verification and Validation Workshops, ICSTW 2022.
Full-RAG: A modern architecture for hyper-personalization (Zilliz)
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your costs through an optimized configuration and keep them low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
GraphRAG for Life Science to increase LLM accuracy (Tomaz Bratanic)
GraphRAG for the life science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days, June 6, 2024.
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize our carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at a smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curricula, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
MonetDB/DataCell - Exploiting the Power of Relational Databases for Efficient Stream Processing
1. MonetDB/DataCell: Exploiting the Power of Relational Databases for Efficient Stream Processing
CWI
Project Meeting @ Innsbruck, Feb 28 - Mar 04, 2011
Wednesday, March 02, 2011
2. DBMS versus DSMS
[Diagram: one-time query processing in a DBMS, with incoming data stored on disk and an answer returned to the client]
1. Store incoming tuples
2. Submit a one-time query
3. Query processing on the already stored data
4. Create the answer
3. DBMS versus DSMS
[Diagram: the DBMS flow of the previous slide contrasted with a DSMS processing continuous queries in memory]
DBMS: 1. Store incoming tuples; 2. Submit a one-time query; 3. Query processing on the already stored data; 4. Create the answer.
DSMS: 1. Submit continuous queries; 2. Streams arrive; 3. The input stream is processed on the fly; 4. The produced results are continuously delivered to the clients.
A data stream is a never-ending sequence of tuples.
4. One-time Queries versus Continuous Queries
[Timeline: a query q arrives between the data at time t_n and the data at time t_n+1; a one-time query looks backwards over stored data, a continuous query looks forwards]
One-time query:
• Evaluated once over the already stored tuples
Continuous query:
• Waits for future incoming tuples
• Evaluated continuously as new tuples arrive
9. Observation
• Nowadays, stream systems are built from scratch
• Operators and optimizations are redesigned
• Relational databases are considered inefficient and too complex
• Modern stream applications require management of both stored and streaming data
10. Goals
• We design the DataCell on top of an existing DataBase Kernel
• Exploit database techniques, query optimization and operators
• Provide full language functionality (SQL’03)
• Research questions
• is it viable?
• multi-query processing/scheduling
• real-time processing
11. The Basic Idea of DataCell
• Stream tuples are first stored in (appended to) baskets.
• We evaluate the continuous queries over the baskets.
Instead of throwing each incoming tuple against the set of waiting queries (the data stream approach), DataCell first collects the data and then throws the queries against the collected tuples (the database approach).
• Once a tuple is seen, it is dropped from its basket.
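A minimal sketch of one scheduler round in plain SQL (illustrative only: the basket name and schema are hypothetical, and DataCell manages baskets inside the kernel rather than through user DDL):

CREATE TABLE basket_x (id INT, a INT);  -- the receptor appends incoming tuples here
-- one round of the continuous query SELECT * FROM x WHERE a > 10:
SELECT * FROM basket_x WHERE a > 10;    -- evaluate the query over the current basket
DELETE FROM basket_x;                   -- tuples that have been seen are dropped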
12. The MonetDB/DataCell stack
[Diagram: an SQL query passes through the SQL layer (query parser, query optimizer), is compiled into MAL, and is executed by the MAL interpreter (query executor)]
13. The MonetDB/DataCell stack
[Diagram: the same stack extended for continuous queries: the query parser gains CQ support, the query optimizer gains DataCell optimizer rules, and a Continuous Query Scheduler is inserted before the MAL interpreter (query executor)]
14. DataCell Components
Receptor <=> Listens to a stream
Emitter <=> Delivers events to the clients
Factory <=> Continuous query
Basket <=> Holds events
[Diagram: input stream → receptor R → query factory Q → emitter E → output stream]
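As a hedged illustration of how the components connect (all names below are hypothetical; the actual wiring happens inside the kernel), one round of the R → Q → E pipeline can be pictured in plain SQL:

CREATE TABLE basket_in  (id INT, a INT);  -- filled by receptor R from the input stream
CREATE TABLE basket_out (id INT, a INT);  -- drained by emitter E towards the clients
-- factory Q, fired by the scheduler whenever basket_in holds new tuples:
INSERT INTO basket_out SELECT * FROM basket_in WHERE a > 10;
DELETE FROM basket_in;                    -- consumed events leave the input basket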
15. DataCell Architecture
[Diagram, refined over slides 15-19: receptors R1-R3 append incoming events to input baskets, stored as (id, value) data columns; the Continuous Query Scheduler fires factories that read the input baskets and write result columns into output baskets; emitters E1-E3 drain the output baskets to the clients. The SQL compiler (joined on slide 19 by a SPARQL compiler) and the MAL optimizer sit on top of the DataCell layer; baskets live in memory, backed by disk storage. Legend: basket, receptor, emitter, factory.]
20. Basket Expressions
• Syntax: an SQL sub-query surrounded by square brackets
• Semantics: all qualifying tuples in a basket expression are removed (consumed) by the factories

Tumbling window:
Q1: SELECT * FROM [SELECT * FROM X TOP 3] AS S WHERE S.a > 10;

Sliding window:
Q2: SELECT * FROM (
      [SELECT * FROM X TOP 1]
      UNION
      SELECT * FROM X TOP 2 OFFSET 1) AS S
    WHERE S.a > 10;

[Worked example, built up over slides 21-26: with basket X holding the tuples 12, 3, 100, 14, both Q1 and Q2 output 12 and 100; Q1 consumes the first three tuples at once, while Q2 consumes only the first tuple and slides its window by one.]

• Flexible, expressive continuous queries, by selectively picking the data to process from a basket
• Allows processing predicate windows on a stream
• Allows out-of-order processing
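To make the tumbling semantics concrete, here is a standard-SQL approximation of one round of Q1 (a sketch only: real basket expressions are evaluated by the factories, and the table below merely replays the slide's example contents):

CREATE TABLE X (id INT, a INT);  -- id models arrival order
INSERT INTO X VALUES (1, 12), (2, 3), (3, 100), (4, 14);
-- read the three oldest tuples, keep a > 10: emits 12 and 100
SELECT S.a FROM (SELECT * FROM X ORDER BY id LIMIT 3) AS S WHERE S.a > 10;
-- basket-expression semantics: the tuples just read are consumed
DELETE FROM X WHERE id <= 3;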
27. Query processing strategies
Separate Baskets
• Each continuous query is encapsulated within a single factory
• Each factory f has its own input baskets, which are accessed only by f
• If more than one factory is interested in the same data, we create multiple copies of the data
• Factories are completely independent
• Exploit the column-store to minimize the overhead of replication
[Diagram: basket b is copied into bcopy1, bcopy2, and bcopy3, which feed Q1, Q2, and Q3 respectively]
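A hedged sketch of the copying step in plain SQL (basket names taken from the diagram; in DataCell the copy happens inside the kernel, where the column-store makes it cheap):

INSERT INTO bcopy1 SELECT * FROM b;  -- private copy for Q1's factory
INSERT INTO bcopy2 SELECT * FROM b;  -- private copy for Q2's factory
INSERT INTO bcopy3 SELECT * FROM b;  -- private copy for Q3's factory
DELETE FROM b;                       -- the dispatched tuples leave the entry basket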
28. Query processing strategies
Shared Baskets
• Exploit query similarities to avoid replication
• Baskets are shared among factories
• Two new (cheap) factories: Locker, Unlocker
[Diagram, built up over slides 28-32: lockers FL1-FL3 lock basket b before queries Q1-Q3 read it; unlockers FU1-FU3 release it once the queries are done, so the shared tuples can be dropped]
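One processing round under this strategy might look as follows (a sketch only: Locker and Unlocker are DataCell factories, approximated here by comments around plain SQL, and the query predicates are made up):

-- Locker FLi: lock basket b for the registered queries (nothing is deleted yet)
SELECT * FROM b WHERE a > 10;  -- Q1 reads the shared basket without consuming it
SELECT * FROM b WHERE a < 5;   -- Q2 reads the very same shared tuples
-- Unlocker FUi: release the lock; once all registered queries have processed
-- the batch, the shared tuples are dropped in one step:
DELETE FROM b;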
33. Summary
[Diagram: MonetDB plus stream-processing extensions equals DataCell]
Wednesday, March 02, 2011