An Inference Sharing Architecture for a More Efficient Context Reasoning
Slides for the paper "An Inference Sharing Architecture for a More Efficient Context Reasoning", presented at SENAMI 2012.

Presentation Transcript

  • An Inference Sharing Architecture for a More Efficient Context Reasoning. Aitor Almeida, Diego López-de-Ipiña. Deusto Institute of Technology (DeustoTech), University of Deusto, Bilbao, Spain. {aitor.almeida, dipina}@deusto.es
  • Semantic Inference in Context Management
    • Ontologies are regarded as one of the best approaches to model context information [Strang04].
    • OWL ontologies offer a rich expression framework to represent context data and its relations.
    • Problem: as the number of triples in the ontology increases, the inference time for environment actions becomes unsustainable.
  • Semantic Inference in Context Management
    • Modern smart environments host a diverse ecosystem of computationally enabled devices.
    • Smart environments now have more pervasive computing capabilities at their disposal than ever before.
    • This large number of available devices offers several advantages that can help context management systems overcome the inference problem.
  • Semantic Inference in Context Management
    • We have decided to split the inference problem among different reasoning engines or context consumers according to the interests stated by each of them.
    • Inference is no longer performed by a central reasoning engine.
      – Instead, it is divided across a peer-to-peer network of context producers and context consumers.
    • Dividing the inference problem into smaller sub-units makes it more computationally affordable.
  • Inference Sharing Process
    • Objectives:
      1. Attain the temporal decoupling of the different inference units.
         – Allow inference to be performed concurrently in various reasoning engines.
      2. Attain the spatial decoupling of the inference process.
         – Increase the overall robustness of the system, making it more fault-tolerant.
      3. Reduce the number of triples and rules that each reasoning engine has to manage.
         – Allow more computationally constrained devices to carry out the inference process.
      4. Compartmentalize the information according to the different interests of the reasoning engines.
      5. Allow dynamic modification of the created organization.
         – The created hierarchy must change to adapt itself to these modifications.
  • Inference Sharing Process
    • This organization also allows the creation of a reasoning-engine hierarchy that provides different abstraction levels in the inferred facts.
  • Inference Sharing Process (figure)
  • Inference Sharing Process
    • To divide the inference process, three factors are taken into account:
      1. The ontological concepts that will be used by the reasoner.
         – Concepts are organized in a taxonomy depicting the class relations of the ontology.
         – We use the AMBI2ONT ontology.
         – The context consumer can search for specific concepts (in our example, "Luminosity") or broader ones (such as "Electrical Consumption", which encompasses "LightStatus", "AirConditioning" and "ApplianceStatus"); see the sketch below.
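A minimal sketch of how such a concept-taxonomy lookup could work: a broad search matches every subconcept by walking up the hierarchy. The class and method names (ConceptTaxonomy, matches) are illustrative assumptions, not the paper's implementation; only the concept names come from the slide's example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative concept taxonomy: each concept points to its broader parent.
class ConceptTaxonomy {
    private final Map<String, String> parent = new HashMap<>();

    void addSubconcept(String concept, String broader) {
        parent.put(concept, broader);
    }

    // True if `offered` is the requested concept or one of its subconcepts,
    // found by walking up the taxonomy from the offered concept.
    boolean matches(String requested, String offered) {
        for (String c = offered; c != null; c = parent.get(c)) {
            if (c.equals(requested)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ConceptTaxonomy t = new ConceptTaxonomy();
        t.addSubconcept("LightStatus", "ElectricalConsumption");
        t.addSubconcept("AirConditioning", "ElectricalConsumption");
        t.addSubconcept("ApplianceStatus", "ElectricalConsumption");
        // A broad search for "ElectricalConsumption" also matches "LightStatus".
        System.out.println(t.matches("ElectricalConsumption", "LightStatus")); // true
        System.out.println(t.matches("Luminosity", "LightStatus"));            // false
    }
}
```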
  • Inference Sharing Process (figure)
  • Inference Sharing Process
      2. The location where the measures originate from.
         – We have a taxonomy of the locations, extracted from the domain ontology.
         – This taxonomy models the "contains" relations between the different rooms, floors and buildings.
         – The context consumer can search for a specific location (the Smartlab laboratory in our example) or for a set of related locations (for example, all the rooms on the first floor of the engineering building); see the sketch below.
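Likewise, a hedged sketch of the location taxonomy: a downward closure over the "contains" relation answers both a single-room query and a whole-floor query. The class name and the specific location strings are assumptions for illustration.

```java
import java.util.*;

// Illustrative location taxonomy built from "contains" relations.
class LocationTaxonomy {
    private final Map<String, List<String>> contains = new HashMap<>();

    void addContains(String outer, String inner) {
        contains.computeIfAbsent(outer, k -> new ArrayList<>()).add(inner);
    }

    // All locations transitively contained in `root`, including `root` itself.
    Set<String> locationsWithin(String root) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> pending = new ArrayDeque<>(List.of(root));
        while (!pending.isEmpty()) {
            String loc = pending.pop();
            if (result.add(loc)) {
                pending.addAll(contains.getOrDefault(loc, List.of()));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        LocationTaxonomy t = new LocationTaxonomy();
        t.addContains("EngineeringBuilding", "FirstFloor");
        t.addContains("FirstFloor", "Smartlab");
        t.addContains("FirstFloor", "Room102");
        // "All the rooms on the first floor" is a single containment query.
        System.out.println(t.locationsWithin("FirstFloor")); // [FirstFloor, Smartlab, Room102]
    }
}
```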
  • Inference Sharing Process (figure)
  • Inference Sharing Process
      3. The certainty factor (CF) associated with each ontological concept.
         – Our ontology models the uncertain nature of the environments.
         – Sensors and devices are not perfect, and their measures carry a degree of uncertainty.
         – The context consumer can specify a minimum CF in its searches (this filtering appears in the negotiation sketch further below).
  • System Architecture
    • We use an agent-based architecture.
    • Two types of agents:
      – Context Providers: agents that represent the elements in the architecture that can provide context data.
      – Context Consumers: agents that represent the elements that need to consume context data.
    • The distinction between providers and consumers is purely logical.
      – The same device can act as a provider and a consumer at the same time, receiving data from sensors and supplying the inferred facts to another device (see the sketch below).
      – While the resulting inference process loosely follows a hierarchy, the agent structure is a pure peer-to-peer architecture with no central element.
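A minimal sketch of this logical provider/consumer split, showing how a single device can play both roles at once. The interfaces and the ReasoningNode class are illustrative assumptions, not the paper's agent implementation.

```java
import java.util.ArrayList;
import java.util.List;

interface ContextConsumer {
    void onContextUpdate(String concept, double value, double cf);
}

interface ContextProvider {
    void subscribe(ContextConsumer consumer);
}

// One device acting in both roles: it consumes raw sensor measures and
// provides the facts it infers from them to its own subscribers.
class ReasoningNode implements ContextProvider, ContextConsumer {
    private final List<ContextConsumer> subscribers = new ArrayList<>();

    public void subscribe(ContextConsumer consumer) {
        subscribers.add(consumer);
    }

    public void onContextUpdate(String concept, double value, double cf) {
        // ... run local inference over the incoming measure here ...
        for (ContextConsumer c : subscribers) {
            c.onContextUpdate("InferredFact", value, cf); // forward inferred facts
        }
    }
}
```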
  • System Architecture
    • To take part in the negotiation process, all participants must have information about the three factors described before.
    • The negotiation follows these steps (Contract Net Protocol; a sketch follows below):
      1. The Context Consumer Agent (CCA) sends a Call For Proposals (CFP) stating the context type and location it is interested in and the minimum certainty factor (CF) it expects from the context.
      2. The Context Provider Agents (CPAs) reply to the CFP with individual proposals stating what they can offer to the CCA.
      3. The CCA checks the received proposals. Some context consumers have a maximum number of context sources they can subscribe to concurrently, dictated by the computational capabilities of the device running the CCA. If the number of received proposals exceeds this limit, the CCA selects those it considers best (comparing their CFs).
      4. The CCA sends an Accept to the selected CPAs and subscribes to their context updates.
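A framework-agnostic sketch of the consumer's side of steps 3 and 4, including the minimum-CF filter and the maximum-sources cap. All type names (CallForProposals, Proposal, ContextConsumerAgent) are assumptions for illustration; a real deployment would use a FIPA Contract Net implementation.

```java
import java.util.Comparator;
import java.util.List;

record CallForProposals(String contextType, String location, double minCF) {}
record Proposal(String providerId, double certaintyFactor) {}

class ContextConsumerAgent {
    private final int maxSources; // limited by the device's computational capabilities

    ContextConsumerAgent(int maxSources) {
        this.maxSources = maxSources;
    }

    // Steps 3-4: drop proposals below the minimum CF, rank the rest by CF,
    // and keep at most maxSources; the CCA then sends an Accept to each
    // selected CPA and subscribes to its context updates.
    List<Proposal> selectProviders(CallForProposals cfp, List<Proposal> received) {
        return received.stream()
                .filter(p -> p.certaintyFactor() >= cfp.minCF())
                .sorted(Comparator.comparingDouble(Proposal::certaintyFactor).reversed())
                .limit(maxSources)
                .toList();
    }
}
```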
  • System Architecture (figure)
  • Validation
    • Using the OWL-DL reasoner that comes with the Jena framework.
    • OWL entailment + domain rules:
      – Each measure from a context provider is related to one rule.
      – 50% of the measures fire a rule.
    • Average network latency of 50 ms.
    • Two scenarios:
      1. A centralized reasoning process.
      2. Inference sharing among several reasoners.
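A hedged sketch of this measurement setup: OWL entailment plus one domain rule in Jena, timing the materialization of the inferred model. The namespace, the rule body and the threshold are illustrative assumptions, and the sketch uses the current org.apache.jena packages (the original work predates the Apache Jena naming).

```java
import java.util.List;

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class ReasoningBenchmark {
    // Assumed namespace for the context ontology.
    static final String NS = "http://example.org/ambi2ont#";

    public static void main(String[] args) {
        Model data = ModelFactory.createDefaultModel();
        // ... populate `data` with the context providers' measures ...

        // OWL entailment over the context model.
        Reasoner owl = ReasonerRegistry.getOWLReasoner();
        InfModel owlInf = ModelFactory.createInfModel(owl, data);

        // One illustrative domain rule: low luminosity triggers an action.
        List<Rule> rules = Rule.parseRules(
                "[lowLight: (?m <" + NS + "luminosity> ?v) lessThan(?v, 100) "
                        + "-> (?m <" + NS + "triggers> <" + NS + "TurnLightsOn>)]");
        InfModel inf = ModelFactory.createInfModel(new GenericRuleReasoner(rules), owlInf);

        long start = System.nanoTime();
        inf.listStatements().toList(); // force full materialization of inferences
        System.out.println("Inference time: "
                + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
```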
  • Validation (results figures)
  • Future Work
    • As future work, we intend to create a more complex negotiation process in which the computational capabilities of each device are taken into account to better divide the inference among them.
    • We would like to dynamically reassign rules from one reasoning engine to another according to their capabilities.
      – This will allow us to create the most efficient inference structure for any given domain.
  • Thanks for your attention. Questions?
    This work has been supported by project grant TIN2010-20510-C04-03 (TALIS+ENGINE), funded by the Spanish Ministerio de Ciencia e Innovación.