
An Inference Sharing Architecture for a More Efficient Context Reasoning

Slides for the paper "An Inference Sharing Architecture for a More Efficient Context Reasoning", presented at SENAMI 2012

1. An Inference Sharing Architecture for a More Efficient Context Reasoning
   Aitor Almeida, Diego López-de-Ipiña
   Deusto Institute of Technology - DeustoTech, University of Deusto, Bilbao, Spain
   {aitor.almeida, dipina}@deusto.es
2. Semantic Inference in Context Management
   • Ontologies are regarded as one of the best approaches to model context information [Strang04].
   • OWL ontologies offer a rich expression framework to represent context data and its relations.
   • Problem: as the number of triples in the ontology increases, the inference time for environment actions becomes unsustainable.
3. Semantic Inference in Context Management
   • Modern smart environments host a diverse ecosystem of computationally enabled devices.
   • Smart environments nowadays have more pervasive computing capabilities at their disposal than ever.
   • This large number of devices available in the environment offers several advantages that help context management systems overcome the inference problem.
4. Semantic Inference in Context Management
   • We have decided to split the inference problem among different reasoning engines or context consumers, according to the interests stated by each of them.
   • Inference is no longer performed by a central reasoning engine:
     – It is divided among a peer-to-peer network of context producers and context consumers.
   • Dividing the inference problem into smaller sub-units makes it more computationally affordable.
5. Inference Sharing Process
   • Objectives:
     1. Attain the temporal decoupling of the different inference units.
        • Allow the inference to be done concurrently in various reasoning engines.
     2. Attain the spatial decoupling of the inference process.
        • Increase the general robustness of the system, making it more fault-tolerant.
     3. Reduce the number of triples and rules that each reasoning engine has to manage.
        • Allow more computationally constrained devices to carry out the inference process.
     4. Compartmentalize the information according to the different interests of the reasoning engines.
     5. Allow the dynamic modification of the created organization.
        • The created hierarchy must change to adapt itself to such modifications.
6. Inference Sharing Process
   • This organization also allows the creation of a reasoning engine hierarchy that provides different abstraction levels in the inferred facts.
7. Inference Sharing Process (figure slide)
8. Inference Sharing Process
   • To divide the inference process, three factors are taken into account:
     1. The ontological concepts that will be used by the reasoner.
        – Concepts are organized in a taxonomy depicting the class relations of the ontology.
        – We use the AMBI2ONT ontology.
        – The context consumer can search for specific concepts (in our example, "Luminosity") or broader ones (such as "Electrical Consumption", which encompasses "LightStatus", "AirConditioning" and "ApplianceStatus").
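As a hedged illustration (not code from the paper), the following Jena sketch shows how a broad concept could be expanded into the concrete concepts it encompasses by walking the subclass taxonomy. The AMBI2ONT namespace URI and file name are placeholder assumptions, and the package names are those of modern Apache Jena (the original work predates the Apache move).

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.ModelFactory;

public class ConceptExpander {
    // Hypothetical namespace; the real AMBI2ONT URI is not given in the slides.
    static final String NS = "http://example.org/ambi2ont#";

    /** Returns the requested concept plus its transitive subclasses. */
    static List<OntClass> expand(OntModel model, String localName) {
        List<OntClass> result = new ArrayList<>();
        OntClass root = model.getOntClass(NS + localName);
        if (root == null) return result;          // concept not in the taxonomy
        result.add(root);
        // direct=false lists the whole subclass closure (the micro rule
        // reasoner in the model spec supplies the transitive entailments).
        root.listSubClasses(false).forEachRemaining(result::add);
        return result;
    }

    public static void main(String[] args) {
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
        model.read("ambi2ont.owl"); // placeholder file name
        // A consumer asking for "ElectricalConsumption" would also match
        // LightStatus, AirConditioning and ApplianceStatus.
        expand(model, "ElectricalConsumption")
              .forEach(c -> System.out.println(c.getLocalName()));
    }
}
```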
9. Inference Sharing Process (figure slide)
10. Inference Sharing Process
   2. The location where the measures originate from.
      – We have a taxonomy of the locations, extracted from the domain ontology.
      – This taxonomy models the "contains" relations between the different rooms, floors and buildings.
      – The context consumer can search for a specific location (the Smartlab laboratory in our example) or for a set of related locations (for example, all the rooms on the first floor of the engineering building).
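A set of related locations can be resolved with a transitive query over the containment relation. This is a minimal sketch assuming a hypothetical ambi:contains property, using a SPARQL 1.1 property path in Jena; it is not the paper's implementation.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;

public class LocationExpander {
    /** Prints every location transitively contained in the given one. */
    static void printContained(Model model, String locationUri) {
        // "ambi:contains" is an assumed name for the containment relation
        // between buildings, floors and rooms in the domain ontology.
        String query =
            "PREFIX ambi: <http://example.org/ambi2ont#>\n" +
            "SELECT ?loc WHERE { <" + locationUri + "> ambi:contains+ ?loc }";
        try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next().getResource("loc")); // rooms, labs, ...
            }
        }
    }
}
```

Asking for the first floor of the engineering building would then return every room it contains, including the Smartlab, without the consumer enumerating them.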
11. Inference Sharing Process (figure slide)
12. Inference Sharing Process
   3. The certainty factor (CF) associated with each ontological concept.
      – In our ontology we model the uncertain nature of the environment.
      – Sensors and devices are not perfect, and their measures carry a degree of uncertainty.
      – The context consumer can specify a minimum CF in its searches.
13. System Architecture
   • We use an agent-based architecture.
   • Two types of agents:
     – Context Providers: these agents represent those elements in the architecture that can provide any context data.
     – Context Consumers: these agents represent those elements that need to consume context data.
   • The differentiation between providers and consumers is purely logical:
     – The same device can act as a provider and a consumer at the same time, receiving data from sensors and supplying the inferred facts to another device.
     – While the resulting inference process loosely follows a hierarchy, the agent structure is a pure peer-to-peer architecture with no central element.
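The purely logical split means one node can play both roles at once. A minimal sketch of that duality, with all names invented for illustration (the slides do not show the actual interfaces):

```java
import java.util.ArrayList;
import java.util.List;

public class DualRoleDemo {
    interface ContextConsumer {
        void onContextUpdate(String concept, double cf, String value);
    }

    interface ContextProvider {
        void subscribe(ContextConsumer consumer);
    }

    /** A node that consumes raw sensor data and provides inferred facts downstream. */
    static class ReasoningNode implements ContextProvider, ContextConsumer {
        private final List<ContextConsumer> subscribers = new ArrayList<>();

        public void subscribe(ContextConsumer consumer) { subscribers.add(consumer); }

        @Override
        public void onContextUpdate(String concept, double cf, String value) {
            // Run local inference on the incoming measure (stubbed here),
            // then forward the inferred fact to downstream consumers.
            String inferred = "inferredFrom(" + concept + ")";
            subscribers.forEach(s -> s.onContextUpdate(inferred, cf, value));
        }
    }

    public static void main(String[] args) {
        ReasoningNode lower = new ReasoningNode();   // close to the sensors
        ReasoningNode upper = new ReasoningNode();   // higher abstraction level
        lower.subscribe(upper);
        upper.subscribe((concept, cf, value) -> System.out.println(concept + " cf=" + cf));
        lower.onContextUpdate("Luminosity", 0.9, "420 lux");
    }
}
```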
14. System Architecture
   • To be able to take part in the negotiation process, all participants must have information about the three factors described before.
   • The negotiation follows these steps (Contract Net Protocol):
     1. The Context Consumer Agent (CCA) sends a Call For Proposals (CFP) stating the context type and location it is interested in and the minimum certainty factor (CF) expected from the context.
     2. The Context Provider Agents (CPAs) reply to the CFP with individual proposals stating what they can offer to the CCA.
     3. The CCA checks the received proposals. Some context consumers have a maximum number of context sources they can subscribe to concurrently, dictated by the computational capabilities of the device running the CCA. If the number of received proposals is above this limit, the CCA selects those that it considers better (comparing their CFs).
     4. The CCA sends an Accept to the selected CPAs and subscribes to their context updates.
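Step 3 is the CCA's decision point. The following sketch shows that selection logic only, with the agent messaging layer abstracted away (the paper names the Contract Net Protocol but the slides do not show code; the Proposal record and all field names are assumptions). It also applies the minimum-CF filter from the CFP described on slide 12.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ProposalSelection {
    /** A CPA's reply to a CFP: which concept/location it offers and with what CF. */
    record Proposal(String providerId, String concept, String location, double cf) {}

    /** Keep the best proposals, at most maxSources, ranked by certainty factor. */
    static List<Proposal> select(List<Proposal> proposals, double minCf, int maxSources) {
        return proposals.stream()
                .filter(p -> p.cf() >= minCf)                        // below the CFP's minimum CF: reject
                .sorted(Comparator.comparingDouble(Proposal::cf).reversed())
                .limit(maxSources)                                   // device-capability cap on subscriptions
                .collect(Collectors.toList());
    }
}
```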
15. System Architecture (figure slide)
16. Validation
   • Using the OWL-DL reasoner that comes with the Jena Framework.
   • OWL entailment + domain rules:
     – Each measure of the context provider is related to one rule.
     – 50% of the measures fire a rule.
   • Average network latency of 50 ms.
   • Two scenarios:
     1. A centralized reasoning process.
     2. Inference sharing among several reasoners.
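A hedged approximation of that setup in Jena: the built-in OWL reasoner layered with a forward-chaining domain rule, the pattern the slide describes as "OWL entailment + domain rules". The ex vocabulary, the luminosity rule and its threshold are invented for the example; the slides do not give the actual rules.

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;
import org.apache.jena.util.PrintUtil;

public class ValidationSetup {
    static InfModel build() {
        Model data = ModelFactory.createDefaultModel().read("ambi2ont.owl"); // placeholder
        // First layer: standard OWL entailment with Jena's built-in OWL reasoner.
        InfModel owl = ModelFactory.createInfModel(ReasonerRegistry.getOWLReasoner(), data);
        // Second layer: one forward-chaining domain rule per measure type,
        // e.g. a luminosity threshold firing an inferred fact.
        PrintUtil.registerPrefix("ex", "http://example.org/ambi2ont#");
        String rules =
            "[lowLight: (?m rdf:type ex:LuminosityMeasure) (?m ex:value ?v) " +
            "           lessThan(?v, 300) -> (?m ex:indicates ex:LowLight)]";
        Reasoner domain = new GenericRuleReasoner(Rule.parseRules(rules));
        return ModelFactory.createInfModel(domain, owl);
    }
}
```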
17-22. Validation (figure slides)
23. Future Work
   • As future work we intend to create a more complex negotiation process where the computational capabilities of each device are taken into account, to better divide the inference among them.
   • We would like to dynamically reassign the rules from one reasoning engine to another according to their capabilities.
     – This will allow us to create the most efficient inference structure for any given domain.
24. Thanks for your attention. Questions?
   This work has been supported by project grant TIN2010-20510-C04-03 (TALIS+ENGINE), funded by the Spanish Ministerio de Ciencia e Innovación.