Our presentation of the paper at the 7th International Workshop on the Web of Things: http://webofthings.org/wp-content/uploads/2016/07/WoT_2016_Paper_6_ConstrainedSemanticWoT.pdf
Always-On Web of Things Infrastructure Dynamic Software Updating (TECO Research Group)
We report on our experiences with updating the moquette MQTT message broker[1,2] live/hot/at-runtime using our Dynamic Software Updating system Lusagent.
[1] https://github.com/teco-kit/moquette-lusagent
[2] https://github.com/teco-kit/moquette-lusagent-updatecode
The monolith-to-cloud-native microservices evolution has driven a shift from monitoring to observability. OpenTelemetry, a merger of the OpenTracing and OpenCensus projects, is enabling Observability 2.0. This talk covers the latest concepts in observability and then demonstrates how to configure and deploy various OpenTelemetry components to effectively meet your SLOs.
OpenNebulaConf2017EU: Transforming an Old Supercomputer into a Cloud Platform... (OpenNebula Project)
Currently, typical supercomputers have an expected useful life of 3 or 4 years. One way or another, after this period the infrastructure is typically replaced or upgraded to meet the increasing resource demand from users and companies. This always raises the same question: what should be done with the old hardware once it is replaced? Possible solutions come in the form of decommissioning it, splitting it up for spare parts, or donating it, but in several cases the hardware can still provide value when used for different tasks. In this talk we will describe how we converted the old Tier1 Flemish supercomputer https://www.ugent.be/hpc/en/infrastructure/tier1 into a cloud platform using OpenNebula. During this conversion we faced several technical challenges. The first and foremost was how to recycle hardware designed for a classical HPC environment for use in a private cloud. We will describe the steps taken to isolate VM traffic over the existing InfiniBand interconnect using VXLAN network technology. We will also address how we managed mapping our internal (university) and external (industry) users using the OpenNebula “remote” authentication plugin. Finally, we will discuss how we used the InfiniBand interconnect to share the Ceph storage backend and VM traffic in a secure manner. After a testbed phase, in which only pilot users are given access and provide feedback, the new UGent HPC Cloud platform, called “Grimer”, will go into production.
YouTube: https://youtu.be/jHchktxIZnM
The Libre-SOC Project aims to create an entirely Libre-Licensed, transparently-developed fully auditable Hybrid 3D CPU-GPU-VPU, using the Supercomputer-class OpenPOWER ISA as the foundation.
Our first test ASIC is a 180nm "Fixed-Point" Power ISA v3.0B processor, 5.1mm x 5.9mm, as a proof-of-concept for the team, whose primary expertise is in Software Engineering. Software Engineering training brings a radically different approach to Hardware development: extensive unit tests, source code revision control, automated development tools are normal. Libre Project Management brings even more: bug trackers, mailing lists, auditable IRC logs and a wiki are standard fare for Libre Projects that are simply not normal Industry-Standard practice.
This talk therefore goes through the workflow, from the original HDL through to the GDS-II layout, showing how we were able to keep track of the development that led to the IMEC 180nm tape-out in July 2021. In particular, by following a parallel development process involving "Real" and "Symbolic" Cell Libraries developed by Chips4Makers, we will show how our developers did not need to sign a Foundry NDA, but were still able to work side-by-side with a University that did. With this parallel development process, the University upheld its NDA obligations, and Libre-SOC was simultaneously able to honour its Transparency Objectives.
Cloud database vendors tend to report performance numbers for the sweet spot, or when running on highly optimized hardware with specific workload parameters.
Moreover, many of these systems are not tested under different failure scenarios that may appear in the public cloud.
At Netflix, as a cloud-native enterprise, our focus is on high availability. We achieve high availability by deploying at multiple regions.
Hence, our data store system performance is highly affected by our global deployment model, instance types and workload patterns.
For these reasons, we were interested in a cloud database benchmark tool that could be deployed in a loosely coupled fashion, as a microservice, and with the ability to dynamically change configuration parameters at run time. In this paper, we present Netflix Data Benchmark (NDBench). NDBench offers pluggable patterns and loads, and support for different client APIs. It offers the ability to deploy, manage and monitor multiple instances from a single point. NDBench was designed to run indefinitely, which gives us the ability to test long-running database maintenance jobs, to test database systems under conditions that affect performance, such as compactions and repairs, and also a view of client-side issues such as memory leaks and heap pressure. We have been running NDBench for almost 3 years, validating multiple database versions, testing numerous NoSQL systems running on the cloud, and testing new functionalities. NDBench is a major component of our testing and validation pipelines.
This presentation covers the various partners and collaborators currently working with the OpenPOWER Foundation, use cases of OpenPOWER systems in multiple industries, OpenPOWER workgroups, and OpenCAPI features.
OSMC 2018 | Stream connector: Easily sending events and/or metrics from the C... (NETWAYS)
Since Centreon 2.8.18, Centreon broker provides a new connector called “Stream connector”. With it, users have the possibility to create an output to any tool of their choice. The topic of this talk is to present this connector and its use through several examples.
OSMC 2018 | Visualization of your distributed infrastructure by Nicolai Buchwitz (NETWAYS)
In times of industrial IoT devices and cloud providers like AWS, which allow small and medium-sized companies to distribute their infrastructure around the world, it is becoming increasingly important to keep an overview. Here a few instances in Amsterdam, there a few in Tokyo and not to forget the satellites in Moscow. The Map Addon with its filters and dashboards helps to keep the overview in this constantly changing landscape and to recognize patterns and anomalies at an early stage.
OpenNebulaConf2017EU: Enabling Dev and Infra teams by Lodewijk De Schuyter, De... (OpenNebula Project)
At the department of environment and spatial planning we started 2 projects. The first was to replace our VMware-based hosting environment with an open, hardware-vendor-neutral hypervisor environment. The second project's goal was to further enable our dev teams. This is the story of the second project: what we built and how it works, using OpenNebula and Ceph and our existing tooling.
At the time of writing this abstract, our OpenNebula environment is used by 4 dev teams (almost 30 developers) and an infra team, hosting 700 virtual servers and counting. We are executing 300 deploys (as part of the development cycle) per week and counting …
I will be talking about the setup we realized, the choices we made and the deployment tool we ended up with, integrating the toolset we already used, i.e. SVN, Ansible, OpenNebula, F5, JFrog, Ubuntu/CentOS, Zabbix, Bareos, Barman, …
YouTube: https://youtu.be/OEftbpJ_lSY
Tracing 2000+ polyglot microservices at Uber with Jaeger and OpenTracing (Yuri Shkuro)
Slides from my talk & demo at the Go NYC Meetup, 19-Jan-2017.
We present Jaeger, Uber’s open source distributed tracing system, featuring Go backend, React based UI, and OpenTracing API support. We show examples of instrumenting application code for tracing and using distributed context propagation to attribute backend resource usage to top level consumers.
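The distributed context propagation shown in the talk can be sketched with nothing but the standard library. This is an illustrative stand-in, not the Jaeger client: the `uber-trace-id` header name mirrors Jaeger's convention, but the helpers below are assumptions made for the sketch.

```python
import uuid

# Illustrative sketch of distributed context propagation: a trace id and
# span id travel in an HTTP header so a downstream service can attribute
# its work to the top-level caller. Simplified stand-in, not the real
# Jaeger/OpenTracing client library.

HEADER = "uber-trace-id"

def start_trace():
    """Create a new root span context."""
    return {"trace_id": uuid.uuid4().hex,
            "span_id": uuid.uuid4().hex[:16],
            "parent_id": "0"}

def inject(ctx, headers):
    """Write the span context into outgoing request headers."""
    headers[HEADER] = f"{ctx['trace_id']}:{ctx['span_id']}:{ctx['parent_id']}:1"
    return headers

def extract(headers):
    """Rebuild the caller's context server-side and start a child span."""
    trace_id, span_id, _parent, _flags = headers[HEADER].split(":")
    return {"trace_id": trace_id,
            "span_id": uuid.uuid4().hex[:16],
            "parent_id": span_id}

root = start_trace()
child = extract(inject(root, {}))
assert child["trace_id"] == root["trace_id"]   # same trace end to end
assert child["parent_id"] == root["span_id"]   # parent/child link preserved
```

Because every service repeats inject/extract on each hop, resource usage measured at any depth of the call graph can be rolled up to the originating request.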
Introducing MagnetoDB, a key-value storage service for OpenStack (Mirantis)
Introducing MagnetoDB, NoSQL database as a service for OpenStack. MagnetoDB acts as a key-value store, is tightly integrated with OpenStack, and yet is compatible with the Amazon DynamoDB API, and can be used as a drop-in replacement.
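As a rough illustration of the key-value contract such a service exposes, here is a toy in-memory table whose method names loosely follow DynamoDB's PutItem/GetItem semantics. This is a sketch of the interaction model only, not MagnetoDB's actual API, which is the DynamoDB-compatible HTTP interface.

```python
# Toy in-memory sketch of a DynamoDB-style key-value table: items are
# dicts addressed by a hash-key attribute. Purely illustrative of the
# PutItem/GetItem contract; not MagnetoDB's real interface.

class KeyValueTable:
    def __init__(self, hash_key):
        self.hash_key = hash_key   # attribute used as the primary key
        self.items = {}

    def put_item(self, item):
        """Store (or overwrite) an item under its hash-key value."""
        self.items[item[self.hash_key]] = dict(item)

    def get_item(self, key):
        """Fetch an item by key, or None if absent."""
        item = self.items.get(key)
        return dict(item) if item else None

users = KeyValueTable(hash_key="user_id")
users.put_item({"user_id": "u1", "name": "Ada", "plan": "pro"})
assert users.get_item("u1")["name"] == "Ada"
assert users.get_item("missing") is None
```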
Netty Notes Part 3 - Channel Pipeline and EventLoops (Rick Hightower)
Learning more about Netty helps me understand Vert.x better. Netty in Action is a great book. The threading model of Netty is very important to understanding event loops and reactive programming.
Scaling the logging pipeline requires better understanding of each phase behind the scenes.
Everything about Fluentd as an aggregator and Fluent Bit as a log forwarder.
In these slides, we go through Google Dapper, OpenTracing, and Jaeger on the way to OpenTelemetry. By reading and studying the history of Dapper, we can learn the experience and design theory of a large-scale distributed tracing system, and see how it affected other solutions, like OpenTracing and Jaeger.
We also discuss the differences between OpenTracing and Jaeger, and demonstrate how Jaeger works and what it looks like.
Afterwards, we talk about the future of OpenTracing and the new organization called OpenTelemetry: what its goal is and how it aims to achieve it.
Presentation of the CLIF open source project at OW2con'19, June 12-13, Paris (OW2).
CLIF Use Case by Orange presented at OW2con'19, June 12-13 2019 in Paris: "How CLIF saved the benchmarking campaign of Orange's new domestic IoT service".
This presentation is an overview of API design and management solutions suitable for cloud-native environments. The main focus lies on synchronous API design and microservices.
Kubernetes @ Squarespace (SRE Portland Meetup, October 2017) - Kevin Lynch
In this presentation I talk about our motivation for converting our microservices to run on Kubernetes. I discuss many of the technical challenges we encountered along the way, including networking issues, Java issues, monitoring and alerting, and managing all of our resources!
As ODP enters its third year we are seeing increased maturity in its capabilities as well as increased adoption by application writers. This talk highlights ODP developments since SFO15 and discusses what’s ahead for ODP in 2016 as it enters production use.
Kubernetes @ Squarespace: Kubernetes in the Datacenter - Kevin Lynch
This talk was presented at SRE NYC Meetup on August 16, 2017 at Squarespace HQ.
https://www.youtube.com/watch?v=UJ1QAKprVr4
As the engineering teams at Squarespace grow, we have been building more and more microservices. However, this has added operational strain as we try to shoehorn a growing, complex dynamic environment into our static data center infrastructure. We needed to rethink how we handle deployments, dependency management, resource allocation, monitoring, and alerting. Docker containerization and Kubernetes orchestration help us tackle many of these problems, but the journey has been challenging. In this talk, we’ll discuss the challenges of running Kubernetes in a datacenter and how we switched to a more SLA-focused alert structure than per-instance health, with Prometheus and Alertmanager.
First steps into developing an application as a suite of small services, and an analysis of the tools and architecture approaches to be used.
Topics covered:
1) What is a microservice architecture
2) Advantages in code procedures, team dynamics and scaling
3) How container services such as Docker assist in its implementation
4) How to deploy code in a microservices architecture
5) Container management tools and resource efficiency (Mesos, Kubernetes, AWS Container Service)
6) Scaling up
By PeoplePerHour team
Presented by CTO Spyros Lambrinidis & Senior DevOps Panagiotis Moustafellos @ Docker Athens Meetup, 18/02/2015.
HPC control systems are evolving into the future. This presentation looks at where this evolution may lead, and describes how the control system of the future might be constructed.
Building high performance microservices in finance with Apache Thrift (RX-M Enterprises LLC)
Apache Roadshow Chicago Talk on May 14, 2019
In this talk we’ll look at the ways Apache Thrift can solve performance problems commonly facing next generation applications deployed in performance sensitive capital markets and banking environments. The talk will include practical examples illustrating the construction, performance and resource utilization benefits of Apache Thrift. Apache Thrift is a high-performance cross platform RPC and serialization framework designed to make it possible for organizations to specify interfaces and application wide data structures suitable for serialization and transport over a wide variety of schemes. Due to the unparalleled set of languages supported by Apache Thrift, these interfaces and structs have similar interoperability to REST type services with an order of magnitude improvement in performance. Apache Thrift services are also a perfect fit for container technology, using considerably fewer resources than traditional application server style deployments. Decomposing applications into microservices, packaging them into containers and orchestrating them on systems like Kubernetes can bring great value to an organization; however, it can also take a very fast monolithic application and turn it into a high latency web of slow, resource hungry services. Apache Thrift is a perfect solution to the performance and resource ills of many microservice based endeavors.
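One way to see where the claimed performance and resource headroom comes from is to compare a fixed binary layout with a textual encoding of the same record. The stdlib sketch below illustrates the general idea only; Thrift's actual binary and compact protocols add field ids, type headers, and varint encodings of their own.

```python
import json
import struct

# Compare a fixed binary layout with JSON for the same trade record.
# Illustrates why binary serialization is smaller and cheaper to parse;
# not Thrift's actual wire protocol.

trade = {"id": 123456, "price": 101.25, "qty": 500}

as_json = json.dumps(trade).encode("utf-8")
# ">qdi" = big-endian int64 id + float64 price + int32 qty = 20 bytes
as_binary = struct.pack(">qdi", trade["id"], trade["price"], trade["qty"])

assert len(as_binary) == 20
assert len(as_binary) < len(as_json)     # binary form is a fraction of the JSON size

# Round-trip: the binary form decodes back to the same values.
trade_id, price, qty = struct.unpack(">qdi", as_binary)
assert (trade_id, price, qty) == (123456, 101.25, 500)
```

The fixed layout also decodes without any text scanning, which is part of why RPC frameworks with schema-driven binary serialization tend to beat REST-over-JSON by the margins the talk describes.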
Big data Argentina meetup 2020-09: Intro to Presto on Docker (Federico Palladoro)
We will talk about how we are migrating our Presto clusters from AWS EMR to Docker using production-grade orchestrators, considering cluster management, configuration and monitoring. We will compare HashiCorp Nomad and Kubernetes as a base solution.
USENIX LISA15: How TubeMogul Handles over One Trillion HTTP Requests a Month (Nicolas Brousse)
TubeMogul grew from a few servers to over two thousand servers, handling over one trillion HTTP requests a month, each processed in less than 50ms. To keep up with this fast growth, the SRE team had to implement an efficient continuous delivery infrastructure that allowed over 10,000 Puppet deployments and 8,500 application deployments in 2014. In this presentation, we will cover the nuts and bolts of the TubeMogul operations engineering team and how they overcame challenges.
Netflix Open Source Meetup Season 4 Episode 2 (aspyker)
In this episode, we will take a close look at 2 different approaches to high-throughput/low-latency data stores, developed by Netflix.
The first, EVCache, is a battle-tested distributed memcached-backed data store, optimized for the cloud. You will also hear about the road ahead for EVCache as it evolves into an L1/L2 cache over RAM and SSDs.
The second, Dynomite, is a framework that makes any non-distributed data store distributed. Netflix's first implementation of Dynomite is based on Redis.
Come learn about the products' features and hear from Thomson Reuters, Diego Pacheco from Ilegra, and other third-party speakers, internal and external to Netflix, on how these products fit in their stack and roadmap.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024 (APNIC)
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
# Internet Security: Safeguarding Your Digital World
In the contemporary digital age, the internet is a cornerstone of our daily lives. It connects us to vast amounts of information, provides platforms for communication, enables commerce, and offers endless entertainment. However, with these conveniences come significant security challenges. Internet security is essential to protect our digital identities, sensitive data, and overall online experience. This comprehensive guide explores the multifaceted world of internet security, providing insights into its importance, common threats, and effective strategies to safeguard your digital world.
## Understanding Internet Security
Internet security encompasses the measures and protocols used to protect information, devices, and networks from unauthorized access, attacks, and damage. It involves a wide range of practices designed to safeguard data confidentiality, integrity, and availability. Effective internet security is crucial for individuals, businesses, and governments alike, as cyber threats continue to evolve in complexity and scale.
### Key Components of Internet Security
1. **Confidentiality**: Ensuring that information is accessible only to those authorized to access it.
2. **Integrity**: Protecting information from being altered or tampered with by unauthorized parties.
3. **Availability**: Ensuring that authorized users have reliable access to information and resources when needed.
## Common Internet Security Threats
Cyber threats are numerous and constantly evolving. Understanding these threats is the first step in protecting against them. Some of the most common internet security threats include:
### Malware
Malware, or malicious software, is designed to harm, exploit, or otherwise compromise a device, network, or service. Common types of malware include:
- **Viruses**: Programs that attach themselves to legitimate software and replicate, spreading to other programs and files.
- **Worms**: Standalone malware that replicates itself to spread to other computers.
- **Trojan Horses**: Malicious software disguised as legitimate software.
- **Ransomware**: Malware that encrypts a user's files and demands a ransom for the decryption key.
- **Spyware**: Software that secretly monitors and collects user information.
### Phishing
Phishing is a social engineering attack that aims to steal sensitive information such as usernames, passwords, and credit card details. Attackers often masquerade as trusted entities in email or other communication channels, tricking victims into providing their information.
### Man-in-the-Middle (MitM) Attacks
MitM attacks occur when an attacker intercepts and potentially alters communication between two parties without their knowledge. This can lead to the unauthorized acquisition of sensitive information.
### Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) Attacks
Multi-cluster Kubernetes Networking - Patterns, Projects and Guidelines (Sanjeev Rampal)
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of Multi-Cluster Kubernetes Networking architectures with a focus on 4 key topics:
1) Key patterns for Multi-cluster architectures
2) Architectural comparison of several OSS/CNCF projects to address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations & guidelines for adopting/deploying these solutions.
1. Wireless Communication System_Wireless communication is a broad term that i... (JeyaPerumal1)
Wireless communication involves the transmission of information over a distance without the help of wires, cables or any other forms of electrical conductors.
Wireless communication is a broad term that incorporates all procedures and forms of connecting and communicating between two or more devices using a wireless signal through wireless communication technologies and devices.
Features of Wireless Communication
The evolution of wireless technology has brought many advancements with its effective features.
The transmitted distance can be anywhere between a few meters (for example, a television's remote control) and thousands of kilometers (for example, radio communication).
Wireless communication can be used for cellular telephony, wireless access to the internet, wireless home networking, and so on.
3. Context: WoT assumptions
● Servient
○ Part of the Web of Things Architecture
○ Represents a Thing on the Web
○ Component-based architecture that addresses different concerns
● Avatar
○ Defined in the French ASAWoO project
○ One possible implementation of a servient
○ Defines a vocabulary that distinguishes
■ Capabilities
■ Functionalities
● Embedded WoT server
○ Standards-compliant
○ Partly implements the servient / avatar architecture
○ Exposes RESTful capabilities
4. Requirements
● Be based on Web standards
○ Protocols: HTTP, CoAP
○ Communication scheme: REST
● Ensure application-level Interoperability and Discoverability
○ Provide Semantic resource descriptions
○ RDF, RDF-S, OWL
● Fit in constrained devices
○ Ease of Deployment and Scalability
○ Constraints:
1. Computing power: CPU, Memory, Storage
2. Network
3. Energy Efficiency
9. Constrained Device software components
● Thing Configuration: A non-volatile memory space where the state of the device’s services is stored.
● Semantic Server Repository: A space where the device can store and read its Semantic descriptions.
● Service Model: In-memory representation of the services currently provided by the sensors and actuators.
● Processing buffers: Handle requests and generate responses
○ CoAP headers
○ JSON payloads
[Figure: RAM layout of a Constrained Thing, holding the Thing Configuration, Semantic Server Repository, Service Model, and Processing buffers]
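The components above can be sketched as plain data structures. This is an illustrative Python model only; the actual implementation targets a C/C++ microcontroller firmware, and the class and field names here are assumptions made for the sketch.

```python
# Illustrative model of the constrained device's software components.
# Names are assumptions for this sketch; the real firmware implements
# these as fixed buffers and non-volatile memory on the microcontroller.

class ThingConfiguration:
    """Non-volatile store of each service's enabled/disabled state."""
    def __init__(self):
        self.enabled_services = {"temperatureSensor": True, "ledActuator": False}

class SemanticServerRepository:
    """Space where the device stores and reads its semantic descriptions."""
    def __init__(self):
        self.descriptions = {}   # resource URI -> JSON-LD fragment

class ServiceModel:
    """In-memory representation of the services currently provided."""
    def __init__(self, config):
        self.services = [name for name, on in config.enabled_services.items() if on]

config = ThingConfiguration()
model = ServiceModel(config)
assert model.services == ["temperatureSensor"]   # only enabled services are live
```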
10. Application Protocol
CoAP (Constrained Application Protocol)
- Analogous to HTTP request semantics (GET/POST/PUT…)
- Binary headers
➔ Support for REST implementations
- On top of UDP (but also 6LoWPAN)
- Blockwise transfer: fixed-size packet exchange
➔ Suitable for constrained devices
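Blockwise transfer, mentioned above, can be sketched in stdlib Python: a payload too large for one constrained-device packet is cut into fixed-size blocks, each tagged with its block number and a "more" flag. This mimics the idea of CoAP's Block option (RFC 7959), not its exact on-the-wire bit layout.

```python
# Sketch of CoAP-style blockwise transfer. Each block carries
# (block number, more-flag, chunk); the receiver reassembles them.
# Conceptual only; real CoAP encodes this in the Block2 option.

BLOCK_SIZE = 16  # bytes per block; real CoAP uses powers of two from 16 to 1024

def to_blocks(payload: bytes):
    """Split a payload into numbered fixed-size blocks."""
    blocks = []
    for offset in range(0, len(payload), BLOCK_SIZE):
        chunk = payload[offset:offset + BLOCK_SIZE]
        more = offset + BLOCK_SIZE < len(payload)   # more blocks follow?
        blocks.append((offset // BLOCK_SIZE, more, chunk))
    return blocks

def from_blocks(blocks):
    """Reassemble the payload, checking the final block's more-flag."""
    assert not blocks[-1][1]                        # last block says "no more"
    return b"".join(chunk for _, _, chunk in blocks)

payload = b'{"@id":"vocab:temperatureSensor","value":21.5}'
blocks = to_blocks(payload)
assert len(blocks) == 3                             # 46 bytes -> 16 + 16 + 14
assert from_blocks(blocks) == payload
```

Keeping every exchange at a fixed small size is what lets the device serve RDF descriptions from a few hundred bytes of buffer space.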
11. Constrained Device processing workflow
[Workflow diagram]
Mode 1 (Startup): Boot → Device Configuration Reader (reads the Thing Config.) → List of Enabled Services → Service Parser (using the Sem. Server Repo. and Service Templates) → Service Model
Mode 2 (Requesting): CoAP Request Buffer → Analyser (Service URI) → Completeness Detection → Sensor/Act. → Results Buffer → Serializing CoAP Buffer → CoAP Response via Blockwise Transfer
13. Semantic API documentation
A Thing exposes a list of Capabilities.
● Capability Properties
○ Enabled Status
○ Command
○ Value
Each device describes its API through a complete Hydra API documentation
➔ Resource expensive
➔ Use Linked Data to reference common parts of the API → ASAWoO ontology
Hosted @ http://liris.cnrs.fr/asawoo/ontology/asawoo-hydra.jsonld
14. Hydra-based documentation example
{
  "@id": "vocab:temperatureSensor",
  "@type": ["hydra:Resource", "asawoo:Capability"],
  "description": "Retrieves a temperature.",
  "hydra:supportedOperation": [
    {
      "@id": "_:senseTemp",
      "@type": "asawoo:capability_retrieve",
      "returns": "vocab:Temperature"
    },[...]
  ]
},[...]
{
  "@id": "vocab:Temperature",
  "@type": ["http://ontology.tno.nl/saref/#Temperature", "asawoo:Value"],
  "description": "The value returned for a Temperature reading, as specified in SAREF."
},[...]
Hosted @ https://github.com/ucbl/arduinoRdfServer/blob/master/example/arduino.jsonld
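A client can consume such a description with nothing more than a JSON parser. The sketch below walks JSON-LD fragments shaped like the example above and lists the operations each capability supports; `find_operations` is an illustrative helper invented for this sketch, not part of any Hydra library.

```python
import json

# Illustrative client-side use of a Hydra-based capability description:
# map each asawoo:Capability to the @type of its supported operations.
# `find_operations` is a helper made up for this sketch.

doc = json.loads("""
[
  {
    "@id": "vocab:temperatureSensor",
    "@type": ["hydra:Resource", "asawoo:Capability"],
    "description": "Retrieves a temperature.",
    "hydra:supportedOperation": [
      {
        "@id": "_:senseTemp",
        "@type": "asawoo:capability_retrieve",
        "returns": "vocab:Temperature"
      }
    ]
  }
]
""")

def find_operations(fragments):
    """Map capability @id -> list of supported operation @types."""
    caps = {}
    for frag in fragments:
        if "asawoo:Capability" in frag.get("@type", []):
            ops = frag.get("hydra:supportedOperation", [])
            caps[frag["@id"]] = [op["@type"] for op in ops]
    return caps

assert find_operations(doc) == {
    "vocab:temperatureSensor": ["asawoo:capability_retrieve"]
}
```

Because shared terms resolve to the ASAWoO ontology via Linked Data, a client that understands `asawoo:capability_retrieve` once can drive any device that advertises it.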
19. Results
Feasibility is achieved on a Constrained Device using the CoAP standard with a REST architecture:
● Several services can be instantiated at once.
● Embedded RDF graphs are used by both client and server to describe an API.
● Configuration persists through power cuts.

Component                         RAM Footprint   Cumulative Available RAM
Algorithm & Class Instantiation   968B            1080B (52%)
Service Template (x3)             279B            725B (35.4%)
Static JSON Buffers               200B            525B (25.63%)
Package-processing variables      24B             501B (24.46%)
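The percentages in the table are consistent with a 2048-byte SRAM budget (an ATmega328-class microcontroller, for instance). The 2KB total is inferred from the numbers, not stated on the slide; a quick arithmetic check:

```python
# Check that the table's "Cumulative Available RAM" percentages are
# consistent with a 2048-byte SRAM total (inferred, not stated).

TOTAL = 2048  # assumed SRAM budget in bytes

# (available bytes, percentage printed on the slide)
rows = [(1080, 52.0), (725, 35.4), (525, 25.63), (501, 24.46)]

for free_bytes, slide_pct in rows:
    computed = 100 * free_bytes / TOTAL
    assert abs(computed - slide_pct) < 1.0   # matches up to the slide's rounding
```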
21. Conclusion
● POC about semantic interoperability in the WoT on constrained devices
● Next step: intelligent clients that can discover and use such WoT servers
● A step toward a decentralized semantic WoT?
○ Aggregating such servers can result in complex applications by pushing hypermedia possibilities further