Juraci Paixão Kröhling - All you need to know about OpenTelemetry (Juliano Costa)
OpenTelemetry is one of the newest projects in the realm of Observability at the CNCF and is already the second most active project there. In this session, Juraci Paixão Kröhling will talk about the different subprojects and how to get started using them. Even if you heard about OpenTelemetry before, you'll leave this session with a better understanding of what this is all about, the several faces of OpenTelemetry, and what you can do to make your projects more observable.
Jaeger and OpenTracing - Cloud Native Computing Foundation (CNCF) meetup Zurich (Pavol Loffay)
The document summarizes Jaeger and OpenTracing. It discusses distributed tracing concepts and how Jaeger is a distributed tracing system that uses the OpenTracing API. Jaeger uses technologies like Go, Cassandra, and React and can trace services in architectures like Istio and microservices. It also demonstrates traces using Jaeger's UI.
Tracing 2000+ polyglot microservices at Uber with Jaeger and OpenTracing (Yuri Shkuro)
Slides from my talk & demo at the Go NYC Meetup, 19-Jan-2017.
We present Jaeger, Uber’s open source distributed tracing system, featuring Go backend, React based UI, and OpenTracing API support. We show examples of instrumenting application code for tracing and using distributed context propagation to attribute backend resource usage to top level consumers.
Monitoring to the Nth tier: The state of distributed tracing in 2016 (AppNeta)
This document discusses distributed tracing and monitoring of distributed systems. It provides examples of common distributed architectures and outlines challenges with instrumentation, trace ID propagation, and extracting valuable monitoring information. Distributed tracing allows understanding request flows, latencies, errors and more across tiers. While initially for performance, tracing provides a rich data set for analyzing individual components and custom metrics. Future directions include context propagation for things beyond performance like authentication and flow control.
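The trace ID propagation challenge described above can be sketched in a few lines. This is a minimal toy, not any real tracing library: the header name and function are hypothetical, and the point is only that every tier reuses the incoming trace ID so spans from different services can later be joined into one request flow.

```python
# Minimal sketch of trace ID propagation across tiers (hypothetical names):
# each service reuses the caller's trace ID so its spans can be correlated.
import uuid

TRACE_HEADER = "X-Trace-Id"  # assumed header name for this sketch

def handle_request(headers: dict) -> dict:
    """Reuse the caller's trace ID, or start a new trace at the edge."""
    trace_id = headers.get(TRACE_HEADER) or uuid.uuid4().hex
    # ... do work here, recording spans tagged with trace_id ...
    # Propagate the same ID on every downstream call.
    return {TRACE_HEADER: trace_id}

edge = handle_request({})          # edge service starts the trace
downstream = handle_request(edge)  # downstream tier reuses the same ID
assert edge[TRACE_HEADER] == downstream[TRACE_HEADER]
```

Real systems add span IDs, parent links, and sampling flags on top of this, but the core contract is the same: never mint a new trace ID when one arrives from upstream.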
Open Tracing, to order and understand your mess. - ApiConf 2017 (Gianluca Arbezzano)
Think about how many API calls your applications were making 3-4 years ago, and how many integrations and different services a request now crosses before it returns to its final destination. How do you know which step of your pipeline is taking too much time? What is taking 2 seconds to answer? Is it the authentication service? Maybe it's the invoice generation service, or the notification platform. OpenTracing is a vendor-neutral, open source distributed tracing standard that helps you understand bottlenecks and profile requests from the moment they arrive from the end user. In an ecosystem where microservices and as-a-service concepts keep growing, this can be a real challenge. During this presentation, we will see how it works from a general point of view and then land in some real implementations, examples, and a demo.
Distributed tracing - get a grasp on your production (nklmish)
Slides from my presentation on distributed tracing, explaining what latency is and why it matters. We take a look at OpenZipkin and its concepts, such as how the core annotations work and what tags and logs are, followed by a demo application created using Golang and Java (Spring Boot, Spring Cloud Sleuth, Zipkin). You can find the source code here:
https://github.com/nklmish/go-distributed-tracing-demo
https://github.com/nklmish/java-distributed-tracing-demo
In these slides, we go from Google's Dapper and OpenTracing through Jaeger to OpenTelemetry. By studying the history of Dapper, we can learn from the experience and design theory behind a large-scale distributed tracing system and see how it influenced later solutions such as OpenTracing and Jaeger.
We also discuss the difference between OpenTracing and Jaeger, and demonstrate how Jaeger works and what it looks like.
Finally, we talk about the future of OpenTracing, the new organization called OpenTelemetry, what its goals are, and how it plans to achieve them.
Tracing Micro Services with OpenTracing (Hemant Kumar)
Tracing in the world of microservices has become standard, with people using distributed tracers like Zipkin, Jaeger, Appdash, etc. But with so many different tracers, it's confusing to choose one and then painful to replace it. That's where OpenTracing comes in: OT provides a consistent, vendor-neutral API that lets users choose whatever distributed tracer they need and swap it out with just an O(1) operation.
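The "O(1) swap" idea can be illustrated with a toy sketch. These classes are hypothetical stand-ins, not the real opentracing package: the point is that application code depends only on a common tracer interface, so changing backends is a single-line change at wiring time.

```python
# Toy sketch of the vendor-neutral tracer idea (hypothetical classes):
# business code only calls start_span(), never a vendor-specific API.
class Span:
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        pass  # a real tracer would record the finish timestamp here

class ZipkinTracer:
    backend = "zipkin"
    def start_span(self, name):
        return Span(name)

class JaegerTracer:
    backend = "jaeger"
    def start_span(self, name):
        return Span(name)

tracer = JaegerTracer()  # the O(1) change: swap in ZipkinTracer() here

def checkout():
    # application code is identical regardless of the backend chosen above
    with tracer.start_span("checkout") as span:
        return span.name

assert checkout() == "checkout"
```

Because `checkout()` never names a vendor, replacing Jaeger with Zipkin (or anything else implementing the same interface) touches one line of configuration, not the instrumented code.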
Nagios Conference 2013 - Nick Scott - Nagios Network Analyzer (Nagios)
Nick Scott's presentation on Nagios Network Analyzer.
The presentation was given during the Nagios World Conference North America held Sept 20-Oct 2nd, 2013 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
Distributed Tracing at UBER Scale: Creating a treasure map for your monitori... (Yuri Shkuro)
This document discusses distributed tracing at Uber scale. It explains why companies implement distributed tracing, including for distributed transaction monitoring, performance optimization, root cause analysis, and service dependency analysis. It also discusses challenges with instrumentation and context propagation across services. Finally, it outlines strategies Uber used to encourage adoption of distributed tracing across many teams, such as appealing to developers' interests and measuring adoption through trace quality scores.
Distributed tracing with OpenTracing and Jaeger @ getstream.io (Max Klyga)
Build Scalable Newsfeeds & Activity Streams - https://getstream.io
Distributed tracing solves the problem of identifying performance bottlenecks in a distributed system.
This presentation describes how Stream uses OpenTracing and Jaeger for distributed tracing in our system.
The monolith to cloud-native, microservices evolution has driven a shift from monitoring to observability. OpenTelemetry, a merger of the OpenTracing and OpenCensus projects, is enabling Observability 2.0. This talk covers the fundamental concepts of observability and then demonstrates how to instrument your applications using the OpenTelemetry libraries.
This document lists several SDN projects, controllers, simulators, and research areas. It describes Python and C++ based OpenFlow controllers like SNAC and HELIOS. It also lists simulators that support SDN including POX with GEPHI for SDN virtualization and OpenFlow VM simulation. Finally, it outlines significant research areas for SDN PhD projects such as software-defined wireless networking and software-defined mobile networks.
This document provides an overview of OpenTelemetry for operators. It discusses some of the limitations of current observability platforms and how OpenTelemetry addresses these issues. It introduces the OpenTelemetry project which combines distributed tracing, metrics, and logging APIs. It describes the OpenTelemetry Collector which receives, processes, and exports telemetry data. It provides examples of Collector configuration and running it in production. It also discusses some innovations in the observability space from vendors like Dynatrace, New Relic, Splunk SignalFX, and others.
This document discusses distributed tracing and how it can help solve problems caused by microservices. It covers what distributed tracing is, how it works, popular implementations like OpenTracing and Zipkin, and best practices for using distributed tracing. OpenTracing is introduced as a vendor-neutral standard that helps library developers implement tracing and defines common formats for propagating traces between services. Code examples are provided for collecting trace data using OpenTracing and Zipkin.
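The common propagation format mentioned above can be made concrete with Zipkin's B3 headers. The header names (`X-B3-TraceId`, `X-B3-SpanId`, `X-B3-ParentSpanId`, `X-B3-Sampled`) are the real B3 convention, but the helper function below is an illustrative sketch, not code from any library.

```python
# Sketch of Zipkin-style B3 header propagation: real header names,
# hypothetical helper. Each hop keeps the trace ID, mints a fresh span ID,
# and records the caller's span ID as its parent.
import secrets

def new_child_context(incoming: dict) -> dict:
    """Extract the caller's B3 context and derive this hop's context."""
    trace_id = incoming.get("X-B3-TraceId") or secrets.token_hex(16)
    parent_id = incoming.get("X-B3-SpanId")
    ctx = {
        "X-B3-TraceId": trace_id,             # constant for the whole trace
        "X-B3-SpanId": secrets.token_hex(8),  # fresh ID for this span
        "X-B3-Sampled": incoming.get("X-B3-Sampled", "1"),
    }
    if parent_id:
        ctx["X-B3-ParentSpanId"] = parent_id  # links this span to the caller
    return ctx

root = new_child_context({})       # first service: no incoming context
child = new_child_context(root)    # downstream service: joins the trace
assert child["X-B3-TraceId"] == root["X-B3-TraceId"]
assert child["X-B3-ParentSpanId"] == root["X-B3-SpanId"]
```

Instrumentation libraries inject these headers on outgoing HTTP calls and extract them on incoming ones, which is exactly the cross-service propagation the best practices above are concerned with.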
Recent Advances in Machine Learning: Bringing a New Level of Intelligence to ... (Brocade)
Presentation by Brocade Chief Scientist and Fellow, David Meyer, given at Orange Gardens July 2016. What is Machine Learning and what is all the excitement about?
An associated blog is available here: http://community.brocade.com/t5/CTO-Corner/Networking-Meets-Artificial-Intelligence-A-Glimpse-into-the-Very/ba-p/88196
- OpenTelemetry is a unified observability library that provides APIs, SDKs, and middleware for generating and exporting traces, metrics, and logs. It combines OpenCensus and OpenTracing into a single open source project.
- The OpenTelemetry architecture includes APIs, SDKs, a collector, and exporters. The SDK implements the API to generate traces and metrics. The collector receives telemetry data from the SDK and exports it to backends like Jaeger and Prometheus.
- The document demonstrates how to instrument a Go application with OpenTelemetry tracing by creating spans from HTTP requests and exporting them to Jaeger for analysis.
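The instrumentation pattern in the last bullet can be sketched without the real SDK. The classes below are toys standing in for the OpenTelemetry tracer and a Jaeger exporter: wrap each HTTP request in a span, record attributes, and hand the finished span to an exporter when it closes.

```python
# Toy sketch of span-per-request instrumentation (hypothetical classes,
# not the OpenTelemetry SDK): a span wraps each request and is "exported"
# when it finishes.
import time

finished_spans = []  # stand-in for an exporter shipping spans to Jaeger

class _Span:
    def __init__(self, name):
        self.name, self.attributes = name, {}
    def __enter__(self):
        self.start = time.monotonic()
        return self
    def __exit__(self, *exc):
        self.duration = time.monotonic() - self.start
        finished_spans.append(self)  # export on completion

class Tracer:
    def start_span(self, name):
        return _Span(name)

tracer = Tracer()

def handle(method, path):
    # one span per incoming HTTP request, named after method and route
    with tracer.start_span(f"{method} {path}") as span:
        span.attributes["http.method"] = method
        return 200

assert handle("GET", "/users") == 200
assert finished_spans[-1].name == "GET /users"
```

In the real SDK the exporter batches spans and sends them to a collector or directly to Jaeger, but the lifecycle (start on request, finish and export on response) is the same.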
Cryptography Challenges for Computational Privacy in Public CloudsSashank Dara
Cryptography challenges for computational privacy in public clouds.
Major setbacks include privacy concerns that are a barrier to cloud adoption. Computational privacy concerns how to keep data private during processing in untrusted clouds. Traditional cryptography cannot be used. Theoretical approaches exist like fully homomorphic encryption and secure multiparty computation but have limitations preventing practical use. An overarching architecture is needed to blanket solve computational privacy requirements.
Redfine aims to extend OpenRefine to make use of the Redlink APIs for a more advanced and performant solution for publishing Linked Data.
A proposal by Redlink GmbH to the Fusepool Open Call for Developers at the Data|Hack|Award 2014.
Microsoft advises that .NET Core 3.0 is the future of the ecosystem and that programmers should use it for all new development projects. They're making a ton of innovative changes to .NET Core, adding more workloads and helping you be more productive, quicker, and better in your applications. In this presentation we look forward to what's new in .NET and C# for 2019.
ETCDEV is developing a solution called Orbita to scale Ethereum Classic through sidechains. Orbita will allow separate sidechains called "orbitas" to run applications and transactions more cheaply and quickly than the mainnet while maintaining security through periodic checkpoints syncing state to the mainnet. Key components of Orbita will include programming interfaces, documentation, base tools, separate orbitas per application, checkpointing for security between orbitas, oracle contracts for reading across orbitas, and atomic swaps for writing across orbitas. The current prototype uses Geth with checkpointing and ETCDEV plans to develop Orbita further and do a pilot project integrating it with OpenStack for decentralized user authentication.
Solving the Hidden Costs of Kubernetes with Observability (DevOps.com)
Kubernetes has enabled software organizations to realize the benefits of microservices through its convenient and powerful abstractions. Deploying, scaling, and running distributed software at scale is much easier through the use of Kubernetes.
However, these benefits have not come without costs compared to traditional software operations. Spiraling monitoring expenses, the creation of single points of human failure, and a lack of understanding of service dependencies all contribute to significant hidden costs associated with running software with Kubernetes.
In this talk, we’ll discuss how observability addresses these costs and helps you quantify and understand them. You’ll learn how new open source tools such as OpenTelemetry can help you understand performance of cloud-native software, and how you can easily get started using them today. Come be a part of the future of cloud-native observability!
This document discusses tools and techniques for swarm mobile robot navigation in fenced areas. It describes using MATLAB software and the KIKS simulator to design a fuzzy logic controller for navigation. The project technique involves building an environment with obstacles, designing the fuzzy logic controller with input and output variables and rules, implementing a navigation system based on the controller, and testing navigation with and without obstacles.
The monolith to cloud-native, microservices evolution has driven a shift from monitoring to observability. OpenTelemetry, a merger of the OpenTracing and OpenCensus projects, is enabling Observability 2.0. This talk gives an overview of the OpenTelemetry project and then outlines some production-proven architectures for improving the observability of your applications and systems.
The road ahead for scientific computing with Python (Ralf Gommers)
1) The PyData ecosystem, including NumPy, SciPy, and scikit-learn, faces technical challenges related to fragmentation of array libraries, lack of parallelism, packaging constraints, and performance issues for non-vectorized algorithms.
2) There are also social challenges around sustainability of key projects due to limited funding and maintainers, tensions with proprietary involvement from large tech companies, and academia's role in supporting open-source scientific software.
3) NumPy is working to address these issues through efforts like the Array API standardization, improved extensibility and performance, and growing autonomous teams and diversity within the community.
Summit 16: Applying Machine Learning to Intent-based Networking and Nfv Scali... (OPNFV)
The talk will highlight how Machine Learning techniques can be used to address different aspects of the operation and control of NFV and propose future OPNFV activities in this area. First, Diego will introduce how Machine Learning is being applied by the CogNet project to address intent-based networking, and discuss the architecture defined there as a potential framework for future ML integration. Glen will demonstrate a policy-based system for automating VNF scaling using performance data collection and analytics with machine learning (ML), based on OPNFV Brahmaputra and the underlying OpenStack telemetry system (Ceilometer), as well as the open-source Apache Kafka, Apache Zookeeper and Apache Spark streaming and MLlib libraries. Available as open-source, it combines predictive and reactive inputs to make the VNF scaling decision and trigger action in the MANO stack. The presentation will provide an overview of the system, demonstrate the VNF auto-scaling use case and discuss how this system will fit into a future OPNFV release.
This document discusses IoT application development and the key steps involved: sensor data collection, communication and networking, processing, and analysis. It outlines some of the most popular programming languages used for IoT development in 2015 and 2017, including Python. Several essential Python libraries for IoT are then described, such as mraa for GPIO access, sockets for networking, mysqldb for databases, Numpy for scientific computing, and matplotlib, pandas, opencv, tkinter, tensorflow, requests, and paho-mqtt for various functions like data visualization, analysis, image processing, GUI development, machine learning, HTTP calls, and MQTT protocol usage. The document concludes with an explanation of collecting data from a DHT sensor.
Deep learning libraries TensorFlow and PyTorch are commonly used for machine learning. TensorFlow was developed by Google and has a faster compilation time than Keras or PyTorch. It supports CPUs and GPUs and uses data flow graphs with nodes and edges. PyTorch was originally developed as a Python wrapper for Torch and is pythonic in nature with dynamic computation graphs. Both support tensor computations and automatic differentiation, with PyTorch having richer APIs but fewer built-in tools than TensorFlow.
This document provides an overview of architectural considerations for smart object networking. It discusses the history behind the document and parallel work done in other standards bodies. It then covers four common communication patterns for smart objects (device-to-device, device-to-cloud, device-to-application layer gateway, and back-end data sharing). The document summarizes key areas that lack standardization and discusses security recommendations from the IETF.
SiLA: Making the standard fit for the future and adapting an open-source coll...Gáspár Incze
Standards in Laboratory Automation (SiLA) is a vertical industry standard that promotes easy setup and interoperability between laboratory devices and management systems. To cope with increasing market pressures and rapid changes in the technological landscape, the development of standards needs to become more agile. Furthermore, standards themselves must become increasingly modular to provide adequate flexibility. By analysing the SiLA 1.3 specifications and engaging a wide range of stakeholders in the life science industry, this Master's thesis aims to present valuable insights and proposals regarding the improvement of the standard and its development. This improvement is achieved through a holistic approach that encompasses technical features, business motivations and processes, focusing on collaboration aspects. Using design science, an existing open-source solution has been extended to improve slow, non-transparent collaboration and engage more stakeholders in the standardisation process using a single common platform.
DOI: 10.13140/RG.2.2.24278.65606
More information about SiLA:
http://www.sila-standard.org/about-sila/
From leading IoT Protocols to Python Dashboarding_finalLukas Ott
First, I would like to give an overview of common IoT protocols:
- CoAP (Constrained Application Protocol): close to HTTP/REST.
- MQTT (Message Queue Telemetry Transport): pub/sub with a broker and well-defined quality-of-service levels. Recent additions include Eclipse Amlen (formerly the core of the IBM Watson IoT platform) and Eclipse Sparkplug, which standardizes topics and payloads for interoperability.
- DDS (Data Distribution Service): pub/sub without a broker; common in drones and robotics.
- LwM2M (Lightweight M2M): runs on top of CoAP or MQTT; defines standard sets of payloads for sensors.
- Zenoh (https://zenoh.io/): a pub/sub protocol that combines the advantages of DDS and MQTT.
(This list is not complete and does not cover industrial and building automation protocols.)
Then I would like to show the leading-edge IoT protocol Zenoh, including saving Zenoh payloads to Apache IoTDB. After that, I would like to dive into Panel and the awesome capabilities of Apache ECharts.
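MQTT's interoperability story rests on its hierarchical topic names, which Sparkplug builds on. As a small illustration, here is a sketch of the MQTT specification's wildcard matching rules — '+' matches exactly one topic level, '#' matches the remainder — in plain Python (a conceptual sketch, not any broker's implementation):

```python
def topic_matches(filter_str, topic):
    """Check an MQTT topic against a filter with '+' and '#' wildcards."""
    f_parts = filter_str.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                      # multi-level wildcard: matches everything from here on
            return True
        if i >= len(t_parts):
            return False
        if f != "+" and f != t_parts[i]:  # '+' matches exactly one level
            return False
    return len(f_parts) == len(t_parts)

print(topic_matches("sensors/+/temperature", "sensors/room1/temperature"))  # True
print(topic_matches("sensors/#", "sensors/room1/humidity"))                 # True
print(topic_matches("sensors/+", "sensors/room1/humidity"))                 # False
```

The topic names used here are made up for the example; the wildcard semantics follow the MQTT spec.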
Revamping Mailjet API documentation @ ParisAPI meetupMailjet
Mailjet recently released a fully revised version of its API documentation. This talk shares the lessons we learned while building it.
Python is a popular programming language that can be used for a variety of tasks such as web development, software development, mathematics, and system scripting. It is an interpreted, object-oriented, high-level programming language with dynamic semantics. Python has a simple syntax and is easy to learn, which has contributed to its popularity among developers. It has a large standard library and supports many third-party libraries for specialized tasks.
An Incomplete Data Tools Landscape for Hackers in 2015Wes McKinney
Wes McKinney gives an overview of the current data analysis tools landscape in Python and R. He discusses essential Python packages like NumPy, pandas, and scikit-learn. For R, he covers packages in the "Hadley stack" like dplyr and ggplot2. IPython/Jupyter notebooks are also mentioned as a platform for interactive data analysis across languages. The talk aims to highlight trends, opportunities, and challenges in the open source data science tool ecosystem.
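As a tiny taste of the split-apply-combine style that pandas popularized (the data here is made up for illustration):

```python
import pandas as pd

# Group rows by a key column and aggregate — the core pandas workflow
# discussed alongside NumPy and scikit-learn in the talk.
df = pd.DataFrame({
    "language": ["Python", "R", "Python", "R"],
    "downloads": [120, 80, 200, 100],
})
totals = df.groupby("language")["downloads"].sum()
print(totals["Python"], totals["R"])  # 320 180
```

The equivalent in the "Hadley stack" would be a `dplyr` `group_by` followed by `summarise`.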
【EPN Seminar Nov.10. 2015】 Key note – Open innovation and Engineering communityシスコシステムズ合同会社
The document summarizes how the IETF process works to develop Internet standards and protocols. It discusses that the IETF is a community that works to describe and solve issues on the Internet, often publishing documents called RFCs. It provides examples of how technologies like Secure Shell and TCP congestion control were developed through discussions, Internet-Drafts, and published RFCs. It emphasizes that the IETF process relies on open discussion and consensus among participants to select solutions that work in practice for standards.
This document provides an introduction to Filecoin, including:
1) Core concepts of Filecoin such as using IPFS for data retrieval and Filecoin for data persistence and verifiability on a decentralized storage network.
2) Examples of how storage helpers can simplify storing and retrieving data on Filecoin by handling dealmaking and verification.
3) An overview of the different layers that make up a Web3-enabled architecture using Filecoin and IPFS for decentralized storage.
Labmeeting - 20150831 - Overhead and Performance of Low Latency Live Streamin...Syuan Wang
This document summarizes research into reducing latency for live video streaming using MPEG-DASH. It introduces MPEG-DASH and how using HTTP chunked transfer encoding and Gradual Decoding Refresh encoding can help lower latency compared to basic DASH. The paper describes experiments conducted to generate and distribute live content using these techniques and evaluate latency, finding they were able to achieve latency as low as 240ms.
The document discusses plans to update the News Industry Text Format (NITF) schema to version 4.0. Key points include:
1) NITF 4.0 will make the schema more flexible and open by allowing "foreign namespaces" to integrate other standards like geospatial data.
2) An experimental NITF 4.0 XSD adds documentation and allows namespaces in elements like <head>, <body.head>, and <media>.
3) Feedback is requested on the experimental schema before finalizing NITF 4.0 by the end of 2010. Integration with the NewsML-G2 framework may come later.
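The foreign-namespace idea in point 1 can be illustrated with a hypothetical NITF-like fragment embedding a GML (geospatial) element. The element names below are illustrative only, not the real NITF 4.0 schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment: a GML point embedded in an NITF-like <head>
# via a foreign namespace declaration.
doc = """<nitf xmlns:gml="http://www.opengis.net/gml">
  <head>
    <gml:Point><gml:pos>48.85 2.35</gml:pos></gml:Point>
  </head>
</nitf>"""

root = ET.fromstring(doc)
# ElementTree addresses namespaced elements with {uri}localname syntax.
pos = root.find(".//{http://www.opengis.net/gml}pos")
print(pos.text)  # 48.85 2.35
```

A validating parser would check the NITF elements against the NITF XSD while leaving the foreign-namespace content to the other standard's schema — which is what makes the integration "open".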
Data science in ruby, is it possible? is it fast? should we use it?Rodrigo Urubatan
Slides used in my presentation at http://thedevelopersconference.com.br in the #ruby track this year in São Paulo.
The talk covers a little about data science: the alternatives for doing it in Ruby, how to integrate Ruby and Python, and the best solutions available.
This document provides an introduction to Python programming basics for beginners. It discusses Python features like being easy to learn and cross-platform. It covers basic Python concepts like variables, data types, operators, conditional statements, loops, functions, OOPs, strings and built-in data structures like lists, tuples, and dictionaries. The document provides examples of using these concepts and recommends Python tutorials, third-party libraries, and gives homework assignments on using functions like range and generators.
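A few of the basics listed above — functions, loops, lists, dictionaries, and built-ins — in one small snippet:

```python
def word_lengths(words):
    """Build a dict mapping each word to its length — functions, loops, and dicts together."""
    lengths = {}
    for w in words:
        lengths[w] = len(w)
    return lengths

fruits = ["apple", "fig", "banana"]   # a list
info = word_lengths(fruits)           # a dict
longest = max(fruits, key=len)        # a built-in taking a key function
print(info["fig"], longest)           # 3 banana
```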
DevOps Fest 2020. Даніель Яворович. Data pipelines: building an efficient ins...DevOps_Fest
I will talk about the experience of building a system for working with big data, based on the open-source technologies Apache NiFi and Kubernetes, using the analysis of news sources with NLP as an example.
PyData: The Next Generation | Data Day Texas 2015Cloudera, Inc.
This document discusses the past, present, and future of Python for big data analytics. It provides background on the rise of Python as a data analysis tool through projects like NumPy, pandas, and scikit-learn. However, as big data systems like Hadoop became popular, Python was not initially well-suited for problems at that scale. Recent projects like PySpark, Blaze, and Spartan aim to bring Python to big data, but challenges remain around data formats, distributed computing interfaces, and competing with Scala. The document calls for continued investment in high performance Python tools for big data to ensure its relevance in coming years.
Stream Data Processing at Big Data Landscape by Oleksandr Fedirko GlobalLogic Ukraine
This document provides an overview of stream data processing and common stream processing tools. It discusses streaming basics like stateful and stateless operations. It also covers microbatch vs realtime streaming and compositional vs declarative stream processing engines. Typical stream processing architectures and use cases are presented. Main considerations for projects using stream processing are outlined. An overview of popular stream processing tools like Apache Spark, Storm, Flink, and cloud services from AWS, GCP and Azure is provided. The document concludes with a case study example and questions for discussion.
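The stateless/stateful distinction mentioned above can be sketched with plain Python generators — a conceptual sketch, not the API of any particular engine like Spark or Flink:

```python
def stateless_filter(stream, threshold):
    """Stateless: each event is judged entirely on its own."""
    for value in stream:
        if value > threshold:
            yield value

def stateful_running_mean(stream):
    """Stateful: the operator carries a running count and sum across events."""
    count, total = 0, 0.0
    for value in stream:
        count += 1
        total += value
        yield total / count

events = [3, 9, 6, 12]
print(list(stateless_filter(events, 5)))    # [9, 6, 12]
print(list(stateful_running_mean(events)))  # [3.0, 6.0, 6.0, 7.5]
```

Stateless operators parallelize trivially; stateful ones are where engines differ most, since the state must be partitioned, checkpointed, and recovered on failure.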
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
A Comprehensive Guide to DeFi Development Services in 2024Intelisync
DeFi represents a paradigm shift in the financial industry. Instead of relying on traditional, centralized institutions like banks, DeFi leverages blockchain technology to create a decentralized network of financial services. This means that financial transactions can occur directly between parties, without intermediaries, using smart contracts on platforms like Ethereum.
In 2024, we are witnessing an explosion of new DeFi projects and protocols, each pushing the boundaries of what’s possible in finance.
In summary, DeFi in 2024 is not just a trend; it’s a revolution that democratizes finance, enhances security and transparency, and fosters continuous innovation. As we proceed through this presentation, we'll explore the various components and services of DeFi in detail, shedding light on how they are transforming the financial landscape.
At Intelisync, we specialize in providing comprehensive DeFi development services tailored to meet the unique needs of our clients. From smart contract development to dApp creation and security audits, we ensure that your DeFi project is built with innovation, security, and scalability in mind. Trust Intelisync to guide you through the intricate landscape of decentralized finance and unlock the full potential of blockchain technology.
Ready to take your DeFi project to the next level? Partner with Intelisync for expert DeFi development services today!
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of the presentation for the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on communications and signalling systems on railways, held at the Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). It was attended by around 500 participants and 200 online followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language and its package managers, RubyGems and Bundler. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
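One core job of a package manager is matching installed versions against declared constraints. As a toy illustration (not Bundler's actual resolver), here is a sketch of Ruby's pessimistic "~>" constraint expressed in Python:

```python
def satisfies_pessimistic(version, constraint):
    """Ruby-style pessimistic constraint: a constraint of '1.4' means >= 1.4 and < 2.0."""
    want = [int(x) for x in constraint.split(".")]
    have = [int(x) for x in version.split(".")]
    if len(have) < len(want):
        return False
    # Every segment but the last must match exactly; the last may be greater or equal.
    if have[:len(want) - 1] != want[:-1]:
        return False
    return have[len(want) - 1] >= want[-1]

print(satisfies_pessimistic("1.9.2", "1.4"))  # True  (within the 1.x line)
print(satisfies_pessimistic("2.0.0", "1.4"))  # False (major version changed)
```

Real resolvers must additionally solve the whole dependency graph at once, picking versions that satisfy every gem's constraints simultaneously — which is where most of the complexity (and the security relevance) lives.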
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
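As a baseline for the anomaly detection fundamentals in item 1, here is a z-score detector in plain Python — a classic starting point, not the model the tutorial itself trains:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag points whose z-score exceeds the threshold — a simple baseline detector."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 25.0]   # one obvious outlier
print(zscore_anomalies(readings, threshold=2.0))  # [25.0]
```

On an edge device this kind of check can run cheaply against streaming readings (e.g. consumed from Kafka, as in items 5 and 6) before a heavier model is consulted.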
11. What is TAPIRE?
Tool for Assisting Protocol Inference and Reverse Engineering
12. What is TAPIRE?
A command-line interface for Netzob
• Fast learning curve
• Allows saving projects for later use
• Simple documentation (TODO)
• Assisting functionalities