Presentation given at EDW 2018 by Mike Bennett on the Object Management Group Blockchain (Distributed Ledger) Proof of Concept, using FIBO as a concept model for smart contract development.
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote... (Databricks)
The increasing challenge of serving ever-growing data driven by AI and analytics workloads makes disaggregated storage and compute more attractive, as it enables companies to scale their storage and compute capacity independently to match data and compute growth rates. Cloud-based big data services are gaining momentum as they provide simplified management, elasticity, and a pay-as-you-go model.
Talk for SCaLE13x. Video: https://www.youtube.com/watch?v=_Ik8oiQvWgo . Profiling can show what your Linux kernel and applications are doing in detail, across all software stack layers. This talk shows how we are using Linux perf_events (aka "perf") and flame graphs at Netflix to understand CPU usage in detail, to optimize our cloud usage, solve performance issues, and identify regressions. This will be more than just an intro: profiling difficult targets, including Java and Node.js, will be covered, including ways to resolve JITed symbols and broken stacks. Included are the easy examples, the hard, and the cutting edge.
Broken benchmarks, misleading metrics, and terrible tools. This talk will help you navigate the treacherous waters of Linux performance tools, touring common problems with system tools, metrics, statistics, visualizations, measurement overhead, and benchmarks. You might discover that tools you have been using for years are, in fact, misleading, dangerous, or broken.
The speaker, Brendan Gregg, has given many talks on tools that work, including giving the Linux Performance Tools talk originally at SCALE. This is an anti-version of that talk, focusing on broken tools and metrics instead of the working ones. Metrics can be misleading, and counters can be counter-intuitive! This talk will include advice for verifying new performance tools, understanding how they work, and using them successfully.
VictoriaLogs: Open Source Log Management System - Preview (VictoriaMetrics)
VictoriaLogs Preview - Aliaksandr Valialkin
* Existing open source log management systems
- ELK (ElasticSearch) stack: Pros & Cons
- Grafana Loki: Pros & Cons
* What is VictoriaLogs
- Open source log management system from VictoriaMetrics
- Easy to set up and operate
- Scales vertically and horizontally
- Optimized for low resource usage (CPU, RAM, disk space)
- Accepts data from Logstash and Fluentbit in Elasticsearch format
- Accepts data from Promtail in Loki format
- Supports stream concept from Loki
- Provides easy to use yet powerful query language - LogsQL
* LogsQL Examples
- Search by time
- Full-text search
- Combining search queries
- Searching arbitrary labels
* Log Streams
- What is a log stream?
- LogsQL examples: querying log streams
- Stream labels vs log labels
* LogsQL: stats over access logs
* VictoriaLogs: CLI Integration
* VictoriaLogs Recap
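The LogsQL query patterns in the outline above can be sketched as plain strings; the `logsql` helper below is an illustrative name, and the exact filter syntax should be checked against the VictoriaLogs documentation:

```python
# Hypothetical helper: in LogsQL, juxtaposed filters are combined with
# logical AND, so joining them with spaces approximates query composition.
def logsql(*filters: str) -> str:
    """Combine LogsQL filters into one query string."""
    return " ".join(filters)

# Search by time: logs from the last 5 minutes
q1 = logsql("_time:5m")
# Full-text search combined with a time filter
q2 = logsql("_time:5m", "error")
# Stream filter plus a search on an arbitrary label
q3 = logsql('_stream:{app="nginx"}', "status:500")
```

The same composition idea extends to the stats examples in the outline, where a pipe appends an aggregation to the filter part of the query.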
Delivered as plenary at USENIX LISA 2013. video here: https://www.youtube.com/watch?v=nZfNehCzGdw and https://www.usenix.org/conference/lisa13/technical-sessions/plenary/gregg . "How did we ever analyze performance before Flame Graphs?" This new visualization invented by Brendan can help you quickly understand application and kernel performance, especially CPU usage, where stacks (call graphs) can be sampled and then visualized as an interactive flame graph. Flame Graphs are now used for a growing variety of targets: for applications and kernels on Linux, SmartOS, Mac OS X, and Windows; for languages including C, C++, node.js, ruby, and Lua; and in WebKit Web Inspector. This talk will explain them and provide use cases and new visualizations for other event types, including I/O, memory usage, and latency.
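The folding step behind a flame graph (what the stackcollapse scripts do before flamegraph.pl renders the SVG) can be sketched in a few lines of Python; `fold_stacks` is an illustrative name:

```python
from collections import Counter

def fold_stacks(samples):
    """Fold sampled stacks (ordered root..leaf) into 'a;b;c count' lines,
    the collapsed format consumed by flamegraph.pl."""
    counts = Counter(";".join(stack) for stack in samples)
    return [f"{stack} {n}" for stack, n in sorted(counts.items())]

# Three profiler samples: two hit the same code path, one differs.
samples = [
    ["main", "parse", "read"],
    ["main", "parse", "read"],
    ["main", "render"],
]
folded = fold_stacks(samples)  # ["main;parse;read 2", "main;render 1"]
```

The renderer then turns these counted paths into the familiar stacked rectangles, with widths proportional to sample counts.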
Apache Spark Listeners: A Crash Course in Fast, Easy Monitoring (Databricks)
The Spark Listener interface provides a fast, simple and efficient route to monitoring and observing your Spark application - and you can start using it in minutes. In this talk, we'll introduce the Spark Listener interfaces available in core and streaming applications, and show a few ways in which they've changed our world for the better at SpotX. If you're looking for a "Eureka!" moment in monitoring or tracking of your Spark apps, look no further than Spark Listeners and this talk!
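SparkListener callbacks such as onJobStart and onJobEnd follow a plain observer pattern; this stdlib-only Python sketch mimics the shape of the interface (illustrative class and method names, not the real Scala API):

```python
class JobTimingListener:
    """Toy sketch of the SparkListener pattern: the scheduler (simulated
    here by direct calls) invokes callbacks on job lifecycle events, and
    the listener derives metrics from them."""
    def __init__(self):
        self.durations = {}   # job_id -> elapsed ms
        self._starts = {}

    def on_job_start(self, job_id, time_ms):
        self._starts[job_id] = time_ms

    def on_job_end(self, job_id, time_ms):
        self.durations[job_id] = time_ms - self._starts.pop(job_id)

listener = JobTimingListener()
listener.on_job_start(1, 1_000)
listener.on_job_end(1, 1_750)   # job 1 took 750 ms
```

In real Spark you would subclass SparkListener and register it with the SparkContext (or via spark.extraListeners); the monitoring logic lives in the callbacks just as above.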
Apache Ignite vs Alluxio: Memory Speed Big Data Analytics (DataWorks Summit)
Apache Ignite vs Alluxio: Memory Speed Big Data Analytics - Apache Spark’s in-memory capabilities catapulted it to become the premier processing framework for Hadoop. Apache Ignite and Alluxio, both high-performance, integrated, distributed in-memory platforms, take Apache Spark to the next level by providing an even more powerful, faster, and more scalable platform for the most demanding data processing and analytics environments.
Speaker
Irfan Elahi, Consultant, Deloitte
Getting Started: Intro to Telegraf - July 2021 (InfluxData)
In this training webinar, Samantha Wang will walk you through the basics of Telegraf. Telegraf is the open source server agent which is used to collect metrics from your stacks, sensors and systems. It is InfluxDB’s native data collector that supports nearly 300 inputs and outputs. Learn how to send data from a variety of systems, apps, databases and services in the appropriate format to InfluxDB. Discover tips and tricks on how to write your own plugins. The know-how learned here can be applied to a multitude of use cases and sectors. This one-hour session will include the training and time for live Q&A.
Join this training as Samantha Wang dives into:
Types of Telegraf plugins (i.e. input, output, aggregator and processor)
Specific plugins including Execd input plugins and the Starlark processor plugin
How to install and start using Telegraf
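Telegraf ultimately ships metrics to InfluxDB as line protocol; this minimal Python helper shows the format (`line_protocol` is an illustrative name, and escaping rules are omitted for brevity):

```python
def line_protocol(measurement, tags, fields, ts_ns):
    """Render one InfluxDB line-protocol point:
    measurement,tag=value field=value timestamp_ns
    (simplified: no escaping, float fields only)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = line_protocol(
    "cpu", {"host": "web01"}, {"usage_idle": 92.5}, 1625097600000000000
)
```

Whatever input plugin collects the data, Telegraf normalizes it into points of this shape before an output plugin writes them.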
Originally presented February 16th, 2017, at eBay for the NYC Scrum User Group, “Definition of Done Meets Infrastructure in the Cloud” is an overview of using Scrum's Definition of Done within a DevOps application management system environment, in this case GitLab.
Video: https://www.youtube.com/watch?v=JRFNIKUROPE . Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg, about Linux enhanced BPF (eBPF). Abstract:
A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements to Berkeley Packet Filter (BPF) that have been included in Linux: an in-kernel virtual machine that can execute user space-defined programs. It is finding uses for security auditing and enforcement, enhancing networking (including eXpress Data Path), and performance observability and troubleshooting. Many new open source tools that use BPF for performance analysis have been written in the past 12 months. Tracing superpowers have finally arrived for Linux!
For its use with tracing, BPF provides the programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.
This talk will summarize BPF capabilities and use cases so far, and then focus on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
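The in-kernel summarization mentioned above, such as the power-of-two latency histograms that bcc tools print, can be mimicked in plain Python (an illustration of the idea, not actual BPF code):

```python
def log2_histogram(latencies_us):
    """Bucket latencies into power-of-two slots, mirroring the
    histograms that BPF-based tools summarize in-kernel."""
    buckets = {}
    for us in latencies_us:
        lo = 1
        while lo * 2 <= us:
            lo *= 2
        key = (lo, lo * 2 - 1) if us >= 1 else (0, 0)
        buckets[key] = buckets.get(key, 0) + 1
    return buckets

hist = log2_histogram([1, 3, 3, 7, 120])
```

With BPF, these counts are maintained inside the kernel and only the finished histogram crosses to user space, which is what makes the tools so cheap to run.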
Application Monitoring using Open Source: VictoriaMetrics - ClickHouse (VictoriaMetrics)
Monitoring is the key to successful operation of any software service, but commercial solutions are complex, expensive, and slow. Let us show you how to build monitoring that is simple, cost-effective, and fast using open source stacks easily accessible to any developer.
We’ll start with the elements of monitoring systems: data ingest, query engine, visualization, and alerting. We’ll then explain and contrast two implementation approaches. The first uses VictoriaMetrics, a fast growing, high performance time series database that uses PromQL for queries. The second is based on ClickHouse, a popular real-time analytics database that speaks SQL. Fast, affordable monitoring is within reach. This webinar provides designs and working code to get you there.
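The kind of computation such a query engine performs can be illustrated with the per-second rate of a monotonic counter, the essence of PromQL's rate() (`simple_rate` is a hypothetical helper; counter resets are ignored for brevity):

```python
def simple_rate(samples):
    """Per-second rate of a monotonically increasing counter over a
    list of (timestamp_s, value) samples: increase divided by the
    elapsed window."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# Counter went from 100 to 160 over 30 seconds -> 2 events/second.
r = simple_rate([(0, 100), (15, 130), (30, 160)])
```

In the ClickHouse approach the same calculation would be expressed in SQL over a samples table; in VictoriaMetrics it is a one-line PromQL expression.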
BPF, the Berkeley Packet Filter mechanism, was first introduced in Linux in 1997, in version 2.1.75. It has seen a number of extensions over the years. Recently, in versions 3.15 - 3.19, it received a major overhaul which drastically expanded its applicability. This talk will cover how the instruction set looks today and why: its architecture, capabilities, interface, and just-in-time compilers. We will also talk about how it is being used in different areas of the kernel, such as tracing and networking, and future plans.
Video: https://www.youtube.com/watch?v=FJW8nGV4jxY and https://www.youtube.com/watch?v=zrr2nUln9Kk . Tutorial slides for O'Reilly Velocity SC 2015, by Brendan Gregg.
There are many performance tools nowadays for Linux, but how do they all fit together, and when do we use them? This tutorial explains methodologies for using these tools, and provides a tour of four tool types: observability, benchmarking, tuning, and static tuning. Many tools will be discussed, including top, iostat, tcpdump, sar, perf_events, ftrace, SystemTap, sysdig, and others, as well as observability frameworks in the Linux kernel: PMCs, tracepoints, kprobes, and uprobes.
This tutorial updates and extends an earlier talk that summarized the Linux performance tool landscape. The value of this tutorial is not just learning that these tools exist and what they do, but hearing when and how they are used by a performance engineer to solve real world problems — important context that is typically not included in the standard documentation.
VictoriaMetrics 15/12 Meet Up: 2022 Features Highlights (VictoriaMetrics)
2022 Features Highlights - Speaker
* MetricsQL Features
- Support for @ modifier
- keep_metric_names modifier
- Advanced label filters’ propagation
- Automatic label filters’ propagation
- Support for short numeric constants
- Distributed query tracing!
- New functions
* vmui Features
- Cardinality explorer!
- Top queries
- Significantly improved usability and stability!
* vmagent Features
- Fetch target response on behalf of vmagent
- Filter targets by url and labels
- /service-discovery page
- Relabel debugging!
- Support for absolute __address__
- New service discovery mechanisms
- Multi-tenant support
- Performance improvements
* Relabeling Features
- Conditional relabeling
- Named label placeholders
- Graphite-style relabeling
* vmalert Features
- Better integration with Grafana alerts
- Reusable templates for annotations
- Debugging of alerting rules
- Improved compatibility with Prometheus
* vmctl Features
- Migrate all the data between clusters
- Data migration via Prometheus remote_read protocol
Enterprise Features
- mTLS support
- vmgateway JWT token enhancements
- Automatic restore from backups
- Automatic vmstorage discovery
- Multiple retentions
Various Enhancements
- Environment vars can be referred in command-line flags
- Performance improvements
- Deduplication and downsampling improvements
- Support for metrics push
- Support for Pushgateway data ingestion format
- VictoriaMetrics cluster: multitenancy enhancements
- vmbackup / vmrestore: Azure blob storage
- Official macOS builds
- Official FreeBSD and OpenBSD builds
- Raspberry Pi optimizations
- LTS releases
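One relabeling feature from the list above can be illustrated with a minimal Prometheus-style relabel rule in Python (`relabel` is a hypothetical helper with simplified semantics):

```python
import re

def relabel(labels, source, regex, target, replacement):
    """Apply one relabeling rule: if the source label fully matches
    the regex, write the expanded replacement into the target label."""
    m = re.fullmatch(regex, labels.get(source, ""))
    if m:
        labels = dict(labels, **{target: m.expand(replacement)})
    return labels

# Derive a 'host' label by stripping the port from 'instance'.
out = relabel({"instance": "web01:9100"}, "instance",
              r"(\w+):\d+", "host", r"\1")
```

Conditional relabeling and named placeholders, as listed above, extend this same match-and-rewrite pattern.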
Scaling and Unifying SciKit Learn and Apache Spark Pipelines (Databricks)
Pipelines have become ubiquitous, as the need for stringing multiple functions to compose applications has gained adoption and popularity. Common pipeline abstractions such as “fit” and “transform” are even shared across divergent platforms such as Python Scikit-Learn and Apache Spark.
Scaling pipelines at the level of simple functions is desirable for many AI applications, however is not directly supported by Ray’s parallelism primitives. In this talk, Raghu will describe a pipeline abstraction that takes advantage of Ray’s compute model to efficiently scale arbitrarily complex pipeline workflows. He will demonstrate how this abstraction cleanly unifies pipeline workflows across multiple platforms such as Scikit-Learn and Spark, and achieves nearly optimal scale-out parallelism on pipelined computations.
Attendees will learn how pipelined workflows can be mapped to Ray’s compute model and how they can both unify and accelerate their pipelines with Ray.
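The shared fit/transform contract mentioned above can be sketched with a toy stdlib-only pipeline (illustrative classes, not the scikit-learn or Spark APIs):

```python
class Scale:
    """Toy stage honoring the fit/transform contract shared by
    scikit-learn and Spark ML: learn the max, then divide by it."""
    def fit(self, data):
        self.max = max(data)
        return self

    def transform(self, data):
        return [x / self.max for x in data]

class Pipeline:
    """Chain stages: each stage is fit on, then transforms, the data."""
    def __init__(self, stages):
        self.stages = stages

    def fit_transform(self, data):
        for stage in self.stages:
            data = stage.fit(data).transform(data)
        return data

result = Pipeline([Scale()]).fit_transform([1.0, 2.0, 4.0])
```

Because the contract is so small, a scheduler like Ray can run independent stage invocations in parallel without caring which framework each stage comes from.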
HBaseCon 2012 | Lessons learned from OpenTSDB - Benoit Sigoure, StumbleUpon (Cloudera, Inc.)
OpenTSDB was built on the belief that, through HBase, a new breed of monitoring systems could be created, one that can store and serve billions of data points forever without the need for destructive downsampling, one that could scale to millions of metrics, and where plotting real-time graphs is easy and fast. In this presentation we’ll review some of the key points of OpenTSDB’s design, some of the mistakes that were made, how they were or will be addressed, and what were some of the lessons learned while writing and running OpenTSDB as well as asynchbase, the asynchronous high-performance thread-safe client for HBase. Specific topics discussed will be around the schema, how it impacts performance and allows concurrent writes without need for coordination in a distributed cluster of OpenTSDB instances.
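The row-key design at the heart of that schema can be sketched roughly as follows (a simplified approximation of OpenTSDB's actual layout, with made-up UIDs):

```python
import struct

def row_key(metric_uid, timestamp, tag_uids):
    """Rough sketch of an OpenTSDB-style HBase row key: a 3-byte metric
    UID, a 4-byte hour-aligned base timestamp, then sorted 3-byte
    (tagk, tagv) UID pairs. Writers for different series touch
    different rows, so no coordination is needed."""
    base_time = timestamp - (timestamp % 3600)
    key = struct.pack(">I", metric_uid)[1:]      # 3-byte metric UID
    key += struct.pack(">I", base_time)          # 4-byte base time
    for tagk, tagv in sorted(tag_uids.items()):
        key += struct.pack(">I", tagk)[1:] + struct.pack(">I", tagv)[1:]
    return key

k = row_key(1, 7205, {2: 3})  # 13 bytes: 3 + 4 + 3 + 3
```

Packing the hour into the key is why many data points share a row, and why concurrent writes across a cluster of OpenTSDB instances never contend.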
Apache Kafka’s Transactions in the Wild! Developing an exactly-once KafkaSink... (Hosted by Confluent)
Apache Kafka is one of the most commonly used connectors with Apache Flink for exactly-once streaming use cases. The combination of both systems allows you to build mission-critical systems that require low end-to-end latency and exactly-once processing, e.g., banks processing transactions. In Apache Flink 1.14, we released a new KafkaSink based on Apache Flink’s unified Sink interface that natively supports streaming and batch execution.
However, we needed to stretch Kafka’s transactions API to fully support exactly-once processing in Flink. In this talk, we will start with a quick recap of Apache Kafka’s transactions and Flink’s checkpointing mechanism. Then, we describe the two-phase commit protocol implemented in KafkaSink in-depth and emphasize the difficulties we have overcome when applying Kafka’s transaction API to longer-lasting transactions.
We explain how we ensure performant writing to Apache Kafka and how the KafkaSink recovery works.
In summary, this talk should give users a deep dive into how Apache Flink leverages Apache Kafka’s transactions and developers an overview of what they have to consider when using Apache Kafka’s transactions.
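The checkpoint-aligned two-phase commit the talk describes can be sketched as a toy state machine (illustrative Python, not Flink's actual KafkaSink code):

```python
class TwoPhaseSink:
    """Toy two-phase-commit sink: writes accumulate in an open
    transaction, a checkpoint pre-commits (flushes) it, and the
    checkpoint-complete notification finally commits it."""
    def __init__(self):
        self.open_txn = []
        self.pending = {}     # checkpoint_id -> pre-committed batch
        self.committed = []

    def write(self, record):
        self.open_txn.append(record)

    def pre_commit(self, checkpoint_id):
        """Phase 1: seal the current transaction with the checkpoint."""
        self.pending[checkpoint_id] = self.open_txn
        self.open_txn = []

    def commit(self, checkpoint_id):
        """Phase 2: checkpoint completed, make the batch visible."""
        self.committed.extend(self.pending.pop(checkpoint_id))

sink = TwoPhaseSink()
sink.write("a"); sink.write("b")
sink.pre_commit(1)
sink.write("c")        # belongs to the next transaction
sink.commit(1)
```

The difficulties with longer-lasting Kafka transactions mentioned above arise between the two phases: the sealed batch must survive failures until the commit notification arrives.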
Analysing Data from Blockchains - Keynote @ SOCCA 2020 (Ingo Weber)
Keynote at the Symposium on Cryptocurrency Analysis (SOCCA 2020). Content:
In order to analyse how concrete blockchain systems as well as blockchain applications are used, data must be extracted from these systems. Due to various complexities inherent in blockchain, the question of how to interpret such data is non-trivial. Such interpretation should often be shared among parties, e.g., if they collaborate via a blockchain. To this end, we devised an approach to codify the interpretation of blockchain data, to extract data from blockchains accordingly, and to output it in suitable formats -- see https://arxiv.org/abs/2001.10281.
In addition, application developers and users of blockchain applications may want to estimate the cost of using or operating a blockchain application. In the keynote, I will also discuss our cost estimation method.
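Cost estimation for a blockchain application can be illustrated with a back-of-the-envelope Ethereum calculation (a generic sketch, not the estimation method from the keynote):

```python
def tx_cost_eth(gas_used, gas_price_gwei):
    """Rough cost of one Ethereum transaction: gas consumed times
    gas price, converted from gwei (1e-9 ETH) to ETH."""
    return gas_used * gas_price_gwei * 1e-9

# A plain ETH transfer consumes 21,000 gas; assume a 50 gwei gas price.
cost = tx_cost_eth(21_000, 50)   # about 0.00105 ETH
```

A real estimation method must also account for contract-specific gas usage, which varies with input data and state, and for fluctuating gas prices.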
Originally presented, February 16th, 2017 at eBay for the NYC Scrum User Group, the “Definition of Done Meets Infrastructure in the Cloud” is an overview of using Scrum's Definition of Done within a DevOps application management system environment, in this case GitLab.
Video: https://www.youtube.com/watch?v=JRFNIKUROPE . Talk for linux.conf.au 2017 (LCA2017) by Brendan Gregg, about Linux enhanced BPF (eBPF). Abstract:
A world of new capabilities is emerging for the Linux 4.x series, thanks to enhancements that have been included in Linux for to Berkeley Packet Filter (BPF): an in-kernel virtual machine that can execute user space-defined programs. It is finding uses for security auditing and enforcement, enhancing networking (including eXpress Data Path), and performance observability and troubleshooting. Many new open source tools that have been written in the past 12 months for performance analysis that use BPF. Tracing superpowers have finally arrived for Linux!
For its use with tracing, BPF provides the programmable capabilities to the existing tracing frameworks: kprobes, uprobes, and tracepoints. In particular, BPF allows timestamps to be recorded and compared from custom events, allowing latency to be studied in many new places: kernel and application internals. It also allows data to be efficiently summarized in-kernel, including as histograms. This has allowed dozens of new observability tools to be developed so far, including measuring latency distributions for file system I/O and run queue latency, printing details of storage device I/O and TCP retransmits, investigating blocked stack traces and memory leaks, and a whole lot more.
This talk will summarize BPF capabilities and use cases so far, and then focus on its use to enhance Linux tracing, especially with the open source bcc collection. bcc includes BPF versions of old classics, and many new tools, including execsnoop, opensnoop, funcccount, ext4slower, and more (many of which I developed). Perhaps you'd like to develop new tools, or use the existing tools to find performance wins large and small, especially when instrumenting areas that previously had zero visibility. I'll also summarize how we intend to use these new capabilities to enhance systems analysis at Netflix.
Application Monitoring using Open Source: VictoriaMetrics - ClickHouseVictoriaMetrics
Monitoring is the key to successful operation of any software service, but commercial solutions are complex, expensive, and slow. Let us show you how to build monitoring that is simple, cost-effective, and fast using open source stacks easily accessible to any developer.
We’ll start with the elements of monitoring systems: data ingest, query engine, visualization, and alerting. We’ll then explain and contrast two implementation approaches. The first uses VictoriaMetrics, a fast growing, high performance time series database that uses PromQL for queries. The second is based on ClickHouse, a popular real-time analytics database that speaks SQL. Fast, affordable monitoring is within reach. This webinar provides designs and working code to get you there.
BPF of Berkeley Packet Filter mechanism was first introduced in linux in 1997 in version 2.1.75. It has seen a number of extensions of the years. Recently in versions 3.15 - 3.19 it received a major overhaul which drastically expanded it's applicability. This talk will cover how the instruction set looks today and why. It's architecture, capabilities, interface, just-in-time compilers. We will also talk about how it's being used in different areas of the kernel like tracing and networking and future plans.
Video: https://www.youtube.com/watch?v=FJW8nGV4jxY and https://www.youtube.com/watch?v=zrr2nUln9Kk . Tutorial slides for O'Reilly Velocity SC 2015, by Brendan Gregg.
There are many performance tools nowadays for Linux, but how do they all fit together, and when do we use them? This tutorial explains methodologies for using these tools, and provides a tour of four tool types: observability, benchmarking, tuning, and static tuning. Many tools will be discussed, including top, iostat, tcpdump, sar, perf_events, ftrace, SystemTap, sysdig, and others, as well observability frameworks in the Linux kernel: PMCs, tracepoints, kprobes, and uprobes.
This tutorial is updated and extended on an earlier talk that summarizes the Linux performance tool landscape. The value of this tutorial is not just learning that these tools exist and what they do, but hearing when and how they are used by a performance engineer to solve real world problems — important context that is typically not included in the standard documentation.
VictoriaMetrics 15/12 Meet Up: 2022 Features HighlightsVictoriaMetrics
2022 Features Highlights - Speaker
* MetricsQL Features
- Support for @ modifier
- keep_metric_names modifier
- Advanced label filters’ propagation
- Automatic label filters’ propagation
- Support for short numeric constants
- Distributed query tracing!
- New functions
* vmui Features
- Cardinality explorer!
- Top queries
- Significantly improved usability and stability!
* vmagent Features
- Fetch target response on behalf of vmagent
- Filter targets by url and labels
- /service-discovery page
- Relabel debugging!
- support for absolute _address_
- New service discovery mechanisms
- Multi-tenant support
- Performance improvements
* Relabeling Features
- Conditional relabeling
- Named label placeholders
- Graphite-style relabeling
* vmalert Features
- Better integration with Grafana alerts
- Reusable templates for annotations
- Debugging of alerting rules
- Improved compatibility with Prometheus
* vmctl Features
- Migrate all the data between clusters
- Data migration via Prometheus remote_read protocol
Enterprise Features
- mTLS support
- vmgateway JWT token enhancements
- Automatic restore from backups
- Automatic vmstorage discovery
- Multiple retentions
Various Enhancements
- Environment vars can be referred in command-line flags
- Performance improvements
- Deduplication and downsampling improvements
- Support for metrics push
- Support for Pushgateway data ingestion format
- VictoriaMetrics cluster: multitenancy enhancements
- vmbackup / vmrestore: Azure blob storage
- Official MacOS builds
- Official FreeBSD and OpenBSD builds
- Raspberry PI optimizations
- LTS releases
Scaling and Unifying SciKit Learn and Apache Spark PipelinesDatabricks
Pipelines have become ubiquitous, as the need for stringing multiple functions to compose applications has gained adoption and popularity. Common pipeline abstractions such as “fit” and “transform” are even shared across divergent platforms such as Python Scikit-Learn and Apache Spark.
Scaling pipelines at the level of simple functions is desirable for many AI applications, however is not directly supported by Ray’s parallelism primitives. In this talk, Raghu will describe a pipeline abstraction that takes advantage of Ray’s compute model to efficiently scale arbitrarily complex pipeline workflows. He will demonstrate how this abstraction cleanly unifies pipeline workflows across multiple platforms such as Scikit-Learn and Spark, and achieves nearly optimal scale-out parallelism on pipelined computations.
Attendees will learn how pipelined workflows can be mapped to Ray’s compute model and how they can both unify and accelerate their pipelines with Ray.
HBaseCon 2012 | Lessons learned from OpenTSDB - Benoit Sigoure, StumbleUponCloudera, Inc.
OpenTSDB was built on the belief that, through HBase, a new breed of monitoring systems could be created, one that can store and serve billions of data points forever without the need for destructive downsampling, one that could scale to millions of metrics, and where plotting real-time graphs is easy and fast. In this presentation we’ll review some of the key points of OpenTSDB’s design, some of the mistakes that were made, how they were or will be addressed, and what were some of the lessons learned while writing and running OpenTSDB as well as asynchbase, the asynchronous high-performance thread-safe client for HBase. Specific topics discussed will be around the schema, how it impacts performance and allows concurrent writes without need for coordination in a distributed cluster of OpenTSDB instances.
Apache Kafka’s Transactions in the Wild! Developing an exactly-once KafkaSink...HostedbyConfluent
Apache Kafka is one of the most commonly used connectors with Apache Flink for exactly-once streaming use cases. The combination of both systems allows you to build mission-critical systems that require low end-to-end latency and exactly-once processing eg. banks processing transactions. In Apache Flink 1.14, we released a new KafkaSink based on Apache Flink’s unified Sink interface that natively supports streaming and batch executions.
However, we needed to stretch Kafka’s transactions API to fully support exactly-once processing in Flink. In this talk, we will start with a quick recap of Apache Kafka’s transactions and Flink’s checkpointing mechanism. Then, we describe the two-phase commit protocol implemented in KafkaSink in-depth and emphasize the difficulties we have overcome when applying Kafka’s transaction API to longer-lasting transactions.
We explain how we ensure performant writing to Apache Kafka and how the KafkaSink recovery works.
In summary, this talk should give users a deep dive into how Apache Flink leverages Apache Kafka’s transactions and developers an overview of what they have to consider when using Apache Kafka’s transactions.
Analysing Data from Blockchains - Keynote @ SOCCA 2020Ingo Weber
Keynote at the Symposium on Cryptocurrency Analysis (SOCCA 2020). Content:
In order to analyse how concrete blockchain systems as well as blockchain applications are used, data must be extracted from these systems. Due to various complexities inherent in blockchain, the question how to interpret such data is non-trivial. Such interpretation should often be shared among parties, e.g., if they collaborate via a blockchain. To this end, we devised an approach codify the interpretation of blockchain data, to extract data from blockchains accordingly, and to output it in suitable formats -- see https://arxiv.org/abs/2001.10281.
In addition, application developers and users of blockchain applications may want to estimate the cost of using or operating a blockchain application. In the keynote, I will also discuss our cost estimation method.
As an enterprise IT professional, service provider, ISP or systems integrator, you may be wondering where all the blockchain hype is going.
The session will cover topics such as:
• What exactly is Enterprise Blockchain technology and why is it so disruptive?
• Why are companies embracing Blockchain technologies?
• Overview of major Enterprise Blockchains (Hyperledger, Ethereum, Quorum and R3 Corda)
• What are the industries that are ripe for disruption from Blockchain Technology?
• What is Blockchain as a Service (BaaS) and why, as an IT professional, you should understand this technology.
• The top five areas IT professionals should learn about to profit from Blockchain
MyBlockChainExperts
Blockchain Application Design and Development, and the Case of Programmable M...Ingo Weber
Slides from my CLOSER 2021 keynote ( https://www.insticc.org/node/TechnicalProgram/closer/2021/presentationDetails/1390 )
Blockchain has emerged as a decentralized platform for managing digital assets and executing 'smart contracts', i.e., user-defined code. While blockchain's suitability for a given use case should always be scrutinized, it does have the potential to disrupt many of the connection points between individuals, companies, and government entities. In this keynote talk, I will provide an overview of what architects and developers need to know in order to build blockchain-based applications, and how it relates to the cloud and software services. Among others, I will cover blockchain-as-a-service concepts, as well as architectural concerns and model-driven engineering for blockchain applications, the latter also in relation to collaborative business processes. To highlight some of the challenges, I will discuss insights from a project on "programmable money", i.e., blockchain-based money for conditional payments where the money itself checks whether it can be spent in a certain way at the point of payment. Finally, I will touch on insights into current adoption of blockchain.
By the end of this webinar you should be able to understand
The concepts, use cases and basics of smart contracts
How Blockchain and smart contracts work, and what drives developer success
How smart contracts work on both the Ethereum and Hyperledger platforms from a practical level
The constructs of smart contract, common coding requirements and demos
What are the most in demand Blockchain Certifications?
How do these certifications meet the needs of today's enterprises?
What about Blockchain Career Demand?
Web 3 and IP: Cryptocurrencies, Blockchain, and NFTsAurora Consulting
“Metaverse” is the buzziest of the buzzwords in tech and will soon be joining the ranks of “AI” and “ML” as requisite keywords in the next generation of pitch decks and patent applications. But what are the core components of the Metaverse? And what are their implications in the world of intellectual property? The Patently Strategic Podcast will be exploring this topic over the course of several upcoming episodes.
** Web 3.0: Metaverse Building Block **
We begin our exploration with Web 3.0. While it may prove to be the next great tech revolution, the broad shape and definition of the Metaverse is still more firmly baked in science fiction than in commercial tech reality. Many of its core building blocks, however, are likely right in front of our eyes (or headsets, perhaps). History shows that most major technology revolutions are rarely leaps, but instead evolutionary products of incremental steps, composed of many existing building blocks, met with market readiness. The Web 3.0 innovations of blockchain, cryptocurrency, and NFTs that are taking shape in front of us will no doubt be among these essential building blocks. With an ability to touch both our physical and virtual worlds, cryptocurrencies could form the monetary basis for all economic activity in the Metaverse. NFTs make it possible for unique items to exist and assets to be exclusively owned in the digital realm. The very foundations and infrastructure for the Metaverse and its virtual worlds could be built on blockchain.
Perhaps the Metaverse is simply how we experience the third major phase of the web – or maybe in its purest, most decentralized form, the Metaverse is built entirely on top of it. In any case, it's hard to imagine a future where the two are not inextricably linked.
** IP Implications **
This third phase of the internet also poses some of the most interesting questions for the world of IP. What will the impact be on digital property rights in a secure marketplace, governed by smart contracts? How will copyrights play in digital worlds with their own art and governance? Is there merit in considering a new type of protection category outside of patents and copyrights?
** Episode Overview **
In our inaugural IPWatchdog episode, Kristen Hansen, Patent Strategist and software patent guru, leads a discussion along with our all star patent panel, digging into:
* The fundamentals of blockchain, cryptocurrencies, and NFTs – and why the hype
* The state of the technology
* Questions around what web evolution, blockchain, and NFT technology means for IP ownership
* Strategies for protecting blockchain and cryptocurrency innovations
Podcast Link: https://patentlystrategic.buzzsprout.com/1734511/10694308-into-the-patentverse-web-3-0-blockchain-cryptocurrency-and-nfts
Blog post: https://www.aurorapatents.com/blog/new-podcast-into-the-patentverse-vol-1
By the end of this webinar you should be able to understand
The concepts, use cases and basics of smart contracts
How Blockchain and smart contracts work
How smart contracts work on both the Ethereum and Hyperledger platforms from a practical level
The constructs of smart contract, common coding requirements and demos
What are the most in demand Blockchain Certifications?
How do these certifications meet the needs of today's enterprises?
What about Blockchain Career Demand?
What is a Blockchain?
A cryptographically secure, shared, distributed ledger.
Immutable transactions are written to this distributed ledger across distributed nodes
Transformational technology in which business and government invest.
It’s a decentralized database which stores information in the form of transactions.
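The definition above can be made concrete in a few lines. A minimal sketch of a hash-linked ledger (illustrative only; a real blockchain adds signatures, consensus, and distribution across nodes):

```python
import hashlib, json

def block_hash(block):
    """Hash a block's contents together with its predecessor's hash, so
    altering any past transaction changes every later hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, transactions):
    """Append a block whose hash covers its transactions and the
    previous block's hash -- this linking is what makes the ledger
    effectively immutable."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "tx": transactions}
    block["hash"] = block_hash({"prev": prev, "tx": transactions})
    chain.append(block)
    return chain

def valid(chain):
    """Recompute every hash and check every back-link."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash({"prev": b["prev"], "tx": b["tx"]}):
            return False
        if i and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append(chain, ["Alice pays Bob 5"])
append(chain, ["Bob pays Carol 2"])
```

Tampering with any earlier transaction invalidates the chain, because the stored hashes no longer match the recomputed ones.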
By the end of this webinar you should be able to understand
What exactly is Blockchain technology?
Why are companies embracing Blockchain technologies?
Overview of major Enterprise Blockchains (Hyperledger, Ethereum, Quorum and R3 Corda)
What are the most in demand Blockchain Certifications?
How do these certifications meet the needs of today's enterprises?
What about Blockchain Career Demand?
What is a Blockchain?
A cryptographically secure, shared, distributed ledger.
Immutable transactions are written to this distributed ledger across distributed nodes
Transformational technology in which business and government invest.
It’s a decentralized database which stores information in the form of transactions.
Blockchain in enterprise - Challenges, Considerations and DesignsMichael Chi
What challenges will you face while working on an enterprise Blockchain solution? What services and solutions can we leverage to create one? Here we share our experience and walk you step by step through an in-production blockchain project.
Advanced Blockchain Technologies on Privacy & Scalability (All Things Open) Kaleido
(View presentation in full-screen mode for compatibility)
2019 is shaping up to be the pivotal point for broad adoption of blockchain technologies, thanks to the large number of projects in the enterprise space. Among the top concerns of blockchain projects in the private sector and government alike are privacy and scalability. This talk will cover technologies such as identity masking, data isolation, zero-knowledge proofs, and homomorphic encryption that help keep private data protected from unintended parties, as well as technologies for improving scalability such as state/payment channels, sharding, and novel consensus algorithms.
Blockchain and BPM - Reflections on Four Years of Research and ApplicationsIngo Weber
In this keynote, delivered at the Blockchain Forum of BPM 2019, I summarized and reflected on research on BPM and blockchain over the last four years, including model-driven engineering, process execution, and analysis and process mining. I also covered selected use cases and applications, as well as recent insights on adoption. The keynote closed with a discussion of open research questions.
Blockchain technology is increasingly being considered for applications in business contexts due to its key properties. It is also very much hyped for its potential to transform existing industries and business models. In Part 1, we will introduce the key properties of blockchain, its limitations, the field, and the relevance for SAP and enterprises in general. In Part 2, we will focus on one of the prominent suites available today and provide a demonstration of the PoC we’ve developed.
Semantic Shed - Squashing and Squeezing.pptxMike Bennett
Techniques for extracting operational ontologies and data models from a well formed concept ontology (explanatory ontology).
This deck includes an overview and taxonomy of different styles of ontology.
Takes an example of a real world mortgage risk semantics issue and shows how a concept ontology would set out the appropriate distinctions in meanings of similar data. Then goes on to describe 2 techniques for managing this process:
1. Squashing (telescoping): data ontology has all locally defined properties for what were inherited properties in the concept ontology;
2. Squeezing (concertinaing): data ontology has fewer, simpler properties representing a longer path across blank nodes in concept ontology.
Also shows how these may be rendered in conventional data model versus OWL application ontology as a data model.
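The "squeezing" technique can be illustrated on toy triples: a chain of properties through blank nodes in the concept ontology is collapsed into one simpler property for the data ontology. The triples, property names, and path layout below are invented for illustration:

```python
# Concept-ontology triples; "_:b1" and "_:b2" stand for blank nodes.
concept_triples = [
    ("Loan1", "hasRiskAssessment", "_:b1"),
    ("_:b1", "hasRating", "_:b2"),
    ("_:b2", "hasValue", "AA"),
]

def squeeze(triples, path, collapsed_name):
    """Follow a property path from each named subject through blank
    nodes and emit a single direct triple for the data ontology --
    fewer, simpler properties standing in for the longer path."""
    out = []
    for s, p, o in triples:
        if p != path[0] or s.startswith("_:"):
            continue                       # only start from named subjects
        node = o
        for step in path[1:]:              # walk the rest of the path
            node = next(oo for ss, pp, oo in triples
                        if ss == node and pp == step)
        out.append((s, collapsed_name, node))
    return out

data_triples = squeeze(concept_triples,
                       ["hasRiskAssessment", "hasRating", "hasValue"],
                       "riskRating")
# data_triples == [("Loan1", "riskRating", "AA")]
```

The concept ontology keeps the full explanatory path; the data ontology gets the compact property that operational models actually need.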
Ontology for Knowledge and Data Strategies.pptxMike Bennett
Ontology suffers from an adoption problem. If we are to describe the benefits of ontologies and knowledge graphs, we need to demonstrate how these can contribute to the business. That means addressing the knowledge and data management strategies of the organization.
A knowledge management strategy addresses a range of concerns, including terminology, business semantics, data provenance and quality, information availability and a rigorous treatment of context. Ontology is just one tool among many in the overall strategy for managing knowledge assets and their use.
In this seminar we will unpack the components of an organizational knowledge strategy and show, in terms that both business and IT can understand, how different types of ontology fit into the firm’s wider data management and knowledge strategies, alongside a range of other tools and techniques.
Attendees do not need any prior knowledge of ontology, knowledge graphs or semantic technology, but should ideally have an appreciation of data and knowledge management issues.
Mike Bennett's presentation on Ontology for Knowledge and Data Strategies, as presented at University of Westminster in December 2022.
This covers how ontologies may be used as part of a broader business strategy for knowledge and data management, including how different styles of ontology are needed for different parts of such a strategy.
This is my end to end description of what Hypercube can do for clients.
This deck explains semantics from first principles in a non-technical way, before showing how these principles may be applied in managing business semantics. Finally, a number of routes to technical deployment of semantics (ontology) solutions are shown.
This was presented to a workshop of the American Accounting Association in San Francisco in August 2019
Webinar in which Mike Bennett describes the unique approach Hypercube applies to modeling business semantics (the method used in creating the EDM Council's FIBO Business Conceptual Ontology). The end result of creating this kind of business conceptual ontology is that a firm will have a single, canonical source of meaning across all its data resources, like a golden copy but in the semantics space - so we sometimes refer to this as a "Golden Ontology".
Mike explains the principles for creating an enterprise conceptual ontology. From this webinar you will learn:
3 things you need to know about ontologies
- Words are not Concepts
- Meaning is not Truth
- Syntax is not Semantics
3 things you need to do to build a Golden reference ontology:
- Classification
- Abstraction
- Partitioning
3 ways to use a Golden Ontology
- Querying across legacy data sources
- Mapping and data integration
- Reasoning with Semantic Web applications
Financial Industry Semantics and OntologiesMike Bennett
Overview of the principles of semantic modelling as applied to the Financial Industry Business Ontology (FIBO), with history and development of FIBO, strategies for deployment and potential use in financial reporting.
Update on Financial Industry Business Ontology status, as presented to the Open Financial Data Group. Includes description of the canonical reference model (business conceptual ontology) and the principles by which this was built
how can I sell pi coins after successfully completing KYCDOT TECH
Pi coin is not listed yet on any exchange 💱, which means it's not swappable; the pi currently displayed on CoinMarketCap is the IOU version of pi. You can learn all about that in my previous post.
RIGHT NOW THE ONLY WAY you can sell pi coins is through verified pi merchants. A pi merchant is someone who buys pi coins and resells them to exchanges and crypto whales looking forward to holding massive quantities of pi coins before the mainnet launch.
This is because pi network is not doing any pre-sale or ICO offerings; the only way to get pi coins is by buying from miners. So a merchant facilitates the transactions between the miners and the exchanges holding pi.
My friends and I have sold more than 6000 pi coins successfully with this method. I will be happy to share the contact of my personal pi merchant, the one I trade with; if you have your own merchant you can trade with them. This is for those who are new.
Message: @Pi_vendor_247 on telegram.
I wouldn't advise selling all of your pi coins. Leave at least some behind so it's a win-win during the open mainnet. Have a nice day pioneers ♥️
#kyc #mainnet #picoins #pi #sellpi #piwallet
#pinetwork
US Economic Outlook - Being Decided - M Capital Group August 2021.pdfpchutichetpong
The U.S. economy is continuing its impressive recovery from the COVID-19 pandemic and is not slowing down despite recurring bumps. The U.S. savings rate reached its highest ever recorded level, 34%, in April 2020, and Americans seem ready to spend. The sectors that had been hurt most by the pandemic's reduced consumer spending, like retail, leisure, hospitality, and travel, are now experiencing massive growth in revenue and job openings.
Could this growth lead to a “Roaring Twenties”? As quickly as the U.S. economy contracted, experiencing a 9.1% drop in economic output relative to the business cycle in Q2 2020, the largest in recorded history, it has rebounded beyond expectations. This surprising growth seems to be fueled by the U.S. government’s aggressive fiscal and monetary policies, and an increase in consumer spending as mobility restrictions are lifted. Unemployment rates between June 2020 and June 2021 decreased by 5.2%, while the demand for labor is increasing, coupled with increasing wages to incentivize Americans to rejoin the labor force. Schools and businesses are expected to fully reopen soon. In parallel, vaccination rates across the country and the world continue to rise, with full vaccination rates of 50% and 14.8% respectively.
However, it is not completely smooth sailing from here. According to M Capital Group, the main risks that threaten the continued growth of the U.S. economy are inflation, unsettled trade relations, and another wave of Covid-19 mutations that could shut down the world again. Have we learned from the past year of COVID-19 and adapted our economy accordingly?
“In order for the U.S. economy to continue growing, whether there is another wave or not, the U.S. needs to focus on diversifying supply chains, supporting business investment, and maintaining consumer spending,” says Grace Feeley, a research analyst at M Capital Group.
While the economic indicators are positive, the risks are coming closer to manifesting and threatening such growth. The new variants spreading throughout the world, Delta, Lambda, and Gamma, are vaccine-resistant and muddy the predictions made about the economy and health of the country. These variants bring back the feeling of uncertainty that has wreaked havoc not only on the stock market but the mindset of people around the world. MCG provides unique insight on how to mitigate these risks to possibly ensure a bright economic future.
Empowering the Unbanked: The Vital Role of NBFCs in Promoting Financial Inclu...Vighnesh Shashtri
In India, financial inclusion remains a critical challenge, with a significant portion of the population still unbanked. Non-Banking Financial Companies (NBFCs) have emerged as key players in bridging this gap by providing financial services to those often overlooked by traditional banking institutions. This article delves into how NBFCs are fostering financial inclusion and empowering the unbanked.
how to sell pi coins on Bitmart crypto exchangeDOT TECH
Yes, pi network coins can be exchanged, but not on the Bitmart exchange, because pi network is still in the enclosed mainnet. The only way pioneers can trade pi coins right now is by reselling them to verified pi merchants.
A verified merchant is someone who buys pi network coins and resells them to exchanges looking to hold until the mainnet launch.
I will leave the telegram contact of my personal pi merchant to trade with.
@Pi_vendor_247
how to sell pi coins effectively (from 50 - 100k pi)DOT TECH
Anywhere in the world, including Africa, America, and Europe, you can sell Pi Network Coins online and receive cash through online payment options.
Pi has not yet been launched on any exchange because we are currently using the confined Mainnet. The planned launch date for Pi is June 28, 2026.
Reselling to investors who want to hold until the mainnet launch in 2026 is currently the sole way to sell.
Consequently, all you need to do right now is select the right pi network provider.
Who is a pi merchant?
An individual who buys coins from miners on the pi network and resells them to investors hoping to hang onto them until the mainnet debuts is known as a pi merchant.
I'll provide you with the Telegram username
@Pi_vendor_247
how to sell pi coins in all Africa Countries.DOT TECH
Yes. You can sell your pi network coins for other cryptocurrencies like Bitcoin, USDT, Ethereum and other currencies, and this is done easily with help from a pi merchant.
What is a pi merchant?
Since pi is not listed yet on any exchange, the only way you can sell right now is through merchants.
A verified pi merchant is someone who buys pi network coins from miners and resells them to investors looking to hold massive quantities of pi coins before the mainnet launch in 2026.
I will leave the telegram contact of my personal pi merchant to trade with.
@Pi_vendor_247
how can i use my mined pi coins I need some funds.DOT TECH
If you are interested in selling your pi coins, I have a verified pi merchant who buys pi coins and resells them to exchanges looking to hold till the mainnet launch.
The core team has announced that pi network will not be doing any pre-sale, so the only way exchanges like Huobi, Bitmart and Hotbit can get pi is by buying from miners.
A merchant stands in between these exchanges and the miners, as a link to make transactions smooth. Right now, in the enclosed mainnet, you can't sell pi coins yourself; you need the help of a merchant.
I will leave the telegram contact of my personal pi merchant below. 👇 My friends and I have traded more than 3000 pi coins with him successfully.
@Pi_vendor_247
Financial Assets: Debit vs Equity Securities.pptxWrito-Finance
Financial assets represent a claim to future benefits or cash. Financial assets are formed by establishing contracts between participants, and they are used to raise large amounts of money for business purposes.
Two major types: Debt Securities and Equity Securities.
Debt Securities are also known as fixed-income securities or instruments. This type of asset is formed by establishing a contract between the investor and the issuer of the asset.
• The first type of debt security is BONDS. Bonds are issued by corporations and governments (both local and national).
• The second important type of debt security is NOTES. Apart from their similarities with bonds, notes have shorter maturities.
• The third important type of debt security is TREASURY BILLS. These securities have short terms of three months, six months, or one year. The issuers of such securities are governments.
• The debt securities discussed above are mostly issued by governments and corporations. CERTIFICATES OF DEPOSIT (CDs) are issued by banks and financial institutions. The risk associated with CDs is reduced when they are issued by reputable institutions or banks.
The following are the risks attached to debt securities: credit risk, interest rate risk and currency risk.
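The "fixed income" character of debt securities is easy to see in a small pricing sketch: the holder's cashflows are fixed in advance, and the security's present value is those cashflows discounted at the market yield. The numbers below are illustrative (annual coupons assumed):

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of a fixed-income security: discounted annual
    coupons plus the discounted face value repaid at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t
                     for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# A 1,000 face-value bond paying a 5% annual coupon, priced at a 5%
# market yield, trades at par: its price equals its face value.
price = bond_price(1000, 0.05, 0.05, 10)
```

This also illustrates interest rate risk from the list above: if the market yield rises above the coupon rate, the same bond's price falls below face value.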
Equity securities have no fixed maturity dates, and the asset’s value is determined by the company’s performance. There are two major types of equity securities: common stock and preferred stock.
Common Stock: These are simple equity securities and bear none of the complexities that preferred stock bears. Holders of such securities have voting rights when it comes to selecting the company’s board of directors or making business decisions.
Preferred Stock: Preferred stocks are sometimes referred to as hybrid securities because they contain elements of both debt and equity securities. Preferred stock confers ownership rights on the security holder, which is why it is an equity instrument.
<a href="https://www.writofinance.com/equity-securities-features-types-risk/" >Equity securities </a> as a whole are used for capital funding of companies. Companies have multiple expenses to cover, and potential growth is required in a competitive market. So these securities are used for capital generation, which is then used for the company’s growth.
Concluding remarks
Both are employed in business. If businesses are often established through debt securities, what is the need for equity securities? Companies have to cover multiple expenses and the expansion of the business, and they can also use equity instruments for repayment of debts. So there are multiple uses for securities. As an investor, you need tools for analysis: investment decisions are made by carefully analyzing the market, and for better analysis of the stock market, investors often employ financial analysis of companies.
how to swap pi coins to foreign currency withdrawable.DOT TECH
As of my last update, Pi is still in the testing phase and is not tradable on any exchanges.
However, Pi Network has announced plans to launch its Testnet and Mainnet in the future, which may include listing Pi on exchanges.
The current method for selling pi coins involves exchanging them with a pi vendor who purchases pi coins for investment reasons.
If you want to sell your pi coins, reach out to a pi vendor; this works for anyone looking to sell pi coins from any country around the globe.
Below is the contact information for my personal pi vendor.
Telegram: @Pi_vendor_247
how to sell pi coins in South Korea profitably.DOT TECH
Yes. You can sell your pi network coins in South Korea or any other country by finding a verified pi merchant.
What is a verified pi merchant?
Since pi network is not launched yet on any exchange, and no pre-sale or ICO offerings are done on pi, the only way you can sell pi coins is by selling to a verified pi merchant.
Since there is no pre-sale, the only way exchanges can get pi is by buying from miners. So a pi merchant facilitates these transactions by acting as a bridge between the two.
How can I find a pi vendor/merchant?
For those who haven't traded with a pi merchant or don't already have one, I will leave the telegram id of my personal pi merchant who I trade pi with.
Telegram: @Pi_vendor_247
#pi #sell #nigeria #pinetwork #picoins #sellpi #Nigerian #tradepi #pinetworkcoins #sellmypi
Poonawalla Fincorp and IndusInd Bank Introduce New Co-Branded Credit Cardnickysharmasucks
The unveiling of the IndusInd Bank Poonawalla Fincorp eLITE RuPay Platinum Credit Card marks a notable milestone in the Indian financial landscape, showcasing a successful partnership between two leading institutions, Poonawalla Fincorp and IndusInd Bank. This co-branded credit card not only offers users a plethora of benefits but also reflects a commitment to innovation and adaptation. With a focus on providing value-driven and customer-centric solutions, this launch represents more than just a new product—it signifies a step towards redefining the banking experience for millions. Promising convenience, rewards, and a touch of luxury in everyday financial transactions, this collaboration aims to cater to the evolving needs of customers and set new standards in the industry.
how to sell pi coins at high rate quickly.DOT TECH
Where can I sell my pi coins at a high rate?
Pi is not launched yet on any exchange, but one can easily sell his or her pi coins to investors who want to hold pi till the mainnet launch.
This means crypto whales want to hold pi, and you can get a good rate selling pi to them. I will leave the telegram contact of my personal pi vendor below.
A vendor is someone who buys from a miner and resells to a holder or crypto whale.
Here is the telegram contact of my vendor:
@Pi_vendor_247
The Evolution of Non-Banking Financial Companies (NBFCs) in India: Challenges...beulahfernandes8
Role in Financial System
NBFCs are critical in bridging the financial inclusion gap.
They provide specialized financial services that cater to segments often neglected by traditional banks.
Economic Impact
NBFCs contribute significantly to India's GDP.
They support sectors like micro, small, and medium enterprises (MSMEs), housing finance, and personal loans.
Currently pi network is not tradable on Binance or any other exchange because we are still in the enclosed mainnet.
Right now the only way to sell pi coins is by trading with a verified merchant.
What is a pi merchant?
A pi merchant is someone verified by pi network team and allowed to barter pi coins for goods and services.
Since pi network is not doing any pre-sale, the only way exchanges like Binance/Huobi or crypto whales can get pi is by buying from miners. A merchant stands in between the exchanges and the miners.
I will leave the telegram contact of my personal pi merchant. My friends and I have traded more than 6000 pi coins successfully.
Telegram
@Pi_vendor_247
what is the best method to sell pi coins in 2024DOT TECH
The best way to sell your pi coins safely is trading on an exchange, but since pi is not listed on any exchange, the second option is through a VERIFIED pi merchant.
Who is a pi merchant?
A pi merchant is someone who buys pi coins from miners and pioneers and resells them to investors looking to hold massive amounts before the mainnet launch in 2026.
I will leave the telegram contact of my personal pi merchant to trade pi coins with.
@Pi_vendor_247
The European Unemployment Puzzle: implications from population agingGRAPE
We study the link between the evolving age structure of the working population and unemployment. We build a large new Keynesian OLG model with a realistic age structure, labor market frictions, sticky prices, and aggregate shocks. Once calibrated to the European economy, we quantify the extent to which demographic changes over the last three decades have contributed to the decline of the unemployment rate. Our findings yield important implications for the future evolution of unemployment given the anticipated further aging of the working population in Europe. We also quantify the implications for optimal monetary policy: lowering inflation volatility becomes less costly in terms of GDP and unemployment volatility, which hints that optimal monetary policy may be more hawkish in an aging society. Finally, our results also propose a partial reversal of the European-US unemployment puzzle due to the fact that the share of young workers is expected to remain robust in the US.
BYD SWOT Analysis and In-Depth Insights 2024.pptxmikemetalprod
In-depth analysis of BYD in 2024
BYD (Build Your Dreams) is a Chinese automaker and battery manufacturer that has snowballed over the past two decades to become a significant player in electric vehicles and global clean energy technology.
This SWOT analysis examines BYD's strengths, weaknesses, opportunities, and threats as it competes in the fast-changing automotive and energy storage industries.
Founded in 1995 and headquartered in Shenzhen, BYD started as a battery company before expanding into automobiles in the early 2000s.
Initially manufacturing gasoline-powered vehicles, BYD focused on plug-in hybrid and fully electric vehicles, leveraging its expertise in battery technology.
Today, BYD is the world’s largest electric vehicle manufacturer, delivering over 1.2 million electric cars globally. The company also produces electric buses, trucks, forklifts, and rail transit.
On the energy side, BYD is a major supplier of rechargeable batteries for cell phones, laptops, electric vehicles, and energy storage systems.
1. FIBO Proof of Concept for
Blockchain Applications
Enterprise Data World – FIBO Conference
San Diego, CA
April 24 2018
Mike Bennett – Hypercube Ltd. – EDM Council Inc.
2. Overview
• Exploration by Object Management Group Finance Domain Task Force
• Premise: Use FIBO ontology as standard concept model for Smart
Contracts in Blockchain
• Exploration of concept model parameters
• Chosen PoC: Interest Rate Swaps
• FIBO exploration and extension
4. Actually
• The value in Blockchain is in the architecture
• Not only used for cryptocurrency
• Anything that benefits from non-repudiable data records
• Anonymity versus performance
• Not every application suddenly needs to be done on Distributed
Ledgers
• But there are many new opportunities that do
• And many new architectures that open up the potential application space
5. Blockchain and Distributed Ledgers
• Origin: Bitcoin, the Satoshi Nakamoto paper
• Ability to post non-repudiable stuff on a distributed ledger
• What stuff?
• Balances
• Hash code for any kind of file?
• What is distributed?
• Smart Contracts
6. OMG Motivation
• Are there aspects of Blockchain where some standardization would
be of value?
• If so, what to introduce as OMG standards v what to coordinate with other
WGs
• Potential value of FIBO for Blockchain / DLs
7. Discovery: Key Points
• A distributed ledger is not a ledger
• Though it can be
• A Smart Contract is not a contract
• Bitcoin is not really money
• It’s a commodity used as currency, like any mined commodity
• It is not a fiat currency, as most modern money is.
8. A Model Driven Approach to Blockchain/DL
• We see things happening at a physical or logical design level
• Smart Contracts code in e.g. Python, defining required behaviors, conditions
etc. when a thing is posted
• Physical architecture of the different Blockchain and DLs
• We see the need for and benefits of a common business view
• Computation-independent or “Conceptual”
• Business semantics
• Conceptual view for both structural and behavioral
• FIBO as the conceptual model for contract terms
9. SMART CONTRACTS
• Premise: Smart Contracts should standardize around ontological
definitions of contracts (or commitments etc.) rather than the
code-first approach that others in Smart Contracts have been
considering.
• One opportunity is that whatever code is distributed itself has a hash
key, so if that hash has been vetted and accepted then the code is
known to be trustworthy. We know we are executing the same code in
the same VM
• So consensus can be at the level of both the business content and the code
that does the work (meta level)
• Then execution is off chain but the DLT validates that you can trust the
outcome
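The hash-based trust idea above can be sketched in a few lines. This is a minimal illustration, not from the deck: the function name and the toy contract source are mine; the point is only that a hash posted to the ledger lets every party verify it is executing the same vetted code.

```python
# Sketch: if the Smart Contract code itself is recorded on the ledger by
# its hash, a counterparty can verify it is running the vetted version
# before trusting the outcome of (off-chain) execution.
import hashlib

def code_fingerprint(source: str) -> str:
    """Return a SHA-256 hash identifying an exact version of contract code."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

# Hypothetical vetted contract body, agreed by the parties:
vetted_source = "def pay(amount): return amount"
vetted_hash = code_fingerprint(vetted_source)   # this goes on the ledger

# Before executing, a counterparty recomputes the hash of the code it
# received and compares it with the vetted hash on the ledger.
received_source = "def pay(amount): return amount"
assert code_fingerprint(received_source) == vetted_hash
```

Consensus at this meta level means any single changed byte of code yields a different hash, so execution can safely move off chain.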
10. IR Swaps Proof of Concept
• Based on a simple vanilla IR Swap example
• Mapped out the process workflow
• Use FIBO for representation of contractual terms
• Extend for other conceptual matter (process and events)
11. What is an Interest Rate Swap?
[Diagram: Party A and Party B each hold a loan – principal with fixed-rate interest on one side, principal with floating-rate interest on the other]
Two loans
12. What is an Interest Rate Swap?
[Diagram: the same two loans, with the interest cashflows crossing between Party A and Party B]
Party A and Party B want to exchange interest cashflows
13. What is an Interest Rate Swap?
[Diagram: the combined contract – a Fixed Rate Leg and a Floating Rate Leg between Party A and Party B]
Common Trade / Contract terms
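The cashflow exchange in the diagrams above reduces to simple arithmetic: the principals cancel, so each period the parties swap only the difference between the two interest amounts. A hedged sketch, with entirely hypothetical figures:

```python
# Illustrative only: in a vanilla IR swap the notional is not exchanged,
# so each period the parties net the fixed-leg and floating-leg interest.

def net_payment(notional, fixed_rate, floating_rate, year_fraction):
    """Positive result: the fixed payer (Party A) pays Party B; negative: receives."""
    fixed_leg = notional * fixed_rate * year_fraction
    floating_leg = notional * floating_rate * year_fraction
    return fixed_leg - floating_leg

# One quarterly period (~0.25 of a year) on a 10m notional,
# 3% fixed against a 2.5% floating fixing: roughly 12,500 changes hands.
quarterly_net = net_payment(10_000_000, 0.03, 0.025, 0.25)
```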
15. Structural and Behavioral Business Content
• Structural: terms of the contract
• Mappable to or derived from FIBO
• Behavioral
• Payments process flow
• off-Blockchain behaviors and activities
• Events in the life of the swap and possible block postings for those events
• Smart Contract program flow
16. Structural
• If anything can be hashed and posted to the DL then:
• Can be at the level of the whole contract (PDF)
• OR
• Can be at the level of the individual Commitment
• FIBO describes commitments individually
• Based on REA transaction semantics
• Benefit in posting specific commitments to the DL for greater
precision
• These also map more directly to process activities
17. Behavioral
• Business process or workflow
• E.g. payments, settlement delivery workflows
• Swap: two cashflow-based commitments
• Can model these in
• UML Activity Diagram format
• BPMN (Business Process Model and Notation)
• Process ontology
• FIBO Conceptual Abstractions Model
• Includes process and activity primitives
20. Process Definition versus Events Occurrences
• The process activity is a prescriptive occurrent
• It describes an event that is mandated and will happen many times
• It has a ‘date’ but not a date
• The instance is a descriptive occurrent
• This happens (whether now or in the future, once or many times or never)
• This has the percentages, dates, monetary amounts and so on
22. Prescriptive and Descriptive Occurrents
[Diagram: control flow, in which the process occurrence is the ‘instance’ of the prescriptive occurrent being carried out]
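The prescriptive/descriptive distinction on these slides can be made concrete with two small types. This is my own sketch, not FIBO vocabulary: the class and field names are illustrative. The definition carries a rule for a ‘date’; each occurrence carries an actual date and actual numbers.

```python
# Sketch of prescriptive vs descriptive occurrents (names are mine, not FIBO's).
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ProcessDefinition:
    """Prescriptive occurrent: mandates an event that will happen many times."""
    name: str
    date_rule: str        # it has a 'date' (a rule), but not a date

@dataclass(frozen=True)
class ProcessOccurrence:
    """Descriptive occurrent: one thing that actually happens."""
    definition: ProcessDefinition
    happened_on: date     # an actual date
    value: float          # the actual percentage / monetary amount

reset = ProcessDefinition("Floating rate reset", "quarterly, on each reset date")
occurrence = ProcessOccurrence(reset, date(2018, 6, 29), 0.0263)
```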
23. Next Step: Conceptual to Physical
• Need indicative physical implementation(s)
• Show how conceptual model specifies these
• Need to get to know the languages used
24. Languages
Foundational languages to be familiar with (these are typically compatible, accepted and supported in active crypto environments):
• C++ (main) – https://www.gitbook.com/book/programmingblockchain/programmingblockchain/details
• Node.JS
• Python
• Perl
DLT-specific languages:
• The Blockchain (Bitcoin) – Script – http://davidederosa.com/basic-blockchain-programming/
• Ethereum – Solidity – https://solidity.readthedocs.io/en/develop/
• Hyperledger Fabric – Go – https://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html#go-programming-language
• Corda (R3) – Kotlin – https://www.corda.net/2017/01/kotlin/
• Ripple – C++, Java, Node.JS, etc. – https://github.com/ripple
• Digital Asset Holdings – Digital Asset Modeling Language – https://digitalasset.com/press/introducing-daml.html
• Stellar Networks – Horizon RESTful HTTP API – https://www.stellar.org/developers/guides/get-started/
25. Hunting the Elusive Code
• Went looking at a range of sites for
• Ethereum
• Hyperledger
• Found:
• An IR Swaps hackathon and description
• Lots of code for different financial actions
• Specific to cryptocurrency
• Ledger as ledger
• We want ledger as record of anything
27. Code Fragments Needed
• Post a percentage
• Post a monetary amount
• Not as a transaction but as a non-repudiable record of that amount
• Post a netting transaction
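The “post an amount as a record, not a transaction” fragment can be sketched as follows. The field names are illustrative, not a standard: the amount is serialized canonically together with its qualifiers, and the hash of that serialization is what makes the record non-repudiable.

```python
# Sketch: posting a monetary amount (or percentage) as a non-repudiable
# record. Field names are hypothetical; the qualifiers anticipate the ones
# identified later in the deck (trade identifier, reset date, currency).
import hashlib
import json

def make_record(value, currency, uti, reset_date):
    record = {
        "value": value,          # the core datum (amount or percentage)
        "currency": currency,    # qualifier: monetary amounts need a currency
        "trade": uti,            # qualifier: Unique Transaction Identifier
        "resetDate": reset_date, # qualifier: which quarterly cycle
    }
    # sort_keys gives a canonical serialization, so the same record
    # always hashes to the same digest on every node.
    canonical = json.dumps(record, sort_keys=True)
    return canonical, hashlib.sha256(canonical.encode()).hexdigest()

payload, digest = make_record(62500.0, "USD", "UTI-0001", "2018-06-29")
```

Any tampering with the amount or its qualifiers changes the digest, which is exactly the non-repudiation property the slide asks for.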
28. What gets Posted Where?
• Oracles and other data stores
• Posting to a Blockchain block incurs overheads
• No one size fits all
• If we did a model with ‘Posted to the Block’, this would represent one
architecture, not all
• Logical (design) model not concept model
29. Different Block-thing Architectures
• Blockchain
• Bitcoin
• Ethereum
• Hyperledger
• Block Graphs
• IOTA – the Tangle
• Hashgraph – gossip protocol
• Blockchain Menu
• CORDA / R3
• Solutions to the bottleneck issues
• Proof of Stake v Proof of Work
• Use of cryptocurrency for mining reward etc. v Tangle Proof of Work
30. The “Blockchain Bundle” Menu
• Ref
• https://gendal.me/2016/04/05/introducing-r3-corda-a-distributed-ledger-
designed-for-financial-services/
• Consensus
• Validity
• Uniqueness
• Immutability
• Authentication
31. R3 Proposal for FiServ Bundle
• Consensus
• Between parties – not to all
• Validity
• Stakeholders
• Validation logic
• Uniqueness
• Uniqueness service implementations
• Trade-offs and prioritization
• Immutability and Authentication
• As per Blockchain
32. IOTA
• New architecture in the Distributed Ledger space
• “Tangle” not Block
• What this is: Proofs of work done on two randomly chosen upstream items
• Moves forward as a connected set (tangle) of records not a distributed set of
identical records
• Will release open source software plus open standards
• Tangle Architecture – via OMG FDTF
• Protocol – ESMA
• Code – open source via eCore
33. Architecture Implications
• Not only one ‘Blockchain’ approach
• Some architectures you would want to post some material to other
data stores
• Called Oracles
• Or you might retain some information only in dynamic memory
• What you post to ‘The Block’ will vary by architecture
• Derive a logical design from the concept model based on design decisions
• Conclusion:
• Back to the Concept Model principles: Identify variable data regardless of
where posted
34. IR Swap Process Analysis
• At first we defined a swimlane for ‘The Block’
• What gets posted to ‘The Block’?
• It turns out this depends very much on the block architecture and
performance constraints
• Some material might go on separate data stores, sometimes called ‘oracles’
• Latest: Swimlane defined by function not by form:
• Time-variable information about the contract
35. Refining this
• Final process model with variable data
• Static contract terms (term sheets) in FIBO IR Swaps
• Variable terms
• Process activities that generate and use this data
• Prescriptive: process definition
• Descriptive: events that happen
• These originate each piece of variable data
37. Data typing questions
• Any kind of data can be posted to the ‘Block’ or ledger
• E.g. numbers, for money
• Also text etc.
• BUT how do we deal with posting of complex or qualified data
• Monetary amounts (qualified by currency)
• Interest amounts – (on what, for what period)
• How to identify the related contract etc. in a huge and growing block of stuff?
• Also provenance and other metadata
38. Breakthrough Moment
• If any kind of datatype can be posted to the block, then
• XML can be posted (as a complete XML Document)
• E.g. ISO 20022
• XML includes RDF/XML
• So can post a semantic graph
• e.g. instance data referenced to FIBO and others
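The breakthrough above is that a posted record can be a serialized RDF/XML graph. A stdlib-only sketch: the `ex:` namespace and property names are placeholders I invented, not actual FIBO IRIs, but the shape is the point – instance data qualified by a trade identifier, serialized to plain text that any block or oracle can store.

```python
# Sketch: a tiny RDF/XML document for one posted term, built with the
# standard library. The example.org namespace and property names are
# placeholders, not real FIBO IRIs.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
EX = "http://example.org/fibo-ex#"
ET.register_namespace("rdf", RDF)
ET.register_namespace("ex", EX)

root = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(
    root, f"{{{RDF}}}Description",
    {f"{{{RDF}}}about": "http://example.org/trade/UTI-0001"})
amount = ET.SubElement(desc, f"{{{EX}}}hasMonetaryAmount")
amount.text = "62500.00"
currency = ET.SubElement(desc, f"{{{EX}}}hasCurrency")
currency.text = "USD"

# Plain text: postable to a block, an oracle, or any other record store.
xml_text = ET.tostring(root, encoding="unicode")
```

Since RDF/XML is just XML, and XML is just text, posting this string really is posting a semantic graph.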
39. Then
• XML / ISO 20022 can be used directly where appropriate
• Each ISO 20022 message type accompanies a process definition
• If you are following that part of a process, this tells you all the data elements
needed to accompany and qualify that element
• Can also use the ISO 20022 content as analysis for an ontology
representation of the same stuff
40. Then
• Post RDF graphs with qualifying information for each data item
• USE existing analysis where available
• e.g. ISO 20022 or FpML
• IR Swaps PoC: Analyze the qualifying terms needed
• This will complete our PoC conceptual model exercise
41. PoC: What qualifies each Postable Term?
• Specific trade (Unique Transaction Identifier)
• Specific Quarterly cycle (Reset Date)
• Monetary Amounts: Currency
• These are the qualifying data elements for each posting of these
records to a block or oracle or other record, depending on
architecture
• Posted graph = the core data (number etc.) plus these qualifiers
42. Term Sheet Semantics
[Diagram: extract of an IR Swap term sheet]
This stuff – the contractual commitments and agreed terms (triggers etc.)
= the semantics of the Contract
= FIBO IR Swaps
43. FIBO: Scope and Content
[Layer diagram of FIBO, top to bottom:]
• Upper Ontology
• FIBO Foundations: high-level abstractions
• FIBO Business Entities
• FIBO Financial Business and Commerce
• FIBO Contract Ontologies: Securities (Common, Equities), Securities (Debt), Derivatives, Loans, Mortgage Loans, Funds, Rights and Warrants, Corporate Actions, Securities Issuance and Securitization
• FIBO Indices and Indicators
• FIBO Process
• FIBO Pricing and Analytics (time-sensitive concepts): pricing, yields, analytics per instrument class
• Future FIBO: Portfolios, Positions etc.; concepts relating to individual institutions, reporting requirements etc.
53. Records and Occurrences Closeup
• Occurrences of events make reference to IR Swaps terms
• Variable data qualified by specific other data
54. Variable Data Qualifying Information
• This is the data accompanying each record that may get posted
55. Conclusions
• We know what needs to be posted to [something e.g. Block] at any
given point in the process
• This is posted as a textual record
• Where the text is XML and the XML is RDF
• We are posting graphs!
• Narrows the search for physical code in different languages and
architectures
56. Smart Contracts Specification
• The part where it posts something is now simple enough
• RDF graph with the qualifying terms identified
• The other part of the Smart Contracts is the calculations etc.
• The calculations within the boxes are well enough understood
• Ontology provides formal declaration of the concepts involved
• There is also plain mathematics:
• Interest rate = Base + Margin
• Accrued amounts – as defined for Accrual Basis
• Mathematics is another computationally independent way of saying things
• Some things we can’t say in Ontology but we can declare the parameters
used
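The ‘plain mathematics’ named on this slide is simple enough to write down directly. A sketch only: the Actual/360 day count used here is one common Accrual Basis choice, and all rates and dates are hypothetical.

```python
# Sketch of the slide's two formulas: Interest rate = Base + Margin, and
# an accrued amount as defined by an Accrual Basis (here Actual/360).
from datetime import date

def floating_rate(base_rate, margin):
    """Interest rate = Base + Margin."""
    return base_rate + margin

def accrued_amount(notional, rate, start, end, basis_days=360):
    """Accrued interest over [start, end) under an Actual/basis_days convention."""
    days = (end - start).days
    return notional * rate * days / basis_days

# 2.25% base fixing plus a 40bp margin, accruing over one quarter
# (92 actual days) on a 10m notional:
rate = floating_rate(0.0225, 0.0040)
interest = accrued_amount(10_000_000, rate, date(2018, 3, 29), date(2018, 6, 29))
```

The ontology declares what `base_rate`, `margin` and the Accrual Basis *mean*; the mathematics above is the complementary, equally computation-independent statement of how they combine.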
57. Conclusions
• Computationally independent representations
• Formal process
• Ontology
• Mathematics
• Ontology can represent processes, events and activities given the right
upper ontology and semantics treatment
• (the meaning of things that happen)
• Ontology provides the connective glue into the mathematical content
• FIBO provides the ontology of the contractual terms
• Extend FIBO for additional concepts
• Conceptual underpinnings for Smart Contracts
• Architecturally agnostic
58. Take Aways
• You need computationally independent models for complex
developments like blockchain / Smart Contracts
• The more wild the technology, the more you need mature governance to get
ahead and mitigate risk
• Ontology provides ‘first order’ concept modeling
• Semantic analysis yields process, events, records ontology
• FIBO provides ontology of contract and trade terms sheet