This document discusses scalable 5-tuple packet classification in overlay networks. It proposes a rule aggregation algorithm to minimize the number of entries stored and reduce storage requirements. It also proposes a flow classification method to minimize lookup time and reduce processing latency. The key points are:
1) It develops a 5-tuple LCAF implementation and mapping system for faster lookups and reduced control messages.
2) The rule aggregation algorithm breaks rules into header fields and limits wildcard positions to minimize stored entries.
3) The flow classification method uses pre-filtering to split rules into multiple tables based on effective bit positions, reducing lookup times.
4) Testing shows the methods reduce storage requirements by 58.6% and processing latency by 29.6%.
4. Introduction
• Traditional Networking: destination-based forwarding; no global view; multiple control planes
• OpenFlow (SDN): matches up to 38 packet header fields; global view; centralized control plane
• Problems: as the number of rules and the rule size increase, processing latency and storage space requirements also increase
5. Problem Statement
• Trade-off between flexibility and performance: the 5-tuple sits between SDN (most flexible) and traditional networking (best performance)
• Objectives:
  • Propose a rule aggregation algorithm to minimize the number of entries stored → reduced storage space requirement
  • Propose a flow classification method to minimize the lookup time → reduced processing latency
6. Related Works
• Packet classification: a well-studied research topic
• Decision-tree based:
  • Well-known approaches such as HiCuts and HyperCuts
  • Cons: tree size grows significantly with finer-grained TE rules
• TCAM-based:
  • Fast lookup process
  • Cons: high implementation cost
• Tuple Space Search (TSS):
  • Very flexible
  • Cons: high processing latency for large numbers of rules
• Main idea: MC-SBC — a modified TSS that sends rules and packets to specific, smaller lookup tables
8. Open Overlay Router (OOR)
• Overlay networking is one of the well-known implementations of SDN
• OOR: an open-source platform for creating programmable overlay networks
9. Vector Packet Processing (VPP)
• High-performance packet processing platform that runs entirely in userspace
• Processes multiple packets at a time by building a superframe (vector of packets)
• Packets are processed through a packet processing graph
13. 5-Tuple Mapping System
• Goals:
  • Faster lookup times
  • Reduced amount of control messages
• Three layers of information bases:
  • NIB: the whole TE rule set
  • RIB: a subset of the NIB
  • FIB: ready-to-use information
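The three-layer lookup can be pictured as a fall-through hierarchy: a FIB miss falls back to the RIB, and a RIB miss falls back to the NIB. The sketch below is illustrative only — the class name, the dictionary-based layers, the promotion policy, and the 5-tuple key are assumptions, not the OOR implementation:

```python
class MappingSystem:
    """Illustrative three-layer information base: a FIB lookup falls back to
    the RIB, which falls back to the NIB holding the whole TE rule set."""

    def __init__(self, nib):
        self.nib = dict(nib)  # NIB: whole TE rule set
        self.rib = {}         # RIB: subset of the NIB
        self.fib = {}         # FIB: ready-to-use entries on the fast path

    def lookup(self, flow):
        if flow in self.fib:              # fast path
            return self.fib[flow]
        if flow in self.rib:              # promote a RIB entry to the FIB
            self.fib[flow] = self.rib[flow]
            return self.fib[flow]
        if flow in self.nib:              # resolve from the full rule set
            self.rib[flow] = self.fib[flow] = self.nib[flow]
            return self.fib[flow]
        return None

# Hypothetical 5-tuple key: (src IP, dst IP, protocol, src port, dst port)
ms = MappingSystem({("10.0.0.1", "10.0.0.2", 17, 5353, 53): "forward"})
print(ms.lookup(("10.0.0.1", "10.0.0.2", 17, 5353, 53)))  # forward
```

After the first lookup the entry sits in the FIB, so repeated lookups never touch the slower layers — the mechanism behind the "faster lookup times" goal.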
14. Rule Aggregation and Flow Classification
• Developed based on the VPP classification method
• Basic ideas:
  • Splitting rules into individual header fields
  • Limiting wildcard positions during aggregation
  • Using multiple tables for classification
15. Rule Aggregation
• The VPP classifier does not accept variable-sized header masks (wildcards)
• Goals:
  • Minimize the number of entries
  • Give flexibility to use wildcards at predicted positions
• Predicted vs free wildcard positions (4-bit rules):

  Predicted | Free Wildcard Positions
  1111      | 1111 0111 1010 0010
  0111      | 1110 1100 0101 0100
  0011      | 1101 1001 0110 1000
  0001      | 1011 0011 0001

  (With predicted positions only these 4 masks are allowed, versus all 15 possible masks when wildcard positions are free.)
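Since the VPP classifier cannot store wildcards directly, a wildcard rule has to be expanded into the concrete entries it covers. A minimal sketch (the function name and the '*' wildcard notation are illustrative):

```python
from itertools import product

def expand_rule(rule):
    """Expand a rule containing '*' wildcards into all matching bit strings."""
    positions = [i for i, c in enumerate(rule) if c == "*"]
    entries = []
    for bits in product("01", repeat=len(positions)):
        e = list(rule)
        for pos, b in zip(positions, bits):
            e[pos] = b
        entries.append("".join(e))
    return entries

print(expand_rule("0*1*"))  # ['0010', '0011', '0110', '0111']
```

Expansion doubles the entry count per wildcard bit, which is why limiting wildcards to predicted positions (4 masks instead of 15 for 4-bit fields) keeps the stored tables small.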
16. Rule Aggregation (Cont.)
• Inserted rules (with wildcards) are broken down into multiple entries E_j^i to remove the wildcards
• The entries are converted into binary, split into individual packet header fields e_{k,j}^i, and inserted into the matrix R^i
• The matrix R^i is fed into the rule aggregation algorithm, which produces the aggregated rule matrix R^i′
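The aggregation step can be sketched as a greedy merge: two entries that differ in exactly one bit position collapse back into a single wildcard entry. This is a simplified stand-in for the slides' per-field algorithm, not its exact form:

```python
def merge_pair(a, b):
    """Merge two equal-length entries that differ in exactly one position."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) != 1:
        return None
    i = diff[0]
    return a[:i] + "*" + a[i + 1:]

def aggregate(entries):
    """Greedily merge entries until no pair differs in exactly one bit.
    Each merge shrinks the set, so the loop always terminates."""
    result = set(entries)
    changed = True
    while changed:
        changed = False
        for a in list(result):
            for b in list(result):
                if a == b:
                    continue
                merged = merge_pair(a, b)
                if merged is not None:
                    result -= {a, b}
                    result.add(merged)
                    changed = True
                    break
            if changed:
                break
    return result

print(aggregate({"0010", "0011", "0110", "0111"}))  # {'0*1*'}
```

Note the symmetry with expansion: the four entries produced by expanding "0*1*" aggregate back into one stored entry, which is the storage saving the paper measures.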
19. Flow Classification
• Goal: reduce the lookup times
• Similar to the Tuple Space Search method, but based on specific bit positions
• Process: offline and online stages
• Pre-filtering: select one or more bit positions to split the rules into multiple smaller lookup tables as evenly as possible
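The pre-filtering split might look like the following sketch; a rule with a wildcard at a selected position must be replicated into every matching table. Function names and the '*' notation are illustrative assumptions:

```python
def build_tables(rules, ebps):
    """Split rules into smaller lookup tables keyed by their bits at the
    selected bit positions (EBPs)."""
    tables = {}

    def insert(rule, key, remaining):
        if not remaining:
            tables.setdefault(key, []).append(rule)
            return
        pos, rest = remaining[0], remaining[1:]
        # A wildcard at an EBP matches both buckets, so replicate the rule.
        for bit in ("01" if rule[pos] == "*" else rule[pos]):
            insert(rule, key + bit, rest)

    for rule in rules:
        insert(rule, "", ebps)
    return tables

tables = build_tables(["00*1", "01*0", "1***"], ebps=[0, 1])
# "1***" has a wildcard at EBP 1, so it lands in both the "10" and "11" tables
```

At lookup time a packet is matched only against the table selected by its own bits at the EBPs, which is what shrinks the search space and the lookup time.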
20. Flow Classification (Cont.)
• Bit positions selected for pre-filtering are called Effective Bit Positions (EBP)
• To determine the EBP:
  • Diversity index: finds the most even distribution of zeros and ones
  • Independence index (for >1 EBP): finds bit positions that draw a good distinction between entries
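The slides do not give the exact formula, but the diversity index can plausibly be modeled as the Shannon entropy of the 0/1 split at each bit position (1.0 when the split is perfectly even). Treat this as an assumed instantiation over wildcard-free entries, not the paper's definition:

```python
import math

def diversity_index(entries, pos):
    """Entropy of the 0/1 distribution at bit position pos."""
    p = sum(1 for e in entries if e[pos] == "1") / len(entries)
    if p in (0.0, 1.0):
        return 0.0  # all zeros or all ones: useless as a splitter
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_ebp(entries):
    """Pick the bit position whose 0/1 split is most even."""
    return max(range(len(entries[0])),
               key=lambda pos: diversity_index(entries, pos))

rules = ["0001", "0011", "0111", "1111"]
print(select_ebp(rules))  # 1 (position 1 splits the rules exactly 2/2)
```

An even split is what makes the resulting lookup tables roughly equal in size; the independence index (not sketched here) would additionally penalize positions that are correlated with an already-chosen EBP.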
21. Flow Classification (Cont.)
• Best-case scenario: high H_q and Ind_q
• The EBP has to be updated periodically, especially when many rules are added
• In the beginning, when there are no rules yet (day 0), select the EBP at random
27. Simulation and Testing Setup
• Testbed using VMs to implement OOR and VPP
• Synthetic rule generator: ClassBench
• Packet generation: hping3

  Name      | Function
  VM1       | VPP and ClassBench
  VM2 & VM3 | xTR and hping3
  VM4       | MS and hping3
28. Result: Rule Aggregation
• The number of stored entries can be reduced
• Minimum achievable aggregation: 41.4%
• A higher number of original rules leads to more aggregation

  Original | Aggregated | %
  1607     | 1419       | 94.5
  8640     | 6573       | 76.0
  31571    | 16357      | 51.8
  74527    | 31321      | 42.02
  84815    | 35367      | 41.7
29. Result: Flow Classification
• Lookup times can be reduced
• Beyond 76 entries, the proposed method outperforms the basic method
• Maximum achievable reduction: 29.6%
31. Conclusions
• SDN provides flexibility and programmability, but it comes with a scalability problem
• Implementation of the proposed 5-tuple rule aggregation and flow classification is able to reduce:
  • The number of entries stored → less storage space required
  • Lookup times → lower processing latency
• In the testing phase, we found:
  • 58.6% savings in storage space requirement
  • 29.6% reduction in processing latency
32. Future Works
• Extend the proposed rule aggregation:
  • Check for overlapping rules
  • Find a faster way to perform the calculation
• Extend the flow classification method:
  • Calculate and update the EBP in real time
  • Explore other methods to determine the EBP
• Check the effectiveness of the proposed methods in real, large-scale networks