This document discusses various Internet measurement tools and their usefulness for network engineers. It covers tools run by academic groups such as CAIDA, as well as community/industry efforts such as RIPE Atlas, Routeviews, the CIDR Report, and public looking glasses. These tools provide continuous measurements of reachability, latency, routing tables, and BGP updates that help engineers monitor and understand Internet performance and stability.
2. Internet Measurements
• There are a lot of measurements for various purposes on the Internet
  – Reachability and latency measurements
  – Routing table measurements
  – Routing stability measurements
  – IPv6 / DNSSec / $VAR measurements
• These measurements may serve various purposes
  – We'll look at some common ones and how network engineers can utilize them.
3. Measurement Models
• There are a lot of one-off measurements; we won't dwell on those.
• Continuous measurements can be categorized in three main groups
  – Academic study
    • CAIDA (www.caida.org)
    • PlanetLab
    • Lots of other smaller ones out there
  – Community/industry run
    • RIPE Labs (ATLAS, TTM, DNSMON et al.)
    • CIDR-REPORT (and the BGP Stability Report)
    • Routeviews (www.route-views.org)
    • Looking glasses
    • HE BGP Toolkit (bgp.he.net)
  – Commercially run
    • Renesys
    • Arbor
5. CAIDA ARK
• CAIDA: the Cooperative Association for Internet Data Analysis (www.caida.org)
• CAIDA ARK is short for the Archipelago Measurement Infrastructure
• Measures path and latency to the IPv4/IPv6 address space visible in the global routing table.
• ARK data is used in lots of modeling and research, e.g. AS-RANK.
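Ark-style probing runs privileged traceroutes from dedicated vantage points, but the basic idea of timing a probe to a destination can be sketched with nothing more than a TCP handshake. This is only an illustration of latency measurement, not Ark's actual method (Ark uses scamper-based ICMP/UDP/TCP traceroute):

```python
"""Minimal latency probe: time a full TCP handshake to a host/port.

A simplification of what platforms like CAIDA Ark measure -- a plain
TCP connect needs no raw-socket privileges, so it works anywhere.
"""
import socket
import time


def tcp_rtt(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the seconds taken to complete a TCP connect to (host, port)."""
    start = time.monotonic()
    # create_connection performs the full three-way handshake.
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start
```

Repeating this over time to many destinations, and recording the path as well as the delay, is essentially what the continuous-measurement platforms automate at scale.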
9. RIPE
• RIPE NCC, the Regional Internet Registry, has a long history of running measurements
• All the RIPE data is available through http://stat.ripe.net
  – Routing Information Service (RIS)
    • Collects BGP data
    • http://www.ripe.net/ris
  – DNSMon
    • Monitors critical DNS servers
    • http://dnsmon.ripe.net
  – Test Traffic Measurement (TTM)
    • Measures latency and path; stores traceroutes between all TTM nodes
    • Gradually being replaced by RIPE ATLAS
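Because all of this data is exposed through stat.ripe.net, it can be scripted. A minimal sketch, assuming the RIPEstat Data API URL pattern (`/data/<endpoint>/data.json?resource=...`) and the `announced-prefixes` endpoint as publicly documented; endpoint names may change over time:

```python
"""Sketch of querying the RIPEstat Data API (https://stat.ripe.net)."""
import json
import urllib.parse
import urllib.request


def ripestat_url(endpoint: str, resource: str) -> str:
    """Build a RIPEstat Data API URL for a given endpoint and resource."""
    query = urllib.parse.urlencode({"resource": resource})
    return f"https://stat.ripe.net/data/{endpoint}/data.json?{query}"


def announced_prefixes(asn: str):
    """Fetch the prefixes currently announced by an ASN (makes a network call)."""
    with urllib.request.urlopen(ripestat_url("announced-prefixes", asn)) as resp:
        data = json.load(resp)
    return [p["prefix"] for p in data["data"]["prefixes"]]
```

For example, `announced_prefixes("AS3333")` would list the prefixes RIS currently sees originated by RIPE NCC's own AS.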
12. RIPE ATLAS
• New RIPE measurements are using RIPE ATLAS
• A lot of stuff is reported by RIPE Labs
• A combination of TTM and DNSMON in a very tiny form factor
  – Can be installed on home broadband, behind NATs
  – USB powered and easy to install and forget.
13. RIPE ATLAS
• RIPE ATLAS does a pre-defined set of measurements
  – ICMP ping/trace, over v4/v6, to participating root servers
  – To selected other authoritative servers
• User-defined measurements
  – If you host a RIPE ATLAS probe, you get credits
  – You can use your credits to run your own measurements (one-off or ongoing).
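Spending credits on a user-defined measurement is done through the Atlas REST API. A hedged sketch of the request body for a one-off ping, following the `definitions`/`probes`/`is_oneoff` shape described in the public Atlas v2 API documentation (an API key, not shown here, is required to actually POST this to the measurements endpoint):

```python
"""Sketch of a one-off RIPE Atlas user-defined measurement request body."""


def oneoff_ping(target: str, description: str, probe_count: int = 5) -> dict:
    """Build the JSON body for a one-off IPv4 ping from worldwide probes."""
    return {
        "definitions": [{
            "type": "ping",        # measurement type
            "af": 4,               # address family: IPv4
            "target": target,
            "description": description,
        }],
        "probes": [{
            "requested": probe_count,  # how many probes to schedule
            "type": "area",
            "value": "WW",             # WW = worldwide probe selection
        }],
        "is_oneoff": True,             # run once, spending credits once
    }
```

Setting `is_oneoff` to `False` (with an interval) is what turns the same request into an ongoing measurement.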
15. CIDR Report
• The CIDR Report is at www.cidr-report.org
• Original concept: Tony Bates; revised by Philip Smith; further revised by Geoff Huston
• If you don't get a copy of it every week, you probably are not on the right mailing lists :)
  – The weekly report on BGP routing tables covers de-aggregation
  – A second report covers the number of BGP updates received
• The website is something you should bookmark
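The de-aggregation the weekly report measures is the announcement of more-specific prefixes that could travel as a single aggregate. Python's standard library can illustrate the idea (the prefixes below are RFC 5737 documentation addresses, not real announcements):

```python
"""Illustration of prefix aggregation, the inverse of the de-aggregation
the CIDR Report tracks. Standard library only."""
import ipaddress


def aggregate(prefixes):
    """Collapse a list of prefix strings into their minimal set of aggregates."""
    nets = [ipaddress.ip_network(p) for p in prefixes]
    return [str(n) for n in ipaddress.collapse_addresses(nets)]


# Two adjacent /25s that de-aggregate one /24 collapse back into it:
print(aggregate(["203.0.113.0/25", "203.0.113.128/25"]))  # ['203.0.113.0/24']
```

Running this over a full routing table snapshot gives the kind of "table size vs. aggregated size" figure the CIDR Report publishes every week.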
18. Route-Views and BGPlay
• Routeviews is at www.routeviews.org
• Operated by the University of Oregon Route Views Project
• "While the Route Views project was originally motivated by interest on the part of operators in determining how the global routing system viewed their prefixes and/or AS space, there have been many other interesting uses of this Route Views data." (from routeviews.org)
• The Route Views collectors peer with a very large number of ASNs, either directly at IXPs or through eBGP multihop.
• The BGP visualization tool BGPlay uses Routeviews data
20. Multi-Network Looking Glasses
• The Packet Clearing House route collector, AS3856, peers at a large number of IXPs; its looking glass is available at http://lg.pch.net
• Many IXPs have public looking glasses on their websites.
  – HKIX: http://www.hkix.net/hkix/hkixlg.htm
  – LINX: https://www.linx.net/pubtools/looking-glass.html
  – NIXI: http://www.nixi.in/lookingglass.php
• There is a list available at www.traceroute.org (but not all entries are current).
• Historical archives of the data are also available on request from most of these.
21. More Resources
• Hurricane Electric BGP Toolkit: http://bgp.he.net/
  – Uses HE's internal BGP data plus data from Routeviews and other sources
  – It's the packaging that makes the HE BGP Toolkit immensely useful.
• PeeringDB (www.peeringdb.com): for the peering coordinators, by the peering coordinators. Lists:
  – a network's ASN,
  – the IXs it is present at,
  – the colocation facilities it uses for private peering,
  – its peering policy,
  – contact addresses
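PeeringDB also exposes this data through a public REST API, so policy lookups can be automated. A sketch assuming the `net` endpoint with an `asn` filter and the `policy_general` field, as described in PeeringDB's API documentation (field names may differ between API versions):

```python
"""Sketch of looking up a network's PeeringDB record programmatically."""
import json
import urllib.parse
import urllib.request


def peeringdb_net_url(asn: int) -> str:
    """Build the URL querying PeeringDB's `net` objects for one ASN."""
    return "https://www.peeringdb.com/api/net?" + urllib.parse.urlencode({"asn": asn})


def peering_policy(asn: int) -> str:
    """Fetch the declared general peering policy for an ASN (network call)."""
    with urllib.request.urlopen(peeringdb_net_url(asn)) as resp:
        nets = json.load(resp)["data"]
    return nets[0]["policy_general"] if nets else "unknown"
```

A quick `peering_policy(...)` check before e-mailing a prospective peer saves both sides a round trip.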
24. Common Use Cases
• Routing trouble
  – Put the IP addresses into the HE BGP Toolkit and you'll get the associated ASNs and upstreams
  – Check in BGPlay whether there have been any topology changes on the source and destination ASNs
  – Cross-verify through ARK or the CIDR-REPORT
  – Use your RIPE ATLAS credits to run traces from other locations around the world
  – Routing trouble may originate inside your own network as well, so it's useful to see your own routes as seen by Route Views or another looking glass.
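The "associated ASNs and upstreams" step can be approximated directly from raw AS paths collected at several vantage points (a looking glass, Route Views, or RIS). A toy sketch with invented paths; this illustrates the idea and is not how the HE toolkit is actually implemented:

```python
"""Derive a prefix's origin ASN and its direct upstreams from AS paths
observed at multiple vantage points. The paths below are invented
examples, not real routing data."""


def origin_and_upstreams(as_paths):
    """Return (origin_asn, set of ASNs seen immediately before the origin)."""
    origins = {path[-1] for path in as_paths}   # last hop = origin AS
    if len(origins) != 1:
        raise ValueError(f"inconsistent origins: {origins}")  # possible hijack/MOAS
    origin = origins.pop()
    upstreams = {path[-2] for path in as_paths if len(path) >= 2}
    return origin, upstreams


paths = [
    [3356, 174, 64500],   # vantage point 1
    [1299, 174, 64500],   # vantage point 2
    [2914, 6939, 64500],  # vantage point 3
]
origin, ups = origin_and_upstreams(paths)
print(origin, sorted(ups))  # 64500 [174, 6939]
```

An inconsistent-origin error from many vantage points at once is itself a useful troubleshooting signal: it is what a prefix hijack or a misconfigured origin looks like in this data.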
25. Network Expansion
• When you need to expand to locations outside your primary operations area, how can the data help?
  – CAIDA data can show you where the 'hubs' near you are.
  – PeeringDB can tell you where the largest number of networks are, and which colocation points are the densest in the city you are looking at.
  – PeeringDB will also tell you the peering policies of the ASNs you are interested in peering with. In many cases, e-mailing in advance to ask about peering potential is acceptable.
  – The PCH/IX/HE looking glasses tell you which routes are easily available.
  – These tools help you narrow down your options before you start looking at commercials.
26. Hosting Probes / Contributing Data
• The CAIDA ARK footprint is pretty small, but it still prefers a public IP. If you would like to host one, talk to me (and I'll put you in touch).
• RIPE ATLAS probes are available by request on the RIPE website. RIPE staff also hand them out at different NOG conferences, as do APNIC staff.
• Routeviews collectors sit at IXPs only, but as a network you can peer with them over eBGP multihop.
  – Internet routing data is publicly visible, so you don't lose anything by sharing it directly, but you do contribute to its richness.
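An eBGP multihop session toward a collector looks roughly like the IOS-style fragment below. Every value is a placeholder (RFC 5737 address, private-use ASNs); the real neighbor address and AS must be coordinated with the Routeviews operators, and the safe pattern is to announce your own routes while accepting none back:

```
! Hypothetical IOS-style eBGP multihop session toward a route collector.
! 192.0.2.1 and AS 64500/65000 are placeholders, not real Routeviews
! parameters -- obtain the actual values from the collector's operators.
router bgp 64500
 neighbor 192.0.2.1 remote-as 65000
 neighbor 192.0.2.1 description Route collector (eBGP multihop)
 neighbor 192.0.2.1 ebgp-multihop 255
 neighbor 192.0.2.1 update-source Loopback0
 !
 address-family ipv4 unicast
  neighbor 192.0.2.1 activate
  ! Export your own routes only; import nothing from the collector.
  neighbor 192.0.2.1 prefix-list DENY-ALL in
  neighbor 192.0.2.1 route-map EXPORT-OWN out
```

The inbound deny-all filter matters: a collector session exists to give data, not to receive routes.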
27. Conclusion
• Internet measurement tools and activities are not just for academic purposes; they also help in operational troubleshooting.
• Large datasets can help in modeling and planning exercises.
• Publicly available resources make the Internet a nicer place.