PacNOG 29: Routing security is more than RPKI, by APNIC
APNIC Chief Scientist Geoff Huston presented on how much more there is to routing security than just RPKI at PacNOG 29, held online from 29 November to 9 December 2021.
The document discusses the openness of the Domain Name System (DNS). It begins by defining openness as users being able to access information and services irrespective of location. The DNS was originally designed with this principle of openness in mind by providing the same resolution responses regardless of who or where the query came from. However, the document notes the DNS is no longer truly open due to factors like government censorship of domain names and ISP practices. It questions whether these issues are actually problems and discusses themes like how the DNS relates to privacy, trust, and market failures.
This document provides an overview of DNSSEC (Domain Name System Security Extensions). It begins with some background on the history and vulnerabilities of DNS that led to the development of DNSSEC. It then explains how DNSSEC works by digitally signing DNS data to authenticate its source and ensure its integrity. The document discusses the state of DNSSEC deployment globally and in various top-level domains and country code top-level domains. It also provides examples of DNSSEC implementation strategies used by early adopter registries such as Sweden, the Netherlands, Czechia, and Norway.
1) Edward Lewis discusses using the Regional Internet Registry (RIR) WhoIs databases to reach out to autonomous system (AS) operators with an important operational message from ICANN.
2) They had doubts it would work due to concerns about spamming, but contacted nearly 16,000 ASes anyway, receiving few complaints.
3) The talk highlights issues with RIR WhoIs data like a lack of standard formats and fields that made the process challenging but also suggests how the new RDAP system could help improve coordination.
Rolling the Root Zone DNSSEC Key Signing Key, by Edward Lewis.
A presentation given at APNIC 42's DNS and INR Security session on Monday, 3 October 2016.
BSides: BGP Hijacking and Secure Internet Routing, by APNIC
The document provides an introduction to internet routing, BGP hijacking, and the Resource Public Key Infrastructure (RPKI) system for securing internet routing. It discusses how BGP works and how hijacks can occur when more specific routes are announced. The document then summarizes the RPKI framework for validating route origins using Route Origin Authorizations (ROAs) and filtering routes based on their validation state. It provides examples of implementing RPKI on routers to help secure internet routing.
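The origin-validation logic described above can be sketched in a few lines. This is an illustrative model only, with a made-up ROA table using documentation prefixes and private AS numbers; real deployments fetch validated ROA payloads from an RPKI validator (such as Routinator) over the RTR protocol:

```python
import ipaddress

# Hypothetical ROA table: (prefix, maxLength, origin ASN). Values are
# documentation prefixes and private ASNs, purely for illustration.
ROAS = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate_origin(prefix: str, origin_asn: int) -> str:
    """Classify a BGP announcement per RFC 6811 origin validation."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_asn in ROAS:
        if net.subnet_of(roa_net):      # a ROA covers this announcement
            covered = True
            if net.prefixlen <= max_len and origin_asn == roa_asn:
                return "valid"
    # Covered but no matching ROA -> invalid; no covering ROA -> not-found
    return "invalid" if covered else "not-found"
```

Note how a more-specific announcement beyond the ROA's maxLength comes back "invalid", which is exactly how RPKI counters the more-specific hijacks the summary describes.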
- 22% of visible DNS resolvers can make IPv6 queries, yet 35% of DNS queries are passed to these resolvers, indicating that IPv6-capable resolvers handle a disproportionately large share of traffic.
- The top IPv6-capable resolvers are operated by companies like Google, AT&T, and Comcast, serving over 60% of queries.
- IPv6 DNS responses have a high success rate (96%) when response sizes are kept below the typical 1500 byte MTU to avoid fragmentation issues.
HKNOG 1.0 - DDoS attacks in an IPv6 World, by Tom Paseka
The document discusses DDoS attacks in an IPv6 world and how CloudFlare provides an automatic IPv6 gateway. It notes that many security tools still lack IPv6 support, which could impede the ability to identify and filter attacks over IPv6. The document outlines some IPv6 attacks CloudFlare has seen, such as DNS cache-busted query attacks, and how botnets can unintentionally send attack traffic over IPv6 if the target has an AAAA record. It emphasizes that security practices need to be equal for both IPv4 and IPv6 to prevent future IPv6-based attacks.
DDosMon A Global DDoS Monitoring Project by Yiming Gong.
A presentation given at APNIC 42's FIRST TC Security Session (2) session on Wednesday, 5 October 2016.
NetFlow Best Practices - Tips and Tricks to Get the Most Out of Your Network ..., by SolarWinds
Network bandwidth usage is one of the biggest contributors to your network performance. By taking advantage of the flow technology built into most routers and switches, you can quickly identify bottlenecks and troubleshoot bandwidth-related problems. Join our SolarWinds Head Geek Don Jacob and Sales Engineer David Byrd as they discuss and share tips and tricks to get the most out of your network bandwidth.
Community tools to fight against DDoS, SANOG 27, by APNIC
Community tools can help fight DDoS attacks in three ways:
1. Bogon filtering blocks traffic from bogon address space not assigned to any network. Networks share bogon lists and filter incoming routes.
2. Flow Sonar provides visual network traffic analysis to detect anomalies indicating attacks. It incorporates DDoS alert feeds to identify compromised sources.
3. UTRS implements remote triggered blackhole filtering to divert suspected attack traffic to a null route. Cooperating networks distribute and apply attack filters to mitigate large infrastructure attacks.
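Bogon filtering, described in point 1, boils down to a membership test against a list of reserved or unallocated prefixes. A minimal sketch using Python's `ipaddress` module, with an illustrative (not exhaustive) bogon list; production filters use full lists maintained by projects such as Team Cymru:

```python
import ipaddress

# Illustrative subset of well-known IPv4 bogon ranges.
BOGONS = [ipaddress.ip_network(p) for p in (
    "0.0.0.0/8",        # "this" network
    "10.0.0.0/8",       # RFC 1918 private
    "100.64.0.0/10",    # CGN shared address space
    "127.0.0.0/8",      # loopback
    "169.254.0.0/16",   # link-local
    "172.16.0.0/12",    # RFC 1918 private
    "192.168.0.0/16",   # RFC 1918 private
    "224.0.0.0/3",      # multicast and reserved
)]

def is_bogon(addr: str) -> bool:
    """True if the source address falls in any bogon range and should be dropped."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGONS)
```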
DDoS Mitigation Experience from IP ServerOne by CL Lee, MyNOG
IP ServerOne is a Malaysian data center provider that manages over 4500 physical servers across 5 data centers. They experience 2-5 DDoS attacks per day, mostly ranging from 4.5-8.9 Gbps. To detect attacks, they use netflow to monitor traffic patterns and flag abnormal packet rates to single IPs. When an attack is detected, traffic is rerouted to on-premise filtering devices in less than 90 seconds to scrub attacks while allowing legitimate traffic. IP ServerOne advocates a hybrid mitigation approach using their own infrastructure alongside cloud-based protection.
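The detection step described above, flagging abnormal packet rates toward single IPs, can be sketched as a per-destination aggregation over a NetFlow window. The flow-record shape and the 100,000 pps threshold are assumptions for illustration, not IP ServerOne's actual parameters:

```python
from collections import defaultdict

# Illustrative threshold; real deployments tune this per customer and link.
THRESHOLD_PPS = 100_000

def flag_targets(flows, window_seconds):
    """Sum packets per destination IP over a window and flag abnormal rates.

    `flows` is an iterable of (dst_ip, packet_count) pairs, a simplified
    stand-in for NetFlow/IPFIX records exported by routers.
    """
    pkts = defaultdict(int)
    for dst_ip, packets in flows:
        pkts[dst_ip] += packets
    return {ip for ip, total in pkts.items()
            if total / window_seconds > THRESHOLD_PPS}
```

A flagged destination would then be rerouted to the scrubbing devices the summary mentions.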
Broadband India Forum Session on IPv6: The Post-IPocalypse Internet, by APNIC
APNIC Chief Scientist Geoff Huston gave a presentation on the challenges of IPv6 implementation at the Broadband India Forum Session on IPv6, held online on 7 October 2021.
This document provides an overview of network state awareness and troubleshooting techniques. The agenda covers troubleshooting methodology, packet forwarding review, active and passive monitoring, quality of service, control plane, and routing protocol stability. It distinguishes between the control plane, which creates routing information based on aggregated data, and the data plane, which makes forwarding decisions based on packet details. Various troubleshooting tools are discussed like traceroute, interface statistics, NetFlow, and performance monitoring to analyze the network from the data plane perspective.
Traffic Insight Using Netflow and Deepfield Systems, by MyNOG
Netflow data combined with Deepfield's network definitions can provide insight into network usage and user behavior. This includes identifying top applications and streaming services used, DNS server preferences of subscribers, popular domains and websites visited, and strategic network planning based on analyzing historical traffic patterns. The combined solution offers more visibility than netflow alone while being less costly than full deep packet inspection. Challenges include processing huge and high-velocity data streams and mapping netflow data to individual users.
Network Security and Visibility through NetFlow, by Lancope, Inc.
With the rise of disruptive forces such as cloud computing and mobile technology, the enterprise network has become larger and more complex than ever before. Meanwhile, sophisticated cyber-attackers are taking advantage of the expanded attack surface to gain access to internal networks and steal sensitive data.
Perimeter security is no longer enough to keep threat actors out, and organizations need to be able to detect and mitigate threats operating inside the network. NetFlow, a context-rich and common source of network traffic metadata, can be utilized for heightened visibility to identify attackers and accelerate incident response.
Join Richard Laval to discuss the security applications of NetFlow using StealthWatch. This session will cover:
- An overview of NetFlow, what it is, how it works, and how it benefits security
- Design, deployment, and operational best practices for NetFlow security monitoring
- How to best utilize NetFlow and identity services for security telemetry
- How to investigate and identify threats using statistical analysis of NetFlow telemetry
1. The document discusses ranking CDNs based on latency from probes in Nepal. It aims to identify the fastest CDN for a given location by analyzing latency data from RIPE ATLAS probes.
2. The team collected ping data from RIPE ATLAS for several major CDNs and analyzed the latency information to rank the CDNs. Akamai had the lowest average latency at 25ms, making it the fastest CDN in Nepal based on the analysis.
3. Future work includes collecting data for other regions, building logic to determine the best CDN location for traffic routing, deploying a GUI tool, and fully automating the process. The team seeks suggestions on continuing and improving their CDN analysis.
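The ranking step in point 2 is essentially an average-and-sort over per-CDN latency samples. A minimal sketch with made-up RTT values standing in for RIPE Atlas ping results (CDN names and figures are hypothetical):

```python
from statistics import mean

# Hypothetical RTT samples in milliseconds per CDN, as might be extracted
# from RIPE Atlas ping measurements taken by probes in one economy.
samples = {
    "cdn-a": [24, 26, 25],
    "cdn-b": [48, 52, 50],
    "cdn-c": [33, 31, 35],
}

def rank_cdns(latency_samples):
    """Rank CDNs by mean observed latency, fastest first."""
    return sorted(latency_samples, key=lambda cdn: mean(latency_samples[cdn]))
```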
The University of Edinburgh is undergoing a large project to reprocure its campus networking infrastructure. The existing network, which has grown organically over many years, contains equipment that is up to 20 years old and no longer meets the university's needs. After an internal review in 2014 recommended a new network be procured, the university embarked on a multi-stage competitive dialogue procurement process that is still ongoing. The process involves pre-market engagement, shortlisting bidders, and multiple rounds of dialogue and evaluation to refine solutions before selecting a final vendor. The procurement has proven to be a large undertaking but may result in a network solution tailored to the university's unique requirements.
APNIC has set up a honeypot network using Modern Honeypot Network (MHN) and is looking for more volunteers to participate by running a virtual machine with a public IP address. Passive DNS databases can provide information about past domain name resolutions and relationships between domains and IP addresses that is difficult to obtain through standard DNS queries. Farsight runs a large passive DNS network and the FIRST Passive DNS Exchange SIG is working on a common output format standard. CyberGreen collects risk indicator data on issues like open DNS, NTP, SSDP, and SNMP. The Censys Project publishes daily snapshots of configuration data on IPv4 hosts, top websites, and X.509 certificates. APNIC also conducts biennial surveys to gather feedback from its community.
RPKI is a system that provides validation of IP address and AS number ownership through the use of digital certificates. It aims to reduce routing leaks and hijacking by allowing routers to verify that the origin AS of a route matches what is published in the RPKI database. The key components of RPKI are trust anchors maintained by Regional Internet Registries, Route Origin Authorizations (ROA) that are published by network operators, and validators that check BGP routes against the ROA database.
Building and operating a global DNS content delivery anycast network, by APNIC
This document discusses Packet Clearing House's (PCH) global anycast network for content delivery of DNS services. PCH operates 118 nodes across 14 global locations and 152 internet exchange points. Anycast technology allows optimal routing of DNS queries to the closest server instance. PCH has been operating anycast DNS services since 1997 and their network has evolved to support the growth of the internet. The document describes considerations for planning new anycast nodes, and how PCH monitors and operates their global anycast network.
NetFlow Auditor Anomaly Detection Plus Forensics, February 2010, by NetFlowAuditor
Flow Based technology provides network visibility that reduces time and costs for understanding, alerting, and reporting on network issues. It gives real-time and historical insight into network traffic through non-intrusive collection of flow data from routers and switches. This flow-based network intelligence is useful for various teams and helps with tasks like capacity planning, security, and troubleshooting.
NetFlow Deep Dive: NetFlow Tips and Tricks to get the Most Out of Your Networ..., by SolarWinds
In this webcast, we’ll dive deeper into NetFlow and discuss day-to-day networking challenges and highlight common use cases that will help you better leverage the flow technology and its applications to troubleshoot many networking problems.
This webcast covers some key topics including:
• Introduction to NetFlow and other flow technologies
• Configuring your network to collect flow data
• Some everyday use cases for effective network monitoring
-- Troubleshooting network issues
-- Anomaly detection
-- Tracking cloud performance
-- Monitoring the impact of BYOD traffic
-- Monitoring Quality of Service (QoS) and Type of Service (ToS)
-- Capacity Planning
DNS in IR: Collection, Analysis and Response, by pm123008
This document discusses DNS monitoring and analysis for incident response. It begins by explaining why DNS is important to monitor, as it underpins all network activity and malware frequently uses DNS. It then discusses collecting DNS data at the border, resolver, and endpoint levels. Common collection tools include IDSes, passive DNS loggers, and native resolver logging. The document outlines enriching DNS data with information like WHOIS and blacklists. It provides examples of DNS analysis like detecting fast flux domains, tunneling, DGA, and low prevalence domains. It notes common false positives and discusses response actions like using RPZ (Response Policy Zones) to block or redirect malicious DNS queries.
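One of the analysis examples mentioned, DGA detection, is often approximated by scoring the character entropy of domain labels, since algorithmically generated names tend to look random. A hedged sketch (the 3.5-bit threshold and example domains are assumptions; real pipelines combine entropy with prevalence, WHOIS age, and other signals precisely because entropy alone produces the false positives the document warns about):

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_dga(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose first label has unusually high entropy."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold
```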
APNIC Chief Scientist, Geoff Huston, gives a presentation on DOH and the changing nature of the DNS as infrastructure at NZNOG 2020 in Christchurch, New Zealand, from 28 to 31 January 2020.
1. The DNS is widely used but lacks security measures like encryption, making it easy to monitor users' activities and inject false responses.
2. Efforts to add authenticity through DNSSEC have had limited success due to technical challenges and lack of adoption.
3. The DNS is increasingly being used for application-level functions through extensions, but this increases the privacy risks as DNS queries reveal more user intentions.
4. Solutions exist like DNS over TLS and DNS over HTTPS to provide encryption, but full deployment faces economic and technical barriers, risking further fragmentation of the DNS.
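Point 1's observation that plain DNS is easy to monitor follows directly from its wire format: a sketch of building a minimal query packet shows the queried name travels as readable bytes unless the transport is DoT, DoH, or DoQ. The transaction ID here is a fixed illustrative value; real resolvers randomize it:

```python
import struct

def build_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS query (QTYPE=A, QCLASS=IN) in wire format.

    Every byte, including the domain name, is plaintext on the wire
    under classic UDP/53 transport.
    """
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question
```

Anyone on the path can read the name out of these bytes, which is exactly the privacy exposure DoT and DoH close.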
How DNS works and How to secure it: An Introduction, by yasithbagya1
The document discusses DNS (Domain Name System) and how to secure it. It explains that DNS translates domain names to IP addresses, involving recursive queries to root nameservers and authoritative nameservers. Common DNS attacks are spoofing, poisoning, hijacking, amplification and flooding. Recommended security measures include DNS encryption using DNS over HTTPS and TLS, DNSSEC for response authentication, DNSCrypt for encryption and anonymity, redundant infrastructure for DDoS protection, and DNS firewalls.
This document provides an introduction to DNSSEC (Domain Name System Security Extensions) in 3 parts:
1. It explains the purpose of DNSSEC is to address vulnerabilities in the DNS like cache poisoning and lack of data integrity by cryptographically signing DNS records.
2. It discusses some of the operational implications of DNSSEC like increased response sizes requiring EDNS0, using multiple keys (KSK and ZSK), and developing a DNSSEC Policy and Practice Statement.
3. It provides resources for further learning including open source DNSSEC software, mailing lists, and examples of deployed DNSSEC at the root zone and in some top-level domains.
THOTCON - The War over your DNS Queries, by John Bambenek
Talk given at THOTCON on 9 October 2021, entitled 'The War over Your DNS Queries and What to Do About It'. It covers DNS security and privacy and the importance of running your own DNS resolver.
2nd ICANN APAC-TWNIC Engagement Forum: DNS Oblivion, by APNIC
APNIC Chief Scientist Geoff Huston gives an overview of the complex, many-layered model of DNS security and an emerging world of choices for protecting traffic and hiding queries, along with future trends in ISP-provided and independent third-party DNS services, at the 2nd ICANN APAC-TWNIC Engagement Forum, held from 15 to 16 April 2021.
This document discusses various techniques for improving DNS privacy, including:
1. QNAME minimization which strips the full query name to limit what is exposed to upstream name servers.
2. DNSSEC which allows validation of DNS responses to detect tampering or altered responses.
3. Aggressive NSEC caching which caches negative responses to limit queries to authoritative servers.
4. DNS over TLS, HTTPS and QUIC which encrypt DNS queries and responses to prevent inspection and tampering by intermediaries.
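Technique 1, QNAME minimization, can be illustrated by listing the successively longer names a minimizing resolver would send at each zone cut. This is a simplified model of RFC 9156 that assumes one zone cut per label; real resolvers discover cuts dynamically:

```python
def minimised_qnames(full_name: str):
    """Successive query names a QNAME-minimising resolver might send.

    Each upstream server sees only one more label than the last, rather
    than the full query name.
    """
    labels = full_name.rstrip(".").split(".")
    # Start at the TLD and add one label at a time toward the full name.
    return [".".join(labels[-i:]) for i in range(1, len(labels) + 1)]
```

So for `www.example.com`, the root and `.com` servers never learn the `www.example` part that a non-minimizing resolver would have sent them.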
This document discusses strategies for improving the resilience of the Domain Name System (DNS) against distributed denial-of-service (DDoS) attacks. It outlines how caching of DNSSEC-signed responses for non-existent domain names can help prevent unnecessary queries from reaching the DNS root servers. The document details an initiative by APNIC to sponsor the inclusion of this NSEC caching in the upcoming BIND 9.12 release, which would help distribute DNS query load more efficiently and mitigate DDoS attacks targeting the root servers.
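The NSEC caching idea described above can be sketched as answering NXDOMAIN from cached non-existence ranges without querying upstream. This simplified model uses hypothetical zone names and plain string comparison in place of DNS canonical ordering, which is close enough for single-label examples:

```python
# Cached NSEC records from previous signed negative answers: each proves
# that no name exists between `owner` and `next_name` in the zone.
nsec_cache = [
    ("apex.example.", "delta.example."),
    ("golf.example.", "kilo.example."),
]

def cached_nxdomain(qname: str) -> bool:
    """True if a cached NSEC range already proves qname does not exist,
    so the resolver can answer NXDOMAIN without asking the root."""
    return any(owner < qname < nxt for owner, nxt in nsec_cache)
```

Queries for random non-existent names, a common DDoS pattern against the root, are then absorbed by resolver caches instead of reaching the root servers.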
The document discusses the importance of DNS security for the internet. It provides background on the Domain Name System (DNS), explaining that DNS acts as the "phonebook" of the internet by translating domain names to IP addresses. While DNS was originally designed to be fault tolerant, dynamic, scalable, and redundant, security was not initially considered. The document outlines how DNS works and its hierarchical structure. It notes that traditional DNS uses UDP and lacks security features like authentication, making it vulnerable to spoofing and cache poisoning attacks. Finally, the document argues that DNSSEC is crucial for online safety as it uses digital signatures to authenticate DNS data and verify responses came from authorized servers.
bdNOG 7 - Re-engineering the DNS - one resolver at a time, by APNIC
APNIC Director General, Paul Wilson, talks about APNIC's support of updates to BIND to implement caching of NSEC responses to reduce root server query load.
Get an overview of the Domain Name System (DNS) one of the pillars of the Internet and understand the internal security issues of the DNS as well as the crucial role it plays in cybersecurity.
This document discusses DNS cache poisoning. It begins by explaining what DNS is and its purpose of mapping domain names to IP addresses. It then discusses how DNS servers implement caching to improve performance and defines DNS cache poisoning as getting unauthorized entries into a DNS server's cache. The document outlines how an attacker could poison a cache to redirect traffic to a machine they control in order to perform man-in-the-middle attacks or install malware. It describes various methods of poisoning caches locally or remotely, such as between end users and nameservers or between nameservers themselves using the Kaminsky attack. Defenses like DNSSEC are mentioned along with encouragement to try cache poisoning in a controlled lab environment.
Honeypots Unveiled: Proactive Defense Tactics for Cyber Security, Phoenix Sum...APNIC
Adli Wahid, Senior Internet Security Specialist at APNIC, delivered a presentation titled 'Honeypots Unveiled: Proactive Defense Tactics for Cyber Security' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.
Securing BGP: Operational Strategies and Best Practices for Network Defenders...APNIC
Md. Zobair Khan,
Network Analyst and Technical Trainer at APNIC, presented 'Securing BGP: Operational Strategies and Best Practices for Network Defenders' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024APNIC
Ellisha Heppner, Grant Management Lead, presented an update on APNIC Foundation to the PNG DNS Forum held from 6 to 10 May, 2024 in Port Moresby, Papua New Guinea.
Registry Data Accuracy Improvements, presented by Chimi Dorji at SANOG 41 / I...APNIC
Chimi Dorji, Internet Resource Analyst at APNIC, presented on Registry Data Accuracy Improvements at SANOG 41 jointly held with INNOG 7 in Mumbai, India from 25 to 30 April 2024.
APNIC Policy Roundup, presented by Sunny Chendi at the 5th ICANN APAC-TWNIC E...APNIC
Sunny Chendi, Senior Advisor, Membership and Policy at APNIC, presents 'APNIC Policy Roundup' at the 5th ICANN APAC-TWNIC Engagement Forum and 41st TWNIC OPM in Taipei, Taiwan from 23 to 24 April.
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024APNIC
Dave Phelan, Senior Network Analyst/Technical Trainer at APNIC, presents 'DDoS In Oceania and the Pacific' at NZNOG 2024 held in Nelson, New Zealand from 8 to 12 April 2024.
'Future Evolution of the Internet' delivered by Geoff Huston at Everything Op...APNIC
Geoff Huston, Chief Scientist at APNIC deliver keynote presentation on the 'Future Evolution of the Internet' at the Everything Open 2024 conference in Gladstone, Australia from 16 to 18 April 2024.
IP addressing and IPv6, presented by Paul Wilson at IETF 119APNIC
Paul Wilson, Director General of APNIC delivers a presentation on IP addressing and IPv6 to the Policymakers Program during IETF 119 in Brisbane Australia from 16 to 22 March 2024.
draft-harrison-sidrops-manifest-number-01, presented at IETF 119APNIC
Tom Harrison, Product and Delivery Manager at APNIC presents at the Registration Protocols Extensions working group during IETF 119 in Brisbane, Australia from 16-22 March 2024
Benefits of doing Internet peering and running an Internet Exchange (IX) pres...APNIC
Che-Hoo Cheng, Senior Director, Development at APNIC presents on the "Benefits of doing Internet peering and running an Internet Exchange (IX)" at the Communications Regulatory Commission of Mongolia's IPv6, IXP, Datacenter - Policy and Regulation International Trends Forum in Ulaanbaatar, Mongolia on 7 March 2024
APNIC Update and RIR Policies for ccTLDs, presented at APTLD 85APNIC
APNIC Senior Advisor, Membership and Policy, Sunny Chendi presented on APNIC updates and RIR Policies for ccTLDs at APTLD 85 in Goa, India from 19-22 February 2024.
10 Conversion Rate Optimization (CRO) Techniques to Boost Your Website’s Perf...Web Inspire
What is CRO?
Conversion Rate Optimization, or CRO, is the process of enhancing your website to increase the percentage of visitors who take a desired action. This could be anything from purchasing a product to signing up for a newsletter. Essentially, CRO is about making your website more effective in turning visitors into customers.
Why is CRO Important?
CRO is crucial because it directly impacts your bottom line. A higher conversion rate means more customers and revenue without needing to increase your website traffic. Plus, a well-optimized site improves user experience, which can lead to higher customer satisfaction and loyalty.
Decentralized Justice in Gaming and EsportsFederico Ast
Discover how Kleros is transforming the landscape of dispute resolution in the gaming and eSports industry through the power of decentralized justice.
This presentation, delivered by Federico Ast, CEO of Kleros, explores the innovative application of blockchain technology, crowdsourcing, and incentivized mechanisms to create fair and efficient arbitration processes.
Key Highlights:
- Introduction to Decentralized Justice: Learn about the foundational principles of Kleros and how it combines blockchain with crowdsourcing to develop a novel justice system.
- Challenges in Traditional Arbitration: Understand the limitations of conventional arbitration methods, such as high costs and long resolution times, particularly for small claims in the gaming sector.
- How Kleros Works: A step-by-step guide on the functioning of Kleros, from the initiation of a smart contract to the final decision by a jury of peers.
- Case Studies in eSports: Explore real-world scenarios where Kleros has been applied to resolve disputes in eSports, including issues like cheating, governance, player behavior, and contractual disagreements.
- Practical Implementation: Detailed walkthroughs of how disputes are handled in eSports tournaments, emphasizing speed, cost-efficiency, and fairness.
- Enhanced Transparency: The role of blockchain in providing an immutable and transparent record of proceedings, ensuring trust in the resolution process.
- Future Prospects: The potential expansion of decentralized justice mechanisms across various sectors within the gaming industry.
For more information, visit kleros.io or follow Federico Ast and Kleros on social media:
• Twitter: @federicoast
• Twitter: @kleros_io
Network Security and Cyber Laws (Complete Notes) for B.Tech/BCA/BSc. ITSarthak Sobti
Network Security and Cyber Laws
Detailed Course Content
Unit 1: Introduction to Network Security
- Introduction to Network Security
- Goals of Network Security
- ISO Security Architecture
- Attacks and Categories of Attacks
- Network Security Services & Mechanisms
- Authentication Applications: Kerberos, X.509 Directory Authentication Service
Unit 2: Application Layer Security
- Security Threats and Countermeasures
- SET Protocol
- Electronic Mail Security
- Pretty Good Privacy (PGP)
- S/MIME
- Transport Layer Security: Secure Socket Layer & Transport Layer Security
- Wireless Transport Layer Security
Unit 3: IP Security and System Security
- Authentication Header
- Encapsulating Security Payloads
- System Security: Intruders, Intrusion Detection System, Viruses
- Firewall Design Principles
- Trusted Systems
- OS Security
- Program Security
Unit 4: Introduction to Cyber Law
- Cyber Crime, Cyber Criminals, Cyber Law
- Object and Scope of the IT Act: Genesis, Object, Scope of the Act
- E-Governance and IT Act 2000
- Legal Recognition of Electronic Records
- Legal Recognition of Digital Signatures
- Use of Electronic Records and Digital Signatures in Government and its Agencies
- IT Act in Detail
- Basics of Network Security: IP Addresses, Port Numbers, and Sockets
- Hiding and Tracing IP Addresses
- Scanning: Traceroute, Ping Sweeping, Port Scanning, ICMP Scanning
- Fingerprinting: Active and Passive Email
Unit 5: Advanced Attacks
- Different Kinds of Buffer Overflow Attacks: Stack Overflows, String Overflows, Heap and Integer Overflows
- Internal Attacks: Emails, Mobile Phones, Instant Messengers, FTP Uploads, Dumpster Diving, Shoulder Surfing
- DOS Attacks: Ping of Death, Teardrop, SYN Flooding, Land Attacks, Smurf Attacks, UDP Flooding
- Hybrid DOS Attacks
- Application-Specific Distributed DOS Attacks
EASY TUTORIAL OF HOW TO USE CiCi AI BY: FEBLESS HERNANE Febless Hernane
Cici AI simplifies tasks like writing and research with its user-friendly platform. Users sign up, input queries, customize responses, and edit content as needed. It offers efficient saving and exporting options, making it ideal for enhancing productivity through AI assistance.
2. Openness?
When speaking about openness, we do not mean open in a competitive
sense, but rather open as in everyone can access the service:
“Can users access and distribute information and content, use and
provide applications and services, and use terminal equipment of
their choice, irrespective of the user’s or provider’s location or the
location, origin or destination of the information, content,
application or service?”
3. But that was the entire POINT
of the DNS!
The DNS was engineered to deliver the same answer to the
same query, irrespective of the querier
• The answer did not depend on who was asking, where they were
asking from, what platform they were using to generate the query,
the resolvers they used to handle the query
• The answer did not depend on the origin of the information used
to form the response, the platform used to serve this information,
nor the location of information servers
4. Openness?
When speaking about openness, we do not mean open in a competitive
sense, but rather open as in everyone can access the service:
“Can users access and distribute information and content, use and
provide applications and services, and use terminal equipment of
their choice, irrespective of the user’s or provider’s location or the
location, origin or destination of the information, content,
application or service?”
The answer is ”Yes!”
5. Really?
• Is that really true? Do we all see the same DNS?
• Really?
6. Really?
• Is that really true? Do we all see the same DNS?
• Really?
No!
7. DNS: Not Fully Open!
Why not?
• Government regulatory requirements to block the “correct”
resolution of certain DNS names
• It happens in China, UK, America, Australia, India, Russia, Syria, Iran,
Vietnam, France, Turkey,…..
• It’s VERY widespread, for all kinds of national motives
8. DNS: Not Fully Open!
Why not?
• Government regulatory requirements to block the “correct”
resolution of certain DNS names
• Occasional ISP desires to monetise the DNS
• NXDOMAIN substitution to direct traffic from named destinations
or services that do not exist to other destinations or services of
their choosing
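NXDOMAIN substitution is easy to picture as a sketch: an honest resolver admits when a name does not exist, while a monetising resolver substitutes an answer of its own. Everything here (the zone contents, the resolver behaviours, and the portal address) is invented purely for illustration:

```python
# Toy sketch of NXDOMAIN substitution. The zone, resolver behaviours and
# the redirect address are all invented for illustration only.

ZONE = {"www.example.com": "192.0.2.10"}  # the only name that really exists

def honest_resolve(name):
    """An honest resolver: return the registered address, or None (NXDOMAIN)."""
    return ZONE.get(name)

def monetising_resolve(name):
    """A monetising resolver: never admit a name does not exist, and
    substitute the address of a search/advertising portal instead."""
    return ZONE.get(name, "198.51.100.99")  # hypothetical portal address

# Probing with a name that cannot exist exposes the substitution:
probe = "no-such-name.example.com"
print(honest_resolve(probe))       # None: a genuine NXDOMAIN
print(monetising_resolve(probe))   # 198.51.100.99: redirected instead
```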
9. DNS: Not Fully Open!
Why not?
• Government regulatory requirements to block the “correct”
resolution of certain DNS names
• Occasional ISP desires to monetise the DNS
• Threat mitigation where the DNS names associated with
malware are blocked
• Such as Quad9 threat intelligence informed DNS resolution
10. Is this a “problem”?
Generally not:
• Nation states have a sovereign right to make such rules and bind
their citizens to such rules
• Threat mitigation is typically regarded as a Good Thing rather than
an incursion against the utility of a single DNS
11. Is this a “problem”?
It’s a problem when it gets used as a lever in a different
fight
• Such as the Australian rule to force Australian ISPs to
block the DNS resolution of “thepiratebay.org”
• It only pushes determined users to alternate name resolution
strategies and ultimately is a comprehensive waste of
everyone’s time!
But even this is a relative sideshow to the larger DNS
12. Another interpretation of
“Openness”?
When speaking about openness we might not mean open in a sense of
consistency across providers and across client transactions, but rather:
Is the DNS an “open” system?
This is an echo of the 1980s move by this industry to embrace “open” technology, as in Open Systems Interconnect (OSI) for
networking, open computer architectures, open operating systems. “Open” is being used here in the sense that it is intentionally
positioned as the opposite of a vendor-proprietary “Closed” technology.
13. Is the DNS “Open”?
Yes!
• The DNS name resolution protocol is openly specified without any IPR
encumbrance
• Fully functional implementations of the DNS protocol are available as
open source
• DNS name servers are configured as open “promiscuous” responders
and will provide the same response to a query irrespective of the
identity of the querier
• DNS information is openly available
• There is some subtle qualification here in that the collection of a zone file may
not be openly available, but the individual records in a zone can be queried
• DNS queries and responses are “open”
14. Is the DNS “Open”?
Yes, in all the same ways.
Which is a massive problem!
15. When “openness” is a weakness
The DNS is used by everyone and everything
• Because pretty much everything you do on the net starts
with a call to the DNS
• If we could see your stream of DNS queries in real time we
could easily assemble a detailed profile of you and your
interests and activities - as it happens!
• If we could edit your DNS responses we could make services
disappear from your Internet!
17. Let’s look into this further
The DNS is a mapping system which takes human-use labels that name
services and maps these labels to IP addresses
18. Let’s look into this further
The DNS is a mapping system which takes human-use labels that name
services and maps these labels to IP addresses
1. Connect to www.google.com
   DNS query: “What’s an IP address for www.google.com?” → DNS System → 142.250.204.4
2. Send a TCP SYN packet to destination 142.250.204.4
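The two-step flow above, resolve first and then connect to whatever address comes back, is what essentially every networked application does. A minimal sketch using Python’s standard library (the final connect step is left commented out, since the address returned for a public name will vary):

```python
import socket

def resolve(name):
    """Step 1: ask the DNS system for the addresses registered for a name."""
    infos = socket.getaddrinfo(name, 443, proto=socket.IPPROTO_TCP)
    return [info[4][0] for info in infos]  # keep just the address strings

# Resolve a name the local system can answer for without the network:
addrs = resolve("localhost")
print(addrs)  # e.g. ['127.0.0.1'] and/or ['::1']

# Step 2 (sketch): open a TCP connection to the first address returned.
# For www.google.com this would send a TCP SYN to an address such as
# 142.250.204.4:
# sock = socket.create_connection((addrs[0], 443))
```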
19. What’s in that DNS “cloud”?
[Diagram: queries flow into an opaque “DNS System” cloud, and responses flow back out]
20. The Hidden parts of “Open”
• For an open technology the infrastructure and process of name
resolution in the DNS is incredibly opaque
• Once queries are passed to the DNS resolution infrastructure they
are impossible to trace. Each element does not simply forward a
query, but casts an entirely new query using its own identity
• There is no query tracing and no clear way to probe into the DNS
infrastructure
• There could be query chains circling in the DNS infrastructure in a
hidden loop and no one would be any the wiser!
• It is challenging to gauge the level of hidden interdependency in the
DNS, or the level of ongoing centralisation of infrastructure
functions in the DNS
21. The Hidden parts of “Open”
The fact that the DNS works at all is amazing. The fact that it works at
speed and volume is a modern miracle!
22. But all that is still not all of
“the DNS”
• The DNS is more than its mapping role, and more than its
infrastructure elements of resolvers and servers
• So what is the DNS?
23. What is “the DNS”?
Multiple Choice – tick all that apply:
☐ A name space: A collection of word-strings that are organised into a
hierarchy of labels
☐ A distributed name registration framework that assigns a unique
“license to use” for human-centric word-strings to entities (for money)
☐ A distributed database that maps human-centric word-strings into IP
addresses
☐ A protocol used by DNS protocol speakers to “resolve” a word-string
into a defined attribute (usually an IP address)
☐ A signalling medium that is universally supported across all of
the Internet
24. Orchestration of the DNS
• If the DNS is a set of functions and a set of various actors in this space
then how are their individual actions orchestrated to provide a
cohesive outcome?
• How can clients use the functions of “the DNS” if there is no one
orchestrating all these elements of the name infrastructure?
• Why does all this work in a completely deregulated space?
• The answers lie in Markets and Market Signalling
25. What are DNS “Markets”?
The DNS is not a single market – it is a highly devolved framework and there
are a number of discrete markets that are at best loosely coupled
Some of these markets are:
• The market for new “top level” labels (gTLDs) operated by ICANN. This market is open
to ICANN-qualified registry operators. A registry has an exclusive license to operate a
TLD.
• The market for “registrars”, who act as retailers of DNS names and deal with clients
(registrants) and register the client’s DNS names into the appropriate registry
• The market for clients to register a DNS name with a registry
• The market for DNS name certification, which is a third party that attests that an entity
has control of a domain name
• The market for DNS name resolution where users direct their queries to a resolver and
the resolver provides DNS “answers”
• The market for hosting authoritative name services, where “bigger is better” has
driven a highly aggregated market
• The market for DNS query logs
26. Maybe it’s more than markets
• Perhaps this is best viewed as a collection of market needs, or
requirements
• Some are well established
• Some are emergent requirements
• Perhaps the question could be rephrased as one that asks to what
extent are conventional open markets a good fit for these various
DNS requirements
• And to what extent these market-based mechanisms are failing to
respond to these needs?
27. So, what should we talk about?
If we want to talk about the DNS as a deregulated activity orchestrated
through the operation of open markets then there are a few more
things to bear in mind:
• The DNS is not a single marketplace, or even a collection of inter-dependent
and tightly coupled marketplaces
• The DNS is constructed of many elements, some of which appear to behave
as tightly regulated markets, some of which are openly competitive markets,
some of which are comprehensive market failures!
28. So, what should we talk about?
Maybe we should think of the DNS using a number of themes to give
some focus to this consideration of market effectiveness
29. Current DNS Themes
There are so many themes in the DNS, and here are just a few:
• DNS as a control element
• DNS and privacy
• DNS and trust
• DNS and name space fragmentation
• DNS as a rendezvous tool
• DNS as a collection of markets
• DNS and market aggregation
• DNS and abuse and cyber attacks
• DNS as an economic failure
• DNS as the last remaining definition of a coherent Internet
30. This is now a very big agenda
• Far bigger than we have time for here!
• So I’ll just take a couple of themes and develop them further
34. DNS and Trust
Can you trust what you learn from the DNS?
NO!
• DNS responses can be altered in various ways that are challenging to detect
• We know how to improve this situation by using digital cryptography to protect
the integrity, accuracy and currency of DNS responses
• But does this capability for an improvement in the trust of the DNS correlate to
a visible consumer preference?
• Is security in the DNS a market failure?
35. DNSSEC
• A framework to attach digital signatures to DNS responses that attest
to the accuracy and currency of the DNS response
• The method of attachment does not alter the behaviour of the DNS protocol,
nor does it require any changes to DNS servers
• A procedure for clients to follow to validate the DNS response through
the processing of this digital signature
Changes:
• Zone management – add DNSSEC digital signature records through ”zone signing”
• Registry management – add DS records alongside NS records for delegated zones
• Client behaviour – perform additional DNS queries to carry out digital signature
validation
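The registry-side change can be shown in miniature. A DS record in the parent zone is, in essence, a digest of the child zone’s DNSKEY, which is what chains trust from parent to child (RFC 4034 computes the digest over the wire-format owner name plus the DNSKEY RDATA). A simplified sketch, with invented key bytes:

```python
import hashlib

def name_to_wire(name):
    """Wire-format encoding of a DNS name: length-prefixed labels."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"  # the root label terminates the name

def ds_digest(owner_name, dnskey_rdata):
    """DS-style digest: a hash over the owner name plus the DNSKEY RDATA."""
    return hashlib.sha256(name_to_wire(owner_name) + dnskey_rdata).hexdigest()

# Invented child-zone key material (real DNSKEY RDATA carries flags,
# protocol, algorithm, and the public key itself):
child_key = b"\x01\x01\x03\x08" + b"example-public-key-bytes"

# The parent zone publishes this digest as the DS record for the child:
print(ds_digest("example.com", child_key))

# Any tampering with the child's key breaks the match against the DS record:
tampered_key = child_key + b"!"
assert ds_digest("example.com", child_key) != ds_digest("example.com", tampered_key)
```

A validator recomputes this digest from the DNSKEY it fetched from the child zone and compares it with the DS record it fetched (and validated) from the parent.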
36. Is DNSSEC being used?
You might think that a change to the DNS that improved the trust in the
DNS would prove to be highly popular in the DNS space
• DNS name “owners” would like their name to be trustworthy
• DNS users would like the names they use to be “genuine” and trustable
So every part of the DNS would want to use DNSSEC!
Right?
Let’s see if this requirement has been translated by the DNS service
market into a service offering by looking at the current metrics of the
use of DNSSEC
39. Is DNSSEC being used?
[Chart: validation rate of signed DNS responses, 2014–2020]
28% of users are behind DNSSEC-validating resolvers who will not resolve
a badly signed DNS name
40. Quarter Full, or Three Quarters Empty??
[Chart: validation rate of signed DNS responses, 2014–2020]
Three quarters of the Internet’s user base can’t or won’t check if a DNS
response is genuine
41. Problems with DNSSEC
• Large DNS responses cause robustness issues for DNS
• Getting large responses through the network has reliability issues with
UDP packet fragmentation and timing issues with signalled cut-over to
TCP
• The validator has to perform a full backtrace query sequence to
assemble the full DNSSEC signature chain
• So the problem is that DNSSEC validation may entail a sequence of
queries where each of the responses may encounter UDP
fragmentation and packet loss
All this adds to the time to resolve a signed name
And nobody is tolerant of delays in today’s Internet
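That “full backtrace query sequence” can be sketched as the list of zones a validator must obtain DNSKEY and DS records for, walking from the queried name back to the root. This toy version assumes a zone cut at every label, which real delegations need not have; the point is that each element costs extra round trips:

```python
def validation_chain(name):
    """List the zones a validator queries (for DNSKEY/DS records) to build
    the signature chain for a name, from the name itself back to the root.
    Simplification: assumes a zone cut at every label, which need not hold
    in the real DNS."""
    labels = name.rstrip(".").split(".")
    chain = []
    for i in range(len(labels)):
        chain.append(".".join(labels[i:]) + ".")
    chain.append(".")  # the root zone trust anchor ends the chain
    return chain

print(validation_chain("www.example.com"))
# ['www.example.com.', 'example.com.', 'com.', '.']
```

Each entry in that list represents at least one additional query/response exchange, and any one of those large signed responses can hit the fragmentation and TCP cut-over problems above.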
42. Some More Problems with DNSSEC
• Cryptographically “stronger” keys tend to be bigger keys over time, so
the issue of cramming more data into DNS transactions is not going
away!
• The stub-to-recursive hop is generally not using validation, so the user
ends up trusting the validating recursive resolver in any case
The current DNSSEC framework represents a lot of effort for only a
tiny marginal gain
43. DNSSEC is a Market Failure!
• Users don’t pay for queries
• Users have no leverage with recursive resolvers in terms of expressing
their preference for authenticity in DNS responses
• Users don’t have a choice in what they query for
• Users have no ability to express a preference for only using domain
names that are signed
• The benefits of a signed DNS zone and validating resolvers are
indirect
• Cost and benefit are totally mis-aligned in this space!
44. DNS Trust is a Market Failure!
• If it wasn’t for the lead taken by Google and their adoption of DNSSEC
validation in their Public DNS Service then it’s likely that DNSSEC
would still be a complete failure
• And this was only possible because of the significant level of market share
in the use of Google’s Public DNS Service
• So perhaps we should look at the market for DNS resolvers to see if
others are taking up this lead
46. Who Resolves My Names?
[Chart: resolver market share by user population. Segments: my ISP; Google;
same country, different network (likely to still be my ISP); everyone else]
47. Who Resolves My Names?
If this is a market then it appears to be highly skewed:
• Google has a market share of just under 30% of all users
• The next largest open resolvers are Cloudflare (4%), OpenDNS (2%) and Quad9
(0.5%)
• Everything else is just the default of the local ISP performing the name resolution
service for their customers
Why has this happened?
How has Google succeeded while other open DNS resolver services appear
to struggle to get significant adoption by clients?
48. The DNS Name Resolution Economy
• In the public Internet, end clients don’t normally pay directly for DNS
name resolution services
• Which implies that outside of the domain of the local ISP, DNS
resolvers are essentially unfunded by the resolver’s clients
• And efforts to monetise the DNS with various forms of funded
misdirection (such as NXDOMAIN substitution) are generally viewed
with extreme disfavour
• Open Resolver efforts run the risk of success-disaster
• The more they are used, the greater the funding problem to run them at scale
• The greater the funding problem the greater the temptation to monetise the
DNS resolver function in more subtle ways
49. Google is Special
• Search is a critical asset for GOOGLE
• Search drives eyeballs and profiles
• Eyeballs and profiles drive advertising
• Advertising drives revenue
• If we could use the DNS as a de facto search engine then Google has a problem
• But that’s what is happening when DNS resolvers don’t pass back NXDOMAIN but instead
pass back a reference to a search engine, or even better use a search engine to pass back
the resolution response that a hidden search engine result obtained when it processed
the queried DNS name
• So it’s in Google’s interests to have a fast, accurate and unaltered DNS
resolution service
• And Google’s real target market for this service is not individual users, but ISPs
• Because the real threat model for Google is NXDOMAIN substitution and other forms of
DNS response manipulation by ISPs as a local revenue source
• No other Open DNS resolver service shares Google’s motivation
51. The DNS Name Resolution Economy
• The default option is that the ISP funds and operates the recursive DNS
service, with the cost recovered from the ISP’s client base
>70% of all end clients use their ISPs’ DNS resolvers
• However the fact that it works today does not mean that you can
double the input costs and expect it to just keep on working
tomorrow
• For ISPs the DNS is a cost department, not a revenue source
• We should expect strong resistance from ISPs to increase their costs in DNS
service provision
52. The resistance to change in
the DNS
• The quality of an ISP’s DNS service does not appear to be a significant
competitive discriminatory factor in the consumer market
• So the ISP does not generally devote many resources to tuning their DNS
infrastructure for high performance, resiliency and innovation
• Most users don’t change their platform settings from the defaults, so
CPE-based service provisioning in the wired networks and direct
provisioning in mobile networks will persist
• So current innovations such as improved DNS privacy (DNS over TLS,
DNS over HTTPS) are looking like being another mainstream market
failure in the DNS space
53. The resistance to change in the DNS
But maybe this is not the full story …
54. Fragmenting the DNS
• It appears more likely that applications that want to tailor their DNS
use to adopt a more private profile will hive off to use DNS over
HTTPS to an application-selected DNS service, while the platform
itself will continue to use libraries that will default to DNS over UDP to
the ISP-provided recursive DNS resolver
• That way the application ecosystem can fund its own DNS privacy
infrastructure and avoid waiting for everyone else to make the
necessary infrastructure and service investments before they can
adopt DNS privacy themselves
• Application-specific naming services are a very real prospect in this
scenario
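“Hiving off” to DNS over HTTPS just means carrying the ordinary DNS wire format inside an HTTPS request, as RFC 8484 defines. A minimal sketch of building the GET form of such a query; the resolver URL is a hypothetical application-selected endpoint, and no request is actually sent:

```python
import base64
import struct

def dns_wire_query(name, qtype=1):
    """Build a minimal DNS wire-format query (qtype 1 = A record).
    The message ID is fixed at 0, as RFC 8484 recommends for cacheable
    DoH GET requests."""
    header = struct.pack("!6H", 0, 0x0100, 1, 0, 0, 0)  # id, RD flag, 1 question
    qname = b"".join(bytes([len(l)]) + l.encode("ascii")
                     for l in name.rstrip(".").split(".")) + b"\x00"
    question = qname + struct.pack("!2H", qtype, 1)  # qtype, class IN
    return header + question

def doh_get_url(resolver_url, name):
    """RFC 8484 GET form: the wire query, base64url-encoded without padding."""
    b64 = base64.urlsafe_b64encode(dns_wire_query(name)).rstrip(b"=")
    return resolver_url + "?dns=" + b64.decode("ascii")

# Hypothetical application-selected DoH resolver endpoint:
print(doh_get_url("https://doh.example.net/dns-query", "www.example.com"))
```

A real client would then fetch this URL over HTTPS with an `Accept: application/dns-message` header and parse the wire-format answer, bypassing the platform’s DNS-over-UDP path entirely.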
55. Fragmenting the DNS
Those parts of the Internet space with sufficient motivation and resources
will simply stop waiting for everyone else to move. They will just do what
they feel they need to do!
56. All the Money is in Content
• And this means that applications are driving the Internet market these
days
• Applications are increasingly drawing in platform and infrastructure
services directly into the application
• QUIC
• DNS over HTTPS
• That way they avoid paying the costs of upgrading legacy infrastructure
and waiting for platform upgrades
• And applications no longer have to sustain a single name space, or even
sustain uniform protocol interoperability
• Apart from UDP!
• So they just won’t pay
57. It’s life Jim, but not as we
know it!*
• The overall progression here is an evolution from network-centric
services to platform-centric services to today’s world of
application-centric services
• It’s clear that the DNS is being swept up in this shift, and the DNS is
changing in almost every respect
• The future prospects of a single unified coherent name space as
embodied in the DNS, as we currently know it, for the entire internet
service domain are looking pretty poor right now!
* With apologies to the Trekkies!
58. Is the DNS “Open”?
• Yes, today, it’s sort of open
Although the cracks are clearly evident with the use of various DNS customisations to tailor each
DNS response to the querier, the interest in lifting the DNS into the application space as an
application service and not a common infrastructure, the strong levels of centralisation and the
destruction of open competition in this space, the continuing erosion of trust, and the strong
commercial impetus to use the DNS as a profiling tool in the surveillance economy.
59. Will it continue to be “open”?
• Probably not!
It’s just getting too hard to keep it together and keep it open, and the market-based
pressures continue to tear the DNS apart and haul it away from a unified open space
into a set of closed and opaque markets with a fragmented name space.
And the common benefit of a single, cohesive open name space for the Internet looks
like being a victim of the Tragedy of the Commons.