On June 8, 2011, many major websites and internet service providers participated in a global trial of IPv6 to demonstrate readiness for the new internet protocol. The trial showed that major sites could support IPv6, enabling continued growth of the internet. However, it also surfaced challenges, such as issues with automatic configuration, firewalls not recognising IPv6 addresses, and problems with the tunnelling techniques used to provide IPv6 connectivity over IPv4 networks.
World IPv6 Day in 2011 was a global trial of the new Internet Protocol, IPv6. Major websites participated to demonstrate preparedness for continued Internet growth. The University of Cambridge took part and found that IPv6 requests remained at a low level, around 1-3% for most services. Tunnels such as 6to4 caused some issues, as they introduced addresses not on the local network. The conclusion was that IPv6 day was essentially a non-event, and therefore a success, demonstrating readiness for the full transition.
6. Objective: “On 8 June, 2011, top websites and Internet service providers around the world joined together for a successful global-scale trial of the new Internet Protocol, IPv6. By providing a coordinated 24-hour ‘test flight’, the event helped demonstrate that major websites around the world are well-positioned for the move to a global IPv6-enabled Internet, enabling its continued exponential growth.” (http://www.worldipv6day.org/)
23. www.cam: top 10 countries (8,351 requests total, from 230 clients in 28 countries)
- UCS STAFF: 2,619
- China: 1,373
- Brazil: 1,290
- JANET: 835
- UNIVERSITY: 630
- United Kingdom: 420
- United States: 293
- Greece: 171
- France: 123
- Czech Republic: 110
27. That’s it. If you have been, thanks for listening.
Editor's Notes
This talk covers some of the things we learnt as a result of participating in World IPv6 Day on 8th June 2011. It’s presented mainly from a server administrator’s point of view and, while it mentions assorted network-level issues, it doesn’t go into particular detail. In particular, it’s not a guide to setting up an IPv6-capable network, nor a primer on what IPv6 is.
We are probably all used to IPv4. It’s been around for ages. Critically, it uses 32 bits to represent addresses, normally written as four dot-separated octets, each expressed in decimal. Trouble is, the world is running out of IPv4 addresses (all the ‘spare’ space has now been allocated for use, though there are still addresses not actually being used). IPv4 is only surviving thanks to extensive use of RFC 1918 ‘private’ addresses, though their properties mean that increasingly complicated workarounds are needed to support their continued use.
IPv6, on the other hand, uses 128 bits to represent addresses (and note that doesn’t mean that the address space is only four times bigger...), normally written in hexadecimal as multiple 16-bit blocks separated by ‘:’ and with rules allowing runs of zeroes to be omitted. The two protocols have quite a few other differences, some of which we’ll come on to, but the longer addresses are the ones you see first. This is an example address used on IPv6 day...
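As a minimal sketch of those notation rules, Python’s standard ipaddress module will show the fully-written-out and zero-compressed forms of the same address. The address below is purely illustrative, not one of the real IPv6 day addresses:

```python
import ipaddress

# An illustrative address, written out in full and then compressed
addr = ipaddress.IPv6Address("2001:0630:0212:0008:0000:0000:0000:0080")

print(addr.exploded)    # 2001:0630:0212:0008:0000:0000:0000:0080
print(addr.compressed)  # 2001:630:212:8::80 - the run of zero blocks collapses to '::'
```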
...except that the University has recently been allocated a new, bigger address range (a /44 prefix in place of a /48), which means that all the addresses have to change.
So, what was IPv6 day all about?
Here’s what the Internet Society (who suggested and promoted the idea) have to say on the subject.
Here are some of the big players who started it off by promising to take part. Most of these already made their services available over IPv6, though not by default. In the end, at least 1,000 other providers, including the University of Cambridge, also joined in.
We gave this some thought in advance, and identified a number of things that we’d need to worry about...
IPv4 (at least in Cambridge, where DHCP - especially dynamic DHCP - has always been considered a bit iffy) needs manual configuration: address, netmask, router, etc. IPv6, on the other hand, will by default try to configure itself. Connect any modern OS to many IPv6-capable networks and the machine will acquire a globally-routable address. This difference can lead to some surprises.
The DNS handles name<->IPv4 mapping (A records) separately from name<->IPv6 mapping (AAAA records), so there’s no guarantee that you’ll hit the same server, never mind the same service, over v6 as over v4. Setting things up like this may lead to madness, but can sometimes be useful. IPv6 config may be needed at an application level - for example, Apache needs to know what IP addresses it’s doing name-based virtual hosting on, and so will need to know about v6 addresses as well as v4 ones. If an advertised v6 address isn’t responding (perhaps because the v6 interface is down) but the corresponding v4 interface is responding, then clients will tend to try v6 and only fall back to v4 after a timeout. The symptoms can look VERY like server or network overload!
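A hedged illustration of that split: the two lookups below are entirely independent, so a host can have working A records alongside broken or absent AAAA records, or vice versa. The hostname is assumed from the talk’s ‘www.cam’ shorthand, and the addresses returned will depend on where and when you run it:

```python
import socket

HOST = "www.cam.ac.uk"  # hostname assumed from the talk's 'www.cam' shorthand

for family, label in ((socket.AF_INET, "IPv4 (A)"), (socket.AF_INET6, "IPv6 (AAAA)")):
    try:
        results = socket.getaddrinfo(HOST, 80, family, socket.SOCK_STREAM)
        for *_, sockaddr in results:
            print(label, sockaddr[0])
    except socket.gaierror as err:
        # A missing AAAA record surfaces here; the A lookup may still succeed
        print(label, "lookup failed:", err)
```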
Packet filters and firewalls will need new configuration for v6 - the default will probably be either to block everything or to allow everything, neither of which is likely to be what you want.
It’s tempting to consider a machine with an RFC 1918 private address behind a NAT service to be more secure than a publicly addressed one, because it can’t be poked directly from the outside. Private v6 addresses do exist, but are not widely deployed, because they are typically a solution to an address shortage and we are not short of v6 addresses. So, stick a v4 privately-addressed machine on a subnet that also supports v6 and it will probably be out there, exposed on the public Internet with a global address. This may come as a surprise.
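A tiny sketch of the distinction this note relies on, again using the standard ipaddress module: an RFC 1918 IPv4 address is flagged private, while a typical autoconfigured IPv6 address on the same subnet is globally routable. Both addresses here are invented for illustration:

```python
import ipaddress

lan_v4 = ipaddress.ip_address("192.168.1.10")  # RFC 1918 private address
v6 = ipaddress.ip_address("2001:630:212:8:baad:f00d:cafe:1")  # invented global address

print(lan_v4.is_private, lan_v4.is_global)  # True False
print(v6.is_private, v6.is_global)          # False True
```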
It’s common to set up inter-host communications (e.g. web server to database) to use the localhost interface, and to limit connections to this to prevent external meddling. But if you enable v6 on such a machine then internal connections may happen via the v6 loopback interface on ::1 and not the v4 one on 127.0.0.1. If your rules don’t take this into account you may find that you can’t talk to yourself.
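A minimal sketch of that failure mode (the port number is arbitrary for illustration): the ‘service’ listens only on the IPv4 loopback, so a client that prefers the IPv6 loopback gets a connection refused even though the service is up:

```python
import socket

# Service bound only to the IPv4 loopback - a common pre-IPv6 setup
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))  # port chosen arbitrarily for this sketch
server.listen(1)

# A dual-stack client resolving "localhost" may try ::1 first
client = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
try:
    client.connect(("::1", 8080))
except ConnectionRefusedError:
    print("refused on ::1 - nothing is listening on the IPv6 loopback")
finally:
    client.close()
    server.close()
```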
Rather a lot of log analysis software may be assuming that IP addresses in logs will look like 131.111.10.33, and may be ‘surprised’ to find ones that look like 2001:630:212:8080::80:0. How they react will vary, but ignoring such entries (perhaps silently), or stopping dead on the first one, are both possibilities.
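As a sketch of why this bites, a naive IPv4-only pattern of the kind such tools often use simply fails to match an IPv6 client address. The two log lines are invented, but use the addresses quoted above:

```python
import re

# The kind of pattern an IPv4-era log analyser might use
naive_ip = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}")

log_lines = [  # invented entries using the addresses quoted above
    '131.111.10.33 - - "GET / HTTP/1.1" 200',
    '2001:630:212:8080::80:0 - - "GET / HTTP/1.1" 200',
]

for line in log_lines:
    match = naive_ip.match(line)
    print(match.group(0) if match else "entry silently ignored")
```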
...and once we got into actually doing the necessary configuration, we found some others:
If an IPv6 router finds it has a packet that’s too big to send over a particular link, it drops the packet and sends a ‘Packet too big’ ICMPv6 message to the packet’s origin, which is expected to resend it smaller. If anyone foolishly blocks those ICMPv6 messages then this won’t work, and you’ll find that you can successfully send small packets but not full-size ones. In a web context, this can mean that clients can open connections and successfully send requests, but can’t receive responses (which are typically much bigger). IPv6 requires that all links carry at least 1280-byte packets (cf. the 1500-byte packets typically used on Ethernet) and there is some evidence that the big providers are artificially limiting themselves to 1280 bytes, presumably to avoid this problem. [IPv4 also has fragmentation, but it is handled on a per-link basis, rather than end-to-end. It too can cause problems, but these are now largely understood and normally avoided.]
Even though it’s been around for a while, IPv6 is still changing quite rapidly, and even ‘current’ software may not be keeping up. For example, all but the most recent point release of the version of MacOS current on IPv6 Day had a bug that was likely to affect some users. SuSE Linux Enterprise 10 (old, but still in support) has some failings in its v6 support that caused us problems.
The core of the CUDN already supports IPv6, as does JANET, but only a few University edge networks have enabled it (UCS, Astronomy, Computer Lab, SRCF, ...). The plan was to enable IPv6 on all these services for IPv6 day...
...but inevitably some fell by the wayside. We did manage the rest.
No known problems experienced by any University clients accessing v6-enabled services.
A small but significant number of people accessed our v6-enabled services, apparently successfully.
OK, not exactly big numbers. Figures for services mainly offered to internal clients are likely to be low because of the small number of internal clients with IPv6 connectivity. For services also accessed from outside (www.cam, mx.cam), ~1% of accesses were over v6.
China and Brazil are probably high because the developing world has disproportionately fewer IPv4 addresses than the US, Europe, etc.: by the time they wanted them the shortage was already becoming apparent and allocation rules had been tightened. Such countries are likely to already be deploying v6 to cope with this.
Because of the disconnect between IPv4 and IPv6, various people have created systems that will, automatically or with manual configuration, allow v4 and v6 hosts to communicate, or allow a pair of v6 hosts that don’t have v6 connectivity between them to communicate. ‘6to4’ is one such, and a common bug is that machines will sometimes choose an IPv6 connection via one of these ‘transitional technologies’ in preference to a ‘real’ IPv4 connection. For example, lots of clients in the University contacted www.cam and smtp.hermes over 6to4 even though all those clients will have had viable IPv4 routes to the same servers. This causes some problems.
6to4 is really clever, and here’s a diagram of how it works. You might want to look at the Wikipedia description for more detail: http://en.wikipedia.org/wiki/6to4 The critical points are that a 6to4 host ends up with an entirely usable IPv6 address in the 6to4 range 2002::/16, and if it wishes can offer to route other addresses in that range on behalf of other clients on the same subnet (thus bringing IPv6 support to a network that wouldn’t otherwise have it). But all this depends on connections that are probably crossing the institution boundary and which are probably being offered on a ‘best efforts’ basis at best.
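The 2002::/16 mapping itself is mechanical: a host derives its 6to4 /48 prefix by embedding its public IPv4 address in the next 32 bits of the prefix. A hedged sketch of the derivation, using the example IPv4 address quoted earlier in these notes:

```python
import ipaddress

def sixtofour_prefix(public_v4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 prefix for a given public IPv4 address."""
    v4 = ipaddress.IPv4Address(public_v4)
    # 2002::/16, then the 32-bit IPv4 address occupies bits 16-47
    prefix = (0x2002 << 112) | (int(v4) << 80)
    return ipaddress.IPv6Network((prefix, 48))

# 131.111.10.33 is the example IPv4 address quoted earlier in these notes
print(sixtofour_prefix("131.111.10.33"))  # -> 2002:836f:a21::/48
```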
So now you have machines on your network that are using IPv6 addresses from a range that you don’t expect. Any access control by IP address is likely to be messed up by this. Worse, since 6to4 machines can advertise themselves as IPv6 routers to other machines, the existence of a machine doing this can easily affect other machines on the same subnet. We saw this effect on IPv6 Day. Part way through the day, a department mail server suddenly started using a 6to4 connection being offered by a workstation on the same network. Unfortunately, it was forwarding mail to the central mail switch, which refused to accept it because it wasn’t (apparently) coming from a machine in the University. Fortunately this was easily fixed, and didn’t result in a loss of mail, but it does suggest that a significant barrier to wider IPv6 deployment may turn out to be the very ‘transitional’ technologies that were designed to make it easier.
The bottom line from IPv6 day is that enabling ‘dual stack’ (IPv6 alongside IPv4) operation on servers ‘just works’ and generally doesn’t cause problems for clients (which may themselves be v4-only, v6-only, or dual stack). However, 6to4 (and similar technologies), when used inappropriately, may cause problems for some IP address-based access control systems. By and large, adding IPv6 support to new or existing servers on networks that already support IPv6 is not difficult.