This document provides instructions for setting up bandwidth limiting on a Linux server using Squid proxy and delay pools. It describes installing Squid with delay pool support enabled, configuring Squid to limit download speeds for certain file types (e.g. MP3s, videos) to around 5KB/s during daytime hours, while allowing unlimited speeds at night. It also discusses using CBQ to control bandwidth for other protocols like FTP and troubleshooting potential issues.
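As a sketch of what such a delay-pool setup looks like, the following squid.conf fragment throttles matching media downloads to roughly 5 KB/s during working hours (the ACL names, file extensions, and time window are illustrative, not taken from the document):

```
# Hedged sketch: throttle media downloads during the day
# (ACL names and patterns below are illustrative)
acl daytime time MTWHF 09:00-18:00
acl media urlpath_regex -i \.(mp3|mp4|avi|mpe?g)$

delay_pools 1
delay_class 1 1                 # pool 1, class 1: a single aggregate bucket
delay_parameters 1 5000/5000    # ~5 KB/s sustained, 5 KB burst
delay_access 1 allow media daytime
delay_access 1 deny all
```

Outside the daytime ACL the pool never matches, so night-time downloads run at full speed, matching the behavior the document describes.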
This document summarizes the history and development of HTTP/2. It discusses the limitations of HTTP/1 that HTTP/2 aims to address, such as head-of-line blocking. The key features of HTTP/2 are described, including multiplexing, priority, header compression, and server push. The document also covers topics like TLS, QUIC, and tools for debugging HTTP/2 implementations.
Covert timing channels using HTTP cache headers
yalegko
This document discusses using HTTP cache headers to create covert timing channels for transmitting information between hosts without detection. It provides examples of encoding data in the Accept-Language header and describes how headers like Last-Modified and ETag can be used to transmit bits by checking if the page has changed. Issues in implementation are addressed, like needing synchronization. Evaluation shows channels can transmit over 1 bit/second over local networks and around 5 bits/second over the internet. Browser-based channels in JavaScript are also proposed.
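The ETag mechanism described above can be sketched as a toy simulation (no real network; the class and function names are my own, not from the document): the sender signals a 1 by changing the resource's validator before the receiver's next conditional request, and a 0 by leaving it alone.

```python
# Toy simulation of an ETag-based covert channel.
import hashlib

class Server:
    def __init__(self):
        self.etag = "v0"

    def get(self, if_none_match):
        # 304 Not Modified while the validator still matches, else 200.
        return 304 if if_none_match == self.etag else 200

def send_bits(server, bits):
    observed = []
    last_etag = server.etag
    for i, bit in enumerate(bits):
        if bit == 1:
            # Sender mutates the resource, changing its ETag.
            server.etag = hashlib.md5(str(i).encode()).hexdigest()
        status = server.get(last_etag)        # receiver's conditional GET
        observed.append(1 if status == 200 else 0)
        last_etag = server.etag               # receiver caches the new validator
    return observed

send_bits(Server(), [1, 0, 1, 1, 0])  # → [1, 0, 1, 1, 0]
```

The receiver recovers each bit purely from the 200-vs-304 distinction, which is what makes the channel hard to spot in ordinary cache traffic.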
HTTP/2 provides improvements over HTTP/1.1 such as multiplexed requests, header compression, and priority hints from browsers that can reduce latency. While it shows benefits in testing, real-world impacts may be more modest depending on server and client configurations. Further optimizations are still needed, and HTTP/2 opens up new possibilities around features like server push and progressive content delivery that could enhance performance.
SPDY - http reloaded - WebTechConference 2012
Fabian Lange
The SPDY protocol is likely to become the successor of HTTP. This short talk summarizes the most important points and includes a demo of migrating a WordPress blog on httpd.
The document discusses the history and fundamentals of interactive web technologies. It begins with HTTP and its limitations for pushing data from server to client. It then covers early techniques like meta refresh and AJAX polling. It discusses long polling, HTTP chunked responses, and forever frames. It introduces Comet and WebSockets as solutions providing true real-time bidirectional communication. It also covers server-sent events. In conclusion, it recommends using all these techniques together, along with frameworks like Socket.IO and SignalR that abstract the complexities and provide high-level APIs.
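Of the techniques listed, server-sent events have the simplest wire format; a minimal parser for the text/event-stream format might look like this (a sketch handling only data: lines and blank-line event boundaries, not ids, retry fields, or comments):

```python
# Minimal parser for a Server-Sent Events stream (text/event-stream).

def parse_sse(stream_text):
    events, data = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data.append(line[5:].lstrip())
        elif line == "" and data:          # a blank line ends the event
            events.append("\n".join(data))
            data = []
    return events

sample = "data: hello\n\ndata: line1\ndata: line2\n\n"
parse_sse(sample)  # → ['hello', 'line1\nline2']
```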
200, 404, 302. Is it a lock combination? A phone number? No, they're HTTP status codes! As we develop Web applications, we encounter these status codes and others, and often we make decisions about which ones to return without giving much thought to their meaning or context. It's time to take a deeper look at HTTP. Knowing the methods, headers, and status codes, what they mean, and how to use them can help you develop richer Internet applications. Join Ben Ramsey as he takes you on a journey through RFC 2616 to discover some of the gems of HTTP.
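Python's standard library encodes these codes and their RFC reason phrases, which makes a quick exploration easy:

```python
# Look up the status codes from the teaser and their reason phrases.
from http import HTTPStatus

for code in (200, 302, 404):
    status = HTTPStatus(code)
    print(code, status.phrase)
# prints:
# 200 OK
# 302 Found
# 404 Not Found
```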
HTTP/2 is a new version of the HTTP network protocol that makes web content delivery faster and more efficient. It introduces features like multiplexing, header compression, and server push that fix limitations in HTTP/1.1 like head-of-line blocking and slow start. HTTP/2 is now supported in all major browsers and servers and provides performance improvements over HTTP/1.1 without requiring workarounds. The presentation provided an overview of HTTP/2 concepts and how to troubleshoot using developer tools.
Altitude San Francisco 2018: Programming the Edge
Fastly
Programming the edge
Second floor
Andrew Betts
Principal Developer Advocate, Fastly
Through our support for running your own code on our edge servers, Fastly's network offers you a platform of unparalleled speed, reliability and efficiency to which you can delegate a surprising amount of logic that has traditionally been in the application layer. In this workshop, you'll implement a series of advanced edge solutions, and learn how to apply these patterns to your own applications to reduce your origin load, dramatically improve performance, and make your applications more secure.
Altitude San Francisco 2018: Testing with Fastly Workshop
Fastly
A crucial step for continuous integration and continuous delivery with Fastly is testing the service configuration to provide confidence in changes. This workshop will cover unit-testing VCL, component testing a service as a black box, systems testing a service end-to-end and stakeholder acceptance testing.
HTTP was created in 1989 by Tim Berners-Lee at CERN to allow information sharing through hypertext. The first web server and website launched in 1990-1991. HTTP uses a client-server model with requests containing a start line with method, URL, and protocol version followed by headers and an optional message body. Responses contain a status line, headers, and body. Key concepts are persistent connections, caching, content types, and new versions that add functionality while maintaining backward compatibility.
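The request structure described (start line with method, URL, and protocol version, followed by headers) can be illustrated by parsing a raw request head in a few lines (a sketch that ignores the optional message body):

```python
# Parse the start line and headers of a raw HTTP request.

def parse_request_head(raw):
    head, _, _body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, target, version = lines[0].split(" ", 2)   # start line
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return method, target, version, headers

req = "GET /index.html HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n\r\n"
parse_request_head(req)
# → ('GET', '/index.html', 'HTTP/1.1',
#    {'Host': 'example.com', 'Accept': 'text/html'})
```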
This document discusses using NGINX to deliver high performance applications through efficient caching. It explains that NGINX can be used as a web server, load balancer, and high availability content cache to provide low latency, scalability, availability and reduced costs. Specific NGINX caching configurations like proxy_cache, proxy_cache_valid and proxy_cache_background_update are described. Microcaching optimizations with NGINX are also covered, showing significant performance improvements over Apache+WordPress and a reverse proxy only setup.
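A microcaching setup of the kind described might look like the following fragment (directive values are illustrative examples, not the presenter's settings):

```nginx
# Hedged sketch: cache dynamic responses for one second and refresh
# them in the background.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:10m max_size=100m;

server {
    listen 80;
    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;          # "microcache" for one second
        proxy_cache_use_stale updating;    # serve stale while refreshing
        proxy_cache_background_update on;  # refresh without blocking clients
        proxy_pass http://localhost:8080;
    }
}
```

Even a one-second TTL collapses bursts of identical requests onto a single upstream fetch, which is where the large speedups over an uncached Apache+WordPress stack come from.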
WebSockets allow for bidirectional communication between a client and server over a single TCP connection. They provide lower latency and overhead than traditional HTTP requests which require a new connection for each request. The talk demonstrated how to use WebSockets with JavaScript on the client and event-driven servers like Node.js on the server. While browser support is still limited and the specification is in flux, WebSockets offer a way to build real-time applications without hacks like long-polling that HTTP requires.
Covert Timing Channels using HTTP Cache Headers
Denis Kolegov
This presentation describes covert timing channels that use HTTP cache headers. It considers the implementation peculiarities of these covert channels depending on the cache headers used, the threat model, the programming language (C, JavaScript, Python, Ruby), and the environment (web browser, malicious software). The basic characteristics of the implemented covert channels are provided, and a module and extension implementing ETag-based covert timing channels in the BeEF framework are discussed.
HTTP is a client-server protocol for transmitting hypermedia documents across the internet. It uses a request-response paradigm where clients make requests which are answered by HTTP servers. Requests use methods like GET and POST, and include headers. Responses contain status lines, headers, and content. HTTP allows caching, cookies, authentication, and redirects. It is the foundation of data communication for the World Wide Web via the hypertext transfer protocol.
This is a presentation made at the Burlington, Vermont PHP Users Group about configuring load balancing using the Apache HTTP Server. Load balancing is a technique that can distribute work across multiple server nodes—here we will discuss load balancing HTTP (i.e. web) traffic. There are many software and hardware load balancing options available including HAProxy, Varnish, Pound, Perlbal, Squid, nginx, and Linux-HA (High-Availability Linux) on Linux Standard Base (LSB). However, many web developers are already familiar with Apache as a web server and it is relatively easy to also configure Apache as a load balancer.
Related concepts such as shared nothing architecture are discussed. We also take a look at some basic load balancing scenarios and features including sticky sessions and proxying requests based on HTTP method. Distributed load testing with Tsung is briefly discussed as well.
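A minimal Apache balancer with sticky sessions, as discussed, could be sketched like this (hostnames and the session cookie name are placeholders):

```apache
# Hedged sketch of mod_proxy_balancer with cookie-based stickiness.
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy "balancer://mycluster">
    BalancerMember "http://app1.internal:8080" route=app1
    BalancerMember "http://app2.internal:8080" route=app2
    ProxySet stickysession=ROUTEID
</Proxy>

ProxyPass        "/" "balancer://mycluster/"
ProxyPassReverse "/" "balancer://mycluster/"
```

The route= labels tie each backend to the ROUTEID cookie so that a client keeps hitting the same node, which matters for the shared-nothing versus session-affinity trade-off the talk covers.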
The HTML5 WebSocket API allows for true full-duplex communication between a client and server. It uses the WebSocket protocol which provides a standardized way for the client to "upgrade" an HTTP connection to a WebSocket connection, allowing for messages to be sent in either direction at any time with very little overhead. This enables real-time applications that were previously difficult to achieve with traditional HTTP requests. Common server implementations include Kaazing WebSocket Gateway, Jetty, and Node.js. The JavaScript API provides an easy way for clients to connect, send, and receive messages via a WebSocket connection.
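The handshake "upgrade" hinges on a fixed computation defined in RFC 6455: the server appends a well-known GUID to the client's Sec-WebSocket-Key and returns the base64-encoded SHA-1 digest as Sec-WebSocket-Accept. In Python:

```python
# Compute the Sec-WebSocket-Accept value for a handshake response.
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def websocket_accept(client_key):
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key/accept pair from RFC 6455:
websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")  # → 's3pPLMBiTxaQ9kYGzzhZRbK+xOo='
```

A client that receives the expected accept value knows the server actually speaks WebSocket rather than merely echoing HTTP headers.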
The document discusses HTTP (Hypertext Transfer Protocol), which is the foundation for web technologies like REST, AJAX, and HTTPS. It explains that HTTP is the language browsers use to communicate with web servers and carry most web traffic. The document provides examples of using tools like Charles, browsers like Chrome, and cURL to view HTTP requests and responses and experiment with different HTTP methods, status codes, and headers.
The document discusses using alternative infrastructures like Nginx and Redis instead of traditional Apache and MySQL. It describes building a Twitter clone called "Retwis" using Sinatra and Redis, and compares the performance of Nginx to Apache when serving static and dynamic content. Nginx generally outperforms Apache, especially for static files, due to its asynchronous and event-based architecture avoiding context switches. Load testing revealed issues with current practices for MySQL dumps and load testing that do not properly simulate high concurrency or isolate variables.
The document discusses REST API design principles and best practices. It begins by providing examples of RESTful API requests and responses. It then covers REST concepts like resources, verbs, hypermedia, content negotiation, and representation formats. The document advocates for designing APIs that are self-documenting through hypermedia and embedding links to allow discovery of available state transitions and actions. It also discusses balancing REST purism with pragmatic design choices and notes that many popular APIs are not purely RESTful but are still well-designed.
This talk was presented at OSCON 2009 and YAPC::NA 2009. The choices discussed here are still relevant, although Plack has opened up some new options. For a more up-to-date take on this material I would recommend Tatsuhiko Miyagawa's most recent Plack/PSGI talks.
The document introduces HTTP/2 and discusses limitations of HTTP 1.1 including head of line blocking, TCP slow start, and latency issues. It describes key features of HTTP/2 such as multiplexing requests over a single TCP connection, header compression, and server push to reduce page load times. The presentation includes demos of HTTP/2 in Chrome dev tools and Wireshark to troubleshoot HTTP/2 connections.
This document discusses WebSockets as an improvement over previous "fake push" techniques for enabling real-time full duplex communication between web clients and servers. It notes that WebSockets use HTTP for the initial handshake but then provide a persistent, bi-directional connection. Examples are given of how WebSockets work and can be implemented in various programming languages including Ruby. Support in browsers and servers is also discussed.
HTTP/2 (or “H2” as the cool kids call it) has been ratified for months, and browsers already support or have committed to supporting the protocol. Everything we hear tells us that the new version of HTTP will provide significant performance benefits while requiring little to no change to our applications—all the problems with HTTP/1.x have seemingly been addressed; we no longer need the “hacks” that enabled us to circumvent them; and the Internet is about to be a happy place at last.
But maybe we should put the pom-poms down for a minute. Deploying HTTP/2 may not be as easy as it seems since the protocol brings with it new complications and issues. Likewise, the new features the spec introduces may not work as seamlessly as we hope. Hooman Beheshti examines HTTP/2’s core features and how they relate to real-world conditions, discussing the positives, negatives, new caveats, and practical considerations for deploying HTTP/2.
Topics include:
The single-connection model and the impact of degraded network conditions on HTTP/2 versus HTTP/1
How server push interacts (or doesn’t) with modern browser caches
What HTTP/2’s flow control mechanism means for server-to-client communication
New considerations for deploying HPACK compression
Difficulties in troubleshooting HTTP/2 communications, new tools, and new ways to use old tools
HTTP is a protocol for transmitting hypermedia documents across the internet. It uses a client-server model where browsers make HTTP requests to web servers, which respond with HTTP responses. Key aspects of HTTP include using TCP/IP for communication, being stateless, supporting a variety of data types, and incorporating features of both FTP and SMTP protocols.
This document provides information about Common Gateway Interface (CGI) programming and how web browsers communicate with web servers. It discusses how browsers make requests to servers, how servers respond, and how form data is transmitted from browsers to CGI programs using GET and POST methods. It also covers cookies, file uploads, and provides examples of simple CGI programs in Perl and Python.
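For example, form data sent with GET arrives in the QUERY_STRING environment variable as URL-encoded name=value pairs, which Python's standard library can decode (the query string below is made up):

```python
# Decode a CGI-style query string into a form dictionary.
from urllib.parse import parse_qs

query = "name=Ada+Lovelace&lang=python&lang=perl"
form = parse_qs(query)
form  # → {'name': ['Ada Lovelace'], 'lang': ['python', 'perl']}
```

Note that values are lists, since a field name (here lang) may legitimately repeat.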
WebSocket is a protocol that provides bidirectional communication over a single TCP connection. It uses an HTTP handshake to establish a connection and then transmits messages as frames that can contain text or binary data. The frames include a header with metadata like opcode and payload length. WebSocket aims to provide a standard for browser-based applications that require real-time data updates from a server.
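Decoding the fixed part of a frame header described above takes only a little bit-twiddling (a sketch that omits the 16- and 64-bit extended payload-length forms and the masking key):

```python
# Decode the first two bytes of a WebSocket frame header.

def parse_frame_header(b0, b1):
    fin    = bool(b0 & 0x80)    # final fragment flag
    opcode = b0 & 0x0F          # 0x1 text, 0x2 binary, 0x8 close, ...
    masked = bool(b1 & 0x80)    # client-to-server frames must be masked
    length = b1 & 0x7F          # values 126/127 signal extended length forms
    return fin, opcode, masked, length

# 0x81 = FIN + text opcode; 0x05 = unmasked, 5-byte payload
parse_frame_header(0x81, 0x05)  # → (True, 1, False, 5)
```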
Apache can function as both a forward and reverse proxy server. To configure it as a proxy, enable the proxy module, turn on proxy requests, and specify which clients can access the proxy. The proxy caches frequently accessed pages to improve performance and reduce bandwidth. It also provides security, access control, and logging of internet traffic on the network.
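An Apache 2.4 forward-proxy fragment along these lines might look as follows (the client network and cache path are placeholders; mod_proxy, mod_proxy_http, mod_cache, and mod_cache_disk must be loaded):

```apache
# Hedged sketch of a caching forward proxy restricted to the LAN.
ProxyRequests On
ProxyVia On

<Proxy "*">
    Require ip 192.168.1.0/24   # only LAN clients may use the proxy
</Proxy>

CacheEnable disk "/"
CacheRoot "/var/cache/apache2/proxy"
```

Restricting access with Require ip is essential: an open forward proxy will quickly be abused by third parties.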
This document provides instructions for configuring a Squid proxy server on CentOS. It discusses obtaining information about the system like the OS distribution, hardware architecture, and installed application versions. It also outlines basic Squid configuration steps like backing up the default configuration file, checking the port Squid listens on, and ensuring the log file location is set correctly before starting Squid. Configuring access controls and caching policies would be covered in more depth in subsequent sections.
A gateway server is a server through which the computers in a LAN access the Internet, usually via NAT. It should also provide firewall protection for the LAN, and it can serve as a DNS and DHCP server as well. Some years ago I was involved in a project to build gateway servers like this, using Slackware on old PCs. In this article I will try to explain what I did on this project and how I did it.
This document describes setting up an NS-3 testbed to evaluate MPEG-DASH video streaming. It will simulate a small network with two nodes, one acting as a server and the other as a client. There are two proposed solutions - using physical computers or Linux containers as the nodes. NS-3 will be installed on Ubuntu and configured to represent the network topology and facilitate streaming between the nodes. The testbed will allow evaluation of MPEG-DASH streaming performance over the simulated network.
HTTP caching involves storing copies of resources near clients to serve future requests faster. Caching can happen locally on a client or through shared proxies. Effective caching requires expiration dates, validation of cached responses, and invalidation of cached responses when content changes. Caching allows servers to scale to many users by offloading work to clients and proxies. The HTTP protocol and technologies like ESI were designed to support caching while handling dynamic content.
This document provides an overview of client-server architecture and web servers. It defines clients as programs that request information from servers, while servers are large computers capable of providing data to many clients simultaneously. The document then discusses how the client-server model is used in the World Wide Web, with web browsers as clients that send HTTP requests to web servers. It also covers network connections, ports, functions of web servers and browsers, and browser plugins.
This document provides an overview of client/server basics and electronic publishing as it relates to web servers. It discusses how clients and servers communicate over a network using protocols like HTTP. A web server is a type of server that understands HTTP and can respond to client requests by returning documents. Early web servers were developed by CERN and NCSA. The first web browser was NCSA Mosaic, which popularized the web through its easy interface and ability to create HTML content without specialized software. Electronic publishing on the web involves creating hypertext documents with links using HTML and publishing them on a web server to be retrieved by browsers over HTTP.
This document provides instructions for setting up a single server running Red Hat Linux to function as a small ISP, providing services like dial-up access, web hosting, email, FTP, and DNS. It includes steps for installing Red Hat, configuring security, removing unnecessary services, enabling multiple IP addresses, configuring Apache, Sendmail, POP3, FTP, and dial-up access. The goal is a low-cost solution for basic ISP functions on a single server. Alternatives like virtfs that offer more flexibility through virtualization are also discussed briefly.
White Paper: Accelerating File TransfersFileCatalyst
Check out our White Paper on Accelerating File Transfers
Increase File Transfer Speeds in Poorly-Performing Networks for an understanding of the issues associated with transferring files over the TCP/IP protocol (i.e. using FTP) and how to solve these problems with file transfer acceleration.
This document provides a guide for configuring a Squid proxy server. It discusses requirements like hardware specifications, choosing an operating system, and installing Squid. It then describes basic Squid configuration steps like editing configuration files, starting Squid, and configuring web browsers to use the proxy. Finally, it covers more advanced topics like designing access control lists to control which clients and sites can access the proxy server. The overall document aims to guide readers through the entire process of setting up and managing a Squid proxy server.
Node.js is a server-side JavaScript platform built on Google's V8 engine. It is non-blocking and asynchronous, making it suitable for data-intensive real-time applications. The document discusses how to install Node.js and its dependencies on Ubuntu, introduces key Node.js concepts like events and the event loop, and provides examples of popular Node.js packages and use cases.
This document provides instructions for setting up a transparent caching HTTP proxy server using Linux and Squid. It discusses configuring the Linux kernel to enable transparent proxying features. It then covers installing and configuring Squid to be aware of transparent proxying. The document explains how to set up iptables rules to redirect HTTP traffic to the Squid proxy. It also discusses options for configuring a transparent proxy when the proxy software is on a remote machine rather than the local system.
This document provides an overview and introduction to Linux servers. It covers topics such as Apache web servers, MySQL databases, DNS, DHCP, firewalls (iptables), Samba file sharing, SELinux, git version control, and IPv6. The document is intended to be used for instructor-led training and includes examples and exercises for many of the topics. It provides concise yet thorough explanations of the various server components and how to configure them.
The document discusses SCSI Test Vehicle (STV), a project to develop an FCP Storage emulator using FICON Express 16G channel adapter. STV aims to emulate storage using an FPGA-based design to allow testing and validation of FICON and FCP without requiring physical storage. The emulator development provides a tool for testing host bus adapters and storage connectivity without relying on actual storage hardware.
Purpose and principles of web server and application serverJames Brown
As a rule, an ordinary user has associated such concepts as "webserver" or "hosting", with something completely incomprehensible context. Meanwhile, there is nothing complicated in this matter. We will try to explain what constitutes a web server, why it is needed, and how it works, especially without technical details, but on the fingers, so to speak. To be separate, let's focus on creating and configuring such a server on a home computer terminal or a laptop.
A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their website accessible via the World Wide Web.
Web hosts are companies that provide space on a server owned or leased for use by clients, as well as providing Internet connectivity, typically in a data center.
Web hosts can also provide data center space and connectivity to the Internet for other servers located in their data center, called colocation. Hostindia.net is a web hosting service providing company in India. providing all kind of domain registration and web hosting in India.
https://www.hostindia.net/
This document provides an introduction to Node.js, including what it is, why it is useful, and how it works. Node.js is a platform that allows developers to build fast and scalable network applications using JavaScript. It uses an event-driven, non-blocking model that makes it lightweight and efficient for data-intensive real-time applications. Node.js shines for building real-time web apps with push technology over websockets, allowing two-way communication between client and server.
The document discusses setting up a Squid proxy server on a Linux system to improve network security and performance for a home network. It recommends using an old Pentium II computer with at least 80-100MB of RAM as the proxy server. The document provides instructions for installing Squid and configuring the Squid.conf file to optimize disk usage, caching, and logging. It also explains how to set up the Squid proxy server to work with an iptables firewall for access control and protection from intruders.
A proxy server acts as an intermediary between clients and the internet or other network resources. Squid is a caching and forwarding proxy server that can improve performance by caching frequently requested files. It can restrict access based on client IP, domain, or time of day. Configuring Squid involves installing it, editing the squid.conf file to define access controls and caching, and configuring clients to use the proxy. The access log can be tailed to view current proxy requests.
Discover the benefits of outsourcing SEO to Indiadavidjhones387
"Discover the benefits of outsourcing SEO to India! From cost-effective services and expert professionals to round-the-clock work advantages, learn how your business can achieve digital success with Indian SEO solutions.
HijackLoader Evolution: Interactive Process HollowingDonato Onofri
CrowdStrike researchers have identified a HijackLoader (aka IDAT Loader) sample that employs sophisticated evasion techniques to enhance the complexity of the threat. HijackLoader, an increasingly popular tool among adversaries for deploying additional payloads and tooling, continues to evolve as its developers experiment and enhance its capabilities.
In their analysis of a recent HijackLoader sample, CrowdStrike researchers discovered new techniques designed to increase the defense evasion capabilities of the loader. The malware developer used a standard process hollowing technique coupled with an additional trigger that was activated by the parent process writing to a pipe. This new approach, called "Interactive Process Hollowing", has the potential to make defense evasion stealthier.
Securing BGP: Operational Strategies and Best Practices for Network Defenders...APNIC
Md. Zobair Khan,
Network Analyst and Technical Trainer at APNIC, presented 'Securing BGP: Operational Strategies and Best Practices for Network Defenders' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.
Honeypots Unveiled: Proactive Defense Tactics for Cyber Security, Phoenix Sum...APNIC
Adli Wahid, Senior Internet Security Specialist at APNIC, delivered a presentation titled 'Honeypots Unveiled: Proactive Defense Tactics for Cyber Security' at the Phoenix Summit held in Dhaka, Bangladesh from 23 to 24 May 2024.
Bandwidth Limiting HOWTO
Tomasz Chmielewski
tch@metalab.unc.edu
Revision History
Revision 0.9 2001−11−20 Revised by: tc
This document describes how to set up your Linux server to limit download bandwidth or incoming traffic
and how to use your internet link more efficiently.
Table of Contents
1. Introduction
   1.1. New versions of this document
   1.2. Disclaimer
   1.3. Copyright and License
   1.4. Feedback and corrections
   1.5. Thanks
2. Before We Start
   2.1. What do we need
   2.2. How does it work?
3. Installing and Configuring Necessary Software
   3.1. Installing Squid with the delay pools feature
   3.2. Configuring Squid to use the delay pools feature
   3.3. Solving remaining problems
      3.3.1. Linux 2.2.x kernels (ipchains)
      3.3.2. Linux 2.4.x kernels (iptables)
4. Dealing with Other Bandwidth-consuming Protocols Using CBQ
   4.1. FTP
   4.2. Napster, Realaudio, Windows Media and other issues
5. Frequently Asked Questions
   5.1. Is it possible to limit bandwidth on a per-user basis with delay pools?
   5.2. How do I make wget work with Squid?
   5.3. I set up my own SOCKS server listening on port 1080, and now I'm not able to connect to any irc server
   5.4. I don't like it when Kazaa or Audiogalaxy fills up all my upload bandwidth
   5.5. My outgoing mail server is eating up all my bandwidth
   5.6. Can I limit my own FTP or WWW server in a manner similar to the one shown in the question above?
   5.7. Is it possible to limit bandwidth on a per-user basis with the cbq.init script?
   5.8. Whenever I start cbq.init, it says sch_cbq is missing
   5.9. CBQ sometimes doesn't work for no reason
   5.10. Delay pools are stupid; why can't I download something at full speed when the network is used only by me?
   5.11. My downloads break at 23:59 with "acl day time 09:00-23:59" in squid.conf. Can I do something about it?
   5.12. Squid's logs grow and grow very fast; what can I do about it?
   5.13. CBQ is stupid; why can't I download something at full speed when the network is used only by me?
6. Miscellaneous
   6.1. Useful resources
1. Introduction
The purpose of this guide is to provide an easy solution for limiting incoming traffic, thus preventing our
LAN users from consuming all the bandwidth of our internet link.
This is useful when our internet link is slow or our LAN users download tons of mp3s and the newest Linux
distro's *.iso files.
1.1. New versions of this document
You can always view the latest version of this document on the World Wide Web at the URL
http://www.linuxdoc.org.
New versions of this document will also be uploaded to various Linux WWW and FTP sites, including the
LDP home page at http://www.linuxdoc.org.
1.2. Disclaimer
Neither the author, nor the distributors, nor any other contributor to this HOWTO is in any way responsible
for physical, financial, moral or any other type of damage incurred by following the suggestions in this text.
1.3. Copyright and License
This document is copyright 2001 by Tomasz Chmielewski, and is released under the terms of the GNU Free
Documentation License, which is hereby incorporated by reference.
1.4. Feedback and corrections
If you have questions or comments about this document, please feel free to mail Tomasz Chmielewski at
tch@metalab.unc.edu. I welcome any suggestions or criticisms. If you find a mistake or a typo in this
document (and you will find a lot of them, as English is not my native language), please let me know so I can
correct it in the next version. Thanks.
1.5. Thanks
I would like to thank Ami M. Echeverri lula@pollywog.com who helped me to convert the HOWTO into
SGML format and corrected some mistakes. I also want to thank Ryszard Prosowicz prosowicz@poczta.fm
for useful suggestions.
2. Before We Start
Let's imagine the following situation:
· We have a 115.2 kbit/s ppp (modem) internet link (115.2/10 = 11.52 kbyte/s). Note: with eth
connections (network card) we would divide 115.2 by 8; with ppp we divide by 10, because of
start/stop bits (8 + 1 + 1 = 10).
· We have some LAN stations whose users are doing bulk downloads all the time.
· We want web pages to open fast, no matter how many downloads are happening.
· Our internet interface is ppp0.
· Our LAN interface is eth0.
· Our network is 192.168.1.0/24.
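The division above is easy to sanity-check. A quick sketch (assuming a POSIX shell with awk available):

```shell
# On a ppp (serial) link each byte costs 10 bits on the wire
# (8 data + 1 start + 1 stop); on ethernet we count 8 bits per byte.
link_kbit=115.2
awk -v k="$link_kbit" 'BEGIN { printf "ppp: %.2f kbyte/s\n", k / 10 }'   # ppp: 11.52 kbyte/s
awk -v k="$link_kbit" 'BEGIN { printf "eth: %.1f kbyte/s\n", k / 8 }'    # eth: 14.4 kbyte/s
```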
2.1. What do we need
Believe it or not, shaping the incoming traffic is an easy task and you don't have to read tons of books about
routing or queuing algorithms.
To make it work, we need at least the Squid proxy; if we want to fine-tune things, we will also have to get
familiar with ipchains or iptables and CBQ.
To test our efforts, we can install IPTraf.
2.2. How does it work?
Squid is probably the most advanced HTTP proxy server available for Linux. It can help us save bandwidth
in two ways:
· The first is the main characteristic of proxy servers -- they keep downloaded web pages, pictures, and
other objects in memory or on disk. So, if two people request the same web page, it isn't
downloaded from the internet again, but served from the local proxy.
· Apart from normal caching, Squid has a special feature called delay pools. Thanks to delay pools, it
is possible to limit internet traffic in a reasonable way, depending on so-called 'magic words'
occurring in any given URL. For example, a magic word could be '.mp3', '.exe' or '.avi'; any
distinct part of a URL (such as .avi) can be defined as a magic word.
With that, we can tell Squid to download these kinds of files at a specified speed (in our example, it will
be about 5 kbyte/s). If several LAN users download files at the same time, the files will be downloaded at about 5
kbyte/s altogether, leaving the remaining bandwidth for web pages, e-mail, news, irc, etc.
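To get a feel for these numbers (a sketch only, reusing the 115.2 kbit/s modem link from the example above and the 5 kbyte/s cap):

```shell
# With the whole link at ~11.52 kbyte/s and bulk downloads capped at
# about 5 kbyte/s in aggregate, roughly this much is always left over
# for web pages, e-mail, news, irc, etc.:
awk 'BEGIN { printf "%.2f kbyte/s left for interactive traffic\n", 115.2 / 10 - 5 }'
```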
Of course, the Internet is not only used for downloading files via web pages (http or ftp). Later on, we will
deal with limiting bandwidth for Napster, Realaudio, and other possibilities.
3. Installing and Configuring Necessary Software
Here, I will explain how to install the necessary software so that we can limit and test the bandwidth usage.
3.1. Installing Squid with the delay pools feature
As I mentioned before, Squid has a feature called delay pools, which allows us to control download
bandwidth. Unfortunately, in most distributions, Squid is shipped without that feature.
So if you have Squid already installed, I must disappoint you −− you need to uninstall it and do it once again
with delay pools enabled in the way I explain below.
To get maximum performance from our Squid proxy, it's best to create a separate partition for its
cache, called /cache/. Its size should be about 300 megabytes, depending on our needs.

1. If you don't know how to make a separate partition, you can create the /cache/ directory on the main
partition instead, but Squid performance may suffer a bit.
2. We add a safe 'squid' user:

# useradd -d /cache/ -r -s /dev/null squid >/dev/null 2>&1

No one can log in as squid, including root.

3. We download the Squid sources from http://www.squid-cache.org. When I was writing this HOWTO,
the latest version was Squid 2.4 stable 1:
http://www.squid-cache.org/Versions/v2/2.4/squid-2.4.STABLE1-src.tar.gz
4. We unpack everything to /var/tmp:

# tar xzpf squid-2.4.STABLE1-src.tar.gz

5. We compile and install Squid (the configure command is all one line):

# ./configure --prefix=/opt/squid --exec-prefix=/opt/squid --enable-delay-pools --enable-cache-digests --enable-poll --disable-ident-lookups --enable-truncate --enable-removal-policies
# make all
# make install
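Before going any further, it is worth confirming that delay pools really were compiled in. Running squid -v prints the configure options the binary was built with; the snippet below greps a captured sample of that output (the sample string is illustrative -- on your server, pipe the real /opt/squid/bin/squid -v through the same grep):

```shell
# Check that --enable-delay-pools made it into the build. Here we grep
# a captured sample of "squid -v" output; on the real server run:
#   /opt/squid/bin/squid -v | grep delay-pools
sample='Squid Cache: Version 2.4.STABLE1
configure options: --prefix=/opt/squid --enable-delay-pools --enable-cache-digests'
if echo "$sample" | grep -q 'enable-delay-pools'; then
    echo "delay pools: built in"
else
    echo "delay pools: MISSING, recompile Squid"
fi
```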
3.2. Configuring Squid to use the delay pools feature
1. Configure our squid.conf file (located under /opt/squid/etc/squid.conf):
#squid.conf
#Every option in this file is very well documented in the original squid.conf file
#and on http://www.visolve.com/squidman/Configuration%20Guide.html
#
#The ports our Squid will listen on.
http_port 8080
icp_port 3130
#cgi−bins will not be cached.
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
#The amount of memory Squid will use. Well, Squid will use far more than that.
cache_mem 16 MB
#250 means that Squid will use 250 megabytes of disk space.
cache_dir ufs /cache 250 16 256
#Places where Squid's logs will go to.
cache_log /var/log/squid/cache.log
cache_access_log /var/log/squid/access.log
cache_store_log /var/log/squid/store.log
cache_swap_log /var/log/squid/swap.log
#How many times to rotate the logs before deleting them.
#See the FAQ for more info.
logfile_rotate 10
redirect_rewrites_host_header off
cache_replacement_policy GDSF
acl localnet src 192.168.1.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl Safe_ports port 80 443 210 119 70 20 21 1025-65535
acl CONNECT method CONNECT
acl all src 0.0.0.0/0.0.0.0
http_access allow localnet
http_access allow localhost
http_access deny !Safe_ports
http_access deny CONNECT
http_access deny all
maximum_object_size 3000 KB
store_avg_object_size 50 KB
#Set these if you want your proxy to work in a transparent way.
#A transparent proxy means you generally don't have to configure all
#your clients' browsers, but it has some drawbacks too.
#Leaving these uncommented won't do any harm.
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
#All our LAN users will be seen by external web servers
#as if they all used Mozilla on Linux. :)
anonymize_headers deny User-Agent
fake_user_agent Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.6+) Gecko/20011122
#To make our connection even faster, we put in two lines similar
#to the ones below. They point to a parent proxy server that our own
#Squid will use. Don't forget to change the server to the one that is
#fastest for you!
#Measure pings, traceroutes and so on.
#Make sure the http and icp ports are correct.
#Uncomment the lines beginning with "cache_peer" if necessary.
#This is the proxy you are going to use for all connections...
#cache_peer w3cache.icm.edu.pl parent 8080 3130 no-digest default
#...except for connections to addresses and IPs beginning with "!".
#It's a good idea not to use a higher-level (parent) proxy
#for destinations that are close to us.
#cache_peer_domain w3cache.icm.edu.pl !.pl !7thguard.net !192.168.1.1
#This is useful when we want to use the Cache Manager.
#Copy cachemgr.cgi to the cgi-bin directory of your www server.
#You can then reach it via a web browser by typing
#the address http://your-web-server/cgi-bin/cachemgr.cgi
cache_mgr your@email
cachemgr_passwd secret_password all
#This is the name of the user our Squid will run as.
cache_effective_user squid
cache_effective_group squid
log_icp_queries off
buffered_logs on
#####DELAY POOLS
#This is the most important part for shaping incoming traffic with Squid.
#For a detailed description, see the squid.conf file or the docs at http://www.squid-cache.org
#We don't want to limit downloads on our local network.
acl magic_words1 url_regex -i 192.168
#We want to limit downloads of these types of files.
#Put this all on one line:
acl magic_words2 url_regex -i ftp .exe .mp3 .vqf .tar.gz .gz .rpm .zip .rar .avi .mpeg .mpe .ram .rm .iso .raw .wav .mov
#We don't block .html, .gif, .jpg and similar files, because they
#generally don't consume much bandwidth.
#We want to limit bandwidth during the day, and allow
#full bandwidth during the night.
#Caution! With the acl below, your downloads are likely to break
#at 23:59. Read the FAQ in this HOWTO if you want to avoid it.
acl day time 09:00-23:59
#We have two different delay_pools
#View Squid documentation to get familiar
#with delay_pools and delay_class.
delay_pools 2
#First delay pool
#We don't want to delay our local traffic.
#There are three pool classes; here we will deal only with the second.
#First delay class (1) of second type (2).
delay_class 1 2
#-1/-1 means that there are no limits.
delay_parameters 1 -1/-1 -1/-1
#magic_words1: 192.168 we have set before
delay_access 1 allow magic_words1
#Second delay pool.
#we want to delay downloading files mentioned in magic_words2.
#Second delay class (2) of second type (2).
delay_class 2 2
#The numbers here are values in bytes;
#we must remember that Squid doesn't consider start/stop bits.
#5000/150000 are the values for the whole network;
#5000/120000 are the values for a single IP.
#After downloads exceed about 150000 bytes
#(or even two or three times as much),
#they will continue at about 5000 bytes/s.
delay_parameters 2 5000/150000 5000/120000
#We have set day to 09:00−23:59 before.
delay_access 2 allow day
delay_access 2 deny !day
delay_access 2 allow magic_words2
#EOF
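To see what these numbers mean in practice, here is a rough back-of-the-envelope calculation (a sketch only; the real behaviour also depends on how full the token bucket is and on competing downloads):

```shell
# delay_parameters 2 5000/150000 ...: for the whole network, roughly
# the first 150000 bytes of bulk downloads arrive at full link speed,
# and everything after that trickles in at about 5000 bytes/s.
file_bytes=$((10 * 1024 * 1024))   # say, a 10 MB .iso
burst=150000
rate=5000
secs=$(( (file_bytes - burst) / rate ))
echo "a 10 MB file needs about $((secs / 60)) minutes after the initial burst"
```

This is exactly the point of delay pools: one big download stays slow all day, while small objects (web pages, e-mail) still come through quickly.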
OK, when we have configured everything, we must make sure that everything under the /opt/squid and
/cache directories belongs to the user 'squid':

# mkdir /var/log/squid/
# chown squid:squid /var/log/squid/
# chmod 770 /var/log/squid/
# chown -R squid:squid /opt/squid/
# chown -R squid:squid /cache/
Now everything is ready to run Squid. When we do it for the first time, we have to create its cache
directories:
# /opt/squid/bin/squid -z
We run Squid and check if everything is working. A good tool to do that is IPTraf; you can find it on
http://freshmeat.net. Make sure you have set the appropriate proxy in your web browsers
(192.168.1.1, port 8080 in our example):
# /opt/squid/bin/squid
If everything is working, we add the /opt/squid/bin/squid line to the end of our initialization
scripts. Usually, this is /etc/rc.d/rc.local.
Other helpful options in Squid may be:
# /opt/squid/bin/squid -k reconfigure (reconfigures Squid if we made any changes to its
squid.conf file)
# /opt/squid/bin/squid -help (self-explanatory)
You can also copy cachemgr.cgi to the cgi−bin directory of your WWW server, to make use of a
useful Cache Manager.
3.3. Solving remaining problems
OK, we have installed Squid and configured it to use delay pools. I bet nobody wants to be restricted,
especially our clever LAN users. They will likely try to get around our limits, just to download their favourite
mp3s a little faster (and thus give you a headache).

I assume that you use IP-masquerade on your LAN so that your users can use IRC, ICQ, e-mail, etc.
That's fine, but we must make sure that our LAN users use our delay-pooled Squid to access web pages
and ftp.
We can solve most of these problems by using ipchains (Linux 2.2.x kernels) or iptables (Linux 2.4.x
kernels).
3.3.1. Linux 2.2.x kernels (ipchains)
We must make sure that nobody tries to cheat by using a proxy server other than ours. Public proxies
usually run on ports 3128 and 8080:

/sbin/ipchains -A input -s 192.168.1.1/24 -d ! 192.168.1.1 3128 -p TCP -j REJECT
/sbin/ipchains -A input -s 192.168.1.1/24 -d ! 192.168.1.1 8080 -p TCP -j REJECT

We must also make sure that nobody tries to cheat by connecting to the internet directly (via IP-masquerade) to
download web pages:

/sbin/ipchains -A input -s 192.168.1.1/24 -d ! 192.168.1.1 80 -p TCP -j REDIRECT 8080
If everything is working, we add these lines to the end of our initialization scripts. Usually, this is
/etc/rc.d/rc.local.
We might think of blocking ftp traffic (ports 20 and 21) to force our LAN users to use Squid, but it's not a good
idea, for at least two reasons:

· Squid is an http proxy with ftp support, not a real ftp proxy. It can download from ftp and it can
upload to ftp, but it can't delete or rename files on remote ftp servers.
If we block ports 20 and 21, we won't be able to delete or rename files on remote
ftp servers.
· IE5.5 has a bug: it doesn't use the proxy to retrieve ftp directory listings. Instead, it connects directly via
IP-masquerade.
If we block ports 20 and 21, we won't be able to browse ftp directories using IE5.5.

So, we will limit excessive ftp downloads using other methods. We will deal with this in chapter 4.
3.3.2. Linux 2.4.x kernels (iptables)
We must make sure that nobody tries to cheat by using a proxy server other than ours. Public proxies
usually run on ports 3128 and 8080:

/sbin/iptables -A FORWARD -s 192.168.1.1/24 -d ! 192.168.1.1 --dport 3128 -p TCP -j DROP
/sbin/iptables -A FORWARD -s 192.168.1.1/24 -d ! 192.168.1.1 --dport 8080 -p TCP -j DROP

We must also make sure that nobody tries to cheat by connecting to the internet directly (via IP-masquerade) to
download web pages:

/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
If everything is working, we add these lines to the end of our initialization scripts. Usually, this is
/etc/rc.d/rc.local.
As in the ipchains section above, we might think of blocking ftp traffic (ports 20 and 21) to force our LAN
users to use Squid, but it's not a good idea, for the same two reasons: Squid is an http proxy with ftp support
rather than a real ftp proxy (it can't delete or rename files on remote ftp servers), and the buggy IE5.5
retrieves ftp directory listings directly via IP-masquerade, so blocking those ports would break ftp browsing
for its users.

So, we will limit excessive ftp downloads using other methods. We will deal with this in chapter 4.
4. Dealing with Other Bandwidth-consuming Protocols Using CBQ
We must remember that our LAN users can spoil our efforts from chapter 3 if they use Napster, Kazaa or
Realaudio. We must also remember that we didn't block ftp traffic in section 3.3.
We will achieve this in a different way -- not by limiting downloads directly, but indirectly. If our
internet device is ppp0 and our LAN device is eth0, we will limit outgoing traffic on interface eth0 and thus
limit incoming traffic on ppp0.
To do this, we will get familiar with CBQ and the cbq.init script. You can obtain the script from
ftp://ftp.equinox.gu.net/pub/linux/cbq/. Download cbq.init-v0.6.2 and put it in /etc/rc.d/.
You will also need iproute2 installed; it comes with every Linux distribution.

Now look in your /etc/sysconfig/cbq/ directory. There, you should have an example file that
works with cbq.init. If it isn't there, you probably don't have CBQ compiled into your kernel, nor is it
present as a module. In any case, just create that directory, put in the example files provided below, and see if
it works for you.
4.1. FTP
In chapter 3, we didn't block ftp for two reasons −− so that we could do uploads, and so that users with buggy
IE5.5 could browse through ftp directories. In all, our web browsers and ftp programs should make
downloads via our Squid proxy and ftp uploads/renaming/deleting should be made via IP−masquerade.
We create a file called cbq-10.ftp-network in the /etc/sysconfig/cbq/ directory:
# touch /etc/sysconfig/cbq/cbq-10.ftp-network
We insert the following lines into it:
DEVICE=eth0,10Mbit,1Mbit
RATE=15Kbit
WEIGHT=1Kbit
PRIO=5
RULE=:20,192.168.1.0/24
RULE=:21,192.168.1.0/24
You will find a description of these lines in the cbq.init-v0.6.2 file.
When you start the /etc/rc.d/cbq.init-v0.6.2 script, it will read your configuration from
/etc/sysconfig/cbq/:
# /etc/rc.d/cbq.init-v0.6.2 start
If everything is working, add /etc/rc.d/cbq.init-v0.6.2 start to the end of your initialization
scripts. Usually, this is /etc/rc.d/rc.local.
Thanks to this setup, your server will not send ftp data through eth0 faster than about 15 kbits/s, and
thus will not download ftp data from the internet faster than 15 kbits/s. Your LAN users will see that it's
more efficient to use the Squid proxy for ftp downloads. They will also be able to browse ftp directories
using their buggy IE5.5.
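To put RATE=15Kbit in perspective, a quick conversion (assuming, as tc does, that 1 Kbit means 1000 bits):

```shell
# Convert the 15 Kbit/s CBQ rate into bytes per second.
rate_bits=15000                  # RATE=15Kbit, with 1 Kbit = 1000 bits
rate_bytes=$((rate_bits / 8))
echo "${rate_bytes} bytes/s"
```

That is roughly 1.8 KB/s per direction — deliberately slower than downloading the same file through the Squid proxy.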
There is also another bug in IE5.5: when you right-click on a file in an ftp directory and select 'Copy To
Folder', the file is downloaded not through the proxy, but directly through IP-masquerade, thus bypassing
Squid with delay pools.
4.2. Napster, Realaudio, Windows Media and other issues
Here, the idea is the same as with ftp; we just add more ports and set a different speed.
We create a file called cbq-50.napster-network in the /etc/sysconfig/cbq/ directory:
# touch /etc/sysconfig/cbq/cbq-50.napster-network
Put these lines into that file:
DEVICE=eth0,10Mbit,1Mbit
RATE=35Kbit
WEIGHT=3Kbit
PRIO=5
#Windows Media Player.
RULE=:1755,192.168.1.0/24
#Real Player uses TCP port 554; for UDP it uses different ports,
#but generally RealAudio over UDP doesn't consume much bandwidth.
RULE=:554,192.168.1.0/24
RULE=:7070,192.168.1.0/24
#Napster uses ports 6699 and 6700, maybe some others?
RULE=:6699,192.168.1.0/24
RULE=:6700,192.168.1.0/24
#Audiogalaxy uses ports from 41000 up to probably 41900.
#There are many of them, so keep in mind I didn't list all of
#them here. Repeating 900 nearly identical lines would of course
#be pointless. We will simply cut out ports 41031-41900 using
#ipchains or iptables.
RULE=:41000,192.168.1.0/24
RULE=:41001,192.168.1.0/24
#continue from 41001 to 41030
RULE=:41030,192.168.1.0/24
#Some clever users can connect to SOCKS servers when using Napster,
#Audiogalaxy etc.; it's also a good idea to add this rule
#when you run your own SOCKS proxy
RULE=:1080,192.168.1.0/24
#Add any other ports you want; you can easily check and track
#the ports that programs use with IPTraf
#RULE=:port,192.168.1.0/24
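Instead of typing the thirty nearly identical Audiogalaxy RULE lines by hand, you can generate them with a short shell loop (a convenience sketch; the port range and network match the config above):

```shell
# Print RULE lines for ports 41000-41030; redirect the output into
# /etc/sysconfig/cbq/cbq-50.napster-network instead of typing them.
for port in $(seq 41000 41030); do
    echo "RULE=:${port},192.168.1.0/24"
done
```

Append the output to the config file with >> rather than pasting it manually.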
Don't forget to cut out the remaining Audiogalaxy ports (41031-41900) using ipchains (kernels 2.2.x) or
iptables (kernels 2.4.x).
Kernels 2.2.x:
/sbin/ipchains -A input -s 192.168.1.0/24 -d ! 192.168.1.1 41031:41900 -p TCP -j REJECT
Kernels 2.4.x:
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -d ! 192.168.1.1 -p TCP --dport 41031:41900 -j REJECT
Don't forget to add a proper line to your initializing scripts.
5. Frequently Asked Questions
5.1. Is it possible to limit bandwidth on a per-user basis with delay pools?
Yes. Look inside the original squid.conf file and check the Squid documentation at
http://www.squid-cache.org
5.2. How do I make wget work with Squid?
It's simple. Create a file called .wgetrc and put it in your home directory. Insert the following lines in it
and that's it!
HTTP_PROXY=192.168.1.1:8080
FTP_PROXY=192.168.1.1:8080
You can make it work globally for all users, type man wget to learn how.
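A quick way to create that file from the shell (the proxy address 192.168.1.1:8080 matches the one used throughout this HOWTO):

```shell
# Write a per-user wgetrc pointing both http and ftp retrievals at Squid.
cat > "$HOME/.wgetrc" <<'EOF'
http_proxy = http://192.168.1.1:8080/
ftp_proxy = http://192.168.1.1:8080/
EOF
```

After this, plain wget http://some.server/file will go through the proxy and be subject to the delay pools.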
5.3. I set up my own SOCKS server listening on port 1080, and now I'm not able to connect to any irc server.
There can be two issues here.
One is when your SOCKS proxy is an open relay, meaning everyone can use it from anywhere in the world.
This is a security issue, and you should check your SOCKS proxy configuration again; irc servers generally
don't allow open-relay SOCKS servers to connect to them.
If you are sure your SOCKS server isn't an open relay, you may still be unable to connect to some irc
servers, because most of them simply check whether a SOCKS server is running on port 1080 of the client
that is connecting. In that case, just reconfigure your SOCKS server to work on a different port. You will
also have to reconfigure your LAN software to use the proper SOCKS server and port.
5.4. I don't like it when Kazaa or Audiogalaxy fills up all my upload bandwidth.
Indeed, that can be painful, but it's simple to solve.
Create a file called, for example, /etc/sysconfig/cbq/cbq-15.ppp.
Insert the following lines into it, and Kazaa or Audiogalaxy will upload no faster than about 15 kbits/s. I
assume that your outgoing internet interface is ppp0.
DEVICE=ppp0,115Kbit,11Kbit
RATE=15Kbit
WEIGHT=2Kbit
PRIO=5
TIME=01:00-07:59;110Kbit/11Kbit
RULE=,:21
RULE=,213.25.25.101
RULE=,:1214
RULE=,:41000
RULE=,:41001
#And so on till :41030
RULE=,:41030
Bandwidth Limiting HOWTO
5.5. My outgoing mail server is eating up all my bandwidth.
You can limit your SMTP server (Postfix, Sendmail, or whatever) in a way similar to the question above.
Just change or add one rule:
RULE=,:25
Moreover, if you have an SMTP server, you can force your local LAN users to use it, even if they have
set their own SMTP server to smtp.some.server! We'll do it in the same transparent way we did before with
Squid.
2.2.x kernels:
/sbin/ipchains -A input -s 192.168.1.0/24 -d ! 192.168.1.1 25 -p TCP -j REDIRECT 25
2.4.x kernels:
/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j REDIRECT --to-port 25
Don't forget to add a proper line to your initialization scripts.
5.6. Can I limit my own FTP or WWW server in a manner similar to what is shown in the question above?
Generally you can, but usually these servers have their own bandwidth-limiting configuration, so you
will probably want to look into their documentation.
5.7. Is it possible to limit bandwidth on a per-user basis with the cbq.init script?
Yes. Look inside this script; there are some examples.
5.8. Whenever I start cbq.init, it says sch_cbq is missing.
You probably don't have the CBQ modules on your system. If you have CBQ compiled into your kernel
instead, comment out the following lines in your cbq.init-v0.6.2 script:
### If you have cbq, tbf and u32 compiled into kernel, comment it out
#for module in sch_cbq sch_tbf sch_sfq sch_prio cls_u32; do
# if ! modprobe $module; then
# echo "**CBQ: could not load module $module"
# exit
# fi
#done
5.9. CBQ sometimes doesn't work for no reason.
Generally this shouldn't happen. Sometimes, though, you may observe mass downloads even when you think
you have blocked all the ports Napster or Audiogalaxy uses. Well, there is always one more port open for
mass downloads. To find it, you can use IPTraf. As there can be thousands of such ports, this can be a really
hard task. To make it easier, consider running your own SOCKS proxy: Napster, Audiogalaxy and many
other programs can use SOCKS proxies, so it's much easier to deal with just one port than with thousands
of possibilities (the standard SOCKS port is 1080; if you run your own SOCKS proxy server, you can set it
up on a different port, or run multiple instances listening on different ports). Don't forget to close all other
ports for traffic, leaving open ports like 25 and 110 (SMTP and POP3) and others you think might be useful.
You will find a link to the awesome Nylon socks proxy server at the end of this HOWTO.
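The "close all ports" step can be sketched with a default-deny forward policy on 2.4.x kernels. This is illustrative only — the allowed ports and the state rule are assumptions you should adapt to your own setup:

```shell
# Illustrative 2.4.x sketch (requires root): drop forwarded LAN traffic
# by default and allow only mail through IP-masquerade. Squid and a
# local SOCKS proxy keep working, because connections to the server
# itself go through the INPUT chain, not FORWARD.
/sbin/iptables -P FORWARD DROP
/sbin/iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -p tcp --dport 25 -j ACCEPT
/sbin/iptables -A FORWARD -s 192.168.1.0/24 -p tcp --dport 110 -j ACCEPT
```

With this in place, any new file-sharing port a program invents is blocked by default instead of having to be hunted down with IPTraf.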
5.10. Delay pools are stupid; why can't I download something at full speed when the network is used only by
me?
Unfortunately, you can't do much about it.
The only thing you can do is use cron to reconfigure Squid, for example at 1:00 am, so that Squid won't use
delay pools, and then reconfigure it again, let's say at 7:30 am, to use delay pools.
To do this, create two separate config files, called for example squid.conf-day and
squid.conf-night, and put them into /opt/squid/etc/.
squid.conf-day would be an exact copy of the config we created earlier.
squid.conf-night, on the contrary, would not have any delay pool lines, so all you have to do is
comment them out.
The next thing to do is set up your /etc/crontab entries correctly.
Edit /etc/crontab and put the following lines there:
#SQUID - night and day config change
01 9 * * * root /bin/cp -f /opt/squid/etc/squid.conf-day /opt/squid/etc/squid.conf; /opt/squid/bin/squid -k reconfigure
59 23 * * * root /bin/cp -f /opt/squid/etc/squid.conf-night /opt/squid/etc/squid.conf; /opt/squid/bin/squid -k reconfigure
5.11. My downloads break at 23:59 with "acl day time 09:00-23:59" in squid.conf. Can I do something about it?
You can fix this by removing that acl from your squid.conf, along with the "delay_access 2 allow dzien"
and "delay_access 2 deny !dzien" lines.
Then try to do it with cron as in the question above.
5.12. Squid's logs grow and grow very fast; what can I do about it?
Indeed, the more users you have, the more (sometimes useful) information will be logged.
The best way to tame this is to use logrotate, but you'll have to do a little trick to make it work with
Squid: proper cron and logrotate entries.
/etc/crontab entries:
#SQUID - logrotate
01 4 * * * root /opt/squid/bin/squid -k rotate; /usr/sbin/logrotate /etc/logrotate.conf; /bin/rm ...
Here we have caused logrotate to start daily at 04:01 am, so remove any remaining logrotate starting points,
for example from /etc/cron.daily/.
/etc/logrotate.d/syslog entries:
#SQUID logrotate - will keep logs for 40 days
/var/log/squid/*.log.0 {
rotate 40
compress
daily
postrotate
/usr/bin/killall −HUP syslogd
endscript
}
5.13. CBQ is stupid; why can't I download something at full speed when the network is used only by me?
Lucky you, it's possible!
There are two ways to achieve it.
The first is the easy one, similar to the solution we used with Squid. Insert a line like the one below into
your CBQ config files in /etc/sysconfig/cbq/:
TIME=00:00-07:59;110Kbit/11Kbit
You can have multiple TIME parameters in your CBQ config files.
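For example, a config file that lifts the limit both overnight and during a lunch break could contain the following two lines (the second window is purely illustrative):

```
TIME=00:00-07:59;110Kbit/11Kbit
TIME=12:00-12:59;110Kbit/11Kbit
```

Each TIME entry gives the time range and the rate/weight pair to apply within it.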
Be careful though, because there is a small bug in the cbq.init-v0.6.2 script: it won't let you set
certain times, for example 00:00-08:00! To make sure everything is working correctly, start
cbq.init-v0.6.2 and then, within the time range you set, type:
/etc/rc.d/cbq.init-v0.6.2 timecheck
Here is an example of how proper output should look:
[root@mangoo rc.d]# ./cbq.init start; ./cbq.init timecheck
**CBQ: 3:44: class 10 on eth0 changed rate (20Kbit -> 110Kbit)
**CBQ: 3:44: class 40 on ppp0 changed rate (15Kbit -> 110Kbit)
**CBQ: 3:44: class 50 on eth0 changed rate (35Kbit -> 110Kbit)
In this example, something went wrong, probably in the second config file in /etc/sysconfig/cbq/
(second counting from the lowest number in its name):
[root@mangoo rc.d]# ./cbq.init start; ./cbq.init timecheck
**CBQ: 3:54: class 10 on eth0 changed rate (20Kbit -> 110Kbit)
./cbq.init: 08: value too great for base (error token is "08")
The "value too great for base" message comes from shell arithmetic: a leading zero makes the shell parse
08 as an invalid octal number, which is why times like 00:00-08:00 trigger the bug.
The second way to make CBQ more intelligent is harder; it doesn't depend on time. You can read about it in
the Linux 2.4 Advanced Routing HOWTO and play with the tc command.
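The time-independent approach relies on CBQ's borrowing: a class that is guaranteed only a small rate, but is not declared "bounded", may borrow idle bandwidth from its parent. A hedged tc sketch under those assumptions (option values are illustrative; see the Advanced Routing HOWTO for details):

```shell
# Illustrative only (requires root): the class below is guaranteed
# 15Kbit, but because it is NOT marked "bounded", it may borrow up to
# the parent's full 10Mbit when the link is otherwise idle.
tc qdisc add dev eth0 root handle 1: cbq bandwidth 10Mbit avpkt 1000
tc class add dev eth0 parent 1: classid 1:10 cbq bandwidth 10Mbit \
    rate 15Kbit allot 1514 prio 5 avpkt 1000
```

Compare this with the "bounded" classes cbq.init sets up, which are hard-capped at their rate regardless of how idle the link is.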
6. Miscellaneous
6.1. Useful resources
Squid Web Proxy Cache
http://www.squid-cache.org
Squid 2.4 Stable 1 Configuration manual
http://www.visolve.com/squidman/Configuration%20Guide.html
http://www.visolve.com/squidman/Delaypool%20parameters.htm
Squid FAQ
http://www.squid-cache.org/Doc/FAQ/FAQ-19.html#ss19.8
cbq.init script
ftp://ftp.equinox.gu.net/pub/linux/cbq/
Linux 2.4 Advanced Routing HOWTO
http://www.linuxdoc.org/HOWTO/Adv-Routing-HOWTO.html
Traffic control (in Polish)
http://ceti.pl/~kravietz/cbq/
Securing and Optimizing Linux Red Hat Edition - A Hands on Guide
http://www.linuxdoc.org/guides.html
IPTraf
http://cebu.mozcom.com/riker/iptraf/
IPCHAINS
http://www.linuxdoc.org/HOWTO/IPCHAINS-HOWTO.html
Nylon socks proxy server
http://mesh.eecs.umich.edu/projects/nylon/
Indonesian translation of this HOWTO by Rahmat Rafiudin (mjl_id@yahoo.com)
http://raf.unisba.ac.id/resources/BandwidthLimitingHOWTO/index.html