The document summarizes a seminar presentation on the World Wide Web (WWW). It discusses the basic client-server architecture of the WWW, with servers hosting documents and clients providing interfaces for users. It also covers the evolution of the WWW to include distributed services beyond just documents. Traditional web systems are described as using simple client-server models with URLs to locate documents on servers. Key aspects like HTTP, document models, and scripting technologies are summarized. Security measures for web transactions like TLS and aspects of caching, replication, and content delivery are also outlined.
HTTP is a stateless protocol that uses a request/response model for communication. A client sends a request via a URL to a server, which responds with status codes and content. Common request methods include GET, POST, PUT, DELETE. Responses have status codes like 200 for success and 404 for not found. Caching of responses helps improve performance. HTTPS provides encryption for secure communication via SSL/TLS certificates.
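The request/response exchange described above can be sketched in a few lines. This is a minimal illustration, not real network code: the host, path, and response strings are made-up examples.

```python
# Minimal sketch of the HTTP request/response model: build a raw request
# message, then parse the status code out of a raw response.

def build_request(method: str, path: str, host: str) -> str:
    """Build a raw HTTP/1.1 request message (request line + headers)."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

def parse_status(response: str) -> int:
    """Extract the numeric status code from a raw HTTP response."""
    status_line = response.split("\r\n", 1)[0]   # e.g. "HTTP/1.1 200 OK"
    return int(status_line.split(" ")[1])

request = build_request("GET", "/index.html", "example.com")
print(request.splitlines()[0])        # the request line

ok = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
missing = "HTTP/1.1 404 Not Found\r\n\r\n"
print(parse_status(ok), parse_status(missing))   # 200 404
```

The same status codes mentioned above (200 for success, 404 for not found) appear in the first line of every response message.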
HTTP is the protocol of the web, and in this session we will look at HTTP from a web developer's perspective. We will cover resources, messages, cookies, and authentication protocols, and we will see how the web scales to meet demand using cache headers. Armed with these HTTP fundamentals, you will have the knowledge not only to build better web and mobile applications but also to consume Web APIs.
The document summarizes a presentation on web technology focusing on HTTP servers and protocols. It discusses how the internet and world wide web work, URL structures, HTTP as the application layer protocol for web clients and servers in a request-response model, and how HTTP connections can be non-persistent or persistent. It also covers HTTP request and response messages, status codes, cookies for maintaining state, and web caching.
HTTP is a protocol for transmitting hypermedia documents across the internet. It uses a client-server model where browsers make HTTP requests to web servers, which respond with HTTP responses. Key aspects of HTTP include using TCP/IP for communication, being stateless, supporting a variety of data types, and incorporating features of both FTP and SMTP protocols.
HTTP is an application-layer protocol for transmitting hypermedia documents across the internet. It is a stateless protocol that can be used on any reliable transport layer. HTTP uses requests and responses between clients and servers, with common methods including GET, POST, PUT, DELETE. It supports features like caching, cookies, authentication, and more to enable the web as we know it.
Improving access latency of web browser by using content aliasing (IAEME Publication)
This document summarizes a research paper that proposes a methodology to improve web browser access latency by using content aliasing in a proxy cache server. The methodology works by analyzing cached content, identifying duplicate content, and creating soft links to that duplicate content. This avoids storing the same content multiple times in the cache, saving storage space. The document provides background on issues like increased access latency from high web traffic. It also reviews related work on caching approaches, web traffic analysis, and algorithms like MD5 that are relevant to the proposed methodology.
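The core idea of content aliasing can be sketched briefly: hash each cached body, and when a new URL maps to an already-stored body, record an alias instead of a second copy. This is an in-memory illustration under invented names; a real proxy cache would use files and soft links on disk.

```python
# Illustrative sketch of content aliasing: detect duplicate cached bodies
# by MD5 digest and store an alias instead of a second copy.
import hashlib

store = {}    # digest -> content, stored once
aliases = {}  # url -> digest, the "soft link": many URLs, one body

def cache_put(url: str, content: bytes) -> bool:
    """Cache content for a URL; return True if a new copy was stored."""
    digest = hashlib.md5(content).hexdigest()
    aliases[url] = digest
    if digest in store:        # duplicate body: alias only, no extra storage
        return False
    store[digest] = content
    return True

def cache_get(url: str) -> bytes:
    """Resolve the alias and return the (single) stored body."""
    return store[aliases[url]]

body = b"<html>same page served under two URLs</html>"
cache_put("http://a.example/page", body)
stored_again = cache_put("http://b.example/mirror", body)  # False: aliased
print(len(store), stored_again)
```

Two URLs now resolve to one stored body, which is exactly the storage saving the paper's methodology targets.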
HTTP is the set of rules for transferring data across the World Wide Web. It uses clients like web browsers to make requests to servers using URLs over TCP/IP. HTTP defines request and response messages with request methods like GET and POST and response status codes. HTTP/1.1 supports persistent connections and caching via proxy servers for improved performance over HTTP/1.0.
HTTP defines a client-server model for communication between browsers and web servers. A browser sends HTTP requests to a web server for web pages and objects. The server responds with HTTP responses containing the requested objects. HTTP uses TCP for reliable transmission and defines request and response message formats. Requests contain headers like Accept specifying object types. Responses contain status codes, headers like Content-Type, and the requested object data.
HTTP is the main protocol for transmitting web content. It uses clients, like web browsers, to send requests to servers storing resources. Requests use HTTP methods like GET and servers return responses with status codes. Transactions are conducted through formatted HTTP messages containing request commands and response results. HTTP relies on TCP for reliable data transmission and can use proxies, caches, and gateways to improve performance and security.
A web service allows for data transfer between platforms or languages. It uses PHP code to perform operations like inserting, deleting, fetching, and updating data in a database. The web service code connects to the database using a connection file that contains login credentials. It then decodes request parameters, fires SQL queries to perform the requested operation, and encodes the response. URLs are used to check the operation by passing parameters to specify things like the table row ID, field names, and new values.
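That pattern (decode request parameters, run the matching SQL operation, encode the response) can be sketched as follows. This uses Python with an in-memory sqlite3 database rather than the PHP/MySQL stack the document describes, and the table and parameter names are invented for illustration.

```python
# Sketch of the web-service request cycle: decode parameters, dispatch to
# a SQL operation, and encode the result as JSON.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def handle(params: dict) -> str:
    """Dispatch on the 'op' parameter and return a JSON-encoded response."""
    op = params.get("op")
    if op == "insert":
        cur = conn.execute("INSERT INTO users (name) VALUES (?)",
                           (params["name"],))
        return json.dumps({"ok": True, "id": cur.lastrowid})
    if op == "fetch":
        row = conn.execute("SELECT id, name FROM users WHERE id = ?",
                           (params["id"],)).fetchone()
        return json.dumps({"ok": row is not None, "row": row})
    return json.dumps({"ok": False, "error": "unknown op"})

print(handle({"op": "insert", "name": "alice"}))
print(handle({"op": "fetch", "id": 1}))
```

In the document's setup the `params` dict would come from the request URL's query parameters, and the connection details would live in a separate credentials file.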
This document discusses the process of web crawling and building a web crawler. It describes how crawlers work by starting with a set of URLs, fetching web pages from those URLs, extracting new URLs from the pages, adding them to the list to crawl, and repeating the process. It also discusses important considerations for building large-scale crawlers, such as handling concurrent requests efficiently, avoiding duplicate URLs, managing server load, and storing crawled pages reliably at a large scale.
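The crawl loop described above (start from seed URLs, fetch, extract links, skip duplicates, repeat) reduces to a short breadth-first traversal. In this self-contained sketch a dictionary stands in for real HTTP fetching, and the URLs are invented.

```python
# A toy version of the crawl loop: frontier queue + visited set.
from collections import deque

fake_web = {
    "http://a": ["http://b", "http://c"],
    "http://b": ["http://a", "http://c"],
    "http://c": [],
}

def crawl(seeds):
    """Breadth-first crawl; returns URLs in the order they were fetched."""
    seen = set(seeds)
    frontier = deque(seeds)
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)                    # "fetch" the page
        for link in fake_web.get(url, []):   # extract its links
            if link not in seen:             # avoid duplicate URLs
                seen.add(link)
                frontier.append(link)
    return order

print(crawl(["http://a"]))  # ['http://a', 'http://b', 'http://c']
```

The large-scale concerns the document raises (concurrency, politeness toward servers, durable storage) all attach to this same loop: the fetch becomes parallel and rate-limited, and `seen` and `order` move to persistent stores.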
The document discusses various topics related to shell programming and scripting languages including:
1. It provides an overview of HTML, the basic building blocks of websites including tags, elements and page structure.
2. It describes common HTML tags for text formatting, headings, and other page elements. It also discusses HTML forms and how to pass data.
3. It provides an introduction to CGI (Common Gateway Interface) and how it allows information to be exchanged between a web server and custom scripts to dynamically generate web pages.
4. It includes examples of basic CGI programs in Python for handling GET and POST requests, retrieving and displaying form data, and using cookies to maintain state across web requests.
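A Python CGI handler for a GET request, as mentioned in point 4, can be sketched minimally: CGI hands the script the form data via the `QUERY_STRING` environment variable, and the script writes headers plus a body to standard output. The query string and field names here are made-up examples.

```python
# Minimal sketch of a Python CGI-style GET handler: parse the query string
# and emit a Content-Type header followed by an HTML body.
from urllib.parse import parse_qs

def handle_get(query_string: str) -> str:
    """Render a tiny HTML page echoing a submitted form field."""
    fields = parse_qs(query_string)
    name = fields.get("name", ["anonymous"])[0]
    return ("Content-Type: text/html\r\n\r\n"
            f"<html><body>Hello, {name}!</body></html>")

# In a real CGI program the server supplies os.environ["QUERY_STRING"].
response = handle_get("name=Ada&lang=python")
print(response.splitlines()[-1])
```

The blank line after the `Content-Type` header is what separates headers from body; forgetting it is a classic CGI bug.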
- The document discusses various topics related to hosting and serving web pages including how host files can provide DNS-like functionality on small networks, how web servers listen for requests on port 80 and can be configured on other ports, and an overview of HTTP including versions, statelessness, and request/response headers.
- It also covers features of different versions of IIS including support for additional protocols, virtual hosting, authentication methods, and components used for file transfer and forums.
- Default website properties in IIS include operators, performance limits, ISAPI filters, home directory location, logging, permissions, default documents, custom headers and error pages.
HTTP is an application-level protocol for distributed, collaborative hypermedia information systems. It is based on the client-server model and uses TCP/IP protocols. HTTP functions by having clients make requests to servers, which respond with status codes and requested resources. Key aspects of HTTP include its stateless and connectionless nature, as well as its use of request methods like GET and POST.
HTTP proxies act as both servers and clients by receiving requests from web clients and forwarding the requests to web servers, while also sending requests to servers on behalf of clients. Proxies are commonly used for filtering content, access control, security firewalls, caching, load balancing, transcoding content formats, and providing anonymity. Proxies can be configured in various network architectures including private networks, ISP access points, as reverse proxies in front of servers, and at network exchange points.
This document discusses several distributed file systems including NFS, Coda, Plan 9, xFS, and SFS. It provides details on each system's architecture, communication methods, naming, caching, replication, fault tolerance, security and access control approaches. The key aspects covered are remote access models, file operations, attributes, sharing semantics, client caching, server organization, naming schemes, and consistency models used by the different systems.
HTTPS presentation at Port80 Sydney meetup, March 2016 (Jason Stangroome)
HTTPS has become increasingly important for security and user experience. The document discusses several reasons for using HTTPS, including that 42% of the top 1 million websites have adopted it in the last 6 years. It covers topics like SSL/TLS protocols, certificate validation, HTTP Strict Transport Security, and Let's Encrypt, which provides free SSL certificates to help websites transition to HTTPS. Overall it promotes the benefits of HTTPS for users and search engines, and the continued improvement and standardization of encryption on the web.
One of MongoDB’s primary attractions for developers is that it gives them the ability to start application development without needing to define a formal, up-front schema. Operations teams appreciate the fact that they don't need to perform a time-consuming schema upgrade operation every time the developers need to store a different attribute.
Some projects reach a point where it's necessary to define rules on what's being stored in the database. This webinar explains how MongoDB 3.2 allows that document validation work to be performed by the database rather than in the application code.
This webinar focuses on the benefits of using document validation: how to set up the rules using the familiar MongoDB Query Language and how to safely roll it out into an existing, mature production environment.
Polylog: A Log-Based Architecture for Distributed Systems (Longtail Video)
The talk focuses on a log-based architecture ("The Polylog") we've developed to handle data change capture in order to easily build new services and databases based on other services' full datasets. Some of the tools we'll cover include Debezium for database change capture, Kafka for storing the logs, and the Denormalizer, an in-house tool we built to do left joins on streams.
CGI (Common Gateway Interface) allows web servers to interface with external programs to dynamically generate web pages. When a request is made for a file in the CGI directory, the web server executes the corresponding CGI program and returns its output instead of the requested file. CGI programs can be written in many languages like Perl, C, C++ and Shell scripts. They have access to environment variables that provide information about the request.
CGI (Common Gateway Interface) is an interface that allows a web server to launch external applications dynamically in response to requests. It defines standard communication variables between the web server and CGI programs. CGI programs can be written in any programming language and are executed by the web server to generate dynamic web page content on the fly based on request parameters and structured data. However, CGI has performance and security limitations due to creating new processes for each request.
This presentation is about web servers. It gives an overview of web servers and their types, and outlines the need for a server management organization.
Frequently Used Terms Related to cPanel (HTS Hosting)
cPanel is an intuitive, easy-to-use Linux-based control panel for web hosting accounts. Several terms come up frequently in the context of cPanel, and knowing them expands one's expertise in using it.
The document discusses JSP client requests and the information contained in the request header and available methods. It describes several common header fields sent by the client browser including Accept, Accept-Encoding, Accept-Language, Authorization, Connection, Content-Length, Cookie, and Host. It also outlines methods like getCookies(), getAttributeNames(), getSession(), getMethod(), getPathInfo(), and getProtocol() that can be used to access information from the client request.
My talk for the Dutch PHP Conference, explaining the point of OAuth, the mechanics of OAuth 2 and its various flows, and a spot of OAuth 1 for completeness
WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. It was standardized in 2011 and allows for real-time data exchange between a client and server. The document discusses how WebSocket works, compares it to previous techniques like polling which had limitations, and outlines how to implement WebSocket in Java using JSR 356 and in Spring using the WebSocket API and STOMP protocol.
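The WebSocket connection described above starts life as an HTTP request that is upgraded; the server proves it understood the handshake by hashing the client's key with a fixed GUID. This computation, the GUID, and the sample key/accept pair all come from RFC 6455, which standardized the protocol in 2011.

```python
# The server side of the WebSocket opening handshake: derive the
# Sec-WebSocket-Accept value from the client's Sec-WebSocket-Key.
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def accept_key(client_key: str) -> str:
    """SHA-1 the key + GUID, then base64-encode, per RFC 6455 section 4.2.2."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample Sec-WebSocket-Key from the RFC and the accept value it must produce.
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this handshake succeeds, both sides switch from HTTP to the framed, full-duplex WebSocket wire protocol over the same TCP connection.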
This document provides an overview of web applications and the HTTP protocol. It describes the evolution of static pages to dynamic web applications. It outlines the tier structure of web applications and covers browsers, servers, server-side and client-side scripts, and databases. The document then details the HTTP protocol, including sessions, URLs, requests, responses, status codes, and cookies. It explains how HTTP establishes and maintains stateless sessions between clients and servers.
This document discusses web servers. It begins by defining a web server as hardware or software that helps deliver internet content. It then discusses the history of web servers, including the first web server created by Tim Berners-Lee at CERN in 1990. The document outlines common uses of web servers like hosting websites, data storage, and content delivery. It also describes how web servers work, including how they handle requests and responses using HTTP. Finally, it covers topics like installing and hosting a web server, load limits, overload causes and symptoms, and techniques to prevent overload.
The document provides information about the World Wide Web (WWW). It defines the WWW as a way to exchange information by allowing publicly available files on computers to be read remotely, usually using HTML. The document outlines the history of the WWW, with Tim Berners-Lee inventing it in 1989-1990 at CERN. It describes the basic structure of web pages built with HTML and accessed via browsers communicating with servers over HTTP. Finally, it discusses some fundamental concepts underlying the WWW like hypertext, hypermedia, and web browsers.
1. The document introduces the World Wide Web and its core technologies including HTTP, HTML, web servers, and web browsers.
2. It describes how HTTP works using a request/response model and is stateless, while browser cookies allow for stateful sessions.
3. Examples demonstrate basic HTML pages and forms, HTTP requests and responses, and how dynamic content can be generated using server-side technologies like JSP.
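The cookie mechanism from point 2 can be sketched with the standard library: the server sends a `Set-Cookie` header, the browser echoes the value back in a `Cookie` header, and the server uses it to recognize the session. The cookie name and value here are invented.

```python
# Sketch of how cookies layer state onto stateless HTTP.
from http.cookies import SimpleCookie

# Server side: issue a session cookie in the response.
response = SimpleCookie()
response["session_id"] = "abc123"
set_cookie_header = response.output(header="Set-Cookie:")
print(set_cookie_header)

# Client side: the browser sends the value back on the next request,
# letting the server associate the two otherwise-independent requests.
request = SimpleCookie()
request.load("session_id=abc123")
print(request["session_id"].value)
```

Each HTTP exchange remains stateless on the wire; the continuity lives entirely in this header round-trip.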
This document compares Wi-Fi and WiMAX technologies. Wi-Fi was launched in 1997 and defined by IEEE 802.11 standards, while WiMAX was launched in 2004 and defined by IEEE 802.16 standards. Key differences include Wi-Fi having a shorter range of 100 meters versus WiMAX's 80-90 kilometer range and Wi-Fi transferring data at speeds up to 54Mbps while WiMAX transfers at speeds up to 40Mbps. Additionally, Wi-Fi is primarily an end user technology while WiMAX is deployed by service providers to provide internet services to larger areas.
Wi-Fi uses spread spectrum technology, which can have difficulties decoding signals when two identical signals are received with a small time delay. WiMAX uses OFDMA, which divides data across multiple subcarriers, making it more robust to signal interference and easier to implement MIMO. While Wi-Fi struggles at distances over a few hundred meters, WiMAX can provide throughput of up to 2.5Gbps at 20km and is better suited for mining sites due to its ability to handle high signal reflection and absorption. Key considerations for a WiMAX implementation include data throughput needs, quality of service requirements, and allowing suppliers flexibility in choosing parameters to optimize performance within bandwidth limits.
This presentation introduces the topic of the World Wide Web (WWW). It discusses that the WWW allows for the exchange of information between computers on the Internet using browsers. Key points include that Tim Berners-Lee invented the WWW in 1989 at CERN to allow for simultaneous transfer of text and graphics. The structure of the WWW involves clients using browsers to send requests via HTTP to servers, which respond with web pages rendered by the client's browser. Components include clients, servers, caches, protocols, HTML, URIs, and HTTP. The presentation concludes by noting the visionaries who created the early WWW.
The document discusses the Domain Name System (DNS), which translates domain names to IP addresses. It has three main components: the name space that defines the domain name structure, resolvers that extract information from name servers, and name servers that store information about the domain name structure. The DNS uses a hierarchical and distributed database to map domain names to IP addresses across a network like the internet.
The document provides steps for writing a formal email, including writing the recipient's email address, including a clear subject line, writing the message in a concise and error-free manner, ensuring all relevant details are included, and ending with a closing salutation and signature with the sender's full name.
Tổ chức chạy roadshow hiện nay đã trở lên phổ biến và hiệu quả để chào mừng các sự kiện, ngày lễ như khai trương, khánh thành, hội chợ triển lãm, lễ Tết, quảng cáo …
Tổ chức chạy roadshow cũng là một trong những hình thức quảng cáo đem lại hiệu quả tối ưu khi công ty có sản phẩm mới muốn tung ra thị trường.
Đây là dự án mà UNIQUE đã triển khai rất thành công cho nhãn hàng tiêu dùng Nhật Bản MINISO trước ngày khai trương hệ thống 3 showroom đầu tiên tại Việt Nam.
UNIQUE tự tin là đơn vị tiếp thị và truyền thông chất lượng và uy tín hàng đầu Việt Nam. Hãy để UNIQUE phục vụ quý khách hàng một cách chuyên nghiệp nhất!
This document discusses cyber crime and its history, definition, categories, and perpetrators. It begins with an introduction about the growth of the internet in India and the rise of cyber crime. It then covers the history of the first recorded cyber crime in 1820 involving sabotage of a new textile loom. The document defines cyber crime and outlines its main categories. It also examines the role of computers as tools, targets, and appliances for crime and profiles common cyber criminals. Specific cyber crimes like phishing, denial of service attacks, and logic bombs are explored. The document concludes with prevention tips and a call for India to strengthen its cyber crime laws and security standards.
This document discusses different wireless network topologies including point-to-point, point-to-multipoint, and mesh networks. It then compares WiFi and WiMAX standards, noting that WiMAX has longer range but lower speeds than WiFi. Finally, it contrasts 3G, WiFi, and WiMAX technologies, noting their differences in standards, speeds, licensing, coverage areas, and advantages/disadvantages.
This document provides an overview of Internet Service Providers (ISPs), including the types of ISPs, examples of ISPs, factors to consider when choosing an ISP, the services ISPs provide, and the types of connections and equipment used to connect to an ISP. It discusses how ISPs connect customers to other networks and examples like access ISPs, hosting ISPs, transit ISPs and free ISPs. It also summarizes the different connection types like wireless, mobile phones, hotspots, broadband, and satellite and the equipment like modems and satellite receivers used.
Tim Berners-Lee invented the World Wide Web in 1989-1990 at CERN as a way to share information between computers connected to the internet. The web uses browsers, HTML pages, and URLs to allow users to view and link between pages of text, images, and other multimedia. Users connect to web servers via HTTP and receive requested pages containing HTML markup that browsers interpret to display content. This system of clients, servers, and protocols allows the global sharing of information over the internet.
VoIP, or Voice over Internet Protocol, is a technology that allows users to make voice calls using an Internet connection instead of a regular phone line. It works by converting voice signals into digital data packets that travel over the Internet and are then reconstructed at the other end. There are several VoIP protocols used and many applications that employ VoIP, including Skype. VoIP offers advantages over traditional phone service like lower costs, additional features included for free, and the ability to make calls from any Internet-connected device.
Not sure if you want to start a blog? View this presentation on the advantages and the disadvantages of starting a free hosted blog or a nonhosted blog. Happy Blogging?
GPS uses a constellation of 24 satellites orbiting Earth to provide location and time information to GPS receivers. The satellites circle the planet every 12 hours across multiple orbital planes inclined at 55 degrees to the equator, ensuring signals from 8-10 satellites are visible from any point on Earth. GPS receivers triangulate their position by measuring the time delay of signals from 3 or more satellites, determining distance based on the time required for signals to travel. Factors like ionosphere delays, multipath signals, and satellite geometry can introduce errors, but parallel channel receivers maintain locks on satellites to provide accuracy within 15 meters.
VoIP stands for Voice over Internet Protocol. It allows users to make phone calls using an IP network rather than a traditional telephone network. VoIP works by converting voice into packets of data that travel over the internet through routers to reach the destination. While it is beginning to be used more in businesses due to lower costs, some reliability issues with lost data packets can cause jittering and lower sound quality compared to traditional phone networks.
The document discusses the history and workings of the World Wide Web. It was invented in 1989 by Tim Berners-Lee at CERN as a system of interlinked hypertext documents accessed via the internet. The Web consists of web pages containing text, images, videos and multimedia that can be viewed through a web browser and connected through hyperlinks using URLs. Users can navigate between web pages through these hyperlinks to access the web's collection of interconnected information resources available on the internet.
An Internet service provider (ISP) offers customers access to the Internet using various technologies like dial-up, DSL, cable modem, wireless, or dedicated high-speed. ISPs connect to upstream ISPs who have larger networks and access to more of the Internet to transmit data between networks. ISPs may directly interconnect with each other through peering points without charging for data transmission instead of going through a third upstream ISP. Types of ISPs include virtual ISPs and free ISPs. Internet access is expected to improve in cost and speed as more companies enter the industry, and the true benefit will come when remote areas worldwide can access quality Internet.
This presentation is about GPS... what is it?why GPS? , how it works? and the applications of GPS. By Mostafa Hussien
facebook profile: http://www.facebook.com/mstfahsin
Twitter @MSTFAHSIN
Tumblr mostafahussien.tumblr.com
Podcasting allows people to listen to recorded radio shows when they want using mp3 players. Chatting and instant messaging programs like Yahoo and AOL enable electronic communication and recreation. Discussion boards bring together people with shared interests to argue and share ideas on topics. Text messaging is used primarily for short personal communication and gossip rather than professional reasons. Blogs are online journals where individuals post multiple entries on a specific topic, with examples including Blogger and TypePad.
The document discusses different document models used on the web and in Lotus Notes. It provides examples of HTML, XML, and Notes documents. It also summarizes key architectural aspects of web clients and servers as well as the overall organization and processes used in Lotus Notes.
The document discusses the key differences between the World Wide Web (WWW) and Lotus Notes. It notes that WWW uses HTTP and URLs for communication and naming, while Notes uses its own RPC protocol and identifiers. It also summarizes some of the differences in their models for storage, clients, servers, synchronization, replication, fault tolerance, and access control.
This document provides an introduction to web application development, including the history of the World Wide Web and how it works. It describes the basics of web clients and servers, URLs, HTML, and how communication is established over the internet. It then distinguishes between static and dynamic web pages, and discusses client-side scripting like JavaScript and Java applets as well as server-side scripting using languages like PHP, ASP, and JSP to generate dynamic web content. Finally, it lists some common web development tools.
This document provides an overview of distributed web-based systems, including the key components and technologies that enable them. It discusses the World Wide Web and how documents are accessed via URLs. It also describes HTTP and how connections and requests/responses work. Other topics covered include caching, content distribution networks, web services, traditional and multi-tiered web architectures, web server clusters, and web security protocols like SSL.
The document discusses the World Wide Web (WWW) and Hypertext Transfer Protocol (HTTP). It describes the basic architecture of the WWW including clients, servers, web pages, and URLs. It explains that web pages can be static, dynamic, or active. The document then discusses HTTP in more detail, including how HTTP requests and responses are structured, how persistent connections work in HTTP 1.1, and how caching can improve performance.
The document discusses key considerations for designing effective websites, including browser and operating system support, bandwidth and caching, display resolution, and look and feel. Effective website design requires accounting for different browser versions, connection speeds, screen sizes, and ensuring a consistent user experience across platforms. Planning the goals, content, and technical implementation of a website is also important for success.
web services8 (1).pdf for computer scienceoptimusnotch44
Web services allow for communication between client and server applications over the World Wide Web. A web service is a software module designed to perform tasks and can be invoked directly or indirectly by users or other programs. The web service would then provide functionality to the client application that invoked it. Key aspects of web services include protocols like HTTP that transfer data across the web, DNS that translates human-readable domain names to machine-readable IP addresses, URLs that specify the location of resources, and web servers that host websites and return web pages to clients.
The document discusses the architecture of the World Wide Web. It describes how the web is made up of clients (browsers) that can access and retrieve information from servers using URLs. It also discusses different types of web documents (static, dynamic, active) and technologies involved like HTTP, URLs, cookies.
what is web ?
why database on the web?
website technologies like HTML,CSS,JavaScript,Server,Servlets,Ajax..
all contents ownership goes to respective owners :)
(Classroom Presentaion)
The document provides an introduction to basic web architecture, including HTML, URIs, HTTP, cookies, database-driven websites, AJAX, web services, XML, and JSON. It discusses how the web is a two-tiered architecture with a web browser displaying information from a web server. Key components like HTTP requests and responses are outlined. Extension of web architecture with server-side processing using languages like PHP and client-side processing with JavaScript are also summarized.
Web services allow programs to call methods on other computers over a network. They are frequently web APIs that can be accessed remotely and executed on another system. Web services consist of method information describing the method being called and scoping information describing required arguments. This information is packaged and sent across the network using various protocols like HTTP, SOAP, and XML-RPC. The internet protocol stack, consisting of layers like application, transport, network and link, is used to break information into packets that can travel over the network to their destination and be reassembled.
Web services allow programs to communicate over a network by calling methods on remote systems. They are frequently web APIs that can be accessed over a network like the internet. A web service call packages method and scoping information into an envelope that is transported across the network using defined protocols like HTTP and TCP. At the destination, the same protocols unpack the envelope and call the requested method. Web servers store web pages and dynamic content, and respond to client requests over the internet using HTTP to deliver HTML files and other objects.
This document discusses different types of application architectures like host-based, client-based, client-server, peer-to-peer, and cloud computing architectures. It also describes the four basic functions of application software as data storage, application logic, data access, and presentation logic. Additionally, it compares host-based and client-server networks, defines middleware, discusses switching from host-based to client-server architecture, and compares two-tier, three-tier, and n-tier client-server architectures.
This document outlines chapters for a course on internet programming, including an overview of the internet and world wide web, web design and development fundamentals, cascading style sheets, JavaScript, server-side programming with PHP and MySQL, and project requirements. Evaluation will include midterm and final exams, lab exams, and a project presentation. The project must implement a complete application with internet programming concepts and techniques.
Topics:
- Web Architecture Overview
- HTTP (Hypertext Transfer Protocol)
- REST (Representational State Transfer)
- JSON (JavaScript Object Notation)
Slides for the course of "Ambient Intelligence: Technology and Design" given at Politecnico di Torino during year 2013/2014.
Course website: http://bit.ly/polito-ami
21. Application Development and Administration in DBMSkoolkampus
The document provides an overview of web interfaces to databases and techniques for improving web application performance. It discusses how databases can be interfaced with the web to allow users to access data from anywhere. It then covers topics like dynamic page generation, sessions, cookies, servlets, server-side scripting, and techniques for improving web server performance like caching. The document also discusses performance tuning at the hardware, database, and transaction levels to identify and address bottlenecks.
The document discusses various web technologies including:
- Core web technologies like browsers, servers, URIs and HTTP.
- Client-side technologies like HTML, CSS, JavaScript and HTML5.
- Server-side technologies for web applications like CGI, PHP, Java servlets and JSPs.
- How web applications use technologies like application servers to manage business logic and state in a dynamic way.
- Common methods for managing session state including cookies, databases and application servers.
The document provides an introduction to back-end development, including definitions of the internet, World Wide Web, and request-response cycle. It explains the differences between front-end and back-end development and lists common front-end and back-end programming languages. Main protocols like IP, TCP, UDP, and HTTP are described. Additional back-end concepts covered include CRUD functionality, securing passwords, HTTPS, and APIs. Resources for further learning back-end development with languages like Python, Node.js, and PHP are also provided.
The document discusses various aspects of web technology including:
1. It describes how the internet is organized with clients making requests to servers and responses being sent back over various internet layers using protocols like HTTP and TCP.
2. It explains how URLs work to identify web pages and resources, with domains mapped to IP addresses by the DNS system in a hierarchical structure.
3. It provides an overview of HTML, the publishing language of the web, and common tags used to structure and format text, images, and links on web pages.
The document provides an overview of the World Wide Web (WWW) and its architecture. It discusses how the WWW originated at CERN to share scientific resources. It describes the client-server model of the WWW where clients access servers using browsers. Web pages contain links to other pages and can include various types of media. URLs are used to identify resources. HTML is used to structure and format web pages. Dynamic content is also discussed where servers generate pages on request.
The document provides an overview of ASP.NET, including:
- ASP.NET is a web development platform from Microsoft used to create web applications. It was first released in 2002.
- ASP.NET applications can be written in languages like C# and VB.NET.
- The architecture is based on components like languages, libraries, and the Common Language Runtime which handles tasks like exception handling.
This document provides an overview and comparison of distributed coordination-based systems TIB/Rendezvous and Jini. It describes their coordination models, communication methods, naming services, transactions, caching/replication, reliability, security and differences in major design goals, events, processes, and support for transactions, locking, and recovery. Key aspects of TIB/Rendezvous include publish/subscribe, multicasting messages, and secure channels, while Jini emphasizes flexible integration and uses the Java lookup service and method invocations.
This document discusses various techniques for clock synchronization and maintaining consistency in distributed systems. It covers Cristian's algorithm, the Berkeley algorithm, Lamport timestamps, algorithms for mutual exclusion including a centralized, distributed, and token ring approach, and techniques for concurrency control including two-phase locking, pessimistic timestamp ordering, and ensuring serializability of transactions.
1. A distributed system is a collection of independent computers that appears as a single coherent system to users. It is organized as middleware that extends over multiple machines.
2. Transparency in distributed systems hides where resources are located, that they may move, be replicated, or shared concurrently. It also hides failures and whether resources are in memory or disk.
3. Scaling techniques include dividing work like form checking between servers and clients, partitioning namespaces, and distributing data and services across machines.
CORBA, DCOM, and Globe all provide distributed object models but have different design goals and implementations. CORBA aims for interoperability and provides many standardized services, while DCOM focuses on functionality within Windows environments. Globe emphasizes scalability through replication-based fault tolerance and location transparency using a global naming service. All support synchronous communication but Globe does not provide asynchronous messaging or callbacks like CORBA and DCOM. Security approaches also differ, with Globe requiring more work to support standardized mechanisms.
This document discusses various security concepts including types of threats, security mechanisms, authentication methods, access control, and electronic payment systems. It provides examples of security architectures like Globus and protocols like Kerberos. Key topics covered include encryption, digital signatures, firewalls, capabilities, and privacy in electronic payment systems.
The document summarizes concepts related to fault tolerance including:
- Dependability includes availability, reliability, safety and maintainability.
- Different types of failures like crash, omission, timing and arbitrary failures.
- Redundancy can be used to mask failures through techniques like triple modular redundancy.
- Agreement in faulty systems looks at problems like the Byzantine generals problem.
- Reliable multicasting schemes use techniques such as hierarchical feedback control and virtual synchrony.
- Commit protocols like two-phase commit and three-phase commit coordinate transactions between processes.
- Recovery techniques involve stable storage, checkpointing to avoid domino effect, and message logging to prevent orphans.
The document summarizes key concepts related to consistency models and replication in distributed systems. It covers:
1) Different models of consistency like strict consistency, sequential consistency, causal consistency, and eventual consistency. Weaker models allow more flexible replication but can violate ordering guarantees.
2) Consistency is enforced through synchronization operations or lack thereof. Models using synchronization include weak, release, and entry consistency.
3) Replication techniques like primary-backup, quorum-based, and active replication protocols for maintaining consistency across replicas during reads and writes.
4) Local-write and remote-write protocols differ in where updates are applied primarily. Consistency is maintained through propagating the updates.
The document discusses various topics related to processes and threads including thread usage in nondistributed systems, multithreaded server models, the X-Window system, client-side software for distribution transparency, object adapters, code migration in heterogeneous systems, and software agents in distributed systems. It provides details on thread implementation, reasons for migrating code, models for code migration, and agent communication languages. Key concepts covered include context switching, multithreaded servers, binding of clients to servers, object registration and activation policies, maintaining a migration stack, and FIPA ACL message types and examples.
The document discusses various topics related to communication protocols including layered protocols, data link layer, client-server TCP, middleware protocols, remote procedure calls, parameter passing, distributed objects, persistence and synchronicity in communication, Berkeley sockets, message passing interface, message queuing models, message brokering, message transfer in MQSeries, data streams, quality of service specification, synchronization mechanisms.
The document discusses various techniques for naming and locating entities in distributed systems, including:
1) Name spaces use hierarchical naming schemes to organize entities, while linking and mounting allow connecting different name spaces. Distributed name spaces are partitioned across multiple layers.
2) DNS implements a global, hierarchical name space and uses resource records like A records to map names to IP addresses. X.500 provides a directory service with naming attributes.
3) Location services locate mobile entities, using techniques like home-based approaches, hierarchical location services with forwarding pointers, and pointer caches. Scalability, unreferenced objects, and reference counting are challenges.
This document discusses data warehousing and OLAP technology for data mining. It defines what a data warehouse is, including that it is a subject-oriented, integrated, time-variant and non-volatile collection of data to support management decision making. It also discusses data warehouse architectures like star schemas and snowflake schemas, which organize data into fact and dimension tables. Finally, it discusses OLAP and multidimensional data modeling using data cubes to enable complex analyses of data in multiple dimensions.
This document provides an overview of data mining concepts and techniques. It defines data mining as the extraction of interesting and useful patterns from large amounts of data. The document outlines several potential applications of data mining, including market analysis, risk analysis, and fraud detection. It also describes the typical steps involved in a data mining process, including data cleaning, pattern evaluation, and knowledge presentation. Finally, it discusses different data mining functionalities, such as classification, clustering, and association rule mining.
This document discusses distributed transactions and the challenges of ensuring they satisfy the ACID properties of atomicity, consistency, isolation and durability even when transactions span multiple systems. It introduces the two-phase commit protocol, where a coordinator first polls participants if they can commit and then tells them to either commit or abort, addressing failures through durable logs and timeouts. While two-phase commit ensures all or nothing completion, it risks long blocks if the coordinator or participants fail.
This document discusses how to add a system call to the Ubuntu operating system. It begins with an introduction to Ubuntu and explains what a kernel and system calls are. It then provides step-by-step instructions for adding a new system call, including editing relevant files, adding code, and recompiling the kernel. Sample code for calling the new system call is also included. The document concludes with instructions for replacing the current kernel with a new one that has been compiled.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
1. Seminar
on
WWW (World Wide Web)
Presented By
Mr. Pratik R. Tambekar
Roll No.: 19
M.Tech II Sem (CSE)
Department of Computer Science & Engineering
YCCE, Nagpur
2. INTRODUCTION
The World Wide Web (WWW) can be viewed as a huge
distributed system with millions of clients and servers for
accessing linked documents.
Servers maintain collections of documents, while clients
provide users with an easy-to-use interface for presenting and
accessing those documents.
A document is fetched from a server, transferred to a
client, and presented on the screen. To the user there is
conceptually no difference between a document stored
locally and one stored in another part of the world.
3. CONT…..
Now, the Web has become more than just a simple
document-based system.
With the emergence of Web services, it is becoming a
system of distributed services rather than just documents
offered to any user or machine.
What can we get from WWW?
Read news, listen to music and watch video;
Buy or sell goods such as books, airline tickets;
Make reservations on hotel room, rental car, restaurant, etc.;
Pay bills and transfer money from one bank account to another;
…
5. The core of a Web site: a process that has access
to a local file system storing documents.
How to refer to a document?
URL (Uniform Resource Locator)
Example:
http://www.cse.unl.edu/~ylu/csce855/notes/web-system.ppt
A client interacts with Web servers through a special
application known as a browser.
What’s the key function of a browser?
Responsible for displaying documents.
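The parts a browser extracts from a URL can be sketched with the standard WHATWG `URL` class (available in browsers and Node.js), applied to the example URL above:

```javascript
// Decompose a document's URL into the pieces a browser uses to locate it:
// which protocol to speak, which server to contact, and which document
// to request from that server's file system.
const url = new URL("http://www.cse.unl.edu/~ylu/csce855/notes/web-system.ppt");

console.log(url.protocol); // "http:" -- the transfer protocol
console.log(url.hostname); // "www.cse.unl.edu" -- the server to contact
console.log(url.pathname); // "/~ylu/csce855/notes/web-system.ppt" -- the document
```

The browser resolves the hostname via DNS, opens a connection to the server, and sends the path in an HTTP request.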
6. Document Model
A Web document does not only contain text, but it can
include all kinds of dynamic features such as audio, video,
animations, etc.
In many cases special helper applications (interpreters)
are needed, and they are integrated into the browser.
E.g., Windows Media Player and QuickTime Player for playing
streaming content
The variety of document types forces browsers to be
extensible. As a result, plug-ins are required to follow
standard interfaces so that they can be easily integrated
with the browsers.
7. CONT…..
User data comes from an HTML form, specifying the
program and parameters.
Server-side scripting technologies are used to generate
dynamic content:
Microsoft: Active Server Pages (ASP.NET)
Sun: Java Server Pages (JSP)
Netscape: JavaScript
Free Software Foundation: PHP
8. Document Model (1)
<HTML> <!-- Start of HTML document -->
<BODY> <!-- Start of the main body -->
<H1>Hello World</H1> <!-- Basic text to be displayed -->
<P> <!-- Start of a new paragraph -->
<SCRIPT type = "text/javascript"> <!-- Identify scripting language -->
document.writeln ("<H1>Hello World</H1>"); // Write a line of text
</SCRIPT> <!-- End of scripting section -->
</P> <!-- End of paragraph section -->
</BODY> <!-- End of main body -->
</HTML> <!-- End of HTML section -->
• A simple Web page embedding a script written in JavaScript.
9. When a web page is loaded, the browser creates a
Document Object Model of the page.
The HTML DOM model is constructed as a tree of
objects:
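The tree-of-objects idea can be sketched in plain JavaScript. The `node` shape below is an illustrative stand-in, not the real browser DOM API:

```javascript
// Each node records its tag name and its child nodes, mirroring the
// nesting of the HTML source (a simplified stand-in for the real DOM).
function node(tag, children = []) {
  return { tag, children };
}

// The tree for <html><body><h1>…</h1><p>…</p></body></html>:
const dom = node("html", [
  node("body", [node("h1"), node("p")]),
]);

// A depth-first walk recovers the document structure in order.
function tags(n) {
  return [n.tag, ...n.children.flatMap(tags)];
}

console.log(tags(dom)); // → [ 'html', 'body', 'h1', 'p' ]
```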
10. Document Model (2)
(1) <!ELEMENT article (title, author+,journal)>
(2) <!ELEMENT title (#PCDATA)>
(3) <!ELEMENT author (name, affiliation?)>
(4) <!ELEMENT name (#PCDATA)>
(5) <!ELEMENT affiliation (#PCDATA)>
(6) <!ELEMENT journal (jname, volume, number?, month?, pages, year)>
(7) <!ELEMENT jname (#PCDATA)>
(8) <!ELEMENT volume (#PCDATA)>
(9) <!ELEMENT number (#PCDATA)>
(10) <!ELEMENT month (#PCDATA)>
(11) <!ELEMENT pages (#PCDATA)>
(12) <!ELEMENT year (#PCDATA)>
• An XML definition for referring to a journal article.
11.
(1) <?xml version="1.0"?>
(2) <!DOCTYPE article SYSTEM "article.dtd">
(3) <article>
(4) <title> Prudent Engineering Practice for Cryptographic Protocols</title>
(5) <author><name>M. Abadi</name></author>
(6) <author><name>R. Needham</name></author>
(7) <journal>
(8) <jname>IEEE Transactions on Software Engineering</jname>
(9) <volume>22</volume>
(10) <number>12</number>
(11) <month>January</month>
(12) <pages>6 – 15</pages>
(13) <year>1996</year>
(14) </journal>
(15) </article>
• An XML document using the XML definitions from previous slide
12. Document Types
Type Subtype Description
Text Plain Unformatted text
HTML Text including HTML markup commands
XML Text including XML markup commands
Image GIF Still image in GIF format
JPEG Still image in JPEG format
Audio Basic Audio, 8-bit PCM sampled at 8000 Hz
Tone A specific audible tone
Video MPEG Movie in MPEG format
Pointer Representation of a pointer device for presentations
Application Octet-stream An uninterpreted byte sequence
Postscript A printable document in Postscript
PDF A printable document in PDF
Multipart Mixed Independent parts in the specified order
Parallel Parts must be viewed simultaneously
• Six top-level MIME types and some common subtypes.
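These types appear on the wire as type/subtype pairs in the Content-Type header; a minimal sketch of splitting one (parameter handling simplified for brevity):

```javascript
// Split a MIME type such as "text/html" into its top-level type and
// subtype; any parameters like "; charset=utf-8" are stripped.
function parseMime(value) {
  const [mime] = value.split(";");
  const [type, subtype] = mime.trim().split("/");
  return { type, subtype };
}

console.log(parseMime("text/html; charset=utf-8")); // → { type: 'text', subtype: 'html' }
console.log(parseMime("image/jpeg"));               // → { type: 'image', subtype: 'jpeg' }
```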
14. CONT………..
(1) <HTML>
(2) <BODY>
(3) <P>The current content of <PRE>/data/file.txt</PRE> is:</P>
(4) <P>
(5) <SERVER type = "text/javascript">
(6) clientFile = new File("/data/file.txt");
(7) if(clientFile.open("r")){
(8) while (!clientFile.eof())
(9) document.writeln(clientFile.readln());
(10) clientFile.close();
(11) }
(12) </SERVER>
(13) </P>
(14) <P>Thank you for visiting this site.</P>
(15) </BODY>
(16) </HTML>
• An HTML document containing a JavaScript to be executed by the server
16. HTTP
All communication between clients and servers is based
on HTTP. Servers listen on port 80.
HTTP is a simple protocol; a client sends a request to a
server and waits for a response.
HTTP is stateless; it does not have any concept of open
connection and does not require a server to maintain
information on its clients. (Can use HTTP cookies to store
session information.)
HTTP is based on TCP; whenever a client issues a request
to a server, it first sets up a TCP connection and sends
the message on that connection. The same connection is
used for receiving the response.
One of the problems with the first versions of HTTP was
its inefficient use of TCP connections.
HTTP 1.0 vs. HTTP 1.1
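What travels over that TCP connection is plain text: a request line, headers, and a blank line. A minimal sketch of building such a message (host and path are illustrative):

```javascript
// Build an HTTP/1.1 GET request message. Lines are separated by CRLF
// and an empty line terminates the header section.
function buildGetRequest(host, path) {
  return [
    `GET ${path} HTTP/1.1`,
    `Host: ${host}`,
    "", // blank line terminates the headers
    "",
  ].join("\r\n");
}

console.log(buildGetRequest("www.cs.vu.nl", "/globe"));
```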
17. HTTP CONNECTIONS
A Web document is constructed from a collection of
different files from the same server.
In HTTP version 1.0 and older, each request to a server
required setting up a separate connection. When the server
had responded, the connection was torn down. These
connections are referred to as non-persistent.
In HTTP version 1.1, several requests and their responses
can be issued without the need for a separate
connection. These connections are referred to as
persistent.
Furthermore, a client can issue several requests in a row
without waiting for the response to the first request,
which is referred to as pipelining.
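With pipelining, the client simply writes several complete request messages back-to-back on the one persistent connection before any response arrives; a sketch (host and paths are illustrative):

```javascript
// Pipelined requests are just concatenated messages on one connection;
// the server answers them in the same order they were sent.
function pipeline(host, paths) {
  return paths
    .map((p) => `GET ${p} HTTP/1.1\r\nHost: ${host}\r\n\r\n`)
    .join("");
}

const wire = pipeline("www.cs.vu.nl", ["/index.html", "/logo.gif"]);
// Two complete request messages now sit in the output buffer.
console.log(wire.split("\r\n\r\n").filter(Boolean).length); // → 2
```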
18. CONT……….
(a) Using non-persistent connections. (b) Using persistent connections.
19. HTTP Methods
Operation Description
Head Request to return the header of a document
Get Request to return a document to the client
Put Request to store a document
Post Provide data that is to be added to a document (collection)
Delete Request to delete a document
• Operations supported by HTTP.
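The table can be read as operations on a document store; a sketch of how a server might dispatch on the method (in-memory store, illustrative only):

```javascript
// An in-memory "server" mapping HTTP methods onto a document store.
const store = new Map();

function handle(method, doc, body) {
  switch (method) {
    case "PUT":    store.set(doc, body);  return { status: 201 };
    case "GET":    return store.has(doc)
                     ? { status: 200, body: store.get(doc) }
                     : { status: 404 };
    case "HEAD":   return { status: store.has(doc) ? 200 : 404 };
    case "DELETE": store.delete(doc);     return { status: 204 };
    default:       return { status: 405 }; // method not allowed
  }
}

handle("PUT", "/notes.txt", "hello");
console.log(handle("GET", "/notes.txt")); // → { status: 200, body: 'hello' }
handle("DELETE", "/notes.txt");
console.log(handle("GET", "/notes.txt")); // → { status: 404 }
```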
22. CONT……….
Header Source Contents
Accept Client The type of documents the client can handle
Accept-Charset Client The character sets that are acceptable to the client
Accept-Encoding Client The document encodings the client can handle
Accept-Language Client The natural language the client can handle
Authorization Client A list of the client's credentials
WWW-Authenticate Server Security challenge the client should respond to
Date Both Date and time the message was sent
ETag Server The tags associated with the returned document
Expires Server How long the response remains valid
From Client The client's e-mail address
Host Client The TCP address of the document's server
If-Match Client The tags the document should have
If-None-Match Client The tags the document should not have
If-Modified-Since Client Tells the server to return a document only if it has been
modified since the specified time
If-Unmodified-Since Client Tells the server to return a document only if it has not been
modified since the specified time
Last-Modified Server The time the returned document was last modified
Location Server A document reference to which the client should redirect its
request
Referer Client Refers to client's most recently requested document
Upgrade Both The application protocol the sender wants to switch to
Warning Both Information about the status of the data in the message
• Some HTTP message headers.
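Headers like If-Modified-Since let a client revalidate a cached copy instead of refetching it; a sketch of the server-side check (dates are illustrative):

```javascript
// Decide between 200 (send the document again) and 304 (Not Modified)
// from the If-Modified-Since header of a conditional GET.
function conditionalGet(lastModified, ifModifiedSince) {
  return new Date(lastModified) > new Date(ifModifiedSince)
    ? 200   // document changed since the client cached it
    : 304;  // client's cached copy is still valid
}

console.log(conditionalGet("2020-05-02", "2020-05-01")); // → 200
console.log(conditionalGet("2020-05-01", "2020-05-01")); // → 304
```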
23. Processes
A plug-in is a small program that can be dynamically
loaded into a browser for handling a specific document
type.
When a browser encounters a document type for which
it needs a plug-in, it loads the plug-in locally and creates
an instance.
The plug-in is removed from the browser when it is no
longer needed.
1. Clients:
25. CONT……..
• Using a Web proxy when the browser does not speak FTP.
26. Important: The majority of Web servers are
configured Apache servers, which break down the
handling of each HTTP request into eight phases. This
approach allows flexible configuration of servers.
2. Servers:
• General organization of the Apache Web server.
27. CONT…….
In order to invoke the appropriate handler at the right time,
processing HTTP requests is broken down into several phases.
A module can register a handler for a specific phase.
Whenever a phase is reached, the core module inspects which
handlers have been registered for that phase and invokes one
of them.
1. Resolving document reference to local file name
2. Client authentication
3. Client access control
4. Request access control
5. MIME type determination of the response
6. General phase for handling leftovers
7. Transmission of the response
8. Logging data on the processing of the request
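The register-and-invoke pattern described above can be sketched as follows (phase and handler names are illustrative, not Apache's real module API):

```javascript
// Handlers are registered per phase; the core walks the phases in order
// and invokes each registered handler, threading the request along.
const phases = ["resolve", "authenticate", "access", "mime", "respond", "log"];
const registry = new Map(phases.map((p) => [p, []]));

function register(phase, handler) {
  registry.get(phase).push(handler);
}

function handleRequest(request) {
  for (const phase of phases) {
    for (const handler of registry.get(phase)) {
      request = handler(request);
    }
  }
  return request;
}

// Two illustrative module handlers: name resolution and MIME typing.
register("resolve", (req) => ({ ...req, file: "/docs" + req.url }));
register("mime", (req) => ({ ...req, type: "text/html" }));

console.log(handleRequest({ url: "/index.html" }));
// → { url: '/index.html', file: '/docs/index.html', type: 'text/html' }
```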
28. Server Clusters
• Essence: To improve performance and availability,
WWW servers are often clustered in a way that is
transparent to clients:
• The principle of using a cluster of workstations to implement a Web service.
29. CONT…….
• Problem: The front end may easily get overloaded,
so that special measures need to be taken.
• Transport-layer switching: Front end simply
passes the TCP request to one of the servers,
taking some performance metric into account.
• Content-aware distribution: Front end reads the
content of the HTTP request and then selects the
best server.
• A crucial aspect of this organization is the design of
the front end, as it can easily become a serious
performance bottleneck.
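The two distribution policies can be contrasted in a few lines (server names and the routing rule are illustrative):

```javascript
// Transport-layer switching: the front end never looks at the request;
// it just spreads TCP connections across the servers (round-robin here).
const servers = ["s1", "s2", "s3"];
let next = 0;
function transportSwitch() {
  const s = servers[next];
  next = (next + 1) % servers.length;
  return s;
}

// Content-aware distribution: the front end inspects the requested URL
// and routes, e.g., all images to the server that holds them.
function contentAware(path) {
  return path.startsWith("/images/") ? "s3" : transportSwitch();
}

console.log(contentAware("/images/logo.gif")); // → s3
```

Content-aware distribution gives better cache locality on each server, at the cost of the front end parsing every HTTP request.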
31. CONT……
(b) A scalable content-aware cluster of Web servers.
32. 1. URL (Uniform Resource Locator):
A Uniform Resource Locator tells how and
where to access a resource.
Naming
• Often-used structures for URLs.
a) Using only a DNS name.
b) Combining a DNS name with a port number.
c) Combining an IP address with a port number.
33. CONT…….
Name Used for Example
http HTTP http://www.cs.vu.nl:80/globe
ftp FTP ftp://ftp.cs.vu.nl/pub/minix/README
file Local file file:/edu/book/work/chp/11/11
data Inline data data:text/plain;charset=iso-8859-7,%e1%e2%e3
telnet Remote login telnet://flits.cs.vu.nl
tel Telephone tel:+31201234567
modem Modem modem:+31201234567;type=v32
Examples of URLs.
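Node's built-in WHATWG URL class recovers these parts; a quick check against two examples from the table:

```javascript
// Dissect two URLs from the table into scheme, host, and path.
const web = new URL("http://www.cs.vu.nl:80/globe");
console.log(web.protocol, web.hostname, web.pathname);
// protocol "http:", hostname "www.cs.vu.nl", pathname "/globe"
// (port 80 is HTTP's default, so the parser drops it)

const remote = new URL("telnet://flits.cs.vu.nl");
console.log(remote.protocol, remote.hostname);
// protocol "telnet:", hostname "flits.cs.vu.nl"
```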
34. 2. URN (Uniform Resource Name):
URNs are location-independent references to
documents.
The general structure of a URN
• A typical example of a URN is the one used for
identifying books by means of their ISBN such as
urn:isbn:0-13-349945-6
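The urn:&lt;namespace&gt;:&lt;name&gt; structure splits mechanically; a minimal sketch:

```javascript
// Split a URN into its scheme, namespace identifier (NID), and
// namespace-specific string (NSS).
function parseUrn(urn) {
  const [scheme, nid, ...rest] = urn.split(":");
  return { scheme, nid, nss: rest.join(":") };
}

console.log(parseUrn("urn:isbn:0-13-349945-6"));
// → { scheme: 'urn', nid: 'isbn', nss: '0-13-349945-6' }
```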
35. Synchronization: WebDAV
• Problem: There is a growing need for collaborative
authoring of Web documents, but bare-bones HTTP can’t
help here.
• Solution: Web Distributed Authoring and Versioning.
• Supports exclusive and shared write locks, which
operate on entire documents
• A lock is passed by means of a lock token; the server
registers the client(s) holding the lock
• Clients modify the document locally and post it back
to the server along with the lock token
• Note: There is no specific support for crashed clients
holding a lock.
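The lock-token idea can be sketched with an in-memory registry. The token format and function names are illustrative; real WebDAV clients acquire tokens via LOCK/UNLOCK requests, and 423 Locked is the status for a write without a valid token:

```javascript
// Exclusive write locks on whole documents, handed out as tokens.
const locks = new Map(); // document → token
let counter = 0;

function lock(doc) {
  if (locks.has(doc)) return null; // exclusive: already held
  const token = `opaquelocktoken:${++counter}`;
  locks.set(doc, token);
  return token;
}

function put(doc, token) {
  // A write must present the token registered for the document.
  return locks.get(doc) === token ? 200 : 423; // 423 Locked
}

function unlock(doc, token) {
  if (locks.get(doc) === token) locks.delete(doc);
}

const t = lock("/report.html");
console.log(put("/report.html", t));       // → 200
console.log(put("/report.html", "other")); // → 423
console.log(lock("/report.html"));         // → null (exclusive)
```

Note the caveat from the slide: nothing here recovers the lock if the client holding `t` crashes.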
36. Caching and Replication
Web Proxy Caching:
• Basic idea: Sites install a separate proxy server that
handles all outgoing requests. Proxies subsequently
cache incoming documents. Cache-consistency
protocols:
• Always verify validity by contacting server
• Age-based consistency:
Texpire = α·(Tcached – Tlast_modified) + Tcached
• Cooperative caching, by which you first check your
neighbors on a cache miss:
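The age-based rule above says: the longer a document had gone unmodified when it was cached, the longer the cached copy may be served. A direct transcription (α and the timestamps are illustrative numbers):

```javascript
// Texpire = alpha * (Tcached - Tlast_modified) + Tcached
function expireTime(tCached, tLastModified, alpha) {
  return alpha * (tCached - tLastModified) + tCached;
}

// A document unmodified for 10 days when cached at t = 100 (days),
// with alpha = 0.2, stays fresh for 2 more days:
console.log(expireTime(100, 90, 0.2)); // → 102
```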
38. • A nontransparent form of replication that is widely
deployed is to make an entire copy of a web site
available at a different server. This approach is also
called Mirroring.
• Content Delivery Network: CDNs act as Web hosting
services to replicate documents across the Internet
providing their customers guarantees on high availability
and performance (example: Akamai).
Server Replication:
40. • Transport Layer Security: Modern version of the
Secure Sockets Layer (SSL), which “sits”
between the transport layer and application protocols.
Relatively simple protocol that can support mutual
authentication using certificates:
Security
The position of TLS in the Internet protocol stack.
42. CONT……
1. First, the client informs the server of the cryptographic
algorithms it can handle, as well as any compression methods
it supports.
2. In the second phase, authentication takes place. The server is
always required to authenticate itself, for which reason it
passes the client a certificate containing its public key, signed
by a certification authority (CA).
3. If the server requires that the client be authenticated, the client
will have to send a certificate to the server.
4. The client generates a random number that will be used by
both sides for constructing a session key, and sends this
number to the server, encrypted with the server’s public key.
5. If client authentication is required, the client signs the number
with its private key.