The document discusses various aspects of application layer protocols. It begins with an overview of the protocol stack and how application data passes through its layers. It then describes several important application layer protocols: Domain Name System (DNS) for name resolution, Hypertext Transfer Protocol (HTTP) for the web, Simple Mail Transfer Protocol (SMTP) for email transmission, Post Office Protocol version 3 (POP3) and Internet Message Access Protocol version 4 (IMAP4) for email retrieval, and File Transfer Protocol (FTP) for file transfer. For each protocol, the document explains its basic architecture, message formats, and data transfer mechanisms.
To build a web app, one must:
1. Get a unique domain name so users can easily access the site.
2. Set up hosting for a server by renting a computer and web server space from a provider that will assign an IP address.
3. Write application code using common frameworks like LAMP or MEAN stacks, then deploy the code on the rented server. Once the domain is linked to the server IP, the web app will be accessible to users worldwide.
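The three steps above end with deploying application code on the rented server. As a minimal sketch of step 3, the snippet below uses Python's standard-library WSGI support instead of the LAMP or MEAN stacks named in the text; the port and the response body are illustrative choices, not part of the original description.

```python
# Minimal sketch of step 3: a tiny web application as a plain WSGI app.
# Any framework that speaks HTTP works the same way at this level.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Every request carries the path the browser asked for.
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from {path}".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# To serve it: make_server("", 8000, app).serve_forever()
# Once the domain's DNS record points at this server's IP (steps 1 and 2),
# browser requests to http://your-domain/ land in app().
```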
Tim Berners-Lee invented the World Wide Web in 1989-1990 at CERN as a means to transfer text and graphics simultaneously using a client/server data transfer protocol. The web relies on URIs for locating resources, HTTP for accessing resources over the web, and hypertext for easy navigation between resources using HTML. Key components included browsers to send HTTP requests to servers and render returned web pages constructed with HTML, CSS, and other files.
Building a Linux IPv6 DNS Server - Project review PPT v3.0 (Hari)
The document summarizes an academic project that implements IPv6 to address limitations in IPv4. The project involves setting up a client-server connection using IPv6, allowing clients to look up the server status and access resources across platforms. It discusses modules for the project schedule including kernel compilation, DNS configuration, establishing the client-server connection, and cross-platform testing. The conclusion states that the project provides a long-term, scalable, and secure IP network solution while resolving names for both IPv6 and IPv4 hosts.
The document summarizes an academic project to implement IPv6. It discusses replacing the existing IPv4 network with IPv6 to gain a larger address space and integrated security features. The project is split into modules, including kernel compilation, client-server connections, lookup states, and cross-platform access. The conclusion states that IPv6 provides long-term scalability, reliability, and security for IP networks.
Building Linux IPv6 DNS Server (Complete Presentation) - Hari
This document summarizes a project that implemented IPv6 and enabled cross-platform communication between IPv6 and IPv4 devices. The project involved compiling a new kernel with IPv6 support, setting up a lookup service to register IPv6 clients, configuring DNS to resolve IPv6 and IPv4 names, and testing cross-platform communication between an IPv6 server and IPv4 and IPv6 clients. The conclusion is that the project provides a scalable and secure IPv6 solution to resolve long-term addressing issues and allows communication across different platforms.
The document provides an overview of fundamental web concepts including:
- The Internet is a global network of interconnected computer networks that allows billions of devices to share information and communicate.
- IP addresses are unique numeric identifiers assigned to devices connected to a network that are used to locate and address specific routers or hosts.
- Domain names are human-friendly names like "google.com" that are translated to IP addresses by the Domain Name System (DNS) to locate websites.
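The name-to-address translation in the last bullet can be observed directly from Python's standard library; `getaddrinfo` asks the system resolver, so the addresses returned depend on the local DNS configuration.

```python
# The DNS translation described above: getaddrinfo asks the resolver to
# map a human-friendly name to the IP addresses registered for it.
import socket

def resolve(name):
    # Collect the distinct addresses the resolver found for the name.
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})
```

For example, `resolve("localhost")` typically returns the loopback addresses `127.0.0.1` and/or `::1` without touching the network.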
This presentation describes how a web browser works. From the user's request, through request processing, the DNS lookup, generating the HTTP request, and receiving the HTTP response, to finally rendering the HTML page and displaying the webpage to the user, the presentation covers the entire process of a browser fetching a webpage.
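The middle of that pipeline can be sketched in a few lines: the request message a browser generates and the first thing it reads back from the response. The helper names below are illustrative; DNS lookup and page rendering are separate steps and are omitted here.

```python
# Sketch of the HTTP leg of the browser pipeline described above.
def build_request(host, path="/"):
    # Request line plus the minimal headers an HTTP/1.1 client must send.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n")

def parse_status(response_bytes):
    # The status line is the first CRLF-terminated line,
    # e.g. b"HTTP/1.1 200 OK" -> (200, "OK").
    version, code, reason = response_bytes.split(b"\r\n", 1)[0].split(b" ", 2)
    return int(code.decode("ascii")), reason.decode("ascii")
```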
PS: Please download to see the entire presentation with the animations included. Without the animations, a few slides may be confusing.
Thank you.
Opal: Simple Web Services Wrappers for Scientific Applications (Sriram Krishnan)
The grid-based infrastructure enables large-scale scientific applications to be run on distributed resources and coupled in innovative ways. In practice, however, grid resources are not easy for end-users, who must learn how to generate security credentials, stage inputs and outputs, access grid-based schedulers, and install complex client software. There is a pressing need to provide transparent access to these resources so that end-users are shielded from the complicated details and free to concentrate on their domain science. Scientific applications wrapped as Web services alleviate some of these problems by hiding the complexities of the back-end security and computational infrastructure, exposing only a simple SOAP API that can be accessed programmatically by application-specific user interfaces. However, writing the application services that access grid resources can be quite complicated, especially if the work has to be replicated for every application. In this presentation, we present Opal, a toolkit for wrapping scientific applications as Web services in a matter of hours, providing features such as scheduling, standards-based grid security, and data management in an easy-to-use and configurable manner.
Demystifying SharePoint Infrastructure - for NON-IT People (SPC Adriatics)
This talk is not aimed at SharePoint infrastructure administrators (though new ones still figuring things out are welcome)! Instead, it's for the rest of the SharePoint team: come learn about the basic building blocks of SharePoint infrastructure, things like DNS, load balancing, AD, high availability and disaster recovery, backup options, database options, and some of the core components of Windows, all in an understandable way so you can speak the lingo and seem really smart!
Zvonimir Mavretić
The document discusses the course outcomes and modules for a Computer Network course. The course aims to help students understand networking concepts and protocols at different layers. It will cover topics like network architectures, protocols, configurations, and analysis of simple networks. The textbook listed is Data Communications and Networking by Forouzan. Module 5 focuses on the application layer and protocols like SMTP, FTP, DNS etc. It also discusses client-server and peer-to-peer paradigms along with HTTP, web clients and servers, URLs, and caching using proxy servers.
Introduction to the Internet and Web.pptx (hishamousl)
The document provides an introduction to the Internet and the World Wide Web. It defines the Internet as a global network of interconnected computer networks, and notes that no single entity controls it. It describes how the World Wide Web uses common protocols to allow computers to share text, graphics, and multimedia over the Internet. It also defines key concepts like URLs, domains, IP addresses, browsers, servers, and the client-server model.
Application and Transport Layer - a practical approach (Sarah R. Dowlath)
This presentation was done for a networking course. It shows, from a practical standpoint, how the application layer and the transport layer communicate with each other and operate as a whole to get the job done, giving the reader more insight into how the pieces come together in an IT networking world.
The document provides an overview of Hadoop including:
- A brief history of Hadoop and its origins from Nutch.
- An overview of the Hadoop architecture including HDFS and MapReduce.
- Examples of how companies like Yahoo, Facebook and Amazon use Hadoop at large scales to process petabytes of data.
This document provides an overview of the Internet and Java programming. It discusses the evolution of the Internet from early protocols like email and FTP to the development of the World Wide Web. It also explains key Internet technologies like HTML, URLs, and HTTP. The document then introduces Java, covering its portability, object-oriented features, and use for developing interactive applets and applications. It includes examples of simple Java code and discusses how Java code is compiled and executed.
The document provides an introduction to computer networks and the internet. It discusses key topics such as:
- What the internet is and how it allows global communication and information sharing through interconnected networks using protocols like IP.
- Popular internet applications include the world wide web, email, file transfer, and streaming.
- The differences between "internet" and "Internet" and what is needed to visit websites, including having a machine with TCP/IP, a gateway, DNS server, browser, and URL or IP address.
- Other topics covered include IP addresses, domains, URLs, internet browsers, creating basic web pages, and protecting computers from viruses and spyware.
This document summarizes Evans Ye's presentation on using Apache HBase to search network traffic logs. It describes the problem of searching large netflow logs, an initial solution design using HBase, and lessons learned. Performance testing showed the initial design did not scale to their needs. The solution was improved by changing the HBase row keys and using filters to better query the data and meet requirements. Flume was used to ingest netflow logs into HBase.
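The row-key change that summary describes can be illustrated in miniature. The field names below (`src_ip`, `timestamp`) are illustrative, not the actual keys from the talk; the idea is to lead the key with the attribute queried most often, so that a scan for one host's traffic covers a contiguous key range instead of filtering the whole table.

```python
# Sketch of a scan-friendly HBase row key for netflow logs.
def netflow_row_key(src_ip, timestamp_ms):
    # Zero-pad the timestamp so byte-wise (lexicographic) ordering matches
    # numeric time ordering within each IP's key range.
    return f"{src_ip}#{timestamp_ms:013d}"

def scan_range_for_ip(src_ip):
    # An HBase scan over [start, stop) covers exactly this IP's rows:
    # '$' sorts just after '#', so every "ip#..." key falls in between.
    return f"{src_ip}#", f"{src_ip}$"
```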
Advanced Web Design And Development BIT 3207 (Lori Head)
This document provides an overview of the key concepts for the Advanced Web Design and Development course (BIT 3207). It lists recommended books and motivation for learning web design. It then covers network fundamentals including IP, IP addressing, transport layer protocols, TCP connections, HTTP protocol, URLs, status codes and methods. It also discusses client-side components like browsers, HTML, HTML5, XML and JavaScript. On the server-side it covers servers, web servers, CGI, JSP, ASP/PHP and databases. It concludes with an explanation of 3-tier architecture and the layers of presentation, application and data.
The document provides information about the application layer and various application layer protocols. It discusses:
1) The application layer provides services to end users through protocols that allow software to send and receive information for users. Common application layer protocols include HTTP, FTP, SMTP, and DNS.
2) There are two main paradigms for application layer communication - the client-server paradigm where a server continuously provides services to clients, and the peer-to-peer paradigm where nodes can both request and provide services.
3) HTTP is the main application layer protocol for the web and uses either non-persistent or persistent connections. HTTP defines request and response message formats.
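The persistent versus non-persistent distinction in item 3 comes down to connection setup cost. A back-of-the-envelope model, counting only round-trip times (RTTs) and ignoring transmission time, shows why persistent connections win when a page has many objects.

```python
# Toy cost model: each TCP handshake costs one RTT, and each
# request/response exchange costs another.
def total_rtts(num_objects, persistent):
    if persistent:
        # One handshake, then one round trip per object over the same connection.
        return 1 + num_objects
    # Non-persistent HTTP pays for a fresh handshake before every object.
    return 2 * num_objects
```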
CDNs improve response times and enable streaming of audio/video by distributing content across multiple servers located close to users. DNS-based CDNs select the nearest surrogate server using DNS lookups, but this has delays. Service-oriented routers select servers based on packet contents and server loads, improving performance. NetServ routers can dynamically deploy "MicroCDN" modules to cache content on nearby routers for future user requests.
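The DNS-based selection described above can be modeled as a lookup over estimated distances: the CDN's authoritative DNS answers each client with the surrogate judged closest. The region names and RTT figures below are made up for illustration, not real measurements.

```python
# Toy model of DNS-based surrogate selection in a CDN.
def pick_surrogate(client_region, surrogates):
    # surrogates: {server_name: {region: estimated_rtt_ms}}.
    # Unknown regions get infinite cost so they are never preferred.
    return min(surrogates,
               key=lambda s: surrogates[s].get(client_region, float("inf")))
```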
Topics:
- Web Architecture Overview
- HTTP (Hypertext Transfer Protocol)
- REST (Representational State Transfer)
- JSON (JavaScript Object Notation)
Slides for the course of "Ambient Intelligence: Technology and Design" given at Politecnico di Torino during year 2013/2014.
Course website: http://bit.ly/polito-ami
TCP/IP is the standard communication protocol on the internet. It is comprised of several layers including application, transport, internet, and link layers. The transport layer includes TCP and UDP which provide connection-oriented and connectionless data transmission respectively. TCP ensures reliable data delivery through features like connections, acknowledgments, and flow control. IPv6 is the latest version of the Internet Protocol which addresses the shortcomings of IPv4 like limited address space. IPv6 features include a larger 128-bit address space, simplified header format, built-in security, and autoconfiguration capabilities.
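The connectionless side of that contrast is easy to demonstrate on the loopback interface: a UDP sender needs no handshake before its first datagram, whereas a TCP client would have to `connect()` first. This is a minimal sketch, not a production echo service.

```python
# UDP's connectionless delivery, shown on loopback: the client can sendto()
# immediately, with no connection established beforehand.
import socket

def udp_send_once(payload):
    # A throwaway "server" socket bound to an ephemeral loopback port.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))
    server.settimeout(5)
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        client.sendto(payload, server.getsockname())  # no handshake needed
        data, addr = server.recvfrom(4096)
    finally:
        server.close()
        client.close()
    return data
```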
This document provides an overview of slides for a chapter on the network layer data plane from the textbook "Computer Networking: A Top-Down Approach". The slides cover topics including network layer services, forwarding versus routing, router architecture, addressing, and longest prefix matching. The author provides the slides freely for educational use and asks that their source and copyright be acknowledged.
Building high performance microservices in finance with Apache Thrift (RX-M Enterprises LLC)
Apache Roadshow Chicago Talk on May 14, 2019
In this talk we’ll look at the ways Apache Thrift can solve performance problems commonly facing next generation applications deployed in performance sensitive capital markets and banking environments. The talk will include practical examples illustrating the construction, performance and resource utilization benefits of Apache Thrift. Apache Thrift is a high-performance cross platform RPC and serialization framework designed to make it possible for organizations to specify interfaces and application wide data structures suitable for serialization and transport over a wide variety of schemes. Due to the unparalleled set of languages supported by Apache Thrift, these interfaces and structs have similar interoperability to REST type services with an order of magnitude improvement in performance. Apache Thrift services are also a perfect fit for container technology, using considerably fewer resources than traditional application server style deployments. Decomposing applications into microservices, packaging them into containers and orchestrating them on systems like Kubernetes can bring great value to an organization; however, it can also take a very fast monolithic application and turn it into a high latency web of slow, resource hungry services. Apache Thrift is a perfect solution to the performance and resource ills of many microservice based endeavors.
This document provides an overview of Hadoop and related Apache projects. It begins with an introduction to Hadoop, explaining why it was created and who uses it. It then discusses HDFS and its goals of storing huge datasets across commodity hardware. Key components of HDFS like the NameNode, DataNodes and block placement are explained. The document also covers MapReduce and provides a word count example. Finally, it briefly introduces related Apache projects like Pig, HBase and Hive that build upon Hadoop.
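The word-count example mentioned in that overview is the canonical MapReduce illustration. The sketch below models the two phases in plain Python; it is not Hadoop code, and `sorted()` stands in for the shuffle step Hadoop performs between map and reduce.

```python
# Word count as the two MapReduce phases, modeled in plain Python.
from itertools import groupby

def map_phase(lines):
    # Map: emit (word, 1) for every word in the input.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Hadoop sorts map output by key before reducing ("shuffle");
    # sorted() plays that role here. Reduce: sum the counts per word.
    counts = {}
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        counts[key] = sum(v for _, v in group)
    return counts
```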
This document provides an overview of Hadoop and related Apache projects. It begins with an introduction to Hadoop, explaining why it was created and who uses it. It then discusses HDFS and its goals, architecture, and functions. Next, it covers MapReduce, providing examples and explaining features like locality optimizations. Finally, it briefly introduces related subprojects like Pig, HBase, Hive, and Zookeeper that build upon Hadoop.
The document discusses data warehousing and online analytical processing (OLAP). It covers topics like data warehouse modeling, design, implementation, usage, and efficient processing of OLAP queries. Attribute-oriented induction for data generalization is also introduced, which allows interactive exploration of generalized data relationships through operations like drilling and pivoting. The key aspects and techniques involved in building and analyzing data warehouses are summarized.
The document appears to be a list of candidates who have scored 100/100 or near 100/100 on an exam from various colleges in India. It includes the candidate's name, college, and score. There are over 100 entries in the list with candidates from colleges all over India, primarily from the state of Karnataka. The majority of candidates scored 100/100 but some scored 98/100 or 96/100.
The document discusses the data link layer and its responsibilities. Specifically:
1) The data link layer transforms the physical layer into a link responsible for node-to-node communication. It is responsible for framing, addressing, flow control, error control, and media access control.
2) It provides services to the network layer like transferring data packets between network layers on different machines and offering various service models like unacknowledged connectionless, acknowledged connectionless, and acknowledged connection-oriented.
3) Key data link layer protocols are discussed including framing, error detection using CRC, flow control, and elementary protocol examples like unrestricted simplex and stop-and-wait.
Analog and digital data and signals are discussed. Analog data and signals are continuous while digital data and signals are discrete. Periodic analog signals like sine waves are examined. Digital signals can be represented as composite analog signals. For transmission, digital signals must be converted to analog signals if the channel is bandpass. Transmission is impaired by attenuation, distortion, and noise.
This document discusses different methods for digital-to-analog conversion including amplitude shift keying (ASK), frequency shift keying (FSK), phase shift keying (PSK), and quadrature amplitude modulation (QAM). ASK encodes digital data by changing the amplitude of an analog carrier signal. FSK encodes by changing the frequency of the carrier signal. PSK encodes by changing the phase of the carrier signal. QAM encodes multiple bits onto a signal by using two carrier signals that are 90 degrees out of phase. The document provides examples and diagrams to illustrate how each encoding method works.
This document provides an overview of the Database Management Systems -20ISE43A course. It lists the required textbooks and references. It then outlines the 5 modules that will be covered in the course: introduction to databases, entity relationship diagrams, the relational model, relational algebra, and advanced SQL and transaction management. The document also lists the course outcomes and provides brief descriptions of some of the key topics that will be covered, including embedded SQL, dynamic SQL, database stored procedures, transaction concepts, and concurrency issues.
The document discusses object-oriented programming concepts like classes, objects, encapsulation, abstraction, and information hiding. It provides examples of procedural programming versus object-oriented programming. Key topics from the document include defining a C++ class, creating objects from classes, accessing data members of objects, defining member functions within and outside of classes, and the principles of data encapsulation, abstraction, and information hiding in OOP. The document also contrasts procedural, object-oriented, and generic programming techniques.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
1. INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR
Department of Computer Science and Engineering
Sandip Chakraborty
sandipc@cse.iitkgp.ac.in
Rajat Subhra Chakraborty
rschakraborty@cse.iitkgp.ac.in
CS 31006: Computer Networks – Application Layer
2. Indian Institute of Technology Kharagpur
Protocol Stack Implementation in a Host
(figure: the five layers map onto where they are implemented in a host -- Application in user software; Transport and Network in the kernel; Data Link in firmware and device drivers; Physical in hardware)
3. Indian Institute of Technology Kharagpur
How Application Data Passes Through Different Layers
(figure: each layer adds its own header as the data descends the stack)
- Application: HTTP Header | HTTP Data
- Transport: TCP Header | HTTP Header | HTTP Data
- Network: IP Header | TCP Header | HTTP Header | HTTP Data
- Data Link: MAC Header | IP Header | TCP Header | HTTP Header | HTTP Data
- Physical: PHY Header | MAC Header | IP Header | TCP Header | HTTP Header | HTTP Data | PHY Trailer
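The layering in the figure can be sketched in a few lines of code. This is an illustrative toy (the header strings and the `encapsulate` name are invented for this example), not a real protocol implementation:

```python
# Toy illustration of encapsulation: each layer prepends its header as the
# application data descends the stack; the physical layer also appends a
# trailer. The header strings are placeholders, not real protocol headers.
def encapsulate(app_data: bytes) -> bytes:
    packet = app_data
    for hdr in (b"TCP|", b"IP|", b"MAC|"):  # transport, network, data link
        packet = hdr + packet
    return b"PHY|" + packet + b"|PHY"       # physical header and trailer

frame = encapsulate(b"HTTP-header+HTTP-data")
# frame now reads PHY|MAC|IP|TCP|HTTP-header+HTTP-data|PHY,
# mirroring the layering shown on the slide.
```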
4. Indian Institute of Technology Kharagpur
Application Layer Interfacing
(figure: multiple applications interface with the Transport layer)
- UDP: end-to-end packet delivery
- TCP: connection establishment, reliable data delivery, flow and congestion control, ordered packet delivery
- Applications 1 to 4 sit above the Transport layer; Network and Data Link lie below
5. Indian Institute of Technology Kharagpur
Application Layer Interfacing
(figure: application protocols over the Transport layer)
- UDP: end-to-end packet delivery
- TCP: connection establishment, reliable data delivery, flow and congestion control, ordered packet delivery
- Applications: Name Service (DNS), Web (HTTP), Email (SMTP, POP, IMAP), File Transfer (FTP)
6. Indian Institute of Technology Kharagpur
• Assigns unique names to an IP address (machine interface)
• ARPANET: a file hosts.txt listed all the computer names and their IP addresses
• To resolve host name conflicts over the Internet, a naming hierarchy needs to be managed
• Rooted at one organization: the Internet Corporation for Assigned Names and Numbers (ICANN)
Domain Name System (DNS)
7. Indian Institute of Technology Kharagpur
The DNS Name Space
• The top level domains are run by registrars appointed by ICANN
• Name registrar for India (.in domain): registry.in (National Internet Exchange
of India – NIXI)
Source: Computer Networks
(5th Edition) by Tanenbaum,
Wetherell
10. Indian Institute of Technology Kharagpur
• The Domain Name Space and Resource Records: specifications for a tree-structured namespace and the data associated with names.
• Name Servers: server programs that hold information about the domain tree's structure and set information
• A particular name server has complete information about a subset of the domain space
• Name servers know the parts of the domain tree for which they have complete information -- a name server is said to be an AUTHORITY for these parts of the namespace
Elements of DNS (RFC 1034)
11. Indian Institute of Technology Kharagpur
• Divide registries into non-overlapping zones – every zone has a name server
Name Servers
Source: Computer Networks
(5th Edition) by Tanenbaum,
Wetherell
12. Indian Institute of Technology Kharagpur
• The Domain Name Space and Resource Records: specifications for a tree-structured namespace and the data associated with names.
• Name Servers: server programs that hold information about the domain tree's structure and set information
• A particular name server has complete information about a subset of the domain space
• Name servers know the parts of the domain tree for which they have complete information -- a name server is said to be an AUTHORITY for these parts of the namespace
• Resolvers: programs that extract information from name servers in response to client requests
Elements of DNS (RFC 1034)
13. Indian Institute of Technology Kharagpur
• Every domain has a set of resource records associated with it – DNS
database
Domain Resource Records
14. Indian Institute of Technology Kharagpur
• Domain_name: Domain to which the record applies.
• Time_to_live: Time for which this record is active – volatile records may be
assigned a small value
• Class: Normally IN – Internet resources
• Type: What type of record it is
• Value: Value of the record (IP address for A record type)
Domain Resource Records
15. Indian Institute of Technology Kharagpur
• A type records:
cse.iitkgp.ac.in 86400 IN A 203.110.245.250
• CNAME type records:
iitkgp.ac.in 86400 IN CNAME www.iitkgp.ac.in
• PTR type records (reverse lookup; the owner name is written under in-addr.arpa):
33.5.16.172.in-addr.arpa 86400 IN PTR www.iitkgp.ac.in
Domain Resource Records
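As a quick illustration of the five-field record layout described above, the sketch below (the `parse_rr` helper is invented for this example) splits a zone-file style line into its fields:

```python
def parse_rr(line: str) -> dict:
    """Split a 'name ttl class type value' resource record into a dict."""
    name, ttl, rclass, rtype, value = line.split()
    return {"name": name, "ttl": int(ttl), "class": rclass,
            "type": rtype, "value": value}

record = parse_rr("cse.iitkgp.ac.in 86400 IN A 203.110.245.250")
# record["type"] is "A" and record["value"] is the IP address
```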
16. Indian Institute of Technology Kharagpur
Sample DNS Database
Source: Computer Networks
(5th Edition) by Tanenbaum,
Wetherell
17. Indian Institute of Technology Kharagpur
Name Resolution – Looking Up for the Names
Source: Computer Networks
(5th Edition) by Tanenbaum,
Wetherell
19. Indian Institute of Technology Kharagpur
Name Resolution (nslookup)
One of the name servers for IITKGP
20. Indian Institute of Technology Kharagpur
Name Resolution (dig)
An authoritative record is one that comes from the authority that manages the record, and thus is always correct.
UDP message size: up to 4096 bytes
21. Indian Institute of Technology Kharagpur
• UDP is much faster; TCP requires handshake time. DNS uses a cascading approach for name resolution, and with TCP a connection setup would be required for every message.
• DNS requests and responses are generally very small and fit well within one UDP segment.
• UDP is not reliable; in DNS, reliability is ensured at the application layer. After a timeout, the DNS client resends the request. After a few consecutive timeouts (configurable at the client), the request is aborted with an error.
Why DNS Uses UDP
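To see how small a DNS request really is, the sketch below hand-builds a standard A-record query following the RFC 1035 wire format (the `build_dns_query` name and the query ID are chosen for this example). The whole message is a few dozen bytes, comfortably inside a single UDP datagram:

```python
import struct

def build_dns_query(hostname: str, qid: int = 0x1234) -> bytes:
    # Header: ID, flags (RD bit set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question: QNAME as length-prefixed labels, then QTYPE=A(1), QCLASS=IN(1)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in hostname.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

query = build_dns_query("cse.iitkgp.ac.in")
# 12-byte header + 18-byte question name + 4 bytes of type/class = 34 bytes
```

Sending this over UDP to port 53 and retrying after a timeout is exactly the application-layer reliability scheme the slide describes.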
22. Indian Institute of Technology Kharagpur
Application Layer Interfacing
(roadmap figure repeated: Name Service (DNS), Web (HTTP), Email (SMTP, POP, IMAP), and File Transfer (FTP) over the UDP/TCP transport layer)
23. Indian Institute of Technology Kharagpur
• Hypertext: a way to represent web content (text along with formatting)
• Hypertext Markup Language (HTML): a markup language to specify web data along with simple formatting (bold, italics, new line)
• A way to convert text-based information to graphics-based information
• Today, many graphics, scripts, and other resources are embedded inside HTML pages: CSS, JavaScript, etc.
The Web – Hypertext Transfer Protocol (HTTP)
24. Indian Institute of Technology Kharagpur
Differences between HTML Versions
Source: Computer Networks
(5th Edition) by Tanenbaum,
Wetherell
25. Indian Institute of Technology Kharagpur
• 1989 (CERN – European Center for Nuclear Research) – help large teams to
collaborate using a constantly changing collection of reports, blueprints,
drawings, photos and other documents
• The proposal came from Tim Berners-Lee
• A public demonstration at Hypertext ‘91 conference
• 1993 – The first graphical browser (Mosaic) developed by Marc Andreessen,
University of Illinois
• Andreessen formed the company Netscape Communications Corp
• Microsoft developed Internet Explorer – “browser war” between Internet Explorer
and Netscape Navigator
• 1994 – CERN and MIT signed an agreement to set up the World Wide Web Consortium (W3C)
A History of the Web
26. Indian Institute of Technology Kharagpur
The Web – Architectural Overview
Source: Computer Networks
(5th Edition) by Tanenbaum,
Wetherell
27. Indian Institute of Technology Kharagpur
• Three questions to be answered for accessing a web page
• What is the page called? (courses.html)
• Where is the page located? (cse.iitkgp.ac.in/~sandipc/)
• How can the page be accessed? (http://)
• Uniform Resource Locator (URL): each page is assigned a URL that effectively serves as the page's worldwide name
• A URL has three components:
• The protocol
• The qualified name of the machine on which the page is located
• The path uniquely indicating the specific page
HTTP – The Client Side
http://cse.iitkgp.ac.in/~sandipc/courses.html
28. Indian Institute of Technology Kharagpur
• The browser determines the URL
• The browser asks DNS for the IP address of the server cse.iitkgp.ac.in
• DNS replies with 203.110.245.250
• The browser makes a TCP connection to 203.110.245.250 on port 80, the
well known port for the HTTP protocol (Note: https uses 443)
• It sends over an HTTP request asking for the page ~sandipc/courses.html
• The cse.iitkgp.ac.in server sends the page as an HTTP response, for example
by sending the file /courses.html
• If the page includes URLs that are needed for display, the browser fetches the other URLs using the same process
The Steps When You Click http://cse.iitkgp.ac.in/~sandipc/courses.html
29. Indian Institute of Technology Kharagpur
The Steps When You Click http://cse.iitkgp.ac.in/~sandipc/courses.html
30. Indian Institute of Technology Kharagpur
• The browser displays the page courses.html
• The TCP connections are released if there are no other requests to the same
server for a short period.
The Steps When You Click http://cse.iitkgp.ac.in/~sandipc/courses.html
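The steps above can be reproduced by hand with sockets. The sketch below stands up a throwaway local server (the handler class and page content are invented for this example) and then plays the browser's role: connect, send a GET request, read the response:

```python
import socket
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Page(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>courses</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Page)   # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser": open a TCP connection and send an HTTP request by hand.
s = socket.create_connection(("127.0.0.1", server.server_port))
s.sendall(b"GET /~sandipc/courses.html HTTP/1.1\r\n"
          b"Host: 127.0.0.1\r\nConnection: close\r\n\r\n")
response = b""
while (chunk := s.recv(4096)):
    response += chunk
s.close()
server.shutdown()
# The first response line carries the status; the body follows the headers.
```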
31. Indian Institute of Technology Kharagpur
• Generalization of the URLs – specifies the pages only or partially refers the pages
without complete locations
• /images/iit_kgp.png – may become URL
https://cse.iitkgp.ac.in/images/iit_kgp.png if accessed from cse.iitkgp.ac.in
Uniform Resource Identifier (URI)
32. Indian Institute of Technology Kharagpur
• Accept a TCP connection from a client (a browser).
• Get the path to the page, which is the name of the file requested.
• Get the file (from disk).
• Send the content of the file to the client.
• Release (close) the TCP connection.
HTTP – The Server Side
33. Indian Institute of Technology Kharagpur
Multi-Threaded Server
Source: Computer Networks (5th Edition) by Tanenbaum, Wetherell
Serves multiple client
requests simultaneously
34. Indian Institute of Technology Kharagpur
• HTTP uses TCP to set up a connection between the server and the client. In general, the HTTP server runs at port 80 (default port) or 8080 (alternate port).
• HTTP 1.0 – After the connection was established, a single request was sent over and a single response was sent back; then the TCP connection was released.
• A separate connection was created for every object in the web page, so the overhead was high.
• Persistent connections (HTTP 1.1) – send additional requests and receive additional responses over a single TCP connection (connection reuse).
• It is also possible to pipeline requests.
Connections
35. Indian Institute of Technology Kharagpur
Connections
(figure: HTTP 1.0, HTTP 1.1 with persistent connections, and HTTP 1.1 with persistent, pipelined connections)
Source: Computer Networks (5th Edition) by Tanenbaum, Wetherell
Persistent connections: set via keep-alive information in the HTTP header (HTTP 1.0); in HTTP 1.1, all connections are persistent by default.
36. Indian Institute of Technology Kharagpur
• Specifies what an HTTP request will do
• GET filename HTTP/1.1
HTTP Request Methods
37. Indian Institute of Technology Kharagpur
HTTP Request Header Fields (Partial List)
38. Indian Institute of Technology Kharagpur
• Specifies the status of handling the request message.
HTTP Response
39. Indian Institute of Technology Kharagpur
HTTP Response Header Fields (Partial List)
40. Indian Institute of Technology Kharagpur
HTTP Caching
Source: Computer Networks (5th Edition) by Tanenbaum, Wetherell
41. Indian Institute of Technology Kharagpur
Dynamic Web Applications
Source: Computer Networks (5th Edition) by Tanenbaum, Wetherell
42. Indian Institute of Technology Kharagpur
• HTTP is by default a stateless protocol
• Every response corresponds to the previous request only; the protocol does not remember any state information, such as the last page accessed
• Cookies are used to store state information. The client forwards the additional information along with the request message by reading the cookie.
Cookies
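Python's standard library can illustrate the cookie round trip described above. A minimal sketch (the `session_id` name and value are invented for this example):

```python
from http.cookies import SimpleCookie

# Server side: attach state to the response via a Set-Cookie header.
jar = SimpleCookie()
jar["session_id"] = "abc123"
jar["session_id"]["path"] = "/"
set_cookie_header = jar.output()   # e.g. 'Set-Cookie: session_id=abc123; Path=/'

# Client side: on the next request, the browser echoes the cookie back,
# which the server parses to recover the stored state.
parsed = SimpleCookie()
parsed.load("session_id=abc123")
```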
43. Indian Institute of Technology Kharagpur
• Intercepts the TCP connections to process the HTTP data.
HTTP Proxy
44. Indian Institute of Technology Kharagpur
Application Layer Interfacing
(roadmap figure repeated: Name Service (DNS), Web (HTTP), Email (SMTP, POP, IMAP), and File Transfer (FTP) over the UDP/TCP transport layer)
45. Indian Institute of Technology Kharagpur
Electronic Mails – Architecture and Services
• User Agent: allows people to read and send emails.
• Message Transfer Agents (mail servers): move the message from the source to the destination
(figure: sandipc@cse.iitkgp.ernet.in → cse.iitkgp.ernet.in → iitg.ernet.in → sukumar@iitg.ernet.in)
46. Indian Institute of Technology Kharagpur
• System processes run in the background on mail servers (always available).
• Automatically move emails through the system from the originator to the
recipient
• Uses Simple Mail Transfer Protocol (SMTP) – RFC 821, RFC 5321
• Implements mailing lists: an identical copy of the message is delivered to everyone in the list (btech@iitkgp.ac.in)
• Implements Mailboxes, to store all the emails received for a user
Message Transfer Agents
47. Indian Institute of Technology Kharagpur
• An envelope containing message header and message body
Message Format (RFC 5322)
48. Indian Institute of Technology Kharagpur
• Header fields (for message transport):
The Internet Message Format (RFC 5322)
49. Indian Institute of Technology Kharagpur
• Header fields (additional fields for message description):
The Internet Message Format (RFC 5322)
50. Indian Institute of Technology Kharagpur
• ARPANET: email consisted exclusively of text messages written in English and
expressed in ASCII
• MIME: allows multi-language and multimedia content (audio, images, etc.) inside an email.
• Additional message headers for MIME:
MIME – The Multipurpose Internet Mail Extension
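The stdlib `email` package shows what MIME adds to a plain message. In this sketch the addresses come from the slides, while the subject and body text are invented for the example:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sandipc@cse.iitkgp.ernet.in"
msg["To"] = "sukumar@iitg.ernet.in"
msg["Subject"] = "MIME demo"
msg.set_content("Plain-text part")                       # text/plain body
msg.add_alternative("<b>HTML part</b>", subtype="html")  # add a text/html alternative

raw = msg.as_string()
# raw now carries a MIME-Version header and a multipart/alternative body
```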
51. Indian Institute of Technology Kharagpur
• Uses the SMTP protocol
• Email is delivered by having the sending computer establish a TCP connection to port 25 of the receiving computer.
Message Transfer
(figure: UA acts as SMTP client on any port → TCP → SMTP server at port 25 on cse.iitkgp.ernet.in, whose SMTP client relays → TCP → SMTP server at port 25 on iitg.ernet.in)
52. Indian Institute of Technology Kharagpur
Message Transfer (SMTP)
Source: Computer Networks
(5th Edition) by Tanenbaum,
Wetherell
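The SMTP exchange sketched in the figure boils down to a short command sequence. The helper below (its name and the example message are invented for this illustration) just lists the commands a client would issue; it does not open a network connection:

```python
def smtp_dialogue(sender, recipient, body_lines):
    """Return the client-side SMTP command sequence for one message."""
    cmds = ["HELO cse.iitkgp.ernet.in",   # identify the sending host
            f"MAIL FROM:<{sender}>",
            f"RCPT TO:<{recipient}>",
            "DATA"]
    cmds += body_lines
    cmds += [".", "QUIT"]                 # a lone '.' ends the message body
    return cmds

session = smtp_dialogue("sandipc@cse.iitkgp.ernet.in",
                        "sukumar@iitg.ernet.in",
                        ["Subject: Hello", "", "Test message"])
```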
53. Indian Institute of Technology Kharagpur
• Pull-type protocol: the UA at the receiver side pulls the emails from the mail server after login.
• Post Office Protocol, Version 3 (POP3) – an earlier protocol for email delivery
• Internet Message Access Protocol, Version 4 (IMAP v4) – RFC 3501
• The email server runs an IMAP server at port 143
• The user agent runs IMAP client
• The client connects to the server and issues mail delivery commands
Final Delivery
55. Indian Institute of Technology Kharagpur
Application Layer Interfacing
(roadmap figure repeated: Name Service (DNS), Web (HTTP), Email (SMTP, POP, IMAP), and File Transfer (FTP) over the UDP/TCP transport layer)
56. Indian Institute of Technology Kharagpur
• FTP is built on a client-server model (RFC 959)
• The client requests a file from the server, or sends a file to it
• The server responds with the file data, or stores the uploaded file
• Works in two modes – Active and Passive
File Transfer Protocol (FTP)
(figure: the FTP client sends a file request; the FTP server returns a response status and the file data)
57. Indian Institute of Technology Kharagpur
Active and Passive Modes of File Transfer
The FTP server uses two different ports:
• Port 21 (command or control port): for command message transfer
• Port 20 or a client-assigned port (data port): for data transfer
Image Source: http://henrydu.com/blog/how-to/ftp-active-mode-vs-passive-mode-106.html
58. Indian Institute of Technology Kharagpur
• Specifically to avoid busy waiting, and to keep the command channel lightweight.
• One could always multiplex command/control and data over one channel, but FTP is used for large file transfers; if the command channel were used for data transfer as well, the commands of other clients could experience higher queuing delay while one client is being served.
• The clients can continue sending and receiving control information while the data transfer is taking place.
Why There are Two Channels – Command Channel and Data Channel
59. Indian Institute of Technology Kharagpur
Why There are Two Modes in FTP?
Active Mode: the client informs the server of the port number where it is listening, and the server initiates the TCP connection to that port (a TCP server runs at the client side).
What if the client is behind a firewall and cannot accept a connection?
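In active mode, the client announces its listening endpoint with the PORT command, whose argument packs the IPv4 address and port into six decimal numbers h1,h2,h3,h4,p1,p2 with port = p1*256 + p2 (RFC 959). A sketch (the helper names are invented for this example):

```python
def encode_port_cmd(ip: str, port: int) -> str:
    """Build the FTP PORT command for an IPv4 address and port."""
    return "PORT " + ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

def decode_port_cmd(cmd: str):
    """Recover (ip, port) from a PORT command string."""
    parts = cmd.split()[1].split(",")
    return ".".join(parts[:4]), int(parts[4]) * 256 + int(parts[5])

cmd = encode_port_cmd("172.16.5.33", 5001)   # "PORT 172,16,5,33,19,137"
```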
60. Indian Institute of Technology Kharagpur
Why There are Two Modes in FTP?
Passive Mode: the server selects a random port, and the client initiates a TCP connection to that server port.
The server can serve multiple clients at different server data ports through different threads.
The clients always initiate both the command and the data transfer.
Image Source: http://henrydu.com/blog/how-to/ftp-active-mode-vs-passive-mode-106.html
61. Indian Institute of Technology Kharagpur
• Stream mode: Data is sent as a continuous stream, relieving FTP from doing
any processing. Rather, all processing is left up to TCP. No End-of-file
indicator is needed, unless the data is divided into records.
• Block mode: FTP breaks the data into several blocks (block header, byte
count, and data field) and then passes it on to TCP.
• Compressed mode: Data is compressed using a simple algorithm
(usually run-length encoding).
FTP Data Transfer Modes
Source: Wikipedia
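Compressed mode's run-length encoding can be sketched in a few lines. This is a toy version to show the idea; real FTP compressed mode wraps the runs in its own block format on the wire:

```python
def rle(data: bytes):
    """Collapse runs of equal bytes into (count, byte-value) pairs."""
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((j - i, data[i]))
        i = j
    return out

pairs = rle(b"aaabcc")   # three 'a's, one 'b', two 'c's
```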
62. Indian Institute of Technology Kharagpur
FTP Sample Commands and Response Codes
63. Indian Institute of Technology Kharagpur
• Next, we'll move on to the Transport Layer…