This is a project report on Linux server administration. It covers key network services installed and configured on Linux. The project was carried out on Red Hat Enterprise Linux 7.2.
Linux is an open-source operating system that originated as a personal project by Linus Torvalds in 1991. It can run on a variety of devices from servers and desktop computers to smartphones. Some key advantages of Linux include low cost, high performance, strong security, and versatility in being able to run on many system types. Popular Linux distributions include Red Hat Enterprise Linux, Debian, Ubuntu, and Mint. The document provides an overview of the history and development of Linux as well as common myths and facts about the operating system.
Virtual versions of servers, applications, networks and storage can be created through virtualization. Its main types include operating system virtualization (VMs), hardware virtualization, application-server virtualization, storage virtualization, network virtualization, administrative virtualization and application virtualization.
The document discusses DNS (Domain Name System) servers and how they work. It explains that DNS servers translate human-readable domain names to machine-readable IP addresses in 7 steps: 1) A request is made, 2) recursive DNS servers are queried, 3) root nameservers are queried, 4) TLD nameservers are queried, 5) authoritative nameservers are queried, 6) the IP address record is retrieved, and 7) the answer is received. DNS servers act like a phone book to lookup domain names and allow the internet to function by linking names to IP addresses.
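The resolution chain described above (steps 2 through 7) can be sketched as a toy lookup in Python. The server data here is invented for illustration; real resolvers speak the DNS wire protocol over UDP/TCP rather than consulting dictionaries.

```python
# A toy model of the DNS resolution steps above. The mappings are
# illustrative only; real servers hold delegations and resource records.
ROOT = {"com": "tld-server"}                      # root knows the TLD servers
TLD = {"example.com": "authoritative-server"}     # TLD knows authoritative servers
AUTHORITATIVE = {"example.com": "93.184.216.34"}  # authoritative holds the A record

def resolve(name):
    """Walk root -> TLD -> authoritative, as a recursive resolver would."""
    tld = name.rsplit(".", 1)[-1]
    if tld not in ROOT:                 # step 3: ask a root nameserver
        return None
    if name not in TLD:                 # step 4: ask the TLD nameserver
        return None
    return AUTHORITATIVE.get(name)      # steps 5-6: fetch the IP address record

print(resolve("example.com"))           # step 7: the answer is returned
```

A name whose TLD or zone is unknown falls out of the chain with no answer, which mirrors a failed lookup.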
Virtualization allows multiple operating systems to run simultaneously on a single computer by transforming hardware into software. It works by installing a virtualization layer, either using a bare-metal hypervisor that does not require an operating system or a hosted hypervisor that runs as an application on an operating system. Each operating system runs within an isolated virtual machine, which appears like a separate computer to users but shares the physical resources of the host computer. Different types of virtualization include full, para, and OS-level virtualization. Virtualization enables server consolidation and transformation of physical servers for multiple applications.
This document provides an overview of the Linux operating system. It discusses that Linux was originally developed in 1991 as a free Unix-like kernel and has since grown significantly through contributions from open source developers worldwide. It describes Linux's origins and key characteristics, such as being free and open source, highly customizable, stable, and secure. The document also outlines popular uses of Linux including on servers, smartphones, and embedded devices, and highlights some of its major advantages over other commercial operating systems.
Server virtualization concepts allow partitioning of physical servers into multiple virtual servers using virtualization software and hardware techniques. This improves resource utilization by running multiple virtual machines on a single physical server. Server virtualization provides benefits like reduced costs, higher efficiency, lower power consumption, and improved availability compared to running each application on its own physical server. Key components of server virtualization include virtual machines, hypervisors, CPU virtualization using techniques like Intel VT-x or AMD-V, memory virtualization, and I/O virtualization through methods like emulated, paravirtualized or direct I/O. KVM and QEMU are popular open source virtualization solutions, with KVM providing kernel-level virtualization support and QEMU providing user-space hardware emulation.
Learn about the essentials of the Domain Name System (DNS), including name resolution, different record types, roots, zones, authority and recursion.
See the full webinar and the rest of the series at https://www.thousandeyes.com/resources/intro-to-dns-webinar
This ppt gives information about:
1. Administering the server
2. Correcting installation problems
3. Setting up user accounts
4. Connecting to the network
5. Configuring utilities
This document discusses different virtualization techniques used for cloud computing and data centers. It begins by outlining the needs for virtualization in addressing issues like server underutilization and high power consumption in data centers. It then covers various types of virtualization including full virtualization, paravirtualization, and hardware-assisted virtualization. The document also discusses challenges of virtualizing x86 hardware and solutions like binary translation and using modified guest operating systems to enable paravirtualization. Finally, it mentions how newer CPUs support hardware virtualization to improve the efficiency and security of virtualization.
Linux Tutorial For Beginners | Linux Administration Tutorial | Linux Commands... (Edureka!)
This Linux tutorial will help you get started with Linux administration. It also introduces the basic Linux commands so that you can start using the Linux CLI. Watch the video to the very end to see all of the demonstrations. The topics covered in this tutorial are:
1) Why go for Linux?
2) Various distributions of Linux
3) Basic Linux commands: ls, cd, pwd, clear commands
4) Working with files & directories: cat, vi, gedit, mkdir, rmdir, rm commands
5) Managing file Permissions: chmod, chgrp, chown commands
6) Updating software packages from Linux repository
7) Compressing & Decompressing files using TAR command
8) Environment variables and Regular expressions
9) Starting and killing processes
10) Managing users
11) SSH protocol for accessing remote hosts
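Topic 9 above (starting and killing processes) can be sketched from Python's standard library. This assumes a Unix-like system where the `sleep` command is available; the same idea underlies the shell's `kill` and `wait`.

```python
# A sketch of starting and killing a process, mirroring `kill <pid>`.
# Assumes a Unix-like system with the `sleep` command on PATH.
import subprocess, signal

proc = subprocess.Popen(["sleep", "60"])   # start a long-running process
proc.send_signal(signal.SIGTERM)           # polite kill, like `kill <pid>`
proc.wait()                                # reap it, like `wait` in a shell
print(proc.returncode)                     # negative signal number on Unix
```

On Unix a process terminated by a signal reports a negative return code equal to the signal number.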
This document provides an introduction to virtualization. It defines virtualization as running multiple operating systems simultaneously on the same machine in isolation. A hypervisor is a software layer that sits between hardware and guest operating systems, allowing resources to be shared. There are two main types of hypervisors - bare-metal and hosted. Virtualization provides benefits like consolidation, redundancy, legacy system support, migration and centralized management. Key types of virtualization include server, desktop, application, memory, storage and network virtualization. Popular virtualization vendors for each type are also listed.
The document outlines a technical seminar on Linux administration presented by Yogesh K S. It discusses key topics like installing Linux, user and group management, security features like firewalls and SELinux, managing services, backups, and package management. The seminar covered essential admin tasks, tools, and commands for system installation, configuration, maintenance and security.
The document discusses Internet protocols and IPTables filtering. It provides an overview of Internet protocols, IP addressing, firewall utilities, and the different types of IPTables - Filter, NAT, and Mangle tables. The Filter table is used for filtering packets. The NAT table is used for network address translation. The Mangle table is used for specialized packet alterations. IPTables works by defining rules within chains to allow or block network traffic based on packet criteria.
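The first-match semantics of an IPTables chain can be modelled in a few lines of Python. The rules and the default policy below are invented for illustration; real IPTables matches on many more packet criteria and lives in the kernel.

```python
# A toy model of an IPTables Filter chain: packets are checked against
# each rule in order and the first matching rule decides the verdict;
# otherwise the chain's default policy applies.
RULES = [
    {"proto": "tcp", "dport": 22, "verdict": "ACCEPT"},   # allow SSH
    {"proto": "tcp", "dport": 23, "verdict": "DROP"},     # block Telnet
]
POLICY = "DROP"  # default policy when no rule matches

def filter_packet(packet, rules=RULES, policy=POLICY):
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule.items() if k != "verdict"):
            return rule["verdict"]
    return policy

print(filter_packet({"proto": "tcp", "dport": 22}))  # ACCEPT
```

Because evaluation stops at the first match, rule order matters: an early broad DROP would shadow a later specific ACCEPT, exactly as in a real chain.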
This document discusses CPU virtualization and scheduling techniques. It covers topics such as deprivileging the operating system, virtualization-unfriendly architectures like x86, hardware-assisted virtualization using VMX mode, and proportional-share scheduling. It also summarizes research on improving VM scheduling by making it task-aware to prioritize I/O-bound tasks and correlate I/O events with tasks to boost their performance while maintaining inter-VM fairness. The document provides historical context on the evolution of virtualization technologies and research challenges in building lightweight and intelligent VMM schedulers.
Linux is a freely distributed open source operating system based on Unix. It was developed in 1991 by Linus Torvalds and has gained popularity as a free alternative to proprietary operating systems. There are several popular Linux distributions including Red Hat Linux, Linux Mandrake, Debian/GNU, and SuSE Linux. These distributions bundle Linux with common software like the X Window System, KDE, and GNOME desktop environments. Hardware compatibility has improved with Linux supporting many modern components, though some proprietary drivers may need to be obtained from manufacturers.
Dynamic Host Configuration Protocol (DHCP) is used to automatically assign IP addresses, subnet masks, default gateways and other network configuration options to clients on a network. DHCP reduces network configuration workload. It uses a four step packet exchange process during the initial IP address lease and will attempt renewal at 50% and 87.5% of the lease time. DHCP servers must be authorized in Active Directory to lease addresses. Scopes are configured to define address ranges for clients, reservations assign specific addresses by MAC address, and relays allow a single DHCP server to service multiple subnets.
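The renewal timers mentioned above can be computed directly: a client attempts renewal with its original server at 50% of the lease time (T1) and falls back to rebinding with any server at 87.5% (T2). A minimal sketch:

```python
# DHCP lease timers: renewal (T1) at 50% of the lease, rebinding (T2)
# at 87.5%, as described above.
def dhcp_timers(lease_seconds):
    t1 = lease_seconds * 0.5     # renew with the original DHCP server
    t2 = lease_seconds * 0.875   # rebind with any DHCP server
    return t1, t2

t1, t2 = dhcp_timers(86400)      # a one-day lease
print(t1, t2)                    # 43200.0 75600.0
```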
I have tried my best to describe the Samba server in this presentation. I hope you enjoy it and find it helpful.
Thanks,
Veeral Arora
The document discusses different client/server database architectures including file server architecture, database server architecture, and three-tier architecture. It describes how processing is distributed between clients and servers in each architecture and some advantages and disadvantages of each.
A complete coverage of DNS and its features. This presentation balances the practical and theoretical aspects of DNS and is well suited to novice learners.
Virtualization allows multiple operating systems to run simultaneously on a single physical server using a hypervisor. This reduces costs by improving hardware utilization, lowering maintenance needs, and providing continuous server uptime. There are two main hypervisor types: native hypervisors have direct access to server hardware while hosted hypervisors run within an operating system. Virtualization offers advantages like zero downtime maintenance, dynamic resource allocation, and automated backups.
System and network administration: network services (Uc Man)
Network services like DNS, DHCP, FTP, SMTP, SNMP, proxy servers, and Active Directory Services provide shared resources to devices on a network. DNS in particular converts domain names to IP addresses, caching responses for a period of time specified by their Time to Live (TTL) value to reduce server load. However, DNS was not originally designed with security in mind and is vulnerable to issues like cache poisoning. DHCP automatically assigns temporary IP addresses to devices on a network. Active Directory is a directory service used by Windows domains to centrally manage network resources and user access through objects, sites, forests, trees and domains.
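The TTL-based caching described above can be sketched as a small cache class. This is an illustrative model, not a resolver implementation; the clock is injected so expiry can be exercised without waiting.

```python
# A minimal TTL cache in the spirit of a DNS resolver's cache: entries
# expire once their Time to Live elapses.
import time

class TTLCache:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.store = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl):
        self.store[name] = (address, self.clock() + ttl)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if self.clock() >= expiry:   # TTL elapsed: drop the stale record
            del self.store[name]
            return None
        return address
```

Serving repeated lookups from such a cache is what reduces load on upstream servers; it is also why a poisoned entry persists until its TTL runs out.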
This document discusses full virtualization techniques. It defines full virtualization as simulating hardware to allow any OS to run unmodified in a virtual machine. It describes the challenges of virtualizing the x86 architecture and how binary translation is used to allow guest OSes to run at a higher privilege level. The document outlines hosted and bare-metal virtualization architectures and their pros and cons. It provides examples of using full virtualization for desktop and server virtualization/cloud computing. It also gives steps to implement hosted full virtualization using Oracle VM VirtualBox on Windows 7.
The document discusses new features in Windows Server 2019 including Windows Admin Center, System Insight, Storage Migration Service, Storage Spaces Direct, and Storage Replica. It explains that Windows Admin Center is a browser-based tool for managing Windows servers and clients. Storage Migration Service allows migrating servers and data to new hardware or virtual machines. Storage Spaces Direct pools storage across servers for hyperconverged or converged deployments with options for mirroring or parity resiliency. Storage Replica enables replication of volumes for disaster recovery between servers or clusters.
The Network File System (NFS) is the most widely used network-based file system. NFS’s initial simple design and Sun Microsystems’ willingness to publicize the protocol and code samples to the community contributed to making NFS the most successful remote access file system. NFS implementations are available for numerous Unix systems, several Windows-based systems, and others.
An application server provides business logic for application programs and supports the construction of dynamic web pages. It allows applications to run on multiple parallel servers for improved scalability and performance. Key features include clustering for load distribution, failover for automatic switching to redundant servers, and load balancing to optimize resource utilization. Application servers provide advantages like centralized configuration, data integrity, and security. Common application servers include Java Enterprise Edition servers and the Zend platform for PHP applications.
What is Linux?
Command-line Interface, Shell & BASH
Popular commands
File Permissions and Owners
Installing programs
Piping and Scripting
Variables
Common applications in bioinformatics
Conclusion
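The "File Permissions and Owners" topic in the outline above can be sketched with the standard library: setting and reading a file's mode bits, mirroring `chmod 640 file` and inspecting the result as `stat` would.

```python
# Setting and reading Unix permission bits from Python, a sketch of
# what `chmod 640 file` does on the command line.
import os, stat, tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)                       # rw-r----- , like `chmod 640`
mode = stat.S_IMODE(os.stat(path).st_mode)  # extract just the permission bits
print(oct(mode))                            # 0o640
os.remove(path)
```

The octal digits map directly to owner, group, and other permissions (6 = rw-, 4 = r--, 0 = ---).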
The document provides an overview of cloud computing, including its key concepts and components. It discusses the different deployment models (public, private, hybrid, community clouds), service models (IaaS, PaaS, SaaS), characteristics, benefits, history and evolution. Communication protocols used in cloud computing like HTTP, HTTPS and various RPC implementations are also mentioned. The role of open standards in cloud architecture including virtualization, SOA, open-source software and web services is assessed.
These are complete CCNA notes for students, which can be very useful for project reports, synopses, and similar work, and which you can use at no cost.
The document discusses various topics related to Linux administration. It covers Unix system architecture, the Linux command line, files and directories, running programs, wildcards, text editors, shells, command syntax, filenames, command history, paths, hidden files, home directories, making directories, copying and renaming files, and more. It provides an overview of key Linux concepts and commands for system administration.
Samba allows Windows and Unix systems to share files and printers on a network. It implements the SMB protocol to enable Unix systems to communicate with Windows clients. Samba includes client tools that allow Unix users to access resources shared by Windows systems. It provides reliable file and printer sharing across platforms at a low maintenance cost.
This document provides an overview of setting up a mail server on Linux. It discusses what Linux is and its features. It then describes the key components needed for a mail server, including Bind for DNS, Httpd for a web server, Dovecot for protocols, Postfix for accepting connections, and Squirrelmail for accessing the IMAP server. Instructions are provided on installing and configuring the necessary software packages to establish a functional mail server on a Linux system.
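The delivery chain above (Postfix accepting connections, Dovecot and Squirrelmail serving the mailbox) ultimately moves standard email messages. A sketch of building one with Python's standard library; the addresses and subject are invented for illustration.

```python
# Constructing an email message of the kind the mail server components
# above would relay and store. Addresses here are illustrative.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "admin@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Test from the new mail server"
msg.set_content("If you can read this, Postfix and Dovecot are talking.")

# smtplib.SMTP("localhost").send_message(msg) would hand it to Postfix.
print(msg["Subject"])
```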
TYBSc IT Semester 5 Linux administration notes, Units 1-6, version 3 (WE-IT Tutorials)
Introduction: Introduction to UNIX, Linux, GNU and Linux distributions, Duties of the System Administrator, The Linux System Administrator, Installing and Configuring Servers, Installing and Configuring Application Software, Creating and Maintaining User Accounts, Backing Up and Restoring Files, Monitoring and Tuning Performance, Configuring a Secure System, Using Tools to Monitor Security
Booting and Shutting Down: Boot loaders (GRUB, LILO), Bootstrapping, the init process, rc scripts, Enabling and disabling services
The File System: Understanding the File System Structure, Working with Linux-Supported File Systems, Memory and Virtual File Systems
System Configuration Files: System-wide Shell Configuration Scripts, System Environmental Settings, Network Configuration Files, Managing the init Scripts, the Network Configuration Tool, Editing Your Network Configuration
TCP/IP Networking: Understanding Network Classes, Setting Up a Network Interface Card (NIC), Understanding Subnetting, Working with Gateways and Routers, Configuring Dynamic Host Configuration Protocol, Configuring the Network Using the Network Configuration Tool
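The subnetting material in the unit above can be explored with Python's standard `ipaddress` module; the network below is an arbitrary example.

```python
# Exploring network classes and subnetting with the standard library.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/26")        # a /26 subnet
print(net.netmask)                                  # 255.255.255.192
print(net.num_addresses)                            # 64 addresses
print(ipaddress.ip_address("192.168.1.20") in net)  # membership test
```

A /26 borrows two bits from a /24, splitting it into four 64-address subnets, which the module confirms without any manual binary arithmetic.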
The Network File System: NFS Overview, Planning an NFS Installation, Configuring an NFS Server, Configuring an NFS Client, Using Automount Services, Examining NFS Security
Connecting to Microsoft Networks: Installing Samba, Configuring the Samba Server, Creating Samba Users, Starting the Samba Server, Connecting to a Samba Client, Connecting from a Windows PC to the Samba Server
Additional Network Services: Configuring a Time Server, Providing a Caching Proxy Server
Internet Services: Secure Services (SSH, scp, sftp), Less Secure Services (Telnet, FTP, sync, rsh, rlogin, finger, talk and ntalk), Linux Machine as a Server, Configuring the xinetd Server, Comparing xinetd and Standalone, Configuring Linux Firewall Packages
Domain Name System: Understanding DNS, Understanding Types of Domain Servers, Examining Server Configuration Files, Configuring a Caching DNS Server, Configuring a Secondary Master DNS Server, Configuring a Primary Master Server, Checking Configuration
Configuring Mail Services: Tracing the Email Delivery Process, Mail User Agent (MUA), Introducing SMTP, Configuring Sendmail, Using the Postfix Mail Server,
Serving Email with POP3 and IMAP, Maintaining Email Security Configuring FTP Services: Introducing vsftpd, Configuring vsftpd, Advanced FTP Server Configuration, Using SFTP
Configuring a Web Server: Introducing Apache, Configuring Apache, Implementing SSI, Enabling CGI, Enabling PHP, Creating a Secure Server with SSL System Administration: Administering Users and Groups Installing and Upgrading Software Packages
1. A PROJECT REPORT ON
LINUX SERVER ADMINISTRATION
Splunk coverage is also included
Avinash Kumar
10/11/2016
2. ABSTRACT
Linux Server Administration is important to ensure that servers keep working properly
and continue to provide services to their clients. There is a relationship between server
and client: the purpose of the server is to fulfil the requests made by its clients. When a
server has many clients to handle, it needs to be administered by qualified personnel or
an authorized operator. For example, suppose a server receives 30,000 hits per minute,
and those hits request different types of services. The server then has to cope with that
number of requests and fulfil every one of them in time, without errors or breakdowns.
In another case, the growing number of hits may bring the server down; qualified
personnel must then diagnose the fault and bring the downed server back online. Linux
Server Administration is therefore concerned with the management and deployment of
Linux servers.
Keywords: Linux Server Administration, server systems
3. TABLE OF CONTENTS
CHAPTER TITLE PAGE
ABSTRACT
CONTENTS
LIST OF TABLES
LIST OF FIGURES
1 INTRODUCTION 1
1.1 NEED OF SERVERS 2
1.2 A SERVER-CLIENT RELATIONSHIP 2
1.3 COMPONENTS OF A SERVER
2 SOFTWARE AND HARDWARE REQUIREMENTS 3
2.1 SOFTWARE REQUIREMENTS 3
2.1.1 INSTALLING A LINUX SYSTEM 3
2.1.2 CONFIGURING THE SYSTEM &
INSTALLING ADDITIONAL PACKAGES
2.2 HARDWARE REQUIREMENTS 4
3 WEB SERVER DESCRIPTION 5
3.1 HTTPD
3.2 FTP
3.3 NFS
3.4 NIS
3.5 NTP
3.6 SAMBA
3.7 SSH
3.8 TELNET
3.9 MAIL SERVER
3.10 DHCP
3.11 DNS
4 PROJECT DESCRIPTION 12
5 SPLUNK 24
5. LIST OF FIGURES
FIGURE TITLE PAGE
Figure 1.1 A CLIENT-SERVER RELATIONSHIP 11
Figure 1.2 A LOOK AT A SERVER 15
Figure 2.1 INSTALLING RED HAT ENTERPRISE LINUX 7.2 17
Figure 2.2 SOFTWARE & HARDWARE REQUIREMENTS 18
Figure 3.1 THE APACHE WEB SERVER 21
Figure 3.2 ACTIVE & PASSIVE FTP 26
Figure 3.3 THE NFS SERVER 29
Figure 3.4 THE NTP SERVER 34
Figure 3.5 THE SAMBA SERVER 40
Figure 3.6 THE SSH SERVER 46
Figure 3.7 THE TELNET SERVER 51
Figure 3.8 THE MAIL SERVER 56
Figure 3.9 THE DHCP SERVER 61
Figure 3.10 THE DNS SERVER 66
6. Chapter-1
INTRODUCTION
1 Introduction:
In a technical sense, a server is an instance of a computer program that
accepts and responds to requests made by another program, known as a
client. Less formally, any device that runs server software could be
considered a server as well. Servers are used to manage network resources.
For example, a user may set up a server to control access to a network,
send/receive e-mail, manage print jobs, or host a website.
Some servers are committed to a specific task, often referred to as dedicated.
As a result, there are a number of dedicated server categories, like print
servers, file servers, network servers, and database servers. However, many
servers today are shared servers which can take on the responsibility of e-
mail, DNS, FTP, and even multiple websites in the case of a web server.
Because they are commonly used to deliver services that are required
constantly, most servers are never turned off. Consequently, when servers
fail, they can cause network users and the company many problems. To
alleviate these issues, servers are commonly high-end computers set up to be
fault-tolerant.
1.1 NEED OF SERVERS:
7. As we know, the internet is an ocean of data; every nook and cranny of
the world uses it. There are millions of websites containing text, audio,
video, images, and more, and internet users access this content from all
over the world. Every website is stored on someone's storage device, and
not everyone can keep their own devices online around the clock. So we
need a device that can stay online for long periods without interruption:
that is where servers come in. A server is a place where we can keep our
data (websites, images, video, audio, etc.) in one location with 24x7
access for all our users. Servers offer the following further advantages:
• Round-the-clock access for all users.
• Hardware and software are upgraded over time; website owners need
not worry about the technical front.
• All information is kept in one place.
• No technical expertise in server matters is required, because those
tasks are handled by server personnel.
• Data processing is fast.
• Access records can be tracked.
1.2 A CLIENT-SERVER RELATIONSHIP:
The client–server model is a distributed application structure that partitions
tasks or workloads between the providers of a resource or service, called
servers, and service requesters, called clients. Often clients and servers
communicate over a computer network on separate hardware, but both client
8. and server may reside in the same system. A server host runs one or more
server programs which share their resources with clients. A client does not
share any of its resources, but requests a server's content or service function.
Clients therefore initiate communication sessions with servers which await
incoming requests. Examples of computer applications that use the client–
server model are Email, network printing, and the World Wide Web.
The Client-server characteristic describes the relationship of cooperating
programs in an application. The server component provides a function or
service to one or many clients, which initiate requests for such services.
Servers are classified by the services they provide. For instance, a web
server serves web pages and a file server serves computer files. A shared
resource may be any of the server computer's software and electronic
components, from programs and data to processors and storage devices. The
sharing of resources of a server constitutes a service.
Whether a computer is a client, a server, or both, is determined by the nature
of the application that requires the service functions. For example, a single
computer can run web server and file server software at the same time to
serve different data to clients making different kinds of requests. Client
software can also communicate with server software within the same
computer. Communication between servers, such as to synchronize data, is
sometimes called inter-server or server-to-server communication.
9. Fig. 1.1 A Client-Server relationship
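The request/response cycle described above can be sketched in a few lines of Python. This is a minimal illustration only, not a production service: the loopback address, the OS-assigned port, and the "GET page" message are arbitrary choices for the demo. The server waits for incoming requests; the client initiates the session, requests a service, and reads the reply.

```python
import socket
import threading

HOST = "127.0.0.1"          # loopback: client and server on the same machine
ready = threading.Event()   # lets the client wait until the server is listening
server_port = None

def run_server():
    """A tiny server: accept one connection, answer one request."""
    global server_port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, 0))                  # port 0: let the OS pick a free port
        server_port = srv.getsockname()[1]
        srv.listen(1)                        # the server awaits incoming requests
        ready.set()                          # signal the client we are accepting
        conn, _addr = srv.accept()           # a client initiates the session
        with conn:
            request = conn.recv(1024)        # read the client's request
            conn.sendall(b"served: " + request)  # share the service's result

# Start the server in the background, then act as the client.
t = threading.Thread(target=run_server)
t.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, server_port))         # the client initiates communication
    cli.sendall(b"GET page")                 # ...requests a service...
    reply = cli.recv(1024)                   # ...and reads the server's reply

t.join()
print(reply.decode())                        # served: GET page
```

Note the asymmetry the text describes: the server only ever waits and responds, while the client always opens the conversation.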
1.3 COMPONENTS OF A SERVER:
The hardware components that a typical server computer comprises are
similar to the components used in less expensive client computers. However,
server computers are usually built from higher-grade components than client
computers. The following paragraphs describe the typical components of a
server computer.
Motherboard
The motherboard is the computer's main electronic circuit board to which all
the other components of your computer are connected. More than any other
component, the motherboard is the computer. All other components attach to
the motherboard.
10. The major components on the motherboard include the processor (or CPU),
supporting circuitry called the chipset, memory, expansion slots, a standard
IDE hard drive controller, and input/output (I/O) ports for devices such as
keyboards, mice, and printers. Some motherboards also include additional
built-in features such as a graphics adapter, SCSI disk controller, or a
network interface.
Processor
The processor, or CPU, is the brain of the computer. Although the processor
isn't the only component that affects overall system performance, it is the
one that most people think of first when deciding what type of server to
purchase. At the time of this writing, Intel had four processor models
designed for use in server computers:
• Itanium 2: 1.60GHz clock speed; 1–2 processor cores
• Xeon: 1.83–2.33GHz clock speed; 1–4 processor cores
• Pentium D: 2.66-3.6GHz clock speed; 2 processor cores
• Pentium 4: 2.4-3.6GHz clock speed; 1 processor core
Each motherboard is designed to support a particular type of processor.
CPUs come in two basic mounting styles: slot or socket. However, you can
choose from several types of slots and sockets, so you have to make sure
that the motherboard supports the specific slot or socket style used by the
CPU. Some server motherboards have two or more slots or sockets to hold
two or more CPUs.
11. NOTE: The term clock speed refers to how fast the basic clock that drives
the processor's operation ticks. In theory, the faster the clock speed, the
faster the processor. However, clock speed alone is reliable only for
comparing processors within the same family. In fact, the Itanium processors
are faster than Xeon processors at the same clock speed. The same holds true
for Xeon processors compared with Pentium D processors. That's because
the newer processor models contain more advanced circuitry than the older
models, so they can accomplish more work with each tick of the clock.
The number of processor cores also has a dramatic effect on performance.
Each processor core acts as if it's a separate processor. Most server
computers use dual-core (two processor cores) or quad-core (four cores)
chips.
Memory
Don't scrimp on memory. People rarely complain about servers having too
much memory. Many different types of memory are available, so you have
to pick the right type of memory to match the memory supported by your
motherboard. The total memory capacity of the server depends on the
motherboard. Most new servers can support at least 12GB of memory, and
some can handle up to 32GB.
Hard drives
Most desktop computers use inexpensive hard drives called IDE drives
(sometimes also called ATA). These drives are adequate for individual users,
but because performance is more important for servers, another type of drive
known as SCSI is usually used instead. For the best performance, use the
SCSI drives along with a high-performance SCSI controller card.
Recently, a new type of inexpensive drive called SATA has been appearing
in desktop computers. SATA drives are also being used more and more in
server computers due to their reliability and performance.
Network connection
The network connection is one of the most important parts of any server.
Many servers have network adapters built into the motherboard. If your
server isn't equipped as such, you'll need to add a separate network adapter
card.
Video
Fancy graphics aren't that important for a server computer. You can equip
your servers with inexpensive generic video cards and monitors without
affecting network performance. (This is one of the few areas where it's
acceptable to cut costs on a server.)
Power supply
Because a server usually has more devices than a typical desktop computer,
it requires a larger power supply (300 watts is typical). If the server houses a
large number of hard drives, it may require an even larger power supply.
Chapter-2
Software and Hardware Requirement
2.1 SOFTWARE REQUIREMENT:
To use your local computer to develop your server, you must install a Linux
system. Windows can also be used to create and deploy servers, but carrying
out these tasks on Windows is difficult, so a Linux system is recommended.
RedHat Enterprise Linux 7.2 is one of the best Linux operating systems that
can be used.
2.1.1 INSTALLING THE REDHAT ENTERPRISE LINUX 7.2:
Installing a Linux system is an easy and fast task. One more reason to
use a Linux system is that it is free.
Fig 2.1 Installing RedHat Enterprise Linux 7.2
2.1.2 CONFIGURING THE SYSTEM:
Once the Linux system (RHEL 7.2) is installed, log in as root. Now we have
to configure it by installing some additional packages and upgrading the
system packages.
Open the Terminal and type the following command to install updates:
[root@localhost Desktop] # yum update
2.2 HARDWARE REQUIREMENT:
The minimum requirement is a Pentium 4, AMD, or Celeron processor. Any
processor above this configuration will work well with Linux. So, processors
such as the Core 2 Duo, Pentium Dual-Core, Core i3, Core i5, Core i7,
AMD Duron, AMD Sempron, AMD Turion, AMD Opteron, AMD Phenom,
and Celeron III are recommended.
A minimum of 512 MB of RAM is required, and more than this is
recommended.
Fig 2.2: Software & Hardware Requirements
3. WEB SERVER DESCRIPTION:
3.1. HTTPD:
INTRODUCTION:
The Hypertext Transfer Protocol (HTTP) is an application protocol for
distributed, collaborative, hypermedia information systems. HTTP is the
foundation of data communication for the World Wide Web.
Hypertext is structured text that uses logical links (hyperlinks) between
nodes containing text. HTTP is the protocol to exchange or transfer
hypertext.
Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989.
Standards development of HTTP was coordinated by the Internet
Engineering Task Force (IETF) and the World Wide Web Consortium
(W3C), culminating in the publication of a series of Requests for Comments
(RFCs). The first definition of HTTP/1.1, the version of HTTP in common
use, occurred in RFC 2068 in 1997, although this was obsoleted by RFC
2616 in 1999.
A later version, the successor HTTP/2, was standardized in 2015, and is now
supported by major web servers.
HTTP functions as a request–response protocol in the client–server
computing model. A web browser, for example, may be the client and an
application running on a computer hosting a web site may be the server. The
client submits an HTTP request message to the server. The server, which
provides resources such as HTML files and other content, or performs other
functions on behalf of the client, returns a response message to the client.
The response contains completion status information about the request and
may also contain requested content in its message body.
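As a concrete illustration, the request-response exchange described above is plain text on the wire. The sketch below simply builds and prints the request line of a minimal HTTP/1.1 request; the host and path are made-up examples, not real resources.

```shell
# A minimal HTTP/1.1 request as the client would send it.
# Header lines end in CRLF; a blank line marks the end of the headers.
request=$'GET /index.html HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n'

# The first line (the request line) names the method, the resource, and the version:
printf '%s' "$request" | head -n 1 | tr -d '\r'
```

The server's reply has the same shape: a status line (e.g. `HTTP/1.1 200 OK`), headers, a blank line, and then the message body.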
A web browser is an example of a user agent (UA). Other types of user
agent include the indexing software used by search providers (web
crawlers), voice browsers, mobile apps, and other software that accesses,
consumes, or displays web content.
HTTP is designed to permit intermediate network elements to improve or
enable communications between clients and servers. High-traffic websites
often benefit from web cache servers that deliver content on behalf of
upstream servers to improve response time. Web browsers cache previously
accessed web resources and reuse them when possible to reduce network
traffic. HTTP proxy servers at private network boundaries can facilitate
communication for clients without a globally routable address, by relaying
messages with external servers.
HTTP is an application layer protocol designed within the framework of the
Internet Protocol Suite. Its definition presumes an underlying and reliable
transport layer protocol, and Transmission Control Protocol (TCP) is
commonly used. However, HTTP can be adapted to use unreliable protocols
such as the User Datagram Protocol (UDP), for example in HTTPU and
Simple Service Discovery Protocol (SSDP).
HTTP resources are identified and located on the network by uniform
resource locators (URLs), using the uniform resource identifier (URI)
schemes http and https. URIs and hyperlinks in Hypertext Markup Language
(HTML) documents form inter-linked hypertext documents.
HTTP/1.1 is a revision of the original HTTP (HTTP/1.0). In HTTP/1.0 a
separate connection to the same server is made for every resource request.
HTTP/1.1 can reuse a connection multiple times to download images,
scripts, stylesheets etc. after the page has been delivered. HTTP/1.1
communications therefore experience less latency as the establishment of
TCP connections presents considerable overhead.
Fig. 3.1 The Apache Web Server
INSTALLATION:
NOTE: Installation of any server package on RHEL 7.2 or any other Linux
distribution requires only 3 steps:
Step 1: Install the required software.
Step 2: Configure the software.
Step 3: Start the service (daemon).
Step 1: Install the httpd package:
Open the terminal. Then write the following command to install the
httpd package.
[root@localhost Desktop] # yum install httpd
Once the httpd package is installed properly, go to the next step.
Step 2: Configure the software:
Configuring the software means changing its internal settings, such as the
default port number, the default location to look up web pages, and the
default type of web page to accept. If there is any need to change these
settings, type the following command:
[root@localhost Desktop] # vim /etc/httpd/conf/httpd.conf
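As an illustration of the settings meant here, directives such as Listen (the port) and DocumentRoot (where web pages live) are the ones most often changed. The sketch below works on a scratch copy rather than the live /etc/httpd/conf/httpd.conf, and the values are examples only:

```shell
# Build a scratch copy with a few representative httpd.conf directives
conf=$(mktemp)
cat > "$conf" <<'EOF'
# Default port number
Listen 80
# Default location to look up web pages
DocumentRoot "/var/www/html"
# Default page served for a directory request
DirectoryIndex index.html
EOF

# Example change: move the server to port 8080 in the scratch copy
sed -i 's/^Listen 80$/Listen 8080/' "$conf"
grep '^Listen' "$conf"
```

After changing the real configuration file, the httpd service must be restarted for the new settings to take effect.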
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start httpd
The Apache Web Server (httpd) service is now started.
NOTE: When there is communication over the network, firewalls come into
play. A firewall prevents unauthorized connections over a network, and on
RHEL 7.2 SELinux can likewise block a service. To keep them from
interfering while testing, put SELinux into permissive mode and flush the
firewall rules:
[root@localhost Desktop] # setenforce 0
[root@localhost Desktop] # iptables -F
This must be done on every server that is going to be created. (On a
production system, it is safer to write specific firewall and SELinux rules
than to disable them.)
3.2 FTP:
INTRODUCTION:
File Transfer Protocol (FTP) is a standard Internet protocol for transmitting
files between computers on the Internet over TCP/IP connections.
FTP is a client-server protocol that relies on two communications channels
between client and server: a command channel for controlling the
conversation and a data channel for transmitting file content. Clients initiate
conversations with servers by requesting to download a file. Using FTP, a
client can upload, download, delete, rename, move and copy files on a
server. A user typically needs to log on to the FTP server, although some
servers make some or all of their content available without login, also
known as anonymous FTP.
FTP sessions work in passive or active modes. In active mode, after a client
initiates a session via a command channel request, the server initiates a data
connection back to the client and begins transferring data. In passive mode,
the server instead uses the command channel to send the client the
information it needs to open a data channel. Because passive mode has the
client initiating all connections, it works well across firewalls and Network
Address Translation (NAT) gateways.
FTP was originally defined in 1971, prior to the definition of TCP and IP,
and has been redefined many times, e.g. to use TCP/IP (RFC 765 and
RFC 959), and then Internet Protocol Version 6 (IPv6), (RFC 2428). Also,
because it was defined without much concern for security, it has been
extended many times to improve security: for example, versions that encrypt
via a TLS connection (FTPS) or that work with Secure File Transfer
Protocol (SFTP), also known as SSH File Transfer Protocol.
Users can work with FTP via a simple command line interface (for example,
from a console or terminal window in Microsoft Windows, Apple OS X or
Linux) or with a dedicated graphical user interface (GUI). Web browsers can
also serve as FTP clients.
Although a lot of file transfer is now handled using HTTP, FTP is still
commonly used to transfer files "behind the scenes" for other applications,
e.g. hidden behind the user interfaces of banking services, website builders
such as Wix or SquareSpace, and other services. It is also used, via
web browsers, to download new applications.
Fig. 3.2 The Active & Passive FTP Server
INSTALLATION:
Step 1: Install the vsftpd package:
Open the terminal. Then write the following command to install the vsftpd
package.
[root@localhost Desktop] # yum install vsftpd
Once the vsftpd package is installed properly, go to the next step.
Step 2: Configure the software:
Configuring the software means changing its internal settings, such as the
default port number, default directories, and similar options. If there is any
need to change these settings, type the following command:
[root@localhost Desktop] # vim /etc/vsftpd/vsftpd.conf
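For example, the options below are among those commonly adjusted in vsftpd.conf. This sketch writes them to a scratch file rather than the live /etc/vsftpd/vsftpd.conf; treat the values as illustrative, not as recommended settings:

```shell
# A few representative vsftpd.conf options, written to a scratch file
conf=$(mktemp)
cat > "$conf" <<'EOF'
# Disallow anonymous FTP; require local system accounts
anonymous_enable=NO
local_enable=YES
# Allow FTP write commands (uploads, deletes, renames)
write_enable=YES
EOF
grep '^anonymous_enable' "$conf"
```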
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start vsftpd
The FTP server (vsftpd) service is now started.
3.3. NFS:
INTRODUCTION:
The Network File System (NFS) is a client/server application that lets a
computer user view and optionally store and update files on a remote
computer as though they were on the user's own computer. The
NFS protocol is one of several distributed file system standards for network-
attached storage (NAS).
NFS allows the user or system administrator to mount (designate as
accessible) all or a portion of a file system on a server. The portion of the
file system that is mounted can be accessed by clients with whatever
privileges are assigned to each file (read-only or read-write). NFS uses
Remote Procedure Calls (RPC) to route requests between clients and
servers.
NFS was originally developed by Sun Microsystems in the 1980s and is
now managed by the Internet Engineering Task Force (IETF). NFSv4.1
(RFC 5661) was ratified in January 2010 to improve scalability by adding
support for parallel access across distributed servers. Network File System
versions 2 and 3 allow the User Datagram Protocol (UDP) running over
an IP network to provide stateless network connections between clients and
servers, but NFSv4 requires use of the Transmission Control Protocol (TCP).
Fig. 3.3 The NFS Server
INSTALLATION:
Step 1: Install the nfs-utils package:
Open the terminal. Then write the following command to install the nfs-utils
package.
[root@localhost Desktop] # yum install nfs-utils
Once the nfs-utils package is installed properly, go to the next step.
Step 2: Configure the software:
Configuring the software means changing its internal settings, such as the
default port number, default directories, and similar options. If there is any
need to change these settings, type the following command:
[root@localhost Desktop] # vim /etc/exports
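Each line of /etc/exports names a directory to share, the clients allowed to mount it, and their privileges (read-only or read-write, as described above). The sketch below uses a scratch file with made-up paths and addresses:

```shell
# Two representative export lines, written to a scratch file
exports=$(mktemp)
cat > "$exports" <<'EOF'
/srv/share  192.168.1.0/24(rw,sync)
/srv/public *(ro)
EOF

# List just the exported directories
awk '{print $1}' "$exports"
```

A client would then typically attach such a share with something like `mount -t nfs server:/srv/share /mnt`.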
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start nfs-server
The NFS server service is now started.
3.4. NIS:
INTRODUCTION:
NIS (Network Information System) is a network naming and administration
system for smaller networks that was developed by Sun Microsystems. NIS+
is a later version that provides additional security and other facilities. Using
NIS, each host client or server computer in the system has knowledge about
the entire system. A user at any host can get access to files or applications on
any host in the network with a single user identification and password. NIS
is similar to the Internet's domain name system (DNS) but somewhat simpler
and designed for a smaller network. It's intended for use on local area
networks.
NIS uses the client/server model and the Remote Procedure Call (RPC)
interface for communication between hosts. NIS consists of a server, a
library of client programs, and some administrative tools. NIS is often used
with the Network File System (NFS). NIS is a UNIX-based program.
Although Sun and others offer proprietary versions, most NIS code has been
released into the public domain and there are freeware versions available.
NIS was originally called Yellow Pages, but because someone already had a
trademark by that name, it was changed to Network Information System. It
is still sometimes referred to by the initials "YP".
Sun offers NIS+ together with its NFS product as a solution for Windows
PC networks as well as for its own workstation networks.
INSTALLATION:
Step 1: Install the ypserv package:
Open the terminal. Then write the following command to install the ypserv
package.
[root@localhost Desktop] # yum install ypserv
Once the ypserv package is installed properly, go to the next step.
Step 2: Configure the software:
Configuring the software means changing its internal settings, such as the
default port number, default directories, and similar options. If there is any
need to change these settings, type the following command:
[root@localhost Desktop] # vim /etc/yp.conf
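An /etc/yp.conf entry binds a host to an NIS domain and server. The sketch below writes one such line to a scratch file; the domain and server names are invented for illustration:

```shell
# A representative yp.conf binding line, written to a scratch file
conf=$(mktemp)
cat > "$conf" <<'EOF'
domain example.nis server nis1.example.com
EOF

# Pull out the NIS domain name from the binding line
awk '$1 == "domain" {print $2}' "$conf"
```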
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start ypserv
The NIS server service is now started.
3.5. NTP:
INTRODUCTION:
NTP (Network Time Protocol) is a network protocol that enables you to
synchronize clocks on devices over a network. It works by using one or
more NTP servers that maintain a highly accurate time, and allows clients to
query for that time. These client devices query the server, then automatically
adjust their own internal clock to mirror the NTP server. The NetBurner
NTP server obtains highly accurate time by synchronizing its local clock to
GPS satellites. Once plugged in to your network, the NTP device will allow
your devices to maintain synchronized time.
NTP servers are generally categorized into several tiers, referred to as
strata. As the stratum number increases, the accuracy of the time generally
decreases.
1. Stratum 0 devices are devices such as atomic, GPS, and radio clocks.
These devices offer the highest accuracy, but are not usually publicly
accessible.
2. Stratum 1 devices are network servers that are connected directly to
stratum 0 devices. Some public stratum 1 devices can be found, but they
often come with usage restrictions, including limiting the number of requests
and limiting usage for commercial devices.
3. Stratum 2 devices are network servers that synchronize their time to one
or more stratum 1 or 2 devices. Public, open-use NTP servers often fall into
this category.
Stratum numbers can keep increasing, up to a theoretical stratum 256 device.
However, any device listed as stratum 16 or greater should be considered
inaccurate.
The NetBurner NTP Server is a stratum 1 device connected directly to a
GPS time chip.
Sometimes Internet NTP servers do not meet your needs. The PK70 NTP
device is a low cost NTP server that can be added to your local network.
Setting up the NetBurner NTP server could not be easier. Unbox the device,
plug in the power cable and network cable, and attach the included antenna.
For optimal usage, the antenna receiver should be placed next to a
window with a clear view of the sky. Once the device powers up, the red
LED will turn green, indicating the device is synchronized.
Some configuration options, status screens, and XML output can be reached
on the PK70 NTP device by pointing your web browser to the IP address of
the device. If you are unsure of the local IP address of your NetBurner NTP
server, download IPSetup, which will scan your local network for NetBurner
devices and display their HTTP web addresses.
Typical Linux distributions include ntpd, the daemon for syncing to an NTP
server. If ntpd is missing, install it with your favorite package manager.
Step 1: From the command line, use sudo privileges to edit the /etc/ntp.conf
file:
sudo vi /etc/ntp.conf
Step 2: Input one or more NTP servers, one per line. Prepend "server" to
every URL.
Example ntp.conf file
server time.apple.com
server time.nist.gov
server 10.1.1.78
Step 3: Restart ntpd, usually accomplished with:
/etc/init.d/ntpd restart
Once restarted, you can monitor ntpd with the command ntpq -p. This will
list all of the NTP servers in use and include diagnostic information for all
known NTP servers. It may take several minutes for an NTP server to be
selected and synchronized with. Once an NTP server is selected, it will be
indicated with a * in the ntpq output.
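The selected server can also be picked out of the ntpq -p output mechanically; the peer line below is invented for illustration:

```shell
# A hypothetical peer line from `ntpq -p`; the leading * marks the
# server currently selected for synchronization.
line='*time.nist.gov  .NIST.  1 u  56  64  377  12.3  0.1  0.2'

# Strip the marker and print the selected server's name
printf '%s\n' "$line" | awk '/^\*/ { sub(/^\*/, "", $1); print $1 }'
```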
3.6. SAMBA:
INTRODUCTION:
Samba is a free software re-implementation of the SMB/CIFS networking
protocol, and was originally developed by Andrew Tridgell. Samba provides
file and print services for various Microsoft Windows clients and can
integrate with a Microsoft Windows Server domain, either as a Domain
Controller (DC) or as a domain member. As of version 4, it supports Active
Directory and Microsoft Windows NT domains.
Samba runs on most Unix, OpenVMS and Unix-like systems, such as Linux,
Solaris, AIX and the BSD variants, including Apple's OS X Server, and OS
X client (version 10.2 and greater). Samba is standard on nearly all
distributions of Linux and is commonly included as a basic system service
on other Unix-based operating systems as well. Samba is released under the
terms of the GNU General Public License. The name Samba comes from
SMB (Server Message Block), the name of the standard protocol used by the
Microsoft Windows network file system.
Samba allows file and print sharing between computers running Microsoft
Windows and computers running Unix. It is an implementation of dozens of
services and a dozen protocols, including:
• NetBIOS over TCP/IP (NBT)
• SMB
• CIFS (an enhanced version of SMB)
• DCE/RPC or, more specifically, MSRPC, the Network Neighborhood
suite of protocols
• A WINS server also known as a NetBIOS Name Server (NBNS)
• The NT Domain suite of protocols which includes NT Domain
Logons
• Security Accounts Manager (SAM) database
• Local Security Authority (LSA) service
• NT-style printing service (SPOOLSS), NTLM and more recently
Active Directory Logon which involves a modified version of
Kerberos and a modified version of LDAP.
• DFS server
All these services and protocols are frequently incorrectly referred to as just
NetBIOS or SMB. The NBT (NetBIOS over TCP/IP) and WINS protocols
are deprecated on Windows.
Samba sets up network shares for chosen Unix directories (including all
contained subdirectories). These appear to Microsoft Windows users as
normal Windows folders accessible via the network. Unix users can either
mount the shares directly as part of their file structure using the smbmount
command or, alternatively, can use a utility, smbclient (libsmb) installed
with Samba to read the shares with a similar interface to a standard
command line FTP program. Each directory can have different access
privileges overlaid on top of the normal Unix file protections. For example:
home directories would have read/write access for all known users, allowing
each to access their own files. However, they would still not have access to
the files of others unless that permission would normally exist. Note that the
netlogon share, typically distributed as a read-only share from
/etc/samba/netlogon, is the logon directory for user logon scripts.
Samba services are implemented as two daemons:
• smbd, which provides the file and printer sharing services, and
• nmbd, which provides the NetBIOS-to-IP-address name service.
NetBIOS over TCP/IP requires some method for mapping NetBIOS
computer names to the IP addresses of a TCP/IP network.
Samba configuration is achieved by editing a single file (typically installed
as /etc/smb.conf or /etc/samba/smb.conf). Samba can also provide user
logon scripts and group policy implementation through poledit.
Samba is included in most Linux distributions and is started during the boot
process. On Red Hat, for instance, the /etc/rc.d/init.d/smb script runs at boot
time, and starts both daemons. Samba is not included in Solaris 8, but a
Solaris 8-compatible version is available from the Samba website.
Samba includes a web administration tool called Samba Web Administration
Tool (SWAT). SWAT was removed starting with version 4.1.
Fig 3.5 The Samba Server
INSTALLATION:
Step 1: Install the samba package:
Open the terminal. Then write the following command to install the
samba package (which provides the smbd and nmbd daemons).
[root@localhost Desktop] # yum install samba
Once the samba package is installed properly, go to the next step.
Step 2: Configure the software:
Configuring the software means changing its internal settings, such as the
default port number, default directories, and similar options. If there is any
need to change these settings, type the following command:
[root@localhost Desktop] # vim /etc/samba/smb.conf
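A share in smb.conf is a bracketed section name followed by its options. The sketch below writes a minimal guest share to a scratch file; the path and settings are illustrative only:

```shell
# A minimal Samba share definition, written to a scratch file
conf=$(mktemp)
cat > "$conf" <<'EOF'
[public]
   path = /srv/samba/public
   read only = no
   guest ok = yes
EOF

# List the share names (the bracketed section headers)
grep '^\[' "$conf"
```

On a live system, `testparm` can be run after editing to check the syntax of smb.conf.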
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start smb
The Samba server (smb) service is now started.
3.7. SSH:
INTRODUCTION:
Secure Shell (SSH) is a cryptographic network protocol for operating
network services securely over an unsecured network. The best known
example application is for remote login to computer systems by users.
SSH provides a secure channel over an unsecured network in a client-server
architecture, connecting an SSH client application with an SSH server.
Common applications include remote command-line login and remote
command execution, but any network service can be secured with SSH. The
protocol specification distinguishes between two major versions, referred to
as SSH-1 and SSH-2.
The most visible application of the protocol is for access to shell accounts on
Unix-like operating systems, but it sees some limited use on Windows as
well. In 2015, Microsoft announced that they would include native support
for SSH in a future release.
SSH was designed as a replacement for Telnet and for unsecured remote
shell protocols such as the Berkeley rlogin, rsh, and rexec protocols. Those
protocols send information, notably passwords, in plaintext, rendering them
susceptible to interception and disclosure using packet analysis. The
encryption used by SSH is intended to provide confidentiality and integrity
of data over an unsecured network, such as the Internet, although files
leaked by Edward Snowden indicate that the National Security Agency can
sometimes decrypt SSH, allowing them to read the content of SSH sessions.
SSH uses public-key cryptography to authenticate the remote computer and
allow it to authenticate the user, if necessary. There are several ways to use
SSH; one is to use automatically generated public-private key pairs to
simply encrypt a network connection, and then use password authentication
to log on.
Another is to use a manually generated public-private key pair to perform
the authentication, allowing users or programs to log in without having to
specify a password. In this scenario, anyone can produce a matching pair of
different keys (public and private). The public key is placed on all
computers that must allow access to the owner of the matching private key
(the owner keeps the private key secret). While authentication is based on
the private key, the key itself is never transferred through the network during
authentication. SSH only verifies whether the same person offering the
public key also owns the matching private key. In all versions of SSH it is
important to verify unknown public keys, i.e. associate the public keys with
identities, before accepting them as valid. Accepting an attacker's public key
without validation will authorize an unauthorized attacker as a valid user.
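The manually generated key pair described above can be sketched as follows; a real setup would keep the keys in ~/.ssh and copy the public half to the server (for example with ssh-copy-id), which is not shown here:

```shell
# Generate an RSA key pair into a scratch directory
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$keydir/id_rsa"

# id_rsa is the private key (kept secret); id_rsa.pub is the public key
# that would be appended to ~/.ssh/authorized_keys on the server.
ls "$keydir"
```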
SSH is typically used to log in to a remote machine and execute commands,
but it also supports tunneling, forwarding TCP ports and X11 connections; it
can transfer files using the associated SSH file transfer (SFTP) or secure
copy (SCP) protocols. SSH uses the client-server model.
The standard TCP port 22 has been assigned for contacting SSH servers. An
SSH client program is typically used for establishing connections to an SSH
daemon accepting remote connections. Both are commonly present on most
modern operating systems, including Mac OS X, most distributions of
Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably,
Windows is one of the few modern desktop/server OSs that does not include
SSH by default. Proprietary, freeware and open source (e.g. PuTTY and the
version of OpenSSH which is part of Cygwin) versions of various levels of
complexity and completeness exist. Native Linux file managers (e.g.
Konqueror) can use the FISH protocol to provide a split-pane GUI with
drag-and-drop. The open source Windows program WinSCP provides
similar file management (synchronization, copy, remote delete) capability
using PuTTY as a back-end. Both WinSCP and PuTTY are available
packaged to run directly off a USB drive, without requiring installation on
the client machine. Setting up an SSH server in Windows typically involves
installation (e.g. by installing Cygwin).
SSH is important in cloud computing to solve connectivity problems,
avoiding the security issues of exposing a cloud-based virtual machine
directly on the Internet. An SSH tunnel can provide a secure path over the
Internet, through a firewall to a virtual machine.
SSH is a protocol that can be used for many applications across many
platforms including most Unix variants (Linux, the BSDs including Apple's
OS X, and Solaris), as well as Microsoft Windows. Some of the applications
below may require features that are only available or compatible with
specific SSH clients or servers. For example, using the SSH protocol to
implement a VPN is possible, but presently only with the OpenSSH server
and client implementation.
• For login to a shell on a remote host (replacing Telnet and rlogin)
• For executing a single command on a remote host (replacing rsh)
• For setting up automatic (passwordless) login to a remote server (for
example, using OpenSSH)
• Secure file transfer
• In combination with rsync to back up, copy and mirror files efficiently
and securely
• For forwarding or tunneling a port (not to be confused with a VPN,
which routes packets between different networks, or bridges two
broadcast domains into one).
• For using as a full-fledged encrypted VPN. Note that only OpenSSH
server and client supports this feature.
• For forwarding X from a remote host (possible through multiple
intermediate hosts)
• For browsing the web through an encrypted proxy connection with
SSH clients that support the SOCKS protocol.
• For securely mounting a directory on a remote server as a filesystem
on a local computer using SSHFS.
• For automated remote monitoring and management of servers through
one or more of the mechanisms discussed above.
• For development on a mobile or embedded device that supports SSH.
Fig. 3.6 The SSH Server
INSTALLATION:
Step 1: Install the openssh-server package:
Open the terminal. Then write the following command to install the
openssh-server package.
[root@localhost Desktop] # yum install openssh-server
Once the openssh-server package is installed properly, go to the next step.
Step 2: Configure the software:
Here we don't need to edit the configuration file, because it is already
configured suitably for network connections. The default configuration is
stable and acceptable over any network, and because the connection is
encrypted there is little worry of a security breach.
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start sshd
The SSH server (sshd) service is now started.
3.8. TELNET:
INTRODUCTION:
Telnet is an application layer protocol used on the Internet or local area
networks to provide a bidirectional interactive text-oriented communication
facility using a virtual terminal connection. User data is interspersed in-band
with Telnet control information in an 8-bit byte oriented data connection
over the Transmission Control Protocol (TCP).
Telnet was developed in 1969 beginning with RFC 15, extended in RFC
854, and standardized as Internet Engineering Task Force (IETF) Internet
Standard STD 8, one of the first Internet standards.
Historically, Telnet provided access to a command-line interface (usually, of
an operating system) on a remote host, including most network equipment
and operating systems with a configuration utility (including systems based
on Windows NT). However, because of serious security concerns when
using Telnet over an open network such as the Internet, its use for this
purpose has waned significantly in favor of SSH.
The term telnet is also used to refer to the software that implements the
client part of the protocol. Telnet client applications are available for
virtually all computer platforms. Telnet is also used as a verb. To telnet
means to establish a connection with the Telnet protocol, either with
command line client or with a programmatic interface. For example, a
common directive might be: "To change your password, telnet to the server,
log in and run the passwd command." Most often, a user will be telnetting to
a Unix-like server system or a network device (such as a router) and
obtaining a login prompt to a command-line text interface or a character-based
full-screen manager.
When Telnet was initially developed in 1969, most users of networked
computers were in the computer departments of academic institutions, or at
large private and government research facilities. In this environment,
security was not nearly as much a concern as it became after the bandwidth
explosion of the 1990s. The rise in the number of people with access to the
Internet, and by extension the number of people attempting to hack other
people's servers, made encrypted alternatives necessary.
Experts in computer security, such as SANS Institute, recommend that the
use of Telnet for remote logins should be discontinued under all normal
circumstances, for the following reasons:
• Telnet, by default, does not encrypt any data sent over the connection
(including passwords), and so it is often feasible to eavesdrop on the
communications and use the password later for malicious purposes;
anybody who has access to a router, switch, hub or gateway located
on the network between the two hosts where Telnet is being used can
intercept the packets passing by and obtain login, password and
whatever else is typed with a packet analyzer.
• Most implementations of Telnet have no authentication that would
ensure communication is carried out between the two desired hosts
and not intercepted in the middle.
• Several vulnerabilities have been discovered over the years in
commonly used Telnet daemons.
These security-related shortcomings have seen the usage of the Telnet
protocol drop rapidly, especially on the public Internet, in favor of the
Secure Shell (SSH) protocol, first released in 1995. SSH provides much of
the functionality of telnet, with the addition of strong encryption to prevent
sensitive data such as passwords from being intercepted, and public key
authentication, to ensure that the remote computer is actually who it claims
to be. As has happened with other early Internet protocols, extensions to the
Telnet protocol provide Transport Layer Security (TLS) security and Simple
Authentication and Security Layer (SASL) authentication that address the
above concerns. However, most Telnet implementations do not support
these extensions; and there has been relatively little interest in implementing
these as SSH is adequate for most purposes.
It is of note that there are a large number of industrial and scientific devices
which have only Telnet available as a communication option. Some are built
with only a standard RS-232 port and use a serial server hardware appliance
to provide the translation between the TCP/Telnet data and the RS-232 serial
data. In such cases, SSH is not an option unless the interface appliance can
be configured for SSH.
Fig. 3.7 The Telnet Server
INSTALLATION:
Step 1: Install the telnet-server package:
Open the terminal. Then write the following command to install the
telnet-server package.
[root@localhost Desktop] # yum install telnet-server
Once the telnet-server package is installed, proceed to the next step.
Step 2: Configure the software:
Here we don’t need to edit the configuration file because the default
configuration already works for network connections. Note, however, that
unlike SSH the connection is not secure: Telnet sends everything, including
passwords, in plain text, so it should only be used on a trusted network or
for lab practice.
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start telnet.socket
The service of the Telnet server is started.
3.9. MAIL SERVER:
INTRODUCTION:
Within Internet message handling services (MHS), a message transfer agent
or mail transfer agent (MTA) or mail relay is software that transfers
electronic mail messages from one computer to another using a client–server
application architecture. An MTA implements both the client (sending) and
server (receiving) portions of the Simple Mail Transfer Protocol.
The terms mail server, mail exchanger, and MX host may also refer to a
computer performing the MTA function. The Domain Name System (DNS)
associates a mail server to a domain with an MX record containing the
domain name of the host(s) providing MTA services.
A mail server is a computer that serves as an electronic post office for email.
Mail exchanged across networks is passed between mail servers that run
specially designed software. This software is built around agreed-upon,
standardized protocols for handling mail messages and any data files (such
as images, multimedia or documents) that might be attached to them.
A message transfer agent receives mail from either another MTA, a mail
submission agent (MSA), or a mail user agent (MUA). The transmission
details are specified by the Simple Mail Transfer Protocol (SMTP). When a
recipient mailbox of a message is not hosted locally, the message is relayed,
that is, forwarded to another MTA. Every time an MTA receives an email
message, it adds a Received trace header field to the top of the header of the
message,[4] thereby building a sequential record of MTAs handling the
message. The process of choosing a target MTA for the next hop is also
described in SMTP, but can usually be overridden by configuring the MTA
software with specific routes.
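The trace-building behaviour described above can be sketched with Python's standard email library. The host names here are hypothetical, and this only models how a relaying MTA prepends a Received field at each hop:

```python
from email.message import EmailMessage
from email.utils import formatdate

# Build a minimal message, then let two hypothetical hops each prepend a
# Received trace field, newest hop first, as real MTAs do.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "test"
msg.set_content("hello")

def add_received(msg: EmailMessage, from_host: str, by_host: str) -> None:
    value = f"from {from_host} by {by_host} with ESMTP; {formatdate()}"
    # Rebuild the header list so the newest Received field ends up on top.
    received = [value] + msg.get_all("Received", [])
    del msg["Received"]
    for v in received:
        msg["Received"] = v

add_received(msg, "mua.example.org", "msa.example.org")  # submission hop
add_received(msg, "msa.example.org", "mx.example.net")   # relay hop
print(msg.get_all("Received")[0])  # the most recent hop appears first
```

Reading the Received fields of a delivered message from top to bottom therefore walks the relay path backwards, which is how mail loops and forged origins are commonly diagnosed.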
An MTA works in the background, while the user usually interacts directly
with a mail user agent. One may distinguish initial submission as first
passing through an MSA – port 587 is used for communication between an
MUA and an MSA while port 25 is used for communication between MTAs,
or from an MSA to an MTA;[5] this distinction was first made in RFC 2476.
For recipients hosted locally, the final delivery of email to a recipient
mailbox is the task of a message delivery agent (MDA). For this purpose the
MTA transfers the message to the message handling service component of
the message delivery agent. Upon final delivery, the Return-Path field is
added to the envelope to record the return path.
The function of an MTA is usually complemented with some means for
email clients to access stored messages. This function typically employs a
different protocol. The most widely implemented open protocols for the
MUA are the Post Office Protocol (POP3) and the Internet Message Access
Protocol (IMAP), but many proprietary systems exist for retrieving
messages (e.g. Exchange, Lotus Domino/Notes). Many systems also offer a
web interface for reading and sending email that is independent of any
particular MUA.
At its most basic, an MUA using POP3 downloads messages from the server
mailbox onto the local computer for display in the MUA. Messages are
generally removed from the server at the same time but most systems also
allow a copy to be left behind as a backup. In contrast, an MUA using IMAP
displays messages directly from the server, although a download option for
archive purposes is usually also available. One advantage this gives IMAP is
that the same messages are visible from any computer accessing the email
account, since messages aren't routinely downloaded and deleted from the
server. If set up properly, sent mail can be saved to the server also, in
contrast with POP mail, where sent messages exist only in the local MUA
and are not visible by other MUAs accessing the same account.
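The retrieval difference described above can be modelled in a few lines. This is a toy sketch, not a real POP3/IMAP client; it only illustrates that POP3 by default moves messages off the server while IMAP leaves them there:

```python
# Toy model of the POP3-vs-IMAP contrast (not real protocol code).

server_mailbox = ["msg1", "msg2"]

def pop3_fetch(mailbox: list, leave_copy: bool = False) -> list:
    """Download everything to the local MUA; by default remove from server."""
    local = list(mailbox)
    if not leave_copy:
        mailbox.clear()   # default POP3 behaviour: messages leave the server
    return local

def imap_view(mailbox: list) -> list:
    """Display messages directly from the server copy; nothing is removed."""
    return list(mailbox)

local = pop3_fetch(server_mailbox)
print(local, server_mailbox)  # ['msg1', 'msg2'] []
```

After a default POP3 fetch the server mailbox is empty, so a second computer sees nothing; with the IMAP-style view (or POP3 with `leave_copy=True`) every computer sees the same server-side mailbox, which is the advantage described above.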
The IMAP protocol has features that allow uploading of mail messages and
there are implementations that can be configured to also send messages like
an MTA,[6] which combine sending a copy and storing a copy in the Sent
folder in one upload operation.
The reason for using SMTP as a standalone transfer protocol is twofold:
• To cope with discontinuous connections. Historically, inter-network
connections were not continuously available as they are today and
many readers didn't need an access protocol, as they could access their
mailbox directly (as a file) through a terminal connection. SMTP, if
configured to use backup MXes, can transparently cope with
temporary local network outages. A message can be transmitted along
a variable path by choosing the next hop from a preconfigured list of
MXes with no intervention from the originating user.
• Submission policies. Modern systems are designed for users to submit
messages to their local servers for policy, not technical, reasons. It
was not always that way. For example, the original Eudora email
client featured direct delivery of mail to the recipients' servers, out of
necessity. Today, funneling email through MSA systems run by
providers that in principle have some means of holding their users
accountable for the generation of the email is a defense against spam
and other forms of email abuse.[7]
Fig. 3.8 The Mail Server
INSTALLATION:
Step 1: Install the postfix package:
Open the terminal. Then write the following command to install the postfix
package.
[root@localhost Desktop] # yum install postfix
Once the postfix package is installed, proceed to the next step.
Step 2: Configure the software:
Configuring the software means changing its internal settings. For Postfix
these include the server’s hostname and domain, the interfaces it listens on,
and which networks it is allowed to relay mail for. If any of these settings
need to be changed, open the main configuration file:
[root@localhost Desktop] # vim /etc/postfix/main.cf
By default this configuration can send email to anyone but cannot receive
mail from other hosts. To receive mail, the Linux firewall must be opened for
SMTP (port 25), e.g. with firewall-cmd --add-service=smtp.
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start postfix
The service of Mail Server is started.
3.10. DHCP:
INTRODUCTION:
As long as you're learning about your IP address, you should learn a little
about something called DHCP—which stands for Dynamic Host
Configuration Protocol. Why bother? Because it has a direct impact on
millions of IP addresses, most likely including yours.
DHCP is at the heart of assigning you (and everyone) their IP address. The
key word in DHCP is protocol—the guiding rules and process for Internet
connections for everyone, everywhere. DHCP is consistent, accurate and
works the same for every computer. Remember that without an IP address,
you would not be able to receive the information you requested. As you've
learned (by reading IP: 101), your IP address tells the Internet to send the
information that you requested (Web page, email, data, etc.) right to the
computer that requested it.
Those incredible protocols
There are billions of computers in the world, and each individual
computer needs its own IP address whenever it's online. The TCP/IP
protocols (our computers' built-in, internal networking software) include a
DHCP protocol. It automatically assigns and keeps tabs on IP addresses and
any "subnetworks" that require them. Nearly all IP addresses are dynamic, as
opposed to "static" IP addresses that never change.
DHCP is a part of the "application layer," which is just one of the several
TCP/IP protocols. All of the processing and figuring out of what to send to
whom happens virtually instantly.
Clients and servers
The networking world classifies computers into two distinctive categories:
1) individual computers, called "hosts," and
2) computers that help process and send data (called "servers"). A DHCP
server is one computer on the network that has a number of IP addresses at its
disposal to assign to the computers/hosts on that network. If you use a cable
company for Internet access, making them your Internet Service Provider,
they likely are your DHCP server.
Permission slips
Think of getting an IP address as similar to obtaining a special permission
slip from the DHCP server to use the Internet. In this scenario, you are the
DHCP client—whenever you want to go on the Internet, your computer
automatically requests an IP address from the network's DHCP server. If
there's one available, the DHCP server sends a response containing an IP
address to your computer.
How DHCP works
The key word in DHCP is "dynamic": instead of having just one
fixed and specific IP address, most computers will be assigned one that is
available from a subnet or "pool" that is assigned to the network. The
Internet isn't one big computer in one big location. It's an interconnected
network of networks, all created to make one-on-one connections between
any two clients that want to exchange information.
One of the features of DHCP is that it provides IP addresses that "expire."
When DHCP assigns an IP address, it actually leases that connection
identifier to the user's computer for a specific amount of time. A typical
default lease is five days.
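Under RFC 2131 the client derives two timers from whatever lease length it is granted: at 50% of the lease (T1) it tries to renew with the same server, and at 87.5% (T2) it broadcasts to any server. A small sketch using the five-day figure above:

```python
# Derive the RFC 2131 renewal (T1) and rebinding (T2) timers from a lease.

def lease_timers(lease_seconds: int) -> dict:
    return {
        "lease": lease_seconds,
        "t1_renew": int(lease_seconds * 0.5),     # unicast renew to same server
        "t2_rebind": int(lease_seconds * 0.875),  # broadcast to any server
    }

five_days = 5 * 24 * 3600  # the five-day lease mentioned above, in seconds
print(lease_timers(five_days))
# {'lease': 432000, 't1_renew': 216000, 't2_rebind': 378000}
```

This staggering is why renewal is invisible to the user: the client starts renewing long before the lease actually runs out.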
Here is how the DHCP process works when you go online:
1. You go on your computer to connect to the Internet.
2. The network requests an IP address (this is actually referred to as a
DHCP discover message).
3. On behalf of your computer's request, the DHCP server allocates
(leases) to your computer an IP address. This is referred to as the
DHCP offer message.
4. Your computer (remember—you're the DHCP client) takes the first IP
address offer that comes along. It then responds with a DHCP request
message that verifies the IP address that's been offered and accepted.
5. DHCP then updates the appropriate network servers with the IP
address and other configuration information for your computer.
6. Your computer (or whatever network device you're using) accepts the
IP address for the lease term.
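The exchange above (discover, offer, request, acknowledge) is carried in fixed-format BOOTP packets. As a hedged sketch, the function below builds the head of a DHCPDISCOVER the way a client would before broadcasting it from UDP port 68 to port 67; only a few fields are filled in, and nothing is actually sent:

```python
import struct

def build_discover(xid: int, mac: bytes) -> bytes:
    """Minimal DHCPDISCOVER body per RFC 2131 (illustrative only)."""
    assert len(mac) == 6
    pkt  = struct.pack("!BBBB", 1, 1, 6, 0)  # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
    pkt += struct.pack("!I", xid)            # transaction id, chosen by the client
    pkt += struct.pack("!HH", 0, 0x8000)     # secs=0, broadcast flag set
    pkt += b"\x00" * 16                      # ciaddr/yiaddr/siaddr/giaddr: all zero
    pkt += mac + b"\x00" * 10                # chaddr, padded to 16 bytes
    pkt += b"\x00" * 192                     # sname + file fields, unused here
    pkt += bytes([99, 130, 83, 99])          # DHCP magic cookie
    pkt += bytes([53, 1, 1])                 # option 53: message type 1 = DISCOVER
    pkt += bytes([255])                      # end-of-options marker
    return pkt

pkt = build_discover(0x12345678, bytes.fromhex("aabbccddeeff"))
print(len(pkt))  # 244
```

The server's DHCPOFFER comes back in the same packet format with the offered address in the yiaddr field, which is why the exchange can complete before the client has any IP address of its own.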
Typically, a DHCP server renews your lease automatically, without you (or
even a network administrator) having to do anything. However, if that IP
address's lease expires, you'll be assigned a new IP address using the same
DHCP protocols.
Here's the best part: You wouldn't even be aware of it, unless you happened
to check your IP address. Your Internet usage would continue as before.
DHCP takes place rather instantly, and entirely behind the scenes. We, as
everyday, ordinary computer users, never have to think twice about it. We
just get to enjoy this amazing and instantaneous technology that brings the
Internet to our fingertips when we open our browsers. I guess you could say
DHCP stands for "darn handy computer process"...or something like that.
Fig.3.9 The DHCP Server
INSTALLATION:
Step 1: Install the dhcp package:
Open the terminal. Then write the following command to install the dhcp
package.
[root@localhost Desktop] # yum install dhcp
Once the dhcp package is installed, proceed to the next step.
Step 2: Configure the software:
Configuring the software means changing its internal settings. For DHCP
these include the subnet to serve, the range (pool) of addresses to hand
out, and the lease times. To configure these settings, type the following
command:
[root@localhost Desktop] # vim /etc/dhcp/dhcpd.conf
Step 3: Starting the service:
Now start the service, i.e. the daemon, by typing the following command:
[root@localhost Desktop] # systemctl start dhcpd
The service of DHCP Server is started.
3.11. DNS:
INTRODUCTION:
The Domain Name System (DNS) is a hierarchical decentralized naming
system for computers, services, or any resource connected to the Internet or
a private network. It associates various information with domain names
assigned to each of the participating entities. Most prominently, it translates
more readily memorized domain names to the numerical IP addresses
needed for the purpose of locating and identifying computer services and
devices with the underlying network protocols. By providing a worldwide,
distributed directory service, the Domain Name System is an essential
component of the functionality of the Internet.
The Domain Name System delegates the responsibility of assigning domain
names and mapping those names to Internet resources by designating
authoritative name servers for each domain. Network administrators may
delegate authority over sub-domains of their allocated name space to other
name servers. This mechanism provides distributed and fault tolerant service
and was designed to avoid a single large central database.
The Domain Name System also specifies the technical functionality of the
database service which is at its core. It defines the DNS protocol, a detailed
specification of the data structures and data communication exchanges used
in the DNS, as part of the Internet Protocol Suite. Historically, other
directory services preceding DNS were not scalable to large or global
directories as they were originally based on text files, prominently the
HOSTS.TXT resolver. The Domain Name System has been in use since the
1980s.
The Internet maintains two principal namespaces, the domain name
hierarchy and the Internet Protocol (IP) address spaces. The Domain Name
System maintains the domain name hierarchy and provides translation
services between it and the address spaces. Internet name servers and a
communication protocol implement the Domain Name System. A DNS
name server is a server that stores the DNS records for a domain; a DNS
name server responds with answers to queries against its database.
name server responds with answers to queries against its database.
The most common types of records stored in the DNS database are for Start
of Authority (SOA), IP addresses (A and AAAA), SMTP mail exchangers
(MX), name servers (NS), pointers for reverse DNS lookups (PTR), and
domain name aliases (CNAME). Although not intended to be a general
purpose database, DNS can store records for other types of data for either
automatic lookups, such as DNSSEC records, or for human queries such as
responsible person (RP) records. As a general purpose database, the DNS
has also been used in combating unsolicited email (spam) by storing a real-
time blackhole list. The DNS database is traditionally stored in a structured
zone file.
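To make a record lookup concrete, here is a hedged sketch of the RFC 1035 wire encoding a resolver uses when it asks for an A record; the transaction id is arbitrary and nothing is sent on the network:

```python
import struct

def encode_qname(name: str) -> bytes:
    """DNS name encoding: each label is length-prefixed; 0x00 terminates."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_query(name: str, qtype: int = 1, xid: int = 0x1234) -> bytes:
    """A-record query (qtype=1): 12-byte header plus the question section."""
    # Header: id, flags (0x0100 sets RD, "recursion desired"), QD=1, others 0
    header = struct.pack("!HHHHHH", xid, 0x0100, 1, 0, 0, 0)
    question = encode_qname(name) + struct.pack("!HH", qtype, 1)  # QCLASS 1 = IN
    return header + question

print(encode_qname("www.example.com"))  # b'\x03www\x07example\x03com\x00'
print(len(build_query("www.example.com")))  # 33
```

The other record types listed above (MX = 15, NS = 2, CNAME = 5, AAAA = 28, PTR = 12) use the same question format with a different qtype value.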
An often-used analogy to explain the Domain Name System is that it serves
as the phone book for the Internet by translating human-friendly computer
hostnames into IP addresses. For example, the domain name
www.example.com translates to the addresses 93.184.216.119 (IPv4) and
2606:2800:220:6d:26bf:1447:1097:aa7 (IPv6). Unlike a phone book, DNS
can be quickly updated, allowing a service's location on the network to
change without affecting the end users, who continue to use the same host
name. Users take advantage of this when they use meaningful Uniform
Resource Locators (URLs), and e-mail addresses without having to know
how the computer actually locates the services.
Additionally, DNS reflects administrative partitioning. For zones operated
by a registry, also known as public suffix zones, administrative information
is often complemented by the registry's RDAP and WHOIS services. That
data can be used to gain insight on, and track responsibility for, a given host
on the Internet.
An important and ubiquitous function of DNS is its central role in
distributed Internet services such as cloud services and content delivery
networks. When a user accesses a distributed Internet service using a URL,
the domain name of the URL is translated to the IP address of a server that is
proximal to the user. The key functionality of DNS exploited here is that
different users can simultaneously receive different translations for the same
domain name, a key point of divergence from a traditional "phone book"
view of DNS. This process of using DNS to assign proximal servers to users
is key to providing faster response times on the Internet and is widely used
by most major Internet services today.
Fig. 3.10 The DNS Server
INSTALLATION:
Step 1 – Install Bind Packages
Bind packages are available in the default yum repositories. To install them,
simply execute the command below.
# yum install bind bind-chroot
Step 2 – Edit Main Configuration File
By default, bind's main configuration file is located in the /etc directory,
but in a chroot environment it is found under /var/named/chroot/etc. Now edit
the main configuration file and update its content as below.
# vim /var/named/chroot/etc/named.conf
Content for the named.conf file
// /var/named/chroot/etc/named.conf
options {
listen-on port 53 { 127.0.0.1; 192.168.1.0/24; 0.0.0.0/0; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { localhost; 192.168.1.0/24; 0.0.0.0/0; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
dnssec-lookaside auto;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
zone "demotecadmin.net" IN {
type master;
file "/var/named/demotecadmin.net.db";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
Step 3 – Create Zone File for Your Domain
After creating bind's main configuration file, create a zone file for your
domain, for example demotecadmin.net.db in this article.
# vim /var/named/chroot/var/named/demotecadmin.net.db
Content for the zone file
; Zone file for demotecadmin.net
$TTL 14400
@ 86400 IN SOA ns1.tecadmin.net. webmaster.tecadmin.net. (
3013040200 ; serial, todays date + todays serial #
86400 ; refresh, seconds
7200 ; retry, seconds
3600000 ; expire, seconds
86400 ; minimum, seconds
)
demotecadmin.net. 86400 IN NS ns1.tecadmin.net.
demotecadmin.net. 86400 IN NS ns2.tecadmin.net.
demotecadmin.net. IN A 192.168.1.100
demotecadmin.net. IN MX 0 mail.demotecadmin.net.
mail IN CNAME demotecadmin.net.
www IN CNAME demotecadmin.net.
If you have more domains, you need to create a zone file for each domain
individually.
Step 4 – Add More Domains
To add more domains to DNS, create zone files individually for each domain as
above. After that, add an entry for each zone in named.conf like the one
below, replacing demotecadmin.net with your domain name.
zone "demotecadmin.net" IN {
type master;
file "/var/named/demotecadmin.net.db";
};
Step 5 – Start Bind Service
Start the named (bind) service using the following command (on RHEL 7, use
the named-chroot unit instead when running bind in the chroot environment):
# systemctl restart named
References
I have studied PHP, MySQL, etc.; Dreamweaver CS5 was the main tool used for
working with PHP. I have also used the Apache server and MySQL to store data in a
database. In the making of this report I got a lot of help from books and websites.
The sources are:
[1] www.techmint.net
[2] www.w3schools.com
[3] www.google.com
Books are:
[4] PHP: A Beginner's Guide (Vikram Vaswani)
[5] PHP 6 and MySQL Bible
[6] SAMS Teach Yourself HTML and CSS (Mike Wooldridge & Linda Wooldridge)