COMPUTER BASICS
Document Transcript

    • A Project Report on COMPUTER BASICS
      Submitted to the Institute of Chartered Accountants of India, XXXXXXX Branch, in partial fulfillment of the ITT 100 Hours Training.
      Guided by: XXXXXXXXXXXX (Centre in Charge), Faculty, ITT
      Submitted by: XXXXX XXXX (ERO XXXXXXXXXX)
      Institute of Chartered Accountants of India, XXXXXXX Branch
      Year 2012
      ITT 100 Hours Training under the Institute of Chartered Accountants of India
    • CERTIFICATE
      This is to certify that XXXXX XXXX, a student of the ITT 100 Hours Training of the Institute of Chartered Accountants of India, has prepared a project on “COMPUTER BASICS” under my guidance. She has fulfilled all the requirements needed for preparing the project report. I wish her all success in life.
      Date: _________________________
      Authorised Signature, ITT Branch, XXXXXX
    • ACKNOWLEDGEMENT
      No project can be completed through individual effort alone; it always takes the contribution of many people, some direct and some indirect. I express my sincere gratitude towards all those who helped me, directly and indirectly, throughout the project. First and foremost, I would like to express my sincere appreciation and gratitude to XXXXXXXX and XXXXXXX, who in the role of institutional guides offered me their precise guidance, motivation and suggestions in completing this project work. My sincere thanks also go to my parents, who have continuously supported me in this effort. Finally, I thank my fellow group members, without whose co-operation it would not have been possible for the project report to materialize.
      Registration No.: ERO-XXXXXXX
    • INTRODUCTION
      The Internet is a global network of networks. People and organizations connect to the Internet so they can access its massive store of shared information. It is an inherently participative medium: anybody can publish information or create new services. The Internet is a cooperative endeavor; no single organization is in charge of the net. Information, including access to the Internet, is increasingly the basis for personal, economic, and political advancement, and the popular name for the Internet is the information superhighway. Whether we want to find the latest financial news, browse through library catalogs, exchange information with colleagues, or join in a lively political debate, the Internet is the tool that takes us beyond telephones, faxes, and isolated computers to a burgeoning networked information frontier. The Internet supplements the traditional tools we use to gather information, data, graphics and news, and to correspond with other people. Used skillfully, the Internet shrinks the world and brings information, expertise, and knowledge on nearly every subject imaginable straight to your computer.
    • CONTENTS
      Chapter 1:
        - Introduction
        - Executive Summary
        - Objective of the Study
        - Research Methodology
      Chapter 2:
        - Search Engine
        - How a Search Engine Works
        - Web Crawling
        - Indexing
        - Searching
        - Bibliography
    • EXECUTIVE SUMMARY
      Topic: Internet
      Sources of Data: Internet, Books
      Location of Study: Guwahati
      Institutional Guides: Himanshu Haloi (Centre-in-Charge), Sagar Nath (Faculty)
      Objective: To examine the working and importance of the Internet.
      Data Source: Secondary
    • OBJECTIVE OF THE STUDY
      - To gain in-depth knowledge about the Internet, an important tool in the areas of software development and computer programming.
      - To understand the structure of flowcharting, the symbols and steps required to prepare it, and the different types of flowcharts.
      - To analyse the advantages and limitations of the Internet and its extensive use in different fields.
      - To comprehend the meaning and types of decision tables and the steps in the process of making them.
      - To understand the applications of decision tables in various fields.
    • RESEARCH METHODOLOGY
      Data is one of the most vital aspects of any research study. Research conducted in different fields of study may differ in methodology, but every research effort is based on data that is analyzed and interpreted to obtain information. Data is the basic unit in statistical studies: statistical information such as censuses, population variables, health statistics and road-accident records is all developed from data. Data is equally important in computer science, where numbers, images and figures are all data.
      Primary Data: Data that has been collected from first-hand experience is known as primary data. Primary data has not been published yet and is more reliable, authentic and objective; because it has not been changed or altered, its validity is greater than that of secondary data. Some of the sources of primary data are:
      Experiments: Experiments require an artificial or natural setting in which to perform a logical study to collect data. They are most suitable for medicine, psychological studies, nutrition and other scientific studies. In experiments the experimenter has to control the influence of any extraneous variable on the results.
      Survey: The survey is the most commonly used method in social sciences, management, marketing and, to some extent, psychology. Surveys can be conducted by several methods, of which the questionnaire is the most common.
    • Questionnaire: A questionnaire is a list of questions, either open-ended or close-ended, to which the respondent gives answers. It can be administered by telephone, by mail, live in a public area or in an institute, through electronic mail, through fax, or by other methods.
      Interview: An interview is a face-to-face conversation with the respondent. The main problem arises when the respondent deliberately hides information; otherwise it is an in-depth source of information. The interviewer can not only record the statements the interviewee makes but also observe body language, expressions and other reactions to the questions, which makes it easier to draw conclusions.
      Observation: Observation can be done while letting the observed person know that he is being observed, or without letting him know. Observations can be made in natural settings as well as in artificially created environments.
      Secondary Data: Data collected from a source that has already been published in any form is called secondary data. The review of literature in any research is based on secondary data, drawn mostly from books, journals, periodicals, the internet and electronic media.
      The methodology used in the preparation of this project is mostly secondary, with the help of books and the internet.
    • Search Engine
      The World Wide Web is "indexed" through the use of search engines, which are also referred to as "spiders," "robots," "crawlers," or "worms." These search engines comb through Web documents, identifying the text that is the basis for keyword searching. The list below describes several search engines and how each one gathers information, along with resources that evaluate them:
      Alta Vista
      Alta Vista, maintained by Digital Equipment Corp., indexes the full text of over 16 million pages, including newsgroups. Check out the Alta Vista Tips page.
      Excite NetSearch
      Excite includes approximately 1.5 million indexed pages, including newsgroups. Check out the Excite NetSearch handbook.
      InfoSeek Net Search
      Indexes the full text of web pages, including selected newsgroups and electronic journals; just under half a million pages indexed. Check out the InfoSeek Search Tips.
    • Inktomi
      As of December 1995, the Inktomi search engine offers a database of approximately 2.8 million indexed Web documents and promises very fast search retrievals. Results are ranked by how many of your search terms are used on the retrieved pages.
      Lycos
      Lycos indexes web pages (1.5 million+), web page titles, headings, subheadings, URLs, and significant text. Search results are returned in ranked order.
      Magellan
      Magellan indexes over 80,000 web sites. Search results are ranked and annotated.
      Open Text Index
      Indexes the full text of approximately 1.3 million pages. Check out the Open Text Help pages for tips on using this search engine.
      WebCrawler
      Maintained by America Online, WebCrawler indexes over 200,000 pages on approximately 75,000 web servers. URLs, titles, and document content are indexed.
      WWWW (World Wide Web Worm)
      Approximately 250,000 indexed pages; indexed content includes hypertext, URLs, and document titles.
      Yahoo
      A favorite directory and search engine, Yahoo has organized over 80,000 Web sites (including newsgroups) into 14 broad categories.
    • How a Search Engine Works
      Each search engine works in a different way. Some engines scan for information in the title or header of the document; others look at the bold headings on the page. However a search engine operates, it follows three steps:
      - Web crawling: Special software robots called spiders build lists of the words found on millions of web sites. When a spider is building its list, the process is called web crawling.
      - Indexing: After crawling, the contents of each page are analyzed to determine how the page should be indexed.
      - Searching: Building a query and submitting it through the search engine, which looks the query up in its index.
      The toy pipeline sketched below walks through all three stages end to end.
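      Here is a minimal, self-contained Python sketch of the three stages over a hard-coded set of pages. Everything in it (the page texts, the tokenizer, the function names) is an illustrative assumption, not part of any real engine:

          # Toy end-to-end pipeline: "crawl" a hard-coded web, index it, search it.
          # All data and names here are illustrative.

          PAGES = {  # pretend these pages were fetched by a spider
              "a.html": "internet search engines index the web",
              "b.html": "a web crawler visits pages on the internet",
              "c.html": "indexing makes searching the index fast",
          }

          def tokenize(text):
              """Lowercase and split a page into words (a crude stand-in for parsing)."""
              return text.lower().split()

          def build_index(pages):
              """Indexing: map each word to the set of pages that contain it."""
              index = {}
              for url, text in pages.items():
                  for word in tokenize(text):
                      index.setdefault(word, set()).add(url)
              return index

          def search(index, word):
              """Searching: look the query word up in the prebuilt index."""
              return sorted(index.get(word.lower(), set()))

          index = build_index(PAGES)          # steps 1-2: crawl (simulated) + index
          print(search(index, "internet"))    # step 3: ['a.html', 'b.html']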
    • Web Crawling
      A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner; this process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which will index the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, and to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
      A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in each page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies. The sketch that follows shows this seed-and-frontier loop in miniature.
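      A minimal sketch of the seed-and-frontier loop, using only the Python standard library. The seed URL, page limit and politeness delay are arbitrary assumptions; a real crawler would also honour robots.txt, canonicalise and deduplicate URLs, and handle errors per site:

          # Minimal breadth-first web crawler: seeds -> frontier -> visited set.
          # Illustrative sketch only; real crawlers must respect robots.txt.
          import time
          from collections import deque
          from html.parser import HTMLParser
          from urllib.parse import urljoin
          from urllib.request import urlopen

          class LinkParser(HTMLParser):
              """Collect the href of every <a> tag on a page."""
              def __init__(self):
                  super().__init__()
                  self.links = []
              def handle_starttag(self, tag, attrs):
                  if tag == "a":
                      for name, value in attrs:
                          if name == "href" and value:
                              self.links.append(value)

          def crawl(seeds, max_pages=10, delay=1.0):
              frontier = deque(seeds)       # URLs waiting to be visited
              visited = set()               # URLs already fetched
              pages = {}                    # url -> raw HTML, for the indexer
              while frontier and len(pages) < max_pages:
                  url = frontier.popleft()
                  if url in visited:
                      continue
                  visited.add(url)
                  try:
                      html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
                  except Exception:
                      continue              # skip unreachable pages
                  pages[url] = html
                  parser = LinkParser()
                  parser.feed(html)
                  for link in parser.links: # extend the frontier with new hyperlinks
                      frontier.append(urljoin(url, link))
                  time.sleep(delay)         # be polite to the server
              return pages

          # pages = crawl(["https://example.com/"])   # hypothetical seed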
    • Indexing
      Search engine indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. In the context of search engines designed to find web pages on the Internet, the process is also called web indexing. The purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query. Without an index, the search engine would have to scan every document in the corpus, which would require considerable time and computing power: while an index of 10,000 documents can be queried within milliseconds, a sequential scan of every word in 10,000 large documents could take hours. The additional computer storage required to hold the index, and the extra time required to update it, are traded off against the time saved during information retrieval. The sketch below makes this trade-off concrete.
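      The trade-off is easy to demonstrate. This sketch builds an inverted index over 10,000 synthetic documents (the document contents and sizes are made-up assumptions, not a benchmark of any real engine) and compares a sequential scan with a single index lookup:

          # Sequential scan vs. inverted-index lookup on synthetic documents.
          import time

          docs = {i: f"word{i % 1000} common filler text" for i in range(10_000)}

          # Build the inverted index once, up front (the storage/update cost).
          index = {}
          for doc_id, text in docs.items():
              for word in text.split():
                  index.setdefault(word, set()).add(doc_id)

          query = "word123"

          t0 = time.perf_counter()
          scan_hits = {d for d, text in docs.items() if query in text.split()}
          t1 = time.perf_counter()
          index_hits = index.get(query, set())    # one dictionary lookup
          t2 = time.perf_counter()

          assert scan_hits == index_hits          # same answer, very different cost
          print(f"sequential scan: {t1 - t0:.6f}s, index lookup: {t2 - t1:.6f}s")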
    • Searching
      When a user enters a query into a search engine, the engine examines its index and returns a listing of the best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes part of the text. Most search engines support the Boolean operators AND, OR and NOT to further specify the search query, and some provide an advanced feature called proximity search, which allows users to define the distance between keywords. The usefulness of a search depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others, so most search engines employ methods to rank the results and present the "best" results first. The sketch below shows the Boolean operators as set operations over an inverted index, plus a naive ranking.
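      The Boolean operators map directly onto set operations over the inverted index. In the following sketch, the tiny corpus and the term-frequency ranking are illustrative assumptions; real engines use far richer ranking signals:

          # Boolean search over an inverted index, with a naive term-frequency rank.

          docs = {
              1: "the internet is a network of networks",
              2: "search engines index the web",
              3: "the web is part of the internet",
          }

          index = {}
          for doc_id, text in docs.items():
              for word in text.split():
                  index.setdefault(word, set()).add(doc_id)

          all_docs = set(docs)

          def posting(word):
              """Set of documents containing the word."""
              return index.get(word, set())

          # AND -> intersection, OR -> union, NOT -> complement
          print(posting("internet") & posting("web"))     # {3}
          print(posting("internet") | posting("search"))  # {1, 2, 3}
          print(all_docs - posting("web"))                # {1}  (NOT web)

          def rank(doc_ids, word):
              """Order results by how often the word occurs (a crude relevance proxy)."""
              return sorted(doc_ids, key=lambda d: docs[d].split().count(word), reverse=True)

          print(rank(posting("the"), "the"))  # doc 3 uses "the" twice, so it is listed first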