A Survey Paper of Virtual Friend Chatbot

Siddiq Abu Bakkar [09-13368-1]
American International University - Bangladesh (AIUB), CSE Department
shaon_sikdar@yahoo.com ; shaon.sikdar@gmail.com
March 20, 2012
Abstract:

A chatter robot, chatterbot, chatbot or chat bot is a computer program designed to simulate an intelligent conversation with one or more human users via auditory or textual methods, primarily for engaging in small talk. The primary aim of such simulation has been to fool the user into thinking that the program's output has been produced by a human (the Turing test). Programs playing this role are sometimes referred to as Artificial Conversational Entities, talk bots or chatterboxes. In addition, however, chatterbots are often integrated into dialog systems for various practical purposes such as online help, personalized service, or information acquisition. Some chatterbots use sophisticated natural language processing systems, but many simply scan for keywords within the input and pull a reply with the most matching keywords, or the most similar wording pattern, from a textual database.

Virtual Friend (VF) is a computer program and an early example of primitive natural language processing. VF operates by processing the user's responses to scripts, the most famous of which is DOCTOR, a simulation of a Rogerian psychotherapist. Like ELIZA, which used almost no information about human thought or emotion, DOCTOR sometimes provided a startlingly human-like interaction. ELIZA was written at MIT by Joseph Weizenbaum between 1964 and 1966.

When the user exceeds the very small knowledge base, VF might provide a generic response: for example, responding to "I won't go to university today." with "Why won't you go to university, are you feeling sick?". The response to "Yahoo! I have got a 3.94 CGPA this semester." would be "Congratulations!! I am very happy about your excellent result." VF is implemented using simple pattern matching techniques, but is taken seriously by several of its users, even after it is explained to them how it works (a minimal illustrative sketch of this style of pattern matching is given below).

Figure: Virtual Friend response.
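Since the paper does not include VF's source code, the following is only a minimal sketch of the kind of keyword/pattern matching described above; the patterns, responses, and function names are illustrative assumptions, not the actual VF implementation.

```python
import re

# Hypothetical rules in the spirit of VF's simple pattern matching:
# each entry maps a regular expression to a canned response.
RULES = [
    (re.compile(r"won'?t go to university", re.I),
     "Why won't you go to university, are you feeling sick?"),
    (re.compile(r"got (?:a )?([\d.]+) CGPA", re.I),
     "Congratulations!! I am very happy about your excellent result."),
]

GENERIC_REPLY = "Tell me more about that."  # fallback when nothing matches


def reply(user_input: str) -> str:
    """Return the first matching canned response, or a generic one."""
    for pattern, response in RULES:
        if pattern.search(user_input):
            return response
    return GENERIC_REPLY


if __name__ == "__main__":
    print(reply("I won't go to university today."))
    print(reply("Yahoo! I have got 3.94 CGPA this semester."))
```

The design point is simply that the knowledge base is a flat list of pattern-response pairs, which is why the bot falls back to a generic reply as soon as the input goes outside that list.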
The program was designed to showcase the digitized voices the cards were able to produce, though the quality was far from life-like. Its AI engine was likely based on something similar to the ELIZA algorithm.

Contents:

1. Natural Language Processing [NLP]
2. Machine Learning [ML]
   I. Supervised learning algorithms
   II. Logic based algorithms
       - Decision trees
   III. Statistical learning algorithms
       - Naive Bayes classifiers
       - Bayesian Networks
3. Speech Recognition [SR]
4. Turing Test [TT]
5. Most Popular Chatbots
   a. ELIZA
   b. PARRY
   c. The Chinese Room
   d. SIRI
      i. Details of SIRI
      ii. Reception of SIRI
      iii. SIRI says some weird things
6. References

Natural Language Processing:

The history of machine translation dates back to the seventeenth century, when philosophers such as Leibniz and Descartes put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.

The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by Georges Artsrouni, was simply an automatic bilingual dictionary using paper tape. The other proposal, by Peter Troyanskii, a Russian, was more detailed. It included both the bilingual dictionary and a method for dealing with grammatical roles between languages, based on Esperanto.

In 1950, Alan Turing published his famous article "Computing Machinery and Intelligence" [1], which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably - on the basis of the conversational content alone - between the program and a real human.

In 1957, Noam Chomsky's Syntactic Structures revolutionized linguistics with universal grammar, a rule-based system of syntactic structures. However, the real progress of NLP was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations, funding was dramatically reduced internationally.

In 1969, Roger Schank introduced the conceptual dependency theory for natural language understanding.
This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input. Instead of phrase structure rules, ATNs used an equivalent set of finite state automata that were called recursively. ATNs and their more general format, called "generalized ATNs", continued to be used for a number of years.

Machine Learning:

There are several applications for Machine Learning (ML), the most significant of which is data mining. People are often prone to making mistakes during analyses or, possibly, when trying to establish relationships between multiple features. This makes it difficult for them to find solutions to certain problems. Machine learning can often be successfully applied to these problems, improving the efficiency of systems and the designs of machines.

Every instance in any dataset used by machine learning algorithms is represented using the same set of features. The features may be continuous, categorical or binary. If instances are given with known labels (the corresponding correct outputs), then the learning is called supervised, in contrast to unsupervised learning, where instances are unlabeled. By applying unsupervised (clustering) algorithms, researchers hope to discover unknown, but useful, classes of items (Jain et al., 1999).

Another kind of machine learning is reinforcement learning (Barto & Sutton, 1997). The training information provided to the learning system by the environment (an external trainer) is in the form of a scalar reinforcement signal that constitutes a measure of how well the system operates. The learner is not told which actions to take, but rather must discover which actions yield the best reward by trying each action in turn.

Numerous ML applications involve tasks that can be set up as supervised. In the present paper, we have concentrated on the techniques necessary to do this. In particular, this work is concerned with classification problems in which the output of instances admits only discrete, unordered values.

We have limited our references to recent refereed journals, published books and conferences. In addition, we have added some references regarding the original work that started the particular line of research under discussion. A brief review of what ML includes can be found in Dutton & Conroy (1996). De Mantaras and Armengol (1998) also presented a historical survey of logic and instance based learning classifiers. The reader should be cautioned that a single article cannot be a comprehensive review of all classification learning algorithms. Instead, our goal has been to provide a representative sample of existing lines of research in each learning technique. In each of our listed areas, there are many other papers that more comprehensively detail relevant work.

Supervised learning algorithms:

Inductive machine learning is the process of learning a set of rules from instances (examples in a training set), or, more generally speaking, creating a classifier that can be used to generalize from new instances. The process of applying supervised ML to a real-world problem is described in the figure below.
Figure: The process of supervised ML.

The first step is collecting the dataset. If a requisite expert is available, then s/he could suggest which fields (attributes, features) are the most informative. If not, then the simplest method is that of "brute force", which means measuring everything available in the hope that the right (informative, relevant) features can be isolated. However, a dataset collected by the "brute force" method is not directly suitable for induction. In most cases it contains noise and missing feature values, and therefore requires significant pre-processing (Zhang et al., 2002).

The second step is data preparation and data pre-processing. Depending on the circumstances, researchers have a number of methods to choose from to handle missing data (Batista & Monard, 2003). Hodge & Austin (2004) have recently introduced a survey of contemporary techniques for outlier (noise) detection. These researchers have identified the techniques' advantages and disadvantages. Instance selection is not only used to handle noise but also to cope with the infeasibility of learning from very large datasets. Instance selection in these datasets is an optimization problem that attempts to maintain the mining quality while minimizing the sample size (Liu and Motoda, 2001). It reduces data and enables a data mining algorithm to function and work effectively with very large datasets. There is a variety of procedures for sampling instances from a large dataset (Reinartz, 2002).

Feature subset selection is the process of identifying and removing as many irrelevant and redundant features as possible (Yu & Liu, 2004). This reduces the dimensionality of the data and enables data mining algorithms to operate faster and more effectively. The fact that many features depend on one another often unduly influences the accuracy of supervised ML classification models. This problem can be addressed by constructing new features from the basic feature set (Markovitch & Rosenstein, 2002). This technique is called feature construction/transformation. These newly generated features may lead to the creation of more concise and accurate classifiers. In addition, the discovery of meaningful features contributes to better comprehensibility of the produced classifier.

Logic based algorithms:

Decision trees:

Murthy (1998) provided an overview of work in decision trees and a sample of their usefulness to newcomers as well as practitioners in the field of machine learning. Thus, in this work, apart from a brief description of decision trees, we will refer to some works more recent than those in Murthy's article, as well as a few very important articles that were published earlier. Decision trees are trees that classify instances by sorting them based on feature values. Each node in a decision tree represents a feature in an instance to be classified, and each branch represents a value that the node can assume. Instances are classified starting at the root node and sorted based on their feature values; a minimal sketch of this fit-and-classify workflow is given below.
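To make the supervised workflow and the decision tree idea concrete, here is a small sketch using scikit-learn (an assumed library choice; the paper names none). The toy attribute values and the "Yes"/"No" labels are invented for illustration and are not the paper's actual training set or table.

```python
# A minimal supervised-learning sketch: collect a labeled dataset,
# train a decision tree, then classify a new instance.
from sklearn.tree import DecisionTreeClassifier

# Toy training set: four categorical attributes, already integer-encoded.
# (The attribute values and labels are illustrative, not from the paper.)
X_train = [
    [0, 0, 0, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 1],
]
y_train = ["No", "Yes", "No", "Yes"]

# Build the tree: each internal node tests one feature,
# each branch corresponds to a value that feature can take.
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)

# Classify a new instance by sorting it down the tree from the root.
new_instance = [[0, 1, 1, 1]]
print(clf.predict(new_instance))  # e.g. ['Yes']
```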
Figure is an example of a decision tree for the training set of Table. Using the decision tree depicted in Figure as an example, the instance ⟨at1 = a1, at2 = b2, at3 = a3, at4 = b4⟩ would sort to the nodes at1, at2, and finally at3, which would classify the instance as being positive (represented by the value "Yes"). The problem of constructing optimal binary decision trees is an NP-complete problem, and thus theoreticians have searched for efficient heuristics for constructing near-optimal decision trees.

Statistical Learning Algorithms:

Conversely to ANNs, statistical approaches are characterized by having an explicit underlying probability model, which provides a probability that an instance belongs in each class, rather than simply a classification. Linear discriminant analysis (LDA) and the related Fisher's linear discriminant are simple methods used in statistics and machine learning to find the linear combination of features which best separates two or more classes of objects (Friedman, 1989). LDA works when the measurements made on each observation are continuous quantities. When dealing with categorical variables, the equivalent technique is Discriminant Correspondence Analysis (Mika et al., 1999).

Maximum entropy is another general technique for estimating probability distributions from data. The overriding principle in maximum entropy is that when nothing is known, the distribution should be as uniform as possible, that is, have maximal entropy. Labeled training data is used to derive a set of constraints for the model that characterize the class-specific expectations for the distribution. Csiszar (1996) provides a good tutorial introduction to maximum entropy techniques.

Bayesian networks are the most well-known representative of statistical learning algorithms. A comprehensive book on Bayesian networks is Jensen's (1996). Thus, in this study, apart from our brief description of Bayesian networks, we mainly refer to more recent works.
Naive Bayes classifiers:

Naive Bayesian networks (NB) are very simple Bayesian networks which are composed of directed acyclic graphs with only one parent (representing the unobserved node) and several children (corresponding to observed nodes), with a strong assumption of independence among child nodes in the context of their parent (Good, 1950). Thus, the independence model (Naive Bayes) is based on estimating (Nilsson, 1965):

    R = P(i|X) / P(j|X) = [ P(i) P(X|i) ] / [ P(j) P(X|j) ] = [ P(i) Π_r P(X_r|i) ] / [ P(j) Π_r P(X_r|j) ]

Comparing these two probabilities, the larger probability indicates the class label value that is more likely to be the actual label (if R > 1: predict i, otherwise predict j). Cestnik et al. (1987) first used Naive Bayes in the ML community. Since the Bayes classification algorithm uses a product operation to compute the probabilities P(X, i), it is especially prone to being unduly impacted by probabilities of 0. This can be avoided by using the Laplace estimator or m-estimate, by adding one to all numerators and adding the number of added ones to the denominator (Cestnik, 1990).

The assumption of independence among child nodes is clearly almost always wrong, and for this reason naive Bayes classifiers are usually less accurate than other, more sophisticated learning algorithms (such as ANNs). However, Domingos & Pazzani (1997) performed a large-scale comparison of the naive Bayes classifier with state-of-the-art algorithms for decision tree induction, instance-based learning, and rule induction on standard benchmark datasets, and found it to be sometimes superior to the other learning schemes, even on datasets with substantial feature dependencies.

The basic independent Bayes model has been modified in various ways in attempts to improve its performance. Attempts to overcome the independence assumption are mainly based on adding extra edges to include some of the dependencies between the features, for example (Friedman et al., 1997). In this case, the network has the limitation that each feature can be related to only one other feature. The semi-naive Bayesian classifier is another important attempt to avoid the independence assumption (Kononenko, 1991), in which attributes are partitioned into groups and it is assumed that xi is conditionally independent of xj if and only if they are in different groups.

The major advantage of the naive Bayes classifier is its short computational time for training. In addition, since the model has the form of a product, it can be converted into a sum through the use of logarithms, with significant consequent computational advantages. If a feature is numerical, the usual procedure is to discretize it during data pre-processing (Yang & Webb, 2003), although a researcher can use the normal distribution to calculate probabilities (Bouckaert, 2004). A minimal sketch of such a classifier follows.
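As a concrete illustration of the product-of-probabilities form, the Laplace (add-one) estimator, and the log-sum trick described above, here is a small categorical Naive Bayes sketch in plain Python. The toy dataset and function names are assumptions made for illustration, not code from the paper.

```python
import math
from collections import Counter, defaultdict


def train_naive_bayes(instances, labels):
    """Estimate class counts and per-class feature-value counts for Laplace smoothing."""
    class_counts = Counter(labels)
    # value_counts[class][feature_index][value] -> count
    value_counts = defaultdict(lambda: defaultdict(Counter))
    feature_values = defaultdict(set)
    for x, y in zip(instances, labels):
        for r, v in enumerate(x):
            value_counts[y][r][v] += 1
            feature_values[r].add(v)
    return class_counts, value_counts, feature_values


def log_posterior(x, c, class_counts, value_counts, feature_values, n_total):
    """log P(c) + sum_r log P(x_r | c), with add-one smoothing to avoid zero probabilities."""
    logp = math.log(class_counts[c] / n_total)
    for r, v in enumerate(x):
        num = value_counts[c][r][v] + 1                 # add one to the numerator
        den = class_counts[c] + len(feature_values[r])  # add the number of added ones
        logp += math.log(num / den)
    return logp


def predict(x, model):
    class_counts, value_counts, feature_values = model
    n_total = sum(class_counts.values())
    return max(class_counts,
               key=lambda c: log_posterior(x, c, class_counts, value_counts,
                                           feature_values, n_total))


# Toy, invented data: two categorical features.
X = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "cool")]
y = ["no", "no", "yes", "yes"]
model = train_naive_bayes(X, y)
print(predict(("rainy", "hot"), model))  # -> "yes"
```

Comparing the two (log) posteriors here is equivalent to checking whether the ratio R above is greater than 1, which is exactly the decision rule stated in the text.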
Bayesian Networks:

A Bayesian Network (BN) is a graphical model for probability relationships among a set of variables (features). The Bayesian network structure S is a directed acyclic graph (DAG), and the nodes in S are in one-to-one correspondence with the features X. The arcs represent causal influences among the features, while the lack of possible arcs in S encodes conditional independencies. Moreover, a feature (node) is conditionally independent of its non-descendants given its parents (X1 is conditionally independent of X2 given X3 if P(X1|X2,X3) = P(X1|X3) for all possible values of X1, X2, X3).

Speech recognition:

In computer science, speech recognition is the translation of spoken words into text. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", "speech to text", or just "STT".

Some SR systems use "training", where an individual speaker reads sections of text into the SR system. These systems analyze the person's specific voice and use it to fine-tune the recognition of that person's speech, resulting in more accurate transcription. Systems that do not use training are called "speaker independent" systems; systems that use training are called "speaker dependent" systems.

Speech recognition applications include voice user interfaces such as voice dialing (e.g., "Call home"), call routing ("I would like to make a collect call"), domotic appliance control, search (e.g., finding a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g., a radiology report), speech-to-text processing (e.g., word processors or emails), and aircraft control (usually termed Direct Voice Input).

The term voice recognition refers to finding the identity of "who" is speaking, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on specific persons' voices, or it can be used to authenticate or verify the identity of a speaker as part of a security process. "Voice recognition" means "recognizing by voice", something humans do all the time over the phone: as soon as someone familiar says "hello", the listener can identify them by the sound of their voice alone. A small transcription sketch is shown below.
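The paper shows no ASR code; as one hedged illustration, the snippet below uses the third-party Python SpeechRecognition package (an assumed choice, not something the paper prescribes) to transcribe a recorded WAV file with a speaker-independent cloud recognizer. The file name is a placeholder.

```python
# Illustrative only: assumes `pip install SpeechRecognition` and a WAV file on disk.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a recorded utterance (placeholder path) and capture it as audio data.
with sr.AudioFile("utterance.wav") as source:
    recognizer.adjust_for_ambient_noise(source)  # optional noise calibration
    audio = recognizer.record(source)

try:
    # Speaker-independent recognition via Google's free web API.
    text = recognizer.recognize_google(audio)
    print("Transcription:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible.")
except sr.RequestError as err:
    print("Recognition service unavailable:", err)
```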
Turing Test:

The Turing test is a test of a machine's ability to exhibit intelligent behaviour. In Turing's original illustrative example, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel, such as a computer keyboard and screen, so that the result is not dependent on the machine's ability to render words into audio.

The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence", which opens with the words: "I propose to consider the question, 'Can machines think?'" Since "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".

ELIZA and PARRY:

In 1966, Joseph Weizenbaum created a program which appeared to pass the Turing test. The program, known as ELIZA, worked by examining a user's typed comments for keywords. If a keyword is found, a rule that transforms the user's comments is applied, and the resulting sentence is returned. If a keyword is not found, ELIZA responds either with a generic riposte or by repeating one of the earlier comments (a minimal sketch of such a keyword-transformation rule appears at the end of this section). In addition, Weizenbaum developed ELIZA to replicate the behaviour of a Rogerian psychotherapist, allowing ELIZA to be "free to assume the pose of knowing almost nothing of the real world." With these techniques, Weizenbaum's program was able to fool some people into believing that they were talking to a real person, with some subjects being "very hard to convince that ELIZA is not human." Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing test, although this view is highly contentious.

Kenneth Colby created PARRY in 1972, a program described as "ELIZA with attitude" [26]. It attempted to model the behaviour of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. In order to validate the work, PARRY was tested in the early 1970s using a variation of the Turing test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs. The psychiatrists were able to make the correct identification only 48 per cent of the time, a figure consistent with random guessing.

In the 21st century, versions of these programs (now known as "chatterbots") continue to fool people. "CyberLover", a malware program, preys on Internet users by convincing them to "reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers". The program has emerged as a "Valentine-risk", flirting with people "seeking relationships online in order to collect their personal data".

The Chinese Room:

John Searle's 1980 paper "Minds, Brains, and Programs" proposed an argument against the Turing test known as the "Chinese room" thought experiment. Searle argued that software (such as ELIZA) could pass the Turing test simply by manipulating symbols of which it had no understanding. Without understanding, it could not be described as "thinking" in the same sense people do. Therefore, Searle concludes, the Turing test cannot prove that a machine can think. Searle's argument has been widely criticized, but it has been endorsed as well.

Arguments such as those proposed by Searle and others working on the philosophy of mind sparked off a more intense debate about the nature of intelligence, the possibility of intelligent machines, and the value of the Turing test that continued through the 1980s and 1990s.
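The sketch below illustrates the ELIZA-style keyword-transformation rule mentioned above: a keyword pattern captures part of the user's sentence, first-person words are reflected into second person, and the fragment is echoed back as a question. The rules and wording are invented for illustration; they are not Weizenbaum's actual script.

```python
import re

# Reflect first-person words into second person before echoing them back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}


def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())


# Illustrative keyword rules: pattern -> response template using the captured text.
RULES = [
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
]


def eliza_reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # generic riposte when no keyword matches


print(eliza_reply("I am worried about my exam"))
# -> "How long have you been worried about your exam?"
```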
Siri (Speech Interpretation and Recognition Interface):

Siri (pronounced /ˈsɪri/) is an intelligent personal assistant and knowledge navigator which works as an application for Apple's iOS. The application uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of web services. Apple claims that the software adapts to the user's individual preferences over time and personalizes results, performing tasks such as finding recommendations for nearby restaurants or getting directions.

Siri was originally introduced as an iOS application available in the App Store by Siri Inc. Siri Inc. was acquired by Apple on April 28, 2010. Siri Inc. had announced that their software would be available for BlackBerry and for Android-powered phones, but all development efforts for non-Apple platforms were cancelled after the acquisition by Apple.

Siri is now an integral part of iOS 5, and available only on the iPhone 4S, launched on October 4, 2011. Despite this, hackers were able to adapt Siri to run on prior iPhones. On November 8, 2011, Apple publicly announced that it had no plans to support Siri on any of its older devices.

Siri Inc. was founded in 2007 by Dag Kittlaus (CEO), Adam Cheyer (VP Engineering), and Tom Gruber (CTO/VP Design), together with Norman Winarsky from SRI International's venture group. On October 13, 2008, Siri announced it had raised an $8.5 million Series A financing round, led by Menlo Ventures and Morgenthaler Ventures. In November 2009, Siri raised a $15.5 million Series B financing round from the same investors as in their previous round, but led by Hong Kong billionaire Li Ka-shing. Dag Kittlaus left his position as CEO of Siri at Apple after the launch of the iPhone 4S.

Reception of Siri:

Siri was met with a very positive reaction for its ease of use and practicality, as well as its apparent "personality". Google's executive chairman and former chief, Eric Schmidt, has conceded that Siri could pose a "competitive threat" to the company's core search business. Google generates a large portion of its revenue from clickable ad links returned in the context of searches. The threat comes from the fact that Siri is a non-visual medium, and therefore does not afford users the opportunity to be exposed to the clickable ad links. Writing in The Guardian, journalist Charlie Brooker described Siri's tone as "servile" while also noting that it worked "annoyingly well."
However, Siri was criticized by organizations such as the American Civil Liberties Union and NARAL Pro-Choice America after users found that it would not provide information about the location of birth control or abortion providers, sometimes directing users to anti-abortion crisis pregnancy centers instead. Apple responded that this was a glitch which would be fixed in the final version. It was suggested that abortion providers could not be found in a Siri search because they did not use "abortion" in their descriptions. At the time the controversy arose, Siri would suggest locations to buy illegal drugs, hire a prostitute, or dump a corpse, but could not find birth control or abortion services. Apple responded that this behavior was not intentional and would improve as the product moved from beta to final product.

Siri has not been well received by some English speakers with distinctive accents, including Scots and Americans from Boston or the South. Apple's Siri FAQ states that, "as more people use Siri and it's exposed to more variations of a language, its overall recognition of dialects and accents will continue to improve, and Siri will work even better."

Despite many functions still requiring the use of the touchscreen, the National Federation of the Blind describes the iPhone as "the only fully accessible handset that a blind person can buy".
Siri says some weird things:

[Figure: screenshots of Siri giving humorous or unexpected responses]
References:

1. http://en.wikipedia.org/wiki/History_of_Natural_language_processing
2. http://en.wikipedia.org/wiki/Natural_language_processing
3. http://research.microsoft.com/en-us/groups/nlp/
4. http://www.mitpressjournals.org/doi/abs/10.1162/coli.2000.27.4.602
5. http://see.stanford.edu/see/courseinfo.aspx?coll=63480b48-8819-4efd-8412-263f1a472f5a
6. http://www.cs.uccs.edu/~kalita/reu.html
7. http://en.wikipedia.org/wiki/Supervised_learning
8. http://www.mathworks.com/help/toolbox/stats/bsvjxt5-1.html
9. http://www.gabormelli.com/RKB/Supervised_Learning_Algorithm
10. http://en.wikipedia.org/wiki/Decision_tree
11. http://en.wikipedia.org/wiki/Decision_tree_learning
12. http://en.wikipedia.org/wiki/Naive_Bayes_classifier
13. http://www.statsoft.com/textbook/naive-bayes-classifier/
14. http://bionicspirit.com/blog/2012/02/09/howto-build-naive-bayes-classifier.html
15. http://en.wikipedia.org/wiki/Bayesian_network
16. http://research.microsoft.com/apps/pubs/default.aspx?id=69588
17. http://www.norsys.com/tutorials/netica/nt_toc_A.htm
18. http://www.artificial-solutions.com/products/virtual-assistant/virtual-assistant-automated-speech-recognition/
19. http://en.wikipedia.org/wiki/Speech_Recognition
20. http://articles.latimes.com/2011/dec/04/business/la-fi-voice-flubs-20111204
21. http://www.chatbots.org/features/speech_recognition/
22. http://en.wikipedia.org/wiki/List_of_chatterbots
23. http://en.wikipedia.org/wiki/Turing_test
24. http://www.webopedia.com/TERM/T/Turing_test.html
25. http://en.wikipedia.org/wiki/ELIZA
26. http://nlp-addiction.com/eliza/
27. http://www-ai.ijs.si/eliza-cgi-bin/eliza_script
28. http://en.wikipedia.org/wiki/PARRY
29. http://www.chatbots.org/chatbot/parry/
30. http://en.wikipedia.org/wiki/Racter
31. http://en.wikipedia.org/wiki/Mark_V_Shaney
32. http://nlp-addiction.com/chatbot/
33. http://www.chatbots.org/
34. http://www.simonlaven.com/
35. http://www.esotericarticles.com/list_of_chatterbots.html
36. http://www.chatterbotcollection.com/category_contents.php?id_cat=70
37. http://www.chatterbotcollection.com/item_display.php?id=2954
38. http://en.wikipedia.org/wiki/Chatterbot
39. http://en.wikipedia.org/wiki/Siri_(software)
40. http://www.apple.com/iphone/features/siri.html
41. http://www.theverge.com/2011/10/12/2486618/siri-weird-iphone-4s
42. http://lifehacker.com/5846543/all-about-siri-your-iphones-new-assistant
43. Supervised Machine Learning Survey Paper
44. Survey of Artificial Intelligence for Prognostic
45. A Survey on Artificial Intelligence Based Brain Pathology Identification Techniques in Magnetic Resonance Images
46. http://dx.doi.org/10.1145%2F365153.365168
47. http://en.wikipedia.org/wiki/Turing_test#cite_note-FOOTNOTEWeizenbaum196637-22
48. http://en.wikipedia.org/wiki/Turing_test#cite_note-FOOTNOTEWeizenbaum196642-23
49. http://en.wikipedia.org/wiki/Eric_Schmidt
50. http://www.norsys.com/tutorials/netica/nt_toc_A.htm
