For full details, including the address, and to RSVP, see: http://www.meetup.com/bostonpython/calendar/15547287/ NLTK is the Natural Language Toolkit, an extensive Python library for processing natural language. Shankar Ambady will give us a tour of just a few of its extensive capabilities, including sentence parsing, synonym finding, spam detection, and more. Linguistic expertise is not required, though if you know the difference between a hyponym and a hypernym, you might be able to help the rest of us! Socializing at 6:30, Shankar's presentation at 7:00. See you at the NERD.
Natural Language Processing (NLP) & Text Mining Tutorial Using NLTK | NLP Tra... (Edureka!)
** NLP Using Python: - https://www.edureka.co/python-natural-language-processing-course **
This Edureka PPT will provide you with a comprehensive and detailed introduction to Natural Language Processing, popularly known as NLP. You will also learn about the different steps involved in processing human language, such as tokenization, stemming, and lemmatization, along with a demo of each topic.
The following topics are covered in this PPT:
1. The Evolution of Human Language
2. What is Text Mining?
3. What is Natural Language Processing?
4. Applications of NLP
5. NLP Components and Demo
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
NLTK: Natural Language Processing Made Easy (outsider2)
This talk introduces the Natural Language Toolkit (NLTK), an open-source library that simplifies implementing Natural Language Processing (NLP) in Python. It is useful both for getting started with NLP and for research and teaching.
Natural Language Processing (NLP) is often taught at the academic level from the perspective of computational linguists. However, as data scientists, we have a richer view of the world of natural language - unstructured data that by its very nature has important latent information for humans. NLP practitioners have benefitted from machine learning techniques to unlock meaning from large corpora, and in this class we’ll explore how to do that particularly with Python, the Natural Language Toolkit (NLTK), and to a lesser extent, the Gensim Library.
NLTK is an excellent library for machine learning-based NLP, written in Python by experts from both academia and industry. Python allows you to create rich data applications rapidly, iterating on hypotheses. Gensim provides vector-based topic modeling, which is currently absent in both NLTK and Scikit-Learn. The combination of Python + NLTK means that you can easily add language-aware data products to your larger analytical workflows and applications.
A sprint through Python's Natural Language Toolkit, presented at SFPython on 9/14/2011. Covers tokenization, part-of-speech tagging, chunking & NER, text classification, and training text classifiers with nltk-trainer.
Text Classification in Python – using Pandas, scikit-learn, IPython Notebook ... (Jimmy Lai)
Big data analysis relies on exploiting various handy tools to gain insight from data easily. In this talk, the speaker demonstrates a data mining flow for text classification using many Python tools. The flow consists of feature extraction/selection, model training/tuning, and evaluation. Various tools are used in the flow, including Pandas for feature processing, scikit-learn for classification, IPython Notebook for fast sketching, and matplotlib for visualization.
Natural Language Processing (NLP) is a subfield of AI. It is the ability of a computer program to understand human language as it is spoken.
Contents
What Is NLP?
Why NLP?
Levels In NLP
Components Of NLP
Approaches To NLP
Stages In NLP
NLTK
Setting Up NLP Environment
Some Applications Of NLP
These slides introduce the domain of NLP and the basic NLP pipeline commonly used in the field of Computational Linguistics.
Introduction to Natural Language Processing (rohitnayak)
Natural Language Processing has matured a lot recently. With the availability of great open-source tools complementing the needs of the Semantic Web, we believe this field should be on the radar of all software engineering professionals.
Word embedding, vector space model, language modelling, neural language model, Word2Vec, GloVe, fastText, ELMo, BERT, DistilBERT, RoBERTa, SBERT, Transformer, attention
Natural language processing PPT presentation (Sai Mohith)
A PPT presentation for a technical seminar on the topic of Natural Language Processing.
References used:
Slideshare.net
wikipedia.org NLP
Stanford NLP website
Shankar Ambady of Session M will give a tutorial on the Python NLTK (Natural Language Tool Kit). Shankar had previously presented a comprehensive overview of the NLTK last December at the Python meetup. The Python NLTK is a very powerful collection of libraries that can be applied to a variety of NLP applications such as sentiment analysis. His presentation from last December may be found here (click on Boston Python Meetup Materials) : http://www.shankarambady.com/
This is a very short 30-minute talk that I gave at a Barcelona Python developers meeting.
It explains a proposal to use doctest for Biopython documentation (and, in general, in bioinformatics).
It also contains an introduction to the use of automated build tools in bioinformatics, like make and SCons.
Natural Language Processing (NLP) practitioners often have to analyze large corpora of unstructured documents, which is a tedious process. Python tools like NLTK do not scale to large production data sets and cannot be plugged into a distributed scalable framework like Apache Spark or Apache Flink.
The Apache OpenNLP library is a popular machine learning based toolkit for processing unstructured text. It combines a permissive license, an easy-to-use API, and a set of components that are highly customizable and trainable to achieve very high accuracy on a particular dataset. Built-in evaluation allows you to measure and tune OpenNLP's performance for the documents that need to be processed.
From sentence detection and tokenization to parsing and the named entity finder, Apache OpenNLP has the tools to address all tasks in a natural language processing workflow. It applies machine learning algorithms such as perceptron and maximum entropy, combined with tools such as word2vec, to achieve state-of-the-art results. In this talk, we'll see a demo of large-scale named entity extraction and text classification using the various Apache OpenNLP components wrapped into an Apache Flink stream-processing pipeline and exposed as an Apache NiFi processor.
NLP practitioners will come away from this talk with a better understanding of how the various Apache OpenNLP components can help in processing large reams of unstructured data using a highly scalable and distributed framework like Apache Spark/Apache Flink/Apache NiFi.
Large scale NLP using Python's NLTK on Azure (cloudbeatsch)
This presentation provides an introduction to natural language processing (NLP) using Python's Natural Language Toolkit (NLTK). Furthermore, it describes how to run Python (and more specifically NLTK) as an elastic WebJob on Azure.
In this presentation from the AI & ML meetup on 2nd Feb, Sangram Mishra develops the same NLP solution using both NLTK and OpenNLP, then compares and contrasts the two open-source technologies for deeper understanding and insight into choosing and using them on real-world projects.
Big Data Spain 2017 - Deriving Actionable Insights from High Volume Media St... (Apache OpenNLP)
Media analysts have to deal with analyzing high volumes of real-time news feeds and social media streams, which is often a tedious process because they need to write search profiles for entities. Python tools like NLTK do not scale to large production data sets and cannot be plugged into distributed scalable frameworks like Apache Flink. Apache Flink, being a streaming-first engine, is ideally suited for ingesting multiple streams of news feeds, social media, blogs, etc., and for doing streaming analytics on the various feeds. Natural Language Processing tools like Apache OpenNLP can be plugged into Flink streaming pipelines to perform common NLP tasks like Named Entity Recognition (NER), chunking, and text classification. In this talk, we'll build a real-time media analyzer that does NER on the individual incoming streams, calculates co-occurrences of the named entities and aggregates them across multiple streams, indexes the results into a search engine, and queries the results for actionable insights. We'll also show how to handle multilingual documents when calculating co-occurrences. NLP practitioners will come away from this talk with a better understanding of how the various Apache OpenNLP components can help in processing large streams of data feeds, and how they can easily be plugged into a highly scalable and distributed framework like Apache Flink.
Past, Present, and Future: Machine Translation & Natural Language Processing ... (John Tinsley)
This was a presentation given at the European Patent Office's annual Patent Information Conference in Madrid, Spain on November 10th, 2016.
In it, we give an overview of how machine translation works, latest advances in neural MT, and how this can be applied to patents and intellectual property content, not only for translations but also information extraction and other NLP applications.
Named Entity Recognition (NER) with NLTK (Janu Jahnavi)
https://www.learntek.org/blog/named-entity-recognition-ner-with-nltk/
Learntek is a global online training provider for Big Data Analytics, Hadoop, Machine Learning, Deep Learning, IoT, AI, Cloud Technology, DevOps, Digital Marketing, and other IT and management courses.
NLTK - Natural Language Processing in Python
1. Natural Language Processing and Machine Learning Using Python Shankar Ambady Microsoft New England Research and Development Center, December 14, 2010
2. Example Files Hosted on Github https://github.com/shanbady/NLTK-Boston-Python-Meetup
3. What is “Natural Language Processing”? Where is this stuff used? The Machine learning paradox A look at a few key terms Quick start – creating NLP apps in Python
5. The goal is to enable machines to understand human language and extract meaning from text.
6. It is a field of study which falls under the category of machine learning and more specifically computational linguistics.
14. The problem with communication is the illusion that it has occurred The problem with communication is the illusion, which developed it EPIC FAIL
15. The “Human Test” Turing test A test proposed to demonstrate that truly intelligent machines capable of understanding and comprehending human language should be indistinguishable from humans performing the same task. I am also human I am a human
20. Setting up NLTK
Source downloads are available for Mac and Linux, as well as installable packages for Windows. Currently only available for Python 2.5 – 2.6.
http://www.nltk.org/download
`easy_install nltk`
Prerequisites: NumPy, SciPy
21. First steps
NLTK comes with packages of corpora that are required for many modules. Open a Python interpreter:

import nltk
nltk.download()

If you do not want to use the downloader with a GUI (requires the Tkinter module), run:

python -m nltk.downloader <name of package or "all">
24. Part of Speech Tagging

from nltk import pos_tag, word_tokenize
sentence1 = 'this is a demo that will show you how to detects parts of speech with little effort using NLTK!'
tokenized_sent = word_tokenize(sentence1)
print pos_tag(tokenized_sent)

[('this', 'DT'), ('is', 'VBZ'), ('a', 'DT'), ('demo', 'NN'), ('that', 'WDT'), ('will', 'MD'), ('show', 'VB'), ('you', 'PRP'), ('how', 'WRB'), ('to', 'TO'), ('detects', 'NNS'), ('parts', 'NNS'), ('of', 'IN'), ('speech', 'NN'), ('with', 'IN'), ('little', 'JJ'), ('effort', 'NN'), ('using', 'VBG'), ('NLTK', 'NNP'), ('!', '.')]
26. NLTK Text

nltk.clean_html(rawhtml)

from nltk.corpus import brown
from nltk import Text
brown_words = brown.words(categories='humor')
brownText = Text(brown_words)
brownText.collocations()
brownText.count("car")
brownText.concordance("oil")
brownText.dispersion_plot(['car', 'document', 'funny', 'oil'])
brownText.similar('humor')
27. Find similar terms (word definitions) using WordNet

import nltk
from nltk.corpus import wordnet as wn
synsets = wn.synsets('phone')
print [str(syns.definition) for syns in synsets]

'electronic equipment that converts sound into electrical signals that can be transmitted over distances and then converts received signals back into sounds'
'(phonetics) an individual sound unit of speech without concern as to whether or not it is a phoneme of some language'
'electro-acoustic transducer for converting electric signals into sounds; it is held over or inserted into the ear'
'get or try to get into communication (with someone) by telephone'
29. Meronyms and Holonyms are better described in relation to computer science terms as: Meronym terms: “has a” relationship Holonym terms: “part of” relationship Hyponym terms: “Is a” relationship Meronyms and holonyms are opposites Hyponyms and hypernyms are opposites
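As a loose illustration of these four relations, here is a pure-Python sketch. It does not use WordNet; the words and the relation table are hand-picked for the example:

```python
# Hand-built lexical relations for a few words (illustrative only).
# meronyms(X)  -> parts X "has a"          (a car has a wheel)
# holonyms(X)  -> wholes X is "part of"    (a wheel is part of a car)
# hyponyms(X)  -> more specific "is a" terms (a sedan is a car)
# hypernyms(X) -> more general terms       (a car is a vehicle)
RELATIONS = {
    "car": {"meronyms": ["wheel", "engine"], "holonyms": [],
            "hyponyms": ["sedan", "coupe"], "hypernyms": ["vehicle"]},
    "wheel": {"meronyms": ["rim", "spoke"], "holonyms": ["car"],
              "hyponyms": [], "hypernyms": []},
}

def related(word, relation):
    """Look up a hand-coded lexical relation; empty list if unknown."""
    return RELATIONS.get(word, {}).get(relation, [])

print(related("car", "meronyms"))    # parts a car has
print(related("wheel", "holonyms"))  # wholes a wheel belongs to
```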
32. Going back to the previous example ...

from nltk.corpus import wordnet as wn
synsets = wn.synsets('phone')
print [str(syns.definition) for syns in synsets]

"syns.definition" can be modified to output hypernyms, meronyms, holonyms, etc.
43. Feeling lonely? Eliza is there to talk to you all day! What human could ever do that for you??

from nltk.chat import eliza
eliza.eliza_chat()

...starts the chatbot:

Therapist
---------
Talk to the program by typing in plain English, using normal upper- and lower-case letters and punctuation. Enter "quit" when done.
========================================================================
Hello. How are you feeling today?
44. Englisch to German to Englisch to German......

from nltk.book import *
babelize_shell()
Babel> the internet is a series of tubes
Babel> german
Babel> run
0> the internet is a series of tubes
1> das Internet ist eine Reihe Schläuche
2> the Internet is a number of hoses
3> das Internet ist einige Schläuche
4> the Internet is some hoses
Babel>
45. Do you Speak Girbbhsi?? Take the following meaningful text:
A new study in the journal Animal Behavior shows that dogs rely a great deal on face recognition to tell their own person from other people. Researchers describe how dogs in the study had difficulty recognizing their owners when two human subjects had their faces covered.
Let's have NLTK analyze and generate some gibberish!

import nltk
words = 'text'  # placeholder: the passage above
tokens = nltk.word_tokenize(words)
text = nltk.Text(tokens)
print text.generate()
46. Results may vary, but these were mine: A new study in the study had difficulty recognizing their owners when two human subjects had their faces covered . their owners when two human subjects had their faces covered . on face recognition to tell their own person from other people. Researchers describe how dogs in the journal Animal Behavior shows that dogs rely a great deal on face recognition to tell their own person from other people. Researchers describe how dogs in the study had difficulty recognizing their owners when two human subjects had their faces covered . subjects had their faces covered . dogs rely a great A new study in the journal Animal Behavior shows that dogs rely a great deal on face recognition to tell their own person from other people. Researchers describe how dogs in the study had difficulty recognizing their owners when two human subjects had their faces covered.
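Under the hood, generate() builds an n-gram language model over the text and random-walks it. A minimal pure-Python bigram version of the same idea (the sample sentence, function names, and fixed seed are my own, for reproducibility; this is not NLTK's implementation):

```python
import random
from collections import defaultdict

def build_bigrams(tokens):
    """Map each word to the list of words that follow it in the text."""
    followers = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        followers[a].append(b)
    return followers

def generate(followers, start, length, seed=0):
    """Random-walk the bigram table to produce 'gibberish' in the source's style."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = followers.get(out[-1])
        if not nxt:
            break  # dead end: no observed follower
        out.append(rng.choice(nxt))
    return " ".join(out)

tokens = ("a new study shows that dogs rely on face recognition "
          "to tell their own person from other people").split()
table = build_bigrams(tokens)
print(generate(table, "dogs", 8))
```

With a longer, more repetitive source text the walk branches at words that occur several times, which is what produces the looping gibberish on the previous slide.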
47. How Similar?

#!/usr/bin/env python
from nltk.corpus import wordnet as wn
similars = []
Aword = 'language'
Bword = 'barrier'

48. “how similar” (continued)

# grab synsets of each word
synsetsA = wn.synsets(Aword)
synsetsB = wn.synsets(Bword)
groupA = [wn.synset(str(synset.name)) for synset in synsetsA]
groupB = [wn.synset(str(synset.name)) for synset in synsetsB]
49. path_similarity() “Path Distance Similarity: Return a score denoting how similar two word senses are, based on the shortest path that connects the senses in the is-a (hypernym/hyponym) taxonomy.” wup_similarity() “Wu-Palmer Similarity: Return a score denoting how similar two word senses are, based on the depth of the two senses in the taxonomy and that of their Least Common Subsumer (most specific ancestor node).” Source: http://nltk.googlecode.com/svn/trunk/doc/api/nltk.corpus.reader.wordnet.WordNetCorpusReader-class.html
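Path similarity can be sketched without WordNet as 1 / (1 + length of the shortest is-a path between the two senses). A minimal pure-Python version over a hand-built toy taxonomy (the taxonomy and function names are invented for illustration):

```python
from collections import deque

# Toy is-a taxonomy: child -> parent (hypernym) edges.
HYPERNYM = {
    "dog": "canine", "canine": "carnivore", "carnivore": "mammal",
    "cat": "feline", "feline": "carnivore", "mammal": "animal",
}

def neighbors(node):
    """Undirected neighbors in the is-a graph: the parent plus all children."""
    nbrs = []
    if node in HYPERNYM:
        nbrs.append(HYPERNYM[node])
    nbrs.extend(child for child, parent in HYPERNYM.items() if parent == node)
    return nbrs

def path_similarity(a, b):
    """1 / (1 + shortest path length), mirroring WordNet's path similarity."""
    if a == b:
        return 1.0
    seen, queue = {a}, deque([(a, 0)])
    while queue:  # breadth-first search for the shortest path
        node, dist = queue.popleft()
        for nbr in neighbors(node):
            if nbr == b:
                return 1.0 / (dist + 2)
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return 0.0  # no connecting path

print(path_similarity("dog", "cat"))  # path dog-canine-carnivore-feline-cat -> 1/5 = 0.2
```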
52. Language: the cognitive processes involved in producing and understanding linguistic communication
Similarity: 0.111~
Barrier: any condition that makes it difficult to make progress or to achieve an objective
53. “how similar” (continued)
It trickles down from there:

Synset('linguistic_process.n.02') the cognitive processes involved in producing and understanding linguistic communication
Synset('barrier.n.02') any condition that makes it difficult to make progress or to achieve an objective
Path similarity - 0.111111111111

Synset('language.n.05') the mental faculty or power of vocal communication
Synset('barrier.n.02') any condition that makes it difficult to make progress or to achieve an objective
Path similarity - 0.111111111111

Synset('language.n.01') a systematic means of communicating by the use of sounds or conventional symbols
Synset('barrier.n.02') any condition that makes it difficult to make progress or to achieve an objective
Path similarity - 0.1

Synset('language.n.01') a systematic means of communicating by the use of sounds or conventional symbols
Synset('barrier.n.03') anything serving to maintain separation by obstructing vision or access
Path similarity - 0.1
54. Poetic Programming We will create a program to extract “Haikus” from any given English text. A haiku is a poem in which each stanza consists of three lines. The first line has 5 syllables, the second has 7 and the last line has 5.
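The deck counts syllables with syllables_en from nltk_contrib; as a rough stand-in, here is a vowel-group heuristic (approximate, since English syllabification has many exceptions) plus a 5-7-5 check:

```python
import re

def count_syllables(word):
    """Approximate syllable count: runs of vowels, with a silent final 'e' dropped."""
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]  # "silence" -> "silenc": don't count the silent e
    groups = re.findall(r"[aeiouy]+", word)
    return max(1, len(groups))  # every word has at least one syllable

def is_haiku(lines):
    """True if the three lines follow the 5-7-5 syllable pattern."""
    counts = tuple(sum(count_syllables(w) for w in line.split()) for line in lines)
    return counts == (5, 7, 5)

print(is_haiku(["an old silent pond",
                "a frog jumps into the pond",
                "splash silence again"]))
```

The heuristic is deliberately crude; the slides that follow use the real syllables_en counter and fall back to WordNet synonyms when a word of the right syllable count is needed.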
60. “poetic programming” (continued)

textchunk += '''  # throw in a few "bush-isms"
I want to share with you an interesting program for two reasons, one, it's interesting, and two, my wife thought of it or has actually been involved with it; she didn't think of it. But she thought of it for this speech. This is my maiden voyage. My first speech since I was the president of the United States and I couldn't think of a better place to give it than Calgary, Canada.
'''
61. “poetic programming” (continued)

poem = ''
wordmap = []
words = word_tokenize(textchunk)  # tokenize the words
for iter, word in enumerate(words):
    # if it is a word, append a space to it
    if word.isalpha():
        word += " "
    syls = syllables_en.count(word)  # NLTK function to count syllables
    wordmap.append((word, syls))
62. “poetic programming” (continued) Define a function to provide a fallback word in case we end up with lines that do not have the syllable count we need. Given a word, this function tries to find similar words from WordNet that match the required syllable size.

def findSyllableWord(word, syllableSize):
    synsets = wn.synsets(word)
    for syns in synsets:
        name = syns.name
        lemmas = syns.lemma_names
        for wordstring in lemmas:
            if syllables_en.count(wordstring) == syllableSize and wordstring != word:
                # return the similar word we found
                return {'word': wordstring, 'syllable': syllableSize}
    # fall back to the original word and its own syllable count
    return {'word': word, 'syllable': syllables_en.count(word)}
63. “poetic programming” (continued) We loop through each word, keeping a tally of syllables, and break each line when it reaches the appropriate threshold.

lineNo = 1
charNo = 0
tally = 0
for syllabicword in wordmap:
    s = syllabicword[1]
    wordtoAdd = syllabicword[0]
    if lineNo == 1:
        if tally < 5:
            if tally + int(s) > 5 and wordtoAdd.isalpha():
                num = 5 - tally
                similarterm = findSyllableWord(wordtoAdd, num)
                wordtoAdd = similarterm['word']
                s = similarterm['syllable']
            tally += int(s)
            poem += wordtoAdd
        else:
            poem += " ---" + str(tally) + "\n"
            if wordtoAdd.isalpha():
                poem += wordtoAdd
            tally = s
            lineNo = 2
    if lineNo == 2:
        …. Abridged
64. “poetic programming” (continued)

print poem

It's not perfect, but it's still pretty funny!

I want to share with ---5
you an interesting program ---8
for two reasons ,one ---5
it 's interesting ---5
and two ,my wife thought of it ---7
or has actually ---5
been involved with ---5
it ;she didn't think of it. But she thought ---7
of it for this speech. ---5
This is my maiden ---5
voyage. My first speech since I ---7
was the president of ---5
…. Abridged
66. Let's write a spam filter! A program that analyzes legitimate emails (“ham”) as well as “spam” and learns the features that are associated with each. Once trained, we should be able to run this program on incoming mail and have it reliably label each one with the appropriate category.
67. What you will need: NLTK (of course) as well as the “stopwords” corpus; a good dataset of emails, both spam and ham; and patience and a cup of coffee (these programs tend to take a while to complete).
68. Finding Great Data: The Enron Emails. A dataset of 200,000+ emails made publicly available in 2003 after the Enron scandal. It contains both spam and actual corporate ham mail, which is why it is one of the most popular datasets used for testing and developing anti-spam software. The dataset we will use is located at the following URL: http://labs-repos.iit.demokritos.gr/skel/i-config/downloads/enron-spam/preprocessed/ It contains a list of archived files holding plaintext emails in two folders, Spam and Ham.
69. Extract one of the archives from the site into your working directory. Create a Python script; let's call it “spambot.py”. Your working directory should contain the “spambot” script and the folders “spam” and “ham”.

“Spambot.py”

from nltk import word_tokenize, WordNetLemmatizer, NaiveBayesClassifier, classify, MaxentClassifier
from nltk.corpus import stopwords
import random
import os, glob, re
70. “Spambot.py” (continued)

wordlemmatizer = WordNetLemmatizer()
commonwords = stopwords.words('english')  # load common English words into a list
hamtexts = []
spamtexts = []
# start globbing the files into the appropriate lists
for infile in glob.glob(os.path.join('ham/', '*.txt')):
    text_file = open(infile, "r")
    hamtexts.append(text_file.read())
    text_file.close()
for infile in glob.glob(os.path.join('spam/', '*.txt')):
    text_file = open(infile, "r")
    spamtexts.append(text_file.read())
    text_file.close()
71. “Spambot.py” (continued)

# label each item with the appropriate label and store them as a list of tuples
mixedemails = [(email, 'spam') for email in spamtexts]
mixedemails += [(email, 'ham') for email in hamtexts]
random.shuffle(mixedemails)  # let's give them a nice shuffle

From this list of random but labeled emails, we will define a “feature extractor” which outputs a feature set that our program can use to statistically compare spam and ham.
72. “Spambot.py” (continued)

def email_features(sent):
    features = {}
    # normalize words
    wordtokens = [wordlemmatizer.lemmatize(word.lower()) for word in word_tokenize(sent)]
    for word in wordtokens:
        # if the word is not a stop-word, consider it a “feature”
        if word not in commonwords:
            features[word] = True
    return features

# run each email through the feature extractor and collect the results in a “featureset” list
featuresets = [(email_features(n), g) for (n, g) in mixedemails]
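To see the shape of what the feature extractor returns, here is a stripped-down stand-in that uses a hard-coded stop-word list and str.split() in place of NLTK's stopwords corpus and word_tokenize (both substitutions are assumptions so the sketch runs standalone):

```python
# Minimal stand-in for stopwords.words('english')
commonwords = {"the", "a", "to", "is", "you"}

def email_features(sent):
    features = {}
    for word in sent.lower().split():  # crude tokenization (word_tokenize in the real script)
        if word not in commonwords:    # drop stop-words
            features[word] = True      # binary "word is present" feature
    return features

print(email_features("Click the link to claim your prize"))
# {'click': True, 'link': True, 'claim': True, 'your': True, 'prize': True}
```

The keys are the surviving words and every value is True, i.e. a bag-of-words presence indicator, which is exactly the dictionary shape NaiveBayesClassifier expects.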
74. To use features that are non-binary, such as numeric values, you must convert them to binary features. This process is called “binning”.
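A minimal sketch of binning, with bucket edges invented for illustration: a numeric value (say, message length in characters) becomes a set of mutually exclusive boolean features that a classifier like NaiveBayesClassifier can consume:

```python
def bin_length(n_chars):
    """Turn a numeric value into mutually exclusive boolean bucket features."""
    return {
        "len<100": n_chars < 100,
        "100<=len<1000": 100 <= n_chars < 1000,
        "len>=1000": n_chars >= 1000,
    }

print(bin_length(250))
# {'len<100': False, '100<=len<1000': True, 'len>=1000': False}
```

Exactly one bucket is True for any input, so the classifier sees the numeric range as ordinary present/absent features.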
76. “Spambot.py” (continued)

print classifier.labels()

This will output the labels that our classifier will use to tag new data:

['ham', 'spam']

The purpose of creating a “training set” and a “test set” is to test the accuracy of our classifier on a separate sample from the same data source.

print classify.accuracy(classifier, test_set)

0.983589566419
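The split step itself appears only as an image in the original deck; a plausible sketch is below. The 70/30 ratio and the majority-label stand-in “classifier” are assumptions made so the example runs without NLTK. With NLTK installed, train_set would instead feed NaiveBayesClassifier.train(train_set), and classify.accuracy(classifier, test_set) would replace the toy score:

```python
import random

def split_and_score(featuresets, train_frac=0.7):
    """Shuffle, split into train/test sets, and report an accuracy-style score."""
    random.shuffle(featuresets)
    cut = int(len(featuresets) * train_frac)
    train_set, test_set = featuresets[:cut], featuresets[cut:]
    # Stand-in "classifier": always predict the majority label of train_set.
    labels = [label for _, label in train_set]
    majority = max(set(labels), key=labels.count)
    correct = sum(1 for _, label in test_set if label == majority)
    return train_set, test_set, correct / len(test_set)

# 30 toy labeled feature dicts: every third one is "spam"
data = [({"w%d" % i: True}, "ham" if i % 3 else "spam") for i in range(30)]
train, test, acc = split_and_score(data)
print(len(train), len(test))  # 21 9
```

Holding out a test set the classifier never saw during training is what makes the reported accuracy an honest estimate rather than a memorization check.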
78. “Spambot.py” (continued)

We can now directly input new email and have it classified as either Spam or Ham.

while True:
    featset = email_features(raw_input("Enter text to classify: "))
    print classifier.classify(featset)
82. Complete sentences are composed of two or more “phrases”.
Noun phrase: Jack and Jill went up the hill
Prepositional phrase: contains a noun, a preposition, and in most cases an adjective. The NLTK book is on the table, but perhaps it is best kept in a bookshelf
Gerund phrase: phrases that contain “-ing” verbs. Jack fell down and broke his crown and Jill came tumbling after
83. Take the following sentence: [Jack and Jill] went up [the hill]. Both bracketed spans are noun phrases.
84. Chunkers will get us this far: [ Jack and Jill ] went up [ the hill ]. Chunk tokens are non-recursive, meaning there is no overlap when chunking. The recursive form for the same sentence is: ( Jack and Jill went up ( the hill ) )
85. Verb phrase chunking: Jack and Jill went up the hill to fetch a pail of water (this sentence contains two verb phrases).
86.

from nltk.chunk import *
from nltk.chunk.util import *
from nltk.chunk.regexp import *
from nltk import word_tokenize, pos_tag

text = '''
Jack and Jill went up the hill to fetch a pail of water
'''
tokens = pos_tag(word_tokenize(text))
chunk = ChunkRule("<.*>+", "Chunk all the text")
chink = ChinkRule("<VBD|IN>", "Verbs/Preps")
split = SplitRule("<DT><NN>", "<DT><NN>", "determiner+noun")
chunker = RegexpChunkParser([chunk, chink, split], chunk_node='NP')
chunked = chunker.parse(tokens)
chunked.draw()
87. Chunkers and parsers ignore the words themselves and instead use part-of-speech tags to create chunks.
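That idea can be demonstrated with a toy chunker in plain Python: it looks only at the tag sequence, grouping runs of determiner/adjective/noun/pronoun tags into NP chunks. The sentence below is hand-tagged for illustration; a real chunker would consume pos_tag output:

```python
def np_chunk(tagged):
    """Group consecutive DT/JJ/NN*/PRP-tagged tokens into NP chunks."""
    np_tags = {"DT", "JJ", "NN", "NNS", "NNP", "PRP"}
    chunks, current = [], []
    for word, tag in tagged:
        if tag in np_tags:
            current.append(word)      # extend the open chunk
        elif current:
            chunks.append(current)    # a non-NP tag closes the chunk
            current = []
    if current:
        chunks.append(current)
    return chunks

sent = [("Jack", "NNP"), ("and", "CC"), ("Jill", "NNP"),
        ("went", "VBD"), ("up", "IN"), ("the", "DT"), ("hill", "NN")]
print(np_chunk(sent))  # [['Jack'], ['Jill'], ['the', 'hill']]
```

Unlike the chunker on slide 84, this toy pattern does not join coordinated nouns ("Jack" and "Jill" come out as separate chunks) because CC is not in its tag set; the point is only that the decision is driven entirely by tags, never by the words.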
88. Chunking and Named Entity Recognition

from nltk import ne_chunk, pos_tag
from nltk.tokenize.punkt import PunktSentenceTokenizer
from nltk.tokenize.treebank import TreebankWordTokenizer

TreeBankTokenizer = TreebankWordTokenizer()
PunktTokenizer = PunktSentenceTokenizer()
text = ''' text on next slide '''
sentences = PunktTokenizer.tokenize(text)
tokens = [TreeBankTokenizer.tokenize(sentence) for sentence in sentences]
tagged = [pos_tag(token) for token in tokens]
chunked = [ne_chunk(taggedToken) for taggedToken in tagged]
89. text = ''' The Boston Celtics are a National Basketball Association (NBA) team based in Boston, MA. They play in the Atlantic Division of the Eastern Conference. Founded in 1946, the team is currently owned by Boston Basketball Partners LLC. The Celtics play their home games at the TD Garden, which they share with the Boston Blazers (NLL), and the Boston Bruins of the NHL. The Celtics have dominated the league during the late 50's and through the mid 80's, with the help of many Hall of Famers which include Bill Russell, Bob Cousy, John Havlicek, Larry Bird and legendary Celtics coach Red Auerbach, combined for a 795 - 397 record that helped the Celtics win 16 Championships. '''
92. Thank you for coming! Special thanks to Ned Batchelder and Microsoft. My latest creation: check out weatherzombie.com on your iPhone or Android!
93. Further Resources:
I will be uploading this presentation to my site: http://www.shankarambady.com
“Natural Language Processing with Python” by Steven Bird, Ewan Klein, and Edward Loper: http://www.nltk.org/book
API reference: http://nltk.googlecode.com/svn/trunk/doc/api/index.html
Great NLTK blog: http://streamhacker.com/
Editor's Notes
You may also do print [str(syns.examples) for syns in synsets] to get usage examples of each definition.