diff --git a/notebooks/natural_language_processing/index.ipynb b/notebooks/natural_language_processing/index.ipynb
new file mode 100644
index 0000000..0616123
--- /dev/null
+++ b/notebooks/natural_language_processing/index.ipynb
@@ -0,0 +1,2038 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "name": "NLP-intro.ipynb",
+ "provenance": [],
+ "collapsed_sections": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.6"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "l3Z4UEAHDCM1"
+ },
+ "source": [
+ "# Natural Language Processing\n",
+ "**Natural Language Processing (NLP)** is a confluence of Artificial Intelligence and Linguistics which tries to enable computers to understand natural language data, including text, speech, etc. Tasks like [Speech Recognition](https://en.wikipedia.org/wiki/Speech_recognition), [Machine Translation](https://en.wikipedia.org/wiki/Machine_translation), [Text-to-speech](https://en.wikipedia.org/wiki/Speech_synthesis) and [Part-of-speech Tagging](https://en.wikipedia.org/wiki/Part-of-speech_tagging) are just some of NLP's branches.\n",
+ "\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "\n",
+ "Historically, [Turing test]() can be considered as a starting point in the realm of Natural Language Processing. Some single-purpose systems like [SHRDLU](https://en.wikipedia.org/wiki/SHRDLU) and [PARRY](https://en.wikipedia.org/wiki/PARRY) were developed by rule-based methods.\n",
+ "\n",
+ "There are two revolutions in NLP, the first one happened in late 1980's with introduction of machine learning which came up with statistical models and caused remarkable successes especially in machine translation. Deep learning methods which were introduced in 2010's outperformed previous methods and thus they are considered as second revoloution in NLP.\n",
+ "\n",
+ "Through this notebook, we will study main challenges and problem-solving approaches in NLP and introduce some related libraries in Python.\n",
+ "\n",
+ "##### Contents:\n",
+ "- [Challenges](#challenges)\n",
+ " - [Similar Words and Homophones](#homophones)\n",
+ " - [Sentence Boundary Detection](#sbd)\n",
+ " - [Ambiguity](#ambiguity)\n",
+ "- [Approaches](#approaches)\n",
+ " - [Rule-Based Methods](#rule-based)\n",
+ " - [Machine Learning Methods](#machine-learning)\n",
+ " - [Deep Learning Methods](#deep-learning)\n",
+ "- [Useful Links](#links)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PO2QiBzxDCM6"
+ },
+ "source": [
+ "\n",
+ "## Challenges"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "meYlcXzpDCM7"
+ },
+ "source": [
+ "There are number of challenges and limitations in NLP that we should be aware of. Throughout this section, we will study some of these challenges. Some of these challenges are not completely solved yet."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qa4ROLgfDCM9"
+ },
+ "source": [
+ "\n",
+ "### Similar Words and Homophones\n",
+ "Same words can have different meanings according the context of the context of a sentence. For example, consider *apple* which can refer to both the fruit and the company. Or another example is \"*He can can a can!*\" which contains same word \"*can*\" with three different meanings. Humans can understand the meaning related to the context but differentiating between these meanings for a computer may be challenging. \n",
+ "\n",
+ "As another case, consider [homophones](https://en.wikipedia.org/wiki/Homophone) which are words or phrases sharing same pronounciation while having different meanings, words like \"*by*\", \"*bye*\" and \"*buy*\" and phrases like \"*some others*\" and \"*some mothers*\". Detecting these homophones are sometimes hard even for people. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3Z4RbP9OxwVy"
+ },
+ "source": [
+ "\n",
+ "### Sentence Boundary Detection\n",
+ "One of challenges in NLP is deciding where sentences begin and end. This is mostly because of using punctuation marks which can create ambiguity. As an example, if we simply define full stop as the end of a sentence, then we may face counterexamples as this character may refer to an abbreviation or a decimal number. Rule-based and deep learning approaches are used to solve this problem."
+ ]
+ },
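+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick illustration, the following cell is a minimal sketch that uses NLTK's pre-trained `punkt` sentence tokenizer (one possible tool for this task, not the only one) on a short text containing an abbreviation and a decimal number."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# Sentence boundary detection with NLTK's pre-trained punkt tokenizer\n",
+ "import nltk\n",
+ "nltk.download('punkt')\n",
+ "from nltk.tokenize import sent_tokenize\n",
+ "\n",
+ "text = \"Dr. Smith paid $3.50 for a coffee. Was it worth it? Probably not.\"\n",
+ "sent_tokenize(text)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },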
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8OWXY19BdsrU"
+ },
+ "source": [
+ "\n",
+ "### Ambiguity\n",
+ "Sometimes group of words can have two or more interpretations. Consider the following statement:\n",
+ "**I saw a man on a hill with a telescope.**\n",
+ "\n",
+ "which can means \"*There was a man on the hill and I saw him using my telescope*\" while it can be interpreted as \"*I saw a man on the hill and he had a telescope*\". These ambiguities are sometimes hard to be cleared up since they should be interpreted according to the context. Part-of-speech tagging is one NLP soloution which can help solving this problem. \n",
+ "\n",
+ "Above challenges were just some examples of existing challenges in NLP. Irony and sarcasm, colloquialisms and slang, etc. are some other examples of problems in NLP. For further information you can checkout links provided in [Useful Links](#links) section."
+ ]
+ },
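+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The following cell is a minimal sketch of part-of-speech tagging with NLTK's `pos_tag` (assuming the `punkt` and `averaged_perceptron_tagger` resources can be downloaded); it tags the ambiguous sentence from above."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# Part-of-speech tagging of the ambiguous sentence with NLTK\n",
+ "import nltk\n",
+ "nltk.download('punkt')\n",
+ "nltk.download('averaged_perceptron_tagger')\n",
+ "from nltk import word_tokenize, pos_tag\n",
+ "\n",
+ "pos_tag(word_tokenize(\"I saw a man on a hill with a telescope.\"))"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },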
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KBqKjKv9DCND"
+ },
+ "source": [
+ "\n",
+ "## Approaches"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nCxRNWnVDCND"
+ },
+ "source": [
+ "\n",
+ "### Rule-based Methods\n",
+ "[Regular expressions](https://en.wikipedia.org/wiki/Regular_expression) and [context free grammars](https://en.wikipedia.org/wiki/Context-free_grammar) are famous rule-based methods which can be beneficial for some tasks like [parsing](https://en.wikipedia.org/wiki/Parsing). Let's contemplate search queries for plane tickets. A suggested context free grammar for parsing these queries is provided below:\n",
+ "\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n",
+ "** S → SHOW FLIGHTS ORIGIN DESTINATION DEPARDATE | ... **\n",
+ "** SHOW → Show me | I want | Can I see | ... **\n",
+ "** FLIGHTS → (a) flight | flights **\n",
+ "** ORIGIN → from CITY **\n",
+ "** DESTINATION → to CITY **\n",
+ "** CITY → Boston | Denver | ... **\n",
+ "\n",
+ "There are some problems with rule-based methods. First, these rules must be generated manually. In addition, the person who defines these rules probably should have high linguistic skills. The other problem is that rule-based methods are not scalable. Imagine how hard it would be if we want to put all cities' names in CITY grammar in above example; however, rule-based methods usually achieve high accuracy if rules are defined precisely.\n"
+ ]
+ },
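+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make this concrete, the cell below encodes a tiny, simplified version of the grammar above with NLTK's `CFG` and parses one query with a chart parser; the terminals are only illustrative, not a complete grammar."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# A tiny flight-query grammar parsed with NLTK's chart parser (illustrative only)\n",
+ "import nltk\n",
+ "\n",
+ "grammar = nltk.CFG.fromstring(\"\"\"\n",
+ "S -> SHOW FLIGHTS ORIGIN DESTINATION\n",
+ "SHOW -> 'show' 'me'\n",
+ "FLIGHTS -> 'flights'\n",
+ "ORIGIN -> 'from' CITY\n",
+ "DESTINATION -> 'to' CITY\n",
+ "CITY -> 'boston' | 'denver'\n",
+ "\"\"\")\n",
+ "\n",
+ "parser = nltk.ChartParser(grammar)\n",
+ "for tree in parser.parse('show me flights from boston to denver'.split()):\n",
+ "    print(tree)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },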
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_WJ0FZrLDCNE"
+ },
+ "source": [
+ "\n",
+ "### Machine Learning Methods\n",
+ "\n",
+ "This is exactly like what you've seen before in other machine learning tasks. So, first we should have a dataset which is usually a corpus. Then we should do some feature engineering to find features related to our desired task. For example *Does this word begin with a capital letter?* or *What words came before and after this word?*. Next a model like [naive Bayes classifier](https://en.wikipedia.org/wiki/Naive_Bayes_classifier), [random forest](https://en.wikipedia.org/wiki/Random_forest) or etc. should be trained. \n",
+ "\n",
+ "In following cells we will build a [sentiment analysis](https://en.wikipedia.org/wiki/Sentiment_analysis) classifier using Python. Throughout these codes, we will introduce [**Natural Language Toolkit (NLTK)**](https://www.nltk.org/) that contains many useful classes and functions related to NLP tasks."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "TFJx9bBXinzO",
+ "outputId": "aef90c59-7fc5-45da-ae41-9de9332867b7"
+ },
+ "source": [
+ "# Loading dataset\n",
+ "import nltk\n",
+ "nltk.download('movie_reviews') \n",
+ "from nltk.corpus import movie_reviews\n",
+ "from random import shuffle\n",
+ "\n",
+ "movie_reviews.categories()\n",
+ "documents = [(list(movie_reviews.words(fileid)), category)\n",
+ " for category in movie_reviews.categories()\n",
+ " for fileid in movie_reviews.fileids(category)]\n",
+ "# Documents are now saved as a tuple: (words list, label)\n",
+ "shuffle(documents)\n",
+ "documents[0]"
+ ],
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "[nltk_data] Downloading package movie_reviews to /root/nltk_data...\n",
+ "[nltk_data] Package movie_reviews is already up-to-date!\n"
+ ]
+ },
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "(['you',\n",
+ " 'know',\n",
+ " 'the',\n",
+ " 'plot',\n",
+ " ':',\n",
+ " 'a',\n",
+ " 'dimwit',\n",
+ " 'with',\n",
+ " 'a',\n",
+ " 'shady',\n",
+ " 'past',\n",
+ " 'is',\n",
+ " 'seduced',\n",
+ " 'into',\n",
+ " 'committing',\n",
+ " 'a',\n",
+ " 'crime',\n",
+ " 'only',\n",
+ " 'to',\n",
+ " 'be',\n",
+ " 'double',\n",
+ " '-',\n",
+ " 'crossed',\n",
+ " 'by',\n",
+ " 'a',\n",
+ " 'fatal',\n",
+ " 'femme',\n",
+ " '.',\n",
+ " 'in',\n",
+ " '\"',\n",
+ " 'palmetto',\n",
+ " ',',\n",
+ " '\"',\n",
+ " 'the',\n",
+ " 'dimwit',\n",
+ " 'is',\n",
+ " 'harry',\n",
+ " 'barber',\n",
+ " '(',\n",
+ " 'woody',\n",
+ " 'harrelson',\n",
+ " ')',\n",
+ " ',',\n",
+ " 'a',\n",
+ " 'reporter',\n",
+ " 'who',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'just',\n",
+ " 'been',\n",
+ " 'released',\n",
+ " 'from',\n",
+ " 'prison',\n",
+ " '(',\n",
+ " 'he',\n",
+ " 'was',\n",
+ " 'framed',\n",
+ " 'by',\n",
+ " 'the',\n",
+ " 'gangsters',\n",
+ " 'and',\n",
+ " 'corrupt',\n",
+ " 'officials',\n",
+ " 'he',\n",
+ " 'was',\n",
+ " 'investigating',\n",
+ " ')',\n",
+ " '.',\n",
+ " 'enter',\n",
+ " 'la',\n",
+ " 'femme',\n",
+ " ':',\n",
+ " 'rhea',\n",
+ " 'malroux',\n",
+ " '(',\n",
+ " 'elisabeth',\n",
+ " 'shue',\n",
+ " ')',\n",
+ " ',',\n",
+ " 'the',\n",
+ " 'sexy',\n",
+ " 'young',\n",
+ " 'wife',\n",
+ " 'of',\n",
+ " 'the',\n",
+ " 'richest',\n",
+ " 'man',\n",
+ " 'in',\n",
+ " 'palmetto',\n",
+ " ',',\n",
+ " 'florida',\n",
+ " '(',\n",
+ " 'rolf',\n",
+ " 'hoppe',\n",
+ " ')',\n",
+ " '.',\n",
+ " 'she',\n",
+ " 'and',\n",
+ " 'her',\n",
+ " 'stepdaughter',\n",
+ " 'odette',\n",
+ " '(',\n",
+ " 'chlo',\n",
+ " '?',\n",
+ " 'sevigny',\n",
+ " ')',\n",
+ " 'have',\n",
+ " 'a',\n",
+ " 'plot',\n",
+ " 'to',\n",
+ " 'extort',\n",
+ " '500k',\n",
+ " 'from',\n",
+ " 'the',\n",
+ " 'old',\n",
+ " 'man',\n",
+ " ':',\n",
+ " 'harry',\n",
+ " 'will',\n",
+ " '\"',\n",
+ " 'kidnap',\n",
+ " '\"',\n",
+ " 'odette',\n",
+ " '.',\n",
+ " 'after',\n",
+ " 'groping',\n",
+ " 'both',\n",
+ " 'women',\n",
+ " ',',\n",
+ " 'harry',\n",
+ " 'agrees',\n",
+ " '.',\n",
+ " 'as',\n",
+ " 'everyone',\n",
+ " 'except',\n",
+ " 'harry',\n",
+ " 'can',\n",
+ " 'see',\n",
+ " ',',\n",
+ " 'he',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'being',\n",
+ " 'set',\n",
+ " 'up',\n",
+ " 'as',\n",
+ " 'a',\n",
+ " 'fall',\n",
+ " 'guy',\n",
+ " '.',\n",
+ " 'sure',\n",
+ " 'enough',\n",
+ " ',',\n",
+ " 'before',\n",
+ " 'long',\n",
+ " ',',\n",
+ " 'harry',\n",
+ " 'has',\n",
+ " 'a',\n",
+ " 'dead',\n",
+ " 'body',\n",
+ " 'in',\n",
+ " 'his',\n",
+ " 'trunk',\n",
+ " 'and',\n",
+ " 'the',\n",
+ " 'cops',\n",
+ " 'on',\n",
+ " 'his',\n",
+ " 'tail',\n",
+ " '.',\n",
+ " 'his',\n",
+ " 'brother',\n",
+ " '-',\n",
+ " 'in',\n",
+ " '-',\n",
+ " 'law',\n",
+ " '(',\n",
+ " 'tom',\n",
+ " 'wright',\n",
+ " ')',\n",
+ " ',',\n",
+ " 'an',\n",
+ " 'assistant',\n",
+ " 'da',\n",
+ " ',',\n",
+ " 'has',\n",
+ " 'hired',\n",
+ " 'harry',\n",
+ " 'to',\n",
+ " 'be',\n",
+ " 'the',\n",
+ " 'press',\n",
+ " 'liaison',\n",
+ " 'for',\n",
+ " 'the',\n",
+ " 'case',\n",
+ " ',',\n",
+ " 'so',\n",
+ " 'harry',\n",
+ " 'gets',\n",
+ " 'a',\n",
+ " 'front',\n",
+ " 'row',\n",
+ " 'seat',\n",
+ " 'for',\n",
+ " 'his',\n",
+ " 'own',\n",
+ " 'manhunt',\n",
+ " '(',\n",
+ " 'and',\n",
+ " 'we',\n",
+ " 'get',\n",
+ " 'to',\n",
+ " 'watch',\n",
+ " 'him',\n",
+ " 'sweat',\n",
+ " '-',\n",
+ " 'literally',\n",
+ " ')',\n",
+ " '.',\n",
+ " 'there',\n",
+ " 'are',\n",
+ " 'several',\n",
+ " 'plot',\n",
+ " 'twists',\n",
+ " ',',\n",
+ " 'of',\n",
+ " 'course',\n",
+ " '-',\n",
+ " 'a',\n",
+ " 'couple',\n",
+ " 'of',\n",
+ " 'them',\n",
+ " 'even',\n",
+ " 'took',\n",
+ " 'me',\n",
+ " 'by',\n",
+ " 'surprise',\n",
+ " '.',\n",
+ " 'apparently',\n",
+ " 'every',\n",
+ " 'woman',\n",
+ " 'in',\n",
+ " 'palmetto',\n",
+ " 'is',\n",
+ " 'a',\n",
+ " 'raving',\n",
+ " 'horndog',\n",
+ " ',',\n",
+ " 'and',\n",
+ " 'they',\n",
+ " \"'\",\n",
+ " 're',\n",
+ " 'on',\n",
+ " 'harry',\n",
+ " 'like',\n",
+ " 'he',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'the',\n",
+ " 'only',\n",
+ " 'bone',\n",
+ " 'in',\n",
+ " 'the',\n",
+ " 'kennel',\n",
+ " '.',\n",
+ " 'shue',\n",
+ " 'vamps',\n",
+ " 'so',\n",
+ " 'broadly',\n",
+ " 'that',\n",
+ " 'i',\n",
+ " 'expected',\n",
+ " 'tex',\n",
+ " 'avery',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'wolf',\n",
+ " 'to',\n",
+ " 'show',\n",
+ " 'up',\n",
+ " '.',\n",
+ " 'her',\n",
+ " 'incredible',\n",
+ " 'performance',\n",
+ " 'in',\n",
+ " '\"',\n",
+ " 'leaving',\n",
+ " 'las',\n",
+ " 'vegas',\n",
+ " '\"',\n",
+ " 'seems',\n",
+ " 'to',\n",
+ " 'have',\n",
+ " 'been',\n",
+ " 'a',\n",
+ " 'fluke',\n",
+ " '.',\n",
+ " 'here',\n",
+ " ',',\n",
+ " 'she',\n",
+ " 'could',\n",
+ " 'easily',\n",
+ " 'be',\n",
+ " 'mistaken',\n",
+ " 'for',\n",
+ " 'melanie',\n",
+ " 'griffith',\n",
+ " '.',\n",
+ " 'shue',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'character',\n",
+ " 'is',\n",
+ " 'supposed',\n",
+ " 'to',\n",
+ " 'be',\n",
+ " 'a',\n",
+ " 'savvy',\n",
+ " 'schemer',\n",
+ " 'but',\n",
+ " 'she',\n",
+ " 'comes',\n",
+ " 'off',\n",
+ " 'as',\n",
+ " 'a',\n",
+ " 'brainless',\n",
+ " 'bimbo',\n",
+ " '.',\n",
+ " 'in',\n",
+ " 'addition',\n",
+ " 'to',\n",
+ " 'shue',\n",
+ " 'and',\n",
+ " 'sevigny',\n",
+ " ',',\n",
+ " 'the',\n",
+ " 'kennel',\n",
+ " 'includes',\n",
+ " 'gina',\n",
+ " 'gershon',\n",
+ " '(',\n",
+ " 'who',\n",
+ " 'filled',\n",
+ " 'the',\n",
+ " 'dimwit',\n",
+ " '-',\n",
+ " 'with',\n",
+ " '-',\n",
+ " 'a',\n",
+ " '-',\n",
+ " 'shady',\n",
+ " '-',\n",
+ " 'past',\n",
+ " 'role',\n",
+ " 'in',\n",
+ " '\"',\n",
+ " 'bound',\n",
+ " '\"',\n",
+ " ')',\n",
+ " 'as',\n",
+ " 'harry',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'girlfriend',\n",
+ " 'nina',\n",
+ " ';',\n",
+ " 'when',\n",
+ " 'harry',\n",
+ " 'gets',\n",
+ " 'out',\n",
+ " 'of',\n",
+ " 'jail',\n",
+ " ',',\n",
+ " 'she',\n",
+ " 'licks',\n",
+ " 'his',\n",
+ " 'face',\n",
+ " '(',\n",
+ " 'now',\n",
+ " 'there',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'a',\n",
+ " 'horndog',\n",
+ " ')',\n",
+ " '.',\n",
+ " 'the',\n",
+ " 'parts',\n",
+ " 'are',\n",
+ " 'so',\n",
+ " 'overplayed',\n",
+ " 'that',\n",
+ " 'with',\n",
+ " 'a',\n",
+ " 'little',\n",
+ " 'push',\n",
+ " '\"',\n",
+ " 'palmetto',\n",
+ " '\"',\n",
+ " 'could',\n",
+ " 'have',\n",
+ " 'been',\n",
+ " 'an',\n",
+ " 'over',\n",
+ " '-',\n",
+ " 'the',\n",
+ " '-',\n",
+ " 'top',\n",
+ " 'parody',\n",
+ " 'of',\n",
+ " 'film',\n",
+ " 'noir',\n",
+ " 'a',\n",
+ " 'la',\n",
+ " '\"',\n",
+ " 'romeo',\n",
+ " 'is',\n",
+ " 'bleeding',\n",
+ " '.',\n",
+ " '\"',\n",
+ " 'as',\n",
+ " 'it',\n",
+ " 'is',\n",
+ " ',',\n",
+ " 'it',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'best',\n",
+ " 'watched',\n",
+ " 'at',\n",
+ " '2am',\n",
+ " 'on',\n",
+ " 'showtime',\n",
+ " '(',\n",
+ " 'the',\n",
+ " 'love',\n",
+ " 'scenes',\n",
+ " 'seem',\n",
+ " 'to',\n",
+ " 'have',\n",
+ " 'been',\n",
+ " 'written',\n",
+ " 'for',\n",
+ " 'one',\n",
+ " 'of',\n",
+ " 'that',\n",
+ " 'channel',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'soft',\n",
+ " 'porn',\n",
+ " 'programs',\n",
+ " 'anyway',\n",
+ " ')',\n",
+ " '.',\n",
+ " '\"',\n",
+ " 'palmetto',\n",
+ " '\"',\n",
+ " 'has',\n",
+ " 'a',\n",
+ " 'well',\n",
+ " '-',\n",
+ " 'known',\n",
+ " 'director',\n",
+ " ',',\n",
+ " 'volker',\n",
+ " 'schl',\n",
+ " '?',\n",
+ " 'ndorff',\n",
+ " ',',\n",
+ " 'who',\n",
+ " \"'\",\n",
+ " 's',\n",
+ " 'best',\n",
+ " 'known',\n",
+ " 'for',\n",
+ " 'his',\n",
+ " 'adaptations',\n",
+ " 'of',\n",
+ " 'major',\n",
+ " 'literary',\n",
+ " 'works',\n",
+ " ',',\n",
+ " 'especially',\n",
+ " '\"',\n",
+ " 'the',\n",
+ " 'tin',\n",
+ " 'drum',\n",
+ " '.',\n",
+ " '\"',\n",
+ " 'i',\n",
+ " 'suppose',\n",
+ " 'he',\n",
+ " 'must',\n",
+ " 'have',\n",
+ " 'been',\n",
+ " 'drawn',\n",
+ " 'to',\n",
+ " 'this',\n",
+ " 'plot',\n",
+ " '-',\n",
+ " 'by',\n",
+ " '-',\n",
+ " 'numbers',\n",
+ " 'script',\n",
+ " 'by',\n",
+ " 'the',\n",
+ " 'same',\n",
+ " 'admiration',\n",
+ " 'for',\n",
+ " 'classic',\n",
+ " 'film',\n",
+ " 'noir',\n",
+ " 'that',\n",
+ " 'led',\n",
+ " 'scorsese',\n",
+ " 'to',\n",
+ " 'remake',\n",
+ " '\"',\n",
+ " 'cape',\n",
+ " 'fear',\n",
+ " '.',\n",
+ " '\"',\n",
+ " 'schl',\n",
+ " '?',\n",
+ " 'ndorff',\n",
+ " 'tries',\n",
+ " 'hard',\n",
+ " '-',\n",
+ " 'he',\n",
+ " 'makes',\n",
+ " 'an',\n",
+ " 'interesting',\n",
+ " 'motif',\n",
+ " 'out',\n",
+ " 'of',\n",
+ " 'the',\n",
+ " 'ubiquitous',\n",
+ " 'palmetto',\n",
+ " 'bugs',\n",
+ " '-',\n",
+ " 'but',\n",
+ " 'nothing',\n",
+ " 'can',\n",
+ " 'freshen',\n",
+ " 'up',\n",
+ " 'this',\n",
+ " 'stale',\n",
+ " 'script',\n",
+ " '.'],\n",
+ " 'neg')"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 51
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "57Akxaafj9d2",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "51d580e9-d0a4-468e-877b-650bd839e9dd"
+ },
+ "source": [
+ "# Selecting 5000 words from whole to be word features (removing punctuations and stopwords)\n",
+ "nltk.download(\"stopwords\")\n",
+ "stopwords = nltk.corpus.stopwords.words(\"english\")\n",
+ "all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words() if w.isalpha() and not w.lower() in stopwords)\n",
+ "word_features = list(all_words)[:3000]\n",
+ "word_features"
+ ],
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "[nltk_data] Downloading package stopwords to /root/nltk_data...\n",
+ "[nltk_data] Package stopwords is already up-to-date!\n"
+ ]
+ },
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "['plot',\n",
+ " 'two',\n",
+ " 'teen',\n",
+ " 'couples',\n",
+ " 'go',\n",
+ " 'church',\n",
+ " 'party',\n",
+ " 'drink',\n",
+ " 'drive',\n",
+ " 'get',\n",
+ " 'accident',\n",
+ " 'one',\n",
+ " 'guys',\n",
+ " 'dies',\n",
+ " 'girlfriend',\n",
+ " 'continues',\n",
+ " 'see',\n",
+ " 'life',\n",
+ " 'nightmares',\n",
+ " 'deal',\n",
+ " 'watch',\n",
+ " 'movie',\n",
+ " 'sorta',\n",
+ " 'find',\n",
+ " 'critique',\n",
+ " 'mind',\n",
+ " 'fuck',\n",
+ " 'generation',\n",
+ " 'touches',\n",
+ " 'cool',\n",
+ " 'idea',\n",
+ " 'presents',\n",
+ " 'bad',\n",
+ " 'package',\n",
+ " 'makes',\n",
+ " 'review',\n",
+ " 'even',\n",
+ " 'harder',\n",
+ " 'write',\n",
+ " 'since',\n",
+ " 'generally',\n",
+ " 'applaud',\n",
+ " 'films',\n",
+ " 'attempt',\n",
+ " 'break',\n",
+ " 'mold',\n",
+ " 'mess',\n",
+ " 'head',\n",
+ " 'lost',\n",
+ " 'highway',\n",
+ " 'memento',\n",
+ " 'good',\n",
+ " 'ways',\n",
+ " 'making',\n",
+ " 'types',\n",
+ " 'folks',\n",
+ " 'snag',\n",
+ " 'correctly',\n",
+ " 'seem',\n",
+ " 'taken',\n",
+ " 'pretty',\n",
+ " 'neat',\n",
+ " 'concept',\n",
+ " 'executed',\n",
+ " 'terribly',\n",
+ " 'problems',\n",
+ " 'well',\n",
+ " 'main',\n",
+ " 'problem',\n",
+ " 'simply',\n",
+ " 'jumbled',\n",
+ " 'starts',\n",
+ " 'normal',\n",
+ " 'downshifts',\n",
+ " 'fantasy',\n",
+ " 'world',\n",
+ " 'audience',\n",
+ " 'member',\n",
+ " 'going',\n",
+ " 'dreams',\n",
+ " 'characters',\n",
+ " 'coming',\n",
+ " 'back',\n",
+ " 'dead',\n",
+ " 'others',\n",
+ " 'look',\n",
+ " 'like',\n",
+ " 'strange',\n",
+ " 'apparitions',\n",
+ " 'disappearances',\n",
+ " 'looooot',\n",
+ " 'chase',\n",
+ " 'scenes',\n",
+ " 'tons',\n",
+ " 'weird',\n",
+ " 'things',\n",
+ " 'happen',\n",
+ " 'explained',\n",
+ " 'personally',\n",
+ " 'trying',\n",
+ " 'unravel',\n",
+ " 'film',\n",
+ " 'every',\n",
+ " 'give',\n",
+ " 'clue',\n",
+ " 'kind',\n",
+ " 'fed',\n",
+ " 'biggest',\n",
+ " 'obviously',\n",
+ " 'got',\n",
+ " 'big',\n",
+ " 'secret',\n",
+ " 'hide',\n",
+ " 'seems',\n",
+ " 'want',\n",
+ " 'completely',\n",
+ " 'final',\n",
+ " 'five',\n",
+ " 'minutes',\n",
+ " 'make',\n",
+ " 'entertaining',\n",
+ " 'thrilling',\n",
+ " 'engaging',\n",
+ " 'meantime',\n",
+ " 'really',\n",
+ " 'sad',\n",
+ " 'part',\n",
+ " 'arrow',\n",
+ " 'dig',\n",
+ " 'flicks',\n",
+ " 'actually',\n",
+ " 'figured',\n",
+ " 'half',\n",
+ " 'way',\n",
+ " 'point',\n",
+ " 'strangeness',\n",
+ " 'start',\n",
+ " 'little',\n",
+ " 'bit',\n",
+ " 'sense',\n",
+ " 'still',\n",
+ " 'guess',\n",
+ " 'bottom',\n",
+ " 'line',\n",
+ " 'movies',\n",
+ " 'always',\n",
+ " 'sure',\n",
+ " 'given',\n",
+ " 'password',\n",
+ " 'enter',\n",
+ " 'understanding',\n",
+ " 'mean',\n",
+ " 'showing',\n",
+ " 'melissa',\n",
+ " 'sagemiller',\n",
+ " 'running',\n",
+ " 'away',\n",
+ " 'visions',\n",
+ " 'throughout',\n",
+ " 'plain',\n",
+ " 'lazy',\n",
+ " 'okay',\n",
+ " 'people',\n",
+ " 'chasing',\n",
+ " 'know',\n",
+ " 'need',\n",
+ " 'giving',\n",
+ " 'us',\n",
+ " 'different',\n",
+ " 'offering',\n",
+ " 'insight',\n",
+ " 'apparently',\n",
+ " 'studio',\n",
+ " 'took',\n",
+ " 'director',\n",
+ " 'chopped',\n",
+ " 'shows',\n",
+ " 'might',\n",
+ " 'decent',\n",
+ " 'somewhere',\n",
+ " 'suits',\n",
+ " 'decided',\n",
+ " 'turning',\n",
+ " 'music',\n",
+ " 'video',\n",
+ " 'edge',\n",
+ " 'would',\n",
+ " 'actors',\n",
+ " 'although',\n",
+ " 'wes',\n",
+ " 'bentley',\n",
+ " 'seemed',\n",
+ " 'playing',\n",
+ " 'exact',\n",
+ " 'character',\n",
+ " 'american',\n",
+ " 'beauty',\n",
+ " 'new',\n",
+ " 'neighborhood',\n",
+ " 'kudos',\n",
+ " 'holds',\n",
+ " 'entire',\n",
+ " 'feeling',\n",
+ " 'unraveling',\n",
+ " 'overall',\n",
+ " 'stick',\n",
+ " 'entertain',\n",
+ " 'confusing',\n",
+ " 'rarely',\n",
+ " 'excites',\n",
+ " 'feels',\n",
+ " 'redundant',\n",
+ " 'runtime',\n",
+ " 'despite',\n",
+ " 'ending',\n",
+ " 'explanation',\n",
+ " 'craziness',\n",
+ " 'came',\n",
+ " 'oh',\n",
+ " 'horror',\n",
+ " 'slasher',\n",
+ " 'flick',\n",
+ " 'packaged',\n",
+ " 'someone',\n",
+ " 'assuming',\n",
+ " 'genre',\n",
+ " 'hot',\n",
+ " 'kids',\n",
+ " 'also',\n",
+ " 'wrapped',\n",
+ " 'production',\n",
+ " 'years',\n",
+ " 'ago',\n",
+ " 'sitting',\n",
+ " 'shelves',\n",
+ " 'ever',\n",
+ " 'whatever',\n",
+ " 'skip',\n",
+ " 'joblo',\n",
+ " 'nightmare',\n",
+ " 'elm',\n",
+ " 'street',\n",
+ " 'blair',\n",
+ " 'witch',\n",
+ " 'crow',\n",
+ " 'salvation',\n",
+ " 'stir',\n",
+ " 'echoes',\n",
+ " 'happy',\n",
+ " 'bastard',\n",
+ " 'quick',\n",
+ " 'damn',\n",
+ " 'bug',\n",
+ " 'starring',\n",
+ " 'jamie',\n",
+ " 'lee',\n",
+ " 'curtis',\n",
+ " 'another',\n",
+ " 'baldwin',\n",
+ " 'brother',\n",
+ " 'william',\n",
+ " 'time',\n",
+ " 'story',\n",
+ " 'regarding',\n",
+ " 'crew',\n",
+ " 'tugboat',\n",
+ " 'comes',\n",
+ " 'across',\n",
+ " 'deserted',\n",
+ " 'russian',\n",
+ " 'tech',\n",
+ " 'ship',\n",
+ " 'kick',\n",
+ " 'power',\n",
+ " 'within',\n",
+ " 'gore',\n",
+ " 'bringing',\n",
+ " 'action',\n",
+ " 'sequences',\n",
+ " 'virus',\n",
+ " 'empty',\n",
+ " 'flash',\n",
+ " 'substance',\n",
+ " 'middle',\n",
+ " 'nowhere',\n",
+ " 'origin',\n",
+ " 'pink',\n",
+ " 'flashy',\n",
+ " 'thing',\n",
+ " 'hit',\n",
+ " 'mir',\n",
+ " 'course',\n",
+ " 'donald',\n",
+ " 'sutherland',\n",
+ " 'stumbling',\n",
+ " 'around',\n",
+ " 'drunkenly',\n",
+ " 'hey',\n",
+ " 'let',\n",
+ " 'robots',\n",
+ " 'acting',\n",
+ " 'average',\n",
+ " 'likes',\n",
+ " 'likely',\n",
+ " 'work',\n",
+ " 'halloween',\n",
+ " 'wasted',\n",
+ " 'real',\n",
+ " 'star',\n",
+ " 'stan',\n",
+ " 'winston',\n",
+ " 'robot',\n",
+ " 'design',\n",
+ " 'schnazzy',\n",
+ " 'cgi',\n",
+ " 'occasional',\n",
+ " 'shot',\n",
+ " 'picking',\n",
+ " 'brain',\n",
+ " 'body',\n",
+ " 'parts',\n",
+ " 'turn',\n",
+ " 'otherwise',\n",
+ " 'much',\n",
+ " 'sunken',\n",
+ " 'jaded',\n",
+ " 'viewer',\n",
+ " 'thankful',\n",
+ " 'invention',\n",
+ " 'timex',\n",
+ " 'indiglo',\n",
+ " 'based',\n",
+ " 'late',\n",
+ " 'television',\n",
+ " 'show',\n",
+ " 'name',\n",
+ " 'mod',\n",
+ " 'squad',\n",
+ " 'tells',\n",
+ " 'tale',\n",
+ " 'three',\n",
+ " 'reformed',\n",
+ " 'criminals',\n",
+ " 'employ',\n",
+ " 'police',\n",
+ " 'undercover',\n",
+ " 'however',\n",
+ " 'wrong',\n",
+ " 'evidence',\n",
+ " 'gets',\n",
+ " 'stolen',\n",
+ " 'immediately',\n",
+ " 'suspicion',\n",
+ " 'ads',\n",
+ " 'cuts',\n",
+ " 'claire',\n",
+ " 'dane',\n",
+ " 'nice',\n",
+ " 'hair',\n",
+ " 'cute',\n",
+ " 'outfits',\n",
+ " 'car',\n",
+ " 'chases',\n",
+ " 'stuff',\n",
+ " 'blowing',\n",
+ " 'sounds',\n",
+ " 'first',\n",
+ " 'fifteen',\n",
+ " 'quickly',\n",
+ " 'becomes',\n",
+ " 'apparent',\n",
+ " 'certainly',\n",
+ " 'slick',\n",
+ " 'looking',\n",
+ " 'complete',\n",
+ " 'costumes',\n",
+ " 'enough',\n",
+ " 'best',\n",
+ " 'described',\n",
+ " 'cross',\n",
+ " 'hour',\n",
+ " 'long',\n",
+ " 'cop',\n",
+ " 'stretched',\n",
+ " 'span',\n",
+ " 'single',\n",
+ " 'clich',\n",
+ " 'matter',\n",
+ " 'elements',\n",
+ " 'recycled',\n",
+ " 'everything',\n",
+ " 'already',\n",
+ " 'seen',\n",
+ " 'nothing',\n",
+ " 'spectacular',\n",
+ " 'sometimes',\n",
+ " 'bordering',\n",
+ " 'wooden',\n",
+ " 'danes',\n",
+ " 'omar',\n",
+ " 'epps',\n",
+ " 'deliver',\n",
+ " 'lines',\n",
+ " 'bored',\n",
+ " 'transfers',\n",
+ " 'onto',\n",
+ " 'escape',\n",
+ " 'relatively',\n",
+ " 'unscathed',\n",
+ " 'giovanni',\n",
+ " 'ribisi',\n",
+ " 'plays',\n",
+ " 'resident',\n",
+ " 'crazy',\n",
+ " 'man',\n",
+ " 'ultimately',\n",
+ " 'worth',\n",
+ " 'watching',\n",
+ " 'unfortunately',\n",
+ " 'save',\n",
+ " 'convoluted',\n",
+ " 'apart',\n",
+ " 'occupying',\n",
+ " 'screen',\n",
+ " 'young',\n",
+ " 'cast',\n",
+ " 'clothes',\n",
+ " 'hip',\n",
+ " 'soundtrack',\n",
+ " 'appears',\n",
+ " 'geared',\n",
+ " 'towards',\n",
+ " 'teenage',\n",
+ " 'mindset',\n",
+ " 'r',\n",
+ " 'rating',\n",
+ " 'content',\n",
+ " 'justify',\n",
+ " 'juvenile',\n",
+ " 'older',\n",
+ " 'information',\n",
+ " 'literally',\n",
+ " 'spoon',\n",
+ " 'hard',\n",
+ " 'instead',\n",
+ " 'telling',\n",
+ " 'dialogue',\n",
+ " 'poorly',\n",
+ " 'written',\n",
+ " 'extremely',\n",
+ " 'predictable',\n",
+ " 'progresses',\n",
+ " 'care',\n",
+ " 'heroes',\n",
+ " 'jeopardy',\n",
+ " 'basing',\n",
+ " 'nobody',\n",
+ " 'remembers',\n",
+ " 'questionable',\n",
+ " 'wisdom',\n",
+ " 'especially',\n",
+ " 'considers',\n",
+ " 'target',\n",
+ " 'fact',\n",
+ " 'number',\n",
+ " 'memorable',\n",
+ " 'counted',\n",
+ " 'hand',\n",
+ " 'missing',\n",
+ " 'finger',\n",
+ " 'times',\n",
+ " 'checked',\n",
+ " 'six',\n",
+ " 'clear',\n",
+ " 'indication',\n",
+ " 'cash',\n",
+ " 'spending',\n",
+ " 'dollar',\n",
+ " 'judging',\n",
+ " 'rash',\n",
+ " 'awful',\n",
+ " 'seeing',\n",
+ " 'avoid',\n",
+ " 'costs',\n",
+ " 'quest',\n",
+ " 'camelot',\n",
+ " 'warner',\n",
+ " 'bros',\n",
+ " 'feature',\n",
+ " 'length',\n",
+ " 'fully',\n",
+ " 'animated',\n",
+ " 'steal',\n",
+ " 'clout',\n",
+ " 'disney',\n",
+ " 'cartoon',\n",
+ " 'empire',\n",
+ " 'mouse',\n",
+ " 'reason',\n",
+ " 'worried',\n",
+ " 'recent',\n",
+ " 'challenger',\n",
+ " 'throne',\n",
+ " 'last',\n",
+ " 'fall',\n",
+ " 'promising',\n",
+ " 'flawed',\n",
+ " 'century',\n",
+ " 'fox',\n",
+ " 'anastasia',\n",
+ " 'hercules',\n",
+ " 'lively',\n",
+ " 'colorful',\n",
+ " 'palate',\n",
+ " 'beat',\n",
+ " 'hands',\n",
+ " 'crown',\n",
+ " 'piece',\n",
+ " 'animation',\n",
+ " 'year',\n",
+ " 'contest',\n",
+ " 'arrival',\n",
+ " 'magic',\n",
+ " 'kingdom',\n",
+ " 'mediocre',\n",
+ " 'pocahontas',\n",
+ " 'keeping',\n",
+ " 'score',\n",
+ " 'nearly',\n",
+ " 'dull',\n",
+ " 'revolves',\n",
+ " 'adventures',\n",
+ " 'free',\n",
+ " 'spirited',\n",
+ " 'kayley',\n",
+ " 'voiced',\n",
+ " 'jessalyn',\n",
+ " 'gilsig',\n",
+ " 'early',\n",
+ " 'daughter',\n",
+ " 'belated',\n",
+ " 'knight',\n",
+ " 'king',\n",
+ " 'arthur',\n",
+ " 'round',\n",
+ " 'table',\n",
+ " 'dream',\n",
+ " 'follow',\n",
+ " 'father',\n",
+ " 'footsteps',\n",
+ " 'chance',\n",
+ " 'evil',\n",
+ " 'warlord',\n",
+ " 'ruber',\n",
+ " 'gary',\n",
+ " 'oldman',\n",
+ " 'ex',\n",
+ " 'gone',\n",
+ " 'steals',\n",
+ " 'magical',\n",
+ " 'sword',\n",
+ " 'excalibur',\n",
+ " 'accidentally',\n",
+ " 'loses',\n",
+ " 'dangerous',\n",
+ " 'booby',\n",
+ " 'trapped',\n",
+ " 'forest',\n",
+ " 'help',\n",
+ " 'hunky',\n",
+ " 'blind',\n",
+ " 'timberland',\n",
+ " 'dweller',\n",
+ " 'garrett',\n",
+ " 'carey',\n",
+ " 'elwes',\n",
+ " 'headed',\n",
+ " 'dragon',\n",
+ " 'eric',\n",
+ " 'idle',\n",
+ " 'rickles',\n",
+ " 'arguing',\n",
+ " 'able',\n",
+ " 'medieval',\n",
+ " 'sexist',\n",
+ " 'prove',\n",
+ " 'fighter',\n",
+ " 'side',\n",
+ " 'pure',\n",
+ " 'showmanship',\n",
+ " 'essential',\n",
+ " 'element',\n",
+ " 'expected',\n",
+ " 'climb',\n",
+ " 'high',\n",
+ " 'ranks',\n",
+ " 'differentiates',\n",
+ " 'something',\n",
+ " 'saturday',\n",
+ " 'morning',\n",
+ " 'subpar',\n",
+ " 'instantly',\n",
+ " 'forgettable',\n",
+ " 'songs',\n",
+ " 'integrated',\n",
+ " 'computerized',\n",
+ " 'footage',\n",
+ " 'compare',\n",
+ " 'run',\n",
+ " 'angry',\n",
+ " 'ogre',\n",
+ " 'herc',\n",
+ " 'battle',\n",
+ " 'hydra',\n",
+ " 'rest',\n",
+ " 'case',\n",
+ " 'stink',\n",
+ " 'none',\n",
+ " 'remotely',\n",
+ " 'interesting',\n",
+ " 'race',\n",
+ " 'bland',\n",
+ " 'end',\n",
+ " 'tie',\n",
+ " 'win',\n",
+ " 'comedy',\n",
+ " 'shtick',\n",
+ " 'awfully',\n",
+ " 'cloying',\n",
+ " 'least',\n",
+ " 'signs',\n",
+ " 'pulse',\n",
+ " 'fans',\n",
+ " 'tgif',\n",
+ " 'thrilled',\n",
+ " 'jaleel',\n",
+ " 'urkel',\n",
+ " 'white',\n",
+ " 'bronson',\n",
+ " 'balki',\n",
+ " 'pinchot',\n",
+ " 'sharing',\n",
+ " 'nicely',\n",
+ " 'realized',\n",
+ " 'though',\n",
+ " 'loss',\n",
+ " 'recall',\n",
+ " 'specific',\n",
+ " 'providing',\n",
+ " 'voice',\n",
+ " 'talent',\n",
+ " 'enthusiastic',\n",
+ " 'paired',\n",
+ " 'singers',\n",
+ " 'sound',\n",
+ " 'musical',\n",
+ " 'moments',\n",
+ " 'jane',\n",
+ " 'seymour',\n",
+ " 'celine',\n",
+ " 'dion',\n",
+ " 'must',\n",
+ " 'strain',\n",
+ " 'aside',\n",
+ " 'children',\n",
+ " 'probably',\n",
+ " 'adults',\n",
+ " 'grievous',\n",
+ " 'error',\n",
+ " 'lack',\n",
+ " 'personality',\n",
+ " 'learn',\n",
+ " 'goes',\n",
+ " 'synopsis',\n",
+ " 'mentally',\n",
+ " 'unstable',\n",
+ " 'undergoing',\n",
+ " 'psychotherapy',\n",
+ " 'saves',\n",
+ " 'boy',\n",
+ " 'potentially',\n",
+ " 'fatal',\n",
+ " 'falls',\n",
+ " 'love',\n",
+ " 'mother',\n",
+ " 'fledgling',\n",
+ " 'restauranteur',\n",
+ " 'unsuccessfully',\n",
+ " 'attempting',\n",
+ " 'gain',\n",
+ " 'woman',\n",
+ " 'favor',\n",
+ " 'takes',\n",
+ " 'pictures',\n",
+ " 'kills',\n",
+ " 'comments',\n",
+ " 'stalked',\n",
+ " 'yet',\n",
+ " 'seemingly',\n",
+ " 'endless',\n",
+ " 'string',\n",
+ " 'spurned',\n",
+ " 'psychos',\n",
+ " 'getting',\n",
+ " 'revenge',\n",
+ " 'type',\n",
+ " 'stable',\n",
+ " 'category',\n",
+ " 'industry',\n",
+ " 'theatrical',\n",
+ " 'direct',\n",
+ " 'proliferation',\n",
+ " 'may',\n",
+ " 'due',\n",
+ " 'typically',\n",
+ " 'inexpensive',\n",
+ " 'produce',\n",
+ " 'special',\n",
+ " 'effects',\n",
+ " 'stars',\n",
+ " 'serve',\n",
+ " 'vehicles',\n",
+ " 'nudity',\n",
+ " 'allowing',\n",
+ " 'frequent',\n",
+ " 'night',\n",
+ " 'cable',\n",
+ " 'wavers',\n",
+ " 'slightly',\n",
+ " 'norm',\n",
+ " 'respect',\n",
+ " 'psycho',\n",
+ " 'never',\n",
+ " 'affair',\n",
+ " 'contrary',\n",
+ " 'rejected',\n",
+ " 'rather',\n",
+ " 'lover',\n",
+ " 'wife',\n",
+ " 'husband',\n",
+ " 'entry',\n",
+ " 'doomed',\n",
+ " 'collect',\n",
+ " 'dust',\n",
+ " 'viewed',\n",
+ " 'midnight',\n",
+ " 'provide',\n",
+ " 'suspense',\n",
+ " 'sets',\n",
+ " 'interspersed',\n",
+ " 'opening',\n",
+ " 'credits',\n",
+ " 'instance',\n",
+ " 'serious',\n",
+ " 'sounding',\n",
+ " 'narrator',\n",
+ " 'spouts',\n",
+ " 'statistics',\n",
+ " 'stalkers',\n",
+ " 'ponders',\n",
+ " 'cause',\n",
+ " 'stalk',\n",
+ " 'implicitly',\n",
+ " 'implied',\n",
+ " 'men',\n",
+ " 'shown',\n",
+ " 'snapshot',\n",
+ " 'actor',\n",
+ " 'jay',\n",
+ " 'underwood',\n",
+ " 'states',\n",
+ " 'daryl',\n",
+ " 'gleason',\n",
+ " 'stalker',\n",
+ " 'brooke',\n",
+ " 'daniels',\n",
+ " 'meant',\n",
+ " 'called',\n",
+ " 'guesswork',\n",
+ " 'required',\n",
+ " 'proceeds',\n",
+ " 'begins',\n",
+ " 'obvious',\n",
+ " 'sequence',\n",
+ " 'contrived',\n",
+ " 'quite',\n",
+ " 'brings',\n",
+ " 'victim',\n",
+ " 'together',\n",
+ " 'obsesses',\n",
+ " 'follows',\n",
+ " 'tries',\n",
+ " 'woo',\n",
+ " 'plans',\n",
+ " 'become',\n",
+ " 'desperate',\n",
+ " 'elaborate',\n",
+ " 'include',\n",
+ " 'cliche',\n",
+ " 'murdered',\n",
+ " 'pet',\n",
+ " 'require',\n",
+ " 'found',\n",
+ " 'exception',\n",
+ " 'cat',\n",
+ " 'shower',\n",
+ " 'events',\n",
+ " 'lead',\n",
+ " 'inevitable',\n",
+ " 'showdown',\n",
+ " 'survives',\n",
+ " 'invariably',\n",
+ " 'conclusion',\n",
+ " 'turkey',\n",
+ " 'uniformly',\n",
+ " 'adequate',\n",
+ " 'anything',\n",
+ " 'home',\n",
+ " 'either',\n",
+ " 'turns',\n",
+ " 'toward',\n",
+ " 'melodrama',\n",
+ " 'overdoes',\n",
+ " 'words',\n",
+ " 'manages',\n",
+ " 'creepy',\n",
+ " 'pass',\n",
+ " 'demands',\n",
+ " 'maryam',\n",
+ " 'abo',\n",
+ " 'close',\n",
+ " 'played',\n",
+ " 'bond',\n",
+ " 'chick',\n",
+ " 'living',\n",
+ " 'daylights',\n",
+ " 'equally',\n",
+ " 'title',\n",
+ " 'ditzy',\n",
+ " 'strong',\n",
+ " 'independent',\n",
+ " 'business',\n",
+ " 'owner',\n",
+ " 'needs',\n",
+ " 'proceed',\n",
+ " 'example',\n",
+ " 'suspicions',\n",
+ " 'ensure',\n",
+ " 'use',\n",
+ " 'excuse',\n",
+ " 'decides',\n",
+ " 'return',\n",
+ " 'toolbox',\n",
+ " 'left',\n",
+ " 'place',\n",
+ " 'house',\n",
+ " 'leave',\n",
+ " 'door',\n",
+ " 'answers',\n",
+ " 'opens',\n",
+ " 'wanders',\n",
+ " 'returns',\n",
+ " 'enters',\n",
+ " 'heroine',\n",
+ " 'danger',\n",
+ " 'somehow',\n",
+ " 'parked',\n",
+ " 'front',\n",
+ " 'right',\n",
+ " 'oblivious',\n",
+ " 'presence',\n",
+ " 'inside',\n",
+ " 'whole',\n",
+ " 'episode',\n",
+ " 'places',\n",
+ " 'incredible',\n",
+ " 'suspension',\n",
+ " 'disbelief',\n",
+ " 'questions',\n",
+ " 'validity',\n",
+ " 'intelligence',\n",
+ " 'receives',\n",
+ " 'highly',\n",
+ " 'derivative',\n",
+ " 'somewhat',\n",
+ " 'boring',\n",
+ " 'cannot',\n",
+ " 'watched',\n",
+ " 'rated',\n",
+ " 'mostly',\n",
+ " 'several',\n",
+ " 'murder',\n",
+ " 'brief',\n",
+ " 'strip',\n",
+ " 'bar',\n",
+ " 'offensive',\n",
+ " 'many',\n",
+ " 'thrillers',\n",
+ " 'mood',\n",
+ " 'stake',\n",
+ " 'else',\n",
+ " 'capsule',\n",
+ " 'planet',\n",
+ " 'mars',\n",
+ " 'taking',\n",
+ " 'custody',\n",
+ " 'accused',\n",
+ " 'murderer',\n",
+ " 'face',\n",
+ " 'menace',\n",
+ " 'lot',\n",
+ " 'fighting',\n",
+ " 'john',\n",
+ " 'carpenter',\n",
+ " 'reprises',\n",
+ " 'ideas',\n",
+ " 'previous',\n",
+ " 'assault',\n",
+ " 'precinct',\n",
+ " 'homage',\n",
+ " 'believes',\n",
+ " 'fight',\n",
+ " 'horrible',\n",
+ " 'writer',\n",
+ " 'supposedly',\n",
+ " 'expert',\n",
+ " 'mistake',\n",
+ " 'ghosts',\n",
+ " 'drawn',\n",
+ " 'humans',\n",
+ " 'surprisingly',\n",
+ " 'low',\n",
+ " 'powered',\n",
+ " 'alien',\n",
+ " 'addition',\n",
+ " 'anybody',\n",
+ " 'made',\n",
+ " 'grounds',\n",
+ " 'sue',\n",
+ " 'chock',\n",
+ " 'full',\n",
+ " 'pieces',\n",
+ " 'prince',\n",
+ " 'darkness',\n",
+ " 'surprising',\n",
+ " 'managed',\n",
+ " 'fit',\n",
+ " 'admittedly',\n",
+ " 'novel',\n",
+ " 'science',\n",
+ " 'fiction',\n",
+ " 'experience',\n",
+ " 'terraformed',\n",
+ " 'walk',\n",
+ " 'surface',\n",
+ " 'without',\n",
+ " 'breathing',\n",
+ " 'gear',\n",
+ " 'budget',\n",
+ " 'mentioned',\n",
+ " 'gravity',\n",
+ " 'increased',\n",
+ " 'earth',\n",
+ " 'easier',\n",
+ " 'society',\n",
+ " 'changed',\n",
+ " 'advanced',\n",
+ " 'culture',\n",
+ " 'women',\n",
+ " 'positions',\n",
+ " 'control',\n",
+ " 'view',\n",
+ " 'stagnated',\n",
+ " 'female',\n",
+ " 'beyond',\n",
+ " 'minor',\n",
+ " 'technological',\n",
+ " 'advances',\n",
+ " 'less',\n",
+ " 'expect',\n",
+ " 'change',\n",
+ " 'ten',\n",
+ " 'basic',\n",
+ " 'common',\n",
+ " 'except',\n",
+ " 'yes',\n",
+ " 'replaced',\n",
+ " 'tacky',\n",
+ " 'rundown',\n",
+ " 'martian',\n",
+ " 'mining',\n",
+ " 'colony',\n",
+ " 'criminal',\n",
+ " 'napolean',\n",
+ " 'wilson',\n",
+ " 'desolation',\n",
+ " 'williams',\n",
+ " 'facing',\n",
+ " 'hoodlums',\n",
+ " 'automatic',\n",
+ " ...]"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 52
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "C-jr0YDA6n-O"
+ },
+ "source": [
+ "# We simply define 3000 word features indicating whether document contains that word or not\n",
+ "def extract_features(document):\n",
+ " document_words = set(document)\n",
+ " features = {}\n",
+ " for word in word_features:\n",
+ " features[f'contains ({word})'] = (word in document_words)\n",
+ " return features"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "faIGzYhG8A4H"
+ },
+ "source": [
+ "# Using naive Bayes classifier\n",
+ "final_dataset = [(extract_features(d), c) for (d,c) in documents]\n",
+ "train_set, test_set = final_dataset[:int(0.9 * len(documents))], final_dataset[int(0.9 * len(documents)):]\n",
+ "classifier = nltk.NaiveBayesClassifier.train(train_set)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "mAHjfv3V9hEm",
+ "outputId": "e9ad92e7-01b5-417c-ea57-1760287aa747"
+ },
+ "source": [
+ "nltk.classify.accuracy(classifier, test_set)"
+ ],
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "0.84"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 55
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "11zD4tnO9nnO"
+ },
+ "source": [
+ "As you see, we achieved accuracy of 84% using simple features and without any parameter tuning! Now let's see which words are most informative features."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Fbk4DiBK_Ctv",
+ "outputId": "4d94adb4-f902-4ea2-de64-e7e30635cb72"
+ },
+ "source": [
+ "classifier.show_most_informative_features(10)"
+ ],
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Most Informative Features\n",
+ " contains (sucks) = True neg : pos = 10.0 : 1.0\n",
+ "contains (unimaginative) = True neg : pos = 8.5 : 1.0\n",
+ " contains (annual) = True pos : neg = 8.2 : 1.0\n",
+ " contains (frances) = True pos : neg = 7.5 : 1.0\n",
+ " contains (silverstone) = True neg : pos = 7.1 : 1.0\n",
+ " contains (schumacher) = True neg : pos = 7.1 : 1.0\n",
+ " contains (atrocious) = True neg : pos = 6.7 : 1.0\n",
+ " contains (chambers) = True neg : pos = 6.4 : 1.0\n",
+ " contains (crappy) = True neg : pos = 6.4 : 1.0\n",
+ " contains (turkey) = True neg : pos = 6.4 : 1.0\n"
+ ]
+ }
+ ]
+ },
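+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check, we can also run a hand-written (hypothetical) review through the same feature extractor and ask the trained classifier for a label."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# Classify a hand-written (hypothetical) review with the trained model\n",
+ "review = \"a boring and predictable plot with wooden acting and an awful script\".split()\n",
+ "classifier.classify(extract_features(review))"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },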
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VhklUGE2iqcO"
+ },
+ "source": [
+ "\n",
+ "### Deep Learning Methods\n",
+ "Neural networks architectures are now widely used in different NLP tasks. [Recurrent Neural Networks (RNNs)](https://en.wikipedia.org/wiki/Recurrent_neural_network) are able to process sequential information. Many-to-one RNNs can be used for text classification problems, one-to-many RNNs are good for text generation tasks and many-to-many RNNs are useful in machine translation. \n",
+ "\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n",
+ "Other approaches like [Long Short-term Memory (LSTM)](https://en.wikipedia.org/wiki/Long_short-term_memory), [Attention Mechanism](https://en.wikipedia.org/wiki/Attention_(machine_learning) and [Deep Generative Models](https://towardsdatascience.com/deep-generative-models-25ab2821afd3) are used in different NLP tasks."
+ ]
+ },
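+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As an illustration, the cell below is a minimal sketch of a many-to-one recurrent model for text classification. It assumes TensorFlow/Keras is installed, and the vocabulary size and layer sizes are arbitrary placeholders."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# A minimal many-to-one recurrent classifier (sketch; assumes TensorFlow/Keras, arbitrary sizes)\n",
+ "import tensorflow as tf\n",
+ "\n",
+ "vocab_size, embed_dim = 10000, 64  # placeholder values\n",
+ "rnn_model = tf.keras.Sequential([\n",
+ "    tf.keras.layers.Embedding(vocab_size, embed_dim),  # token ids -> dense vectors\n",
+ "    tf.keras.layers.LSTM(64),                          # reads the whole sequence, keeps the final state\n",
+ "    tf.keras.layers.Dense(1, activation='sigmoid')     # single sentiment score\n",
+ "])\n",
+ "rnn_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])\n",
+ "rnn_model.summary()"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },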
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "27RHd--qFVtb"
+ },
+ "source": [
+ "Now it's time to see how deep models can be useful in representing words. In NLP tasks we usually need to show words numerically, e.g., using vectors. [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) approach -which doesn't use neural networks- can show significance of each word in the document using its frequency in the given document and whole corpus. But it doesn't capture similarities between words. Furthermore, vectors are high dimensional since every word is a feature."
+ ]
+ },
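+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For reference, here is a minimal TF-IDF sketch on a toy corpus. It uses scikit-learn's `TfidfVectorizer`, which is an extra dependency not used elsewhere in this notebook."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# TF-IDF vectors for a toy corpus (sketch; uses scikit-learn)\n",
+ "from sklearn.feature_extraction.text import TfidfVectorizer\n",
+ "\n",
+ "corpus = [\"the movie was great\", \"the movie was awful\", \"a great cast and a great script\"]\n",
+ "vectorizer = TfidfVectorizer()\n",
+ "X = vectorizer.fit_transform(corpus)\n",
+ "print(vectorizer.get_feature_names_out())  # one dimension per vocabulary word\n",
+ "print(X.toarray().round(2))                # rows: documents, columns: words"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },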
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JmZZstEJFX8Q"
+ },
+ "source": [
+ "Word2Vec is an alternative approach which uses neural networks to find word embeddings. It can discover similarities between words such that words which are semantically close together have similar embeddings. Vector size is much less than vocabulary size and is usually selected according to corpus size.\n",
+ "\n",
+ "Two famous Word2Vec architectures are [continuous bag-of-words (CBOW)](https://en.wikipedia.org/wiki/Bag-of-words_model#CBOW) and [skip-gram](https://en.wikipedia.org/wiki/N-gram#Skip-gram). CBOW uses surrounding words to predict current word while skip-gram aims to predict surrounding words using current word.\n",
+ "\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n",
+ "In following cell we will try to create a Word2Vec model using *movie_reviews* dataset which was imported in last section."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "_TQL6NxXNS_A"
+ },
+ "source": [
+ "from gensim.models import Word2Vec\n",
+ "\n",
+ "documents_words = [doc[0] for doc in documents]\n",
+ "model = Word2Vec(sentences=documents_words, size=100, window=5, min_count=1, workers=4)"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "40FtTwnzgWxQ"
+ },
+ "source": [
+ "Now let's see which words are mostly similar to the word *ship* using Word2Vec."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "PaMnDW7TbHoq",
+ "outputId": "ec2413fd-53d1-4eab-b5a3-091173056ff3"
+ },
+ "source": [
+ "sims = model.wv.most_similar('ship', topn=10) # get other similar words\n",
+ "sims"
+ ],
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "[('island', 0.9006446003913879),\n",
+ " ('plane', 0.8913903832435608),\n",
+ " ('country', 0.886518120765686),\n",
+ " ('land', 0.8806939125061035),\n",
+ " ('room', 0.8713586926460266),\n",
+ " ('planet', 0.8674205541610718),\n",
+ " ('floor', 0.8587629199028015),\n",
+ " ('government', 0.856345534324646),\n",
+ " ('boat', 0.8548795580863953),\n",
+ " ('fire', 0.8527746200561523)]"
+ ]
+ },
+ "metadata": {},
+ "execution_count": 79
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Ha4TcsMUg0iL"
+ },
+ "source": [
+ "Interesting! As we expected, we see words which are semantically close to the word *ship*, such as *island*, *boat*, *plane*, *room*, etc.\n",
+ "\n",
+ "You can test other words using same syntax."
+ ]
+ },
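+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can also compare two specific words directly with `wv.similarity`; the pairs below are arbitrary examples."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {},
+ "source": [
+ "# Cosine similarity between arbitrarily chosen word pairs\n",
+ "print(model.wv.similarity('good', 'great'))\n",
+ "print(model.wv.similarity('good', 'boring'))"
+ ],
+ "execution_count": null,
+ "outputs": []
+ },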
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "sIO1h-oFjz2n"
+ },
+ "source": [
+ "\n",
+ "## Useful Links\n",
+ "- [Major challenges of Natural Language Processing (NLP)](https://monkeylearn.com/blog/natural-language-processing-challenges/)\n",
+ "\n",
+ "- [Machine Learning vs. Rule Based Systems in NLP](https://medium.com/friendly-data/machine-learning-vs-rule-based-systems-in-nlp-5476de53c3b8)\n",
+ "\n",
+ "- [POS Tagging with NLTK and Chunking in NLP](https://www.guru99.com/pos-tagging-chunking-nltk.html)\n",
+ "\n",
+ "- [Deep Learning for NLP: An Overview of Recent Trends](https://medium.com/dair-ai/deep-learning-for-nlp-an-overview-of-recent-trends-d0d8f40a776d)\n",
+ "\n",
+ "- [Word Embedding Techniques: Word2Vec and TF-IDF Explained](https://towardsdatascience.com/word-embedding-techniques-word2vec-and-tf-idf-explained-c5d02e34d08)\n",
+ "\n",
+ "- [RNN in NLP using Python (example)](https://www.codeastar.com/recurrent-neural-network-rnn-in-nlp-and-python-part-2/)"
+ ]
+ }
+ ]
+}
\ No newline at end of file
diff --git a/notebooks/natural_language_processing/metadata.yml b/notebooks/natural_language_processing/metadata.yml
new file mode 100644
index 0000000..07933c7
--- /dev/null
+++ b/notebooks/natural_language_processing/metadata.yml
@@ -0,0 +1,29 @@
+title: Natural Language Processing
+
+meta:
+ - name: keywords
+ content: Artificial Intelligence, NLP, Deep Learning, Word2Vec
+
+header:
+ title: Natural Language Processing
+ description: |
+ In this notebook we talk about Natural Language Processing, potential challenges and different approaches.
+authors:
+ label:
+ position: top
+ text: Authors
+ kind: people
+ content:
+ - name: Nima Jamali
+ role: Author
+ contact:
+ - link: https://github.com/nimajam41
+ icon: fab fa-github
+ - link: https://www.linkedin.com/in/nima-jamali-5b1521195/
+ icon: fab fa-linkedin
+ - link: mailto:nimxj4141@gmail.com
+ icon: fas fa-envelope
+
+comments:
+ label: false
+ kind: comments