Scrape for Words¶
I use Python to scrape for words.
Scraping is actually misleading, as I didn't need to scrape anything online; the words are simply retrieved from one of NLTK's corpora.
A corpus is a large, structured collection of words and text, bundled for use with a particular library. For example, the Brown Corpus, created at Brown University, contains more than a million words of text, categorised by genre.
For this example, we will use NLTK's Gutenberg Corpus, which contains a sample of texts from Project Gutenberg, http://www.gutenberg.org/.
You may download your favourite classics from Project Gutenberg and load the text documents accordingly.
Import packages¶
First, let's import the Gutenberg corpus from the nltk package:
from nltk.corpus import gutenberg
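If NLTK can't find the corpus data locally, it will raise a LookupError; downloading the data once fixes that:

import nltk
nltk.download('gutenberg')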
Project Gutenberg Corpus¶
The fileids() method lists the texts included in the corpus:
gutenberg.fileids()
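The corpus ships with a small set of classics, including several Jane Austen novels, Moby Dick, the King James Bible and a few Shakespeare plays; we will work with 'austen-emma.txt' below.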
To get the words, we just need to call the words() method of the gutenberg corpus:
wordlist = gutenberg.words('austen-emma.txt')
len(wordlist)
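For 'austen-emma.txt' this should come to roughly 192,000 tokens; note that words() treats punctuation marks as tokens too, so commas and full stops are counted.

If you downloaded a classic of your own from Project Gutenberg instead, here is a minimal sketch for producing an equivalent word list from a plain-text file (the file name raw_book.txt is just a placeholder):

import nltk
from nltk import word_tokenize

nltk.download('punkt')  # tokeniser models used by word_tokenize (may be 'punkt_tab' in newer NLTK versions)

with open("raw_book.txt", encoding="utf-8") as f:
    text = f.read()

wordlist = word_tokenize(text)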
Collect, sort and save¶
This list contains every word token in the text, repeats included, so what we want is the frequency of each distinct word:
from nltk import FreqDist
frequency_list = FreqDist(wordlist)
len(frequency_list)
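FreqDist behaves like a dictionary mapping each distinct token to its count, so individual words can be looked up directly (the token 'Emma' here is just an illustrative key):

frequency_list['Emma']  # how many times the token 'Emma' occurs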
And sort it by frequency:
most_common = frequency_list.most_common()
most_common[:5]
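Each entry is a (word, count) tuple, in descending order of count; unsurprisingly, the top of the list is dominated by punctuation tokens and common function words such as ',' and 'to'.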
Once sorted, we can keep the words and discard the frequencies by taking the first item of each tuple with a list comprehension:
common_words = [i[0] for i in most_common]
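The corpus tokens are case-sensitive and include punctuation; if only lowercase alphabetic words are wanted, here is a minimal sketch of one way to filter them while preserving the frequency order:

seen = set()
alpha_words = []
for word, _ in most_common:
    lowered = word.lower()
    # keep alphabetic tokens only, and skip duplicates that differ only in case
    if lowered.isalpha() and lowered not in seen:
        seen.add(lowered)
        alpha_words.append(lowered)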
And save it to a text file for later use:
with open("words.txt", "w") as output:
    for item in common_words:
        output.write("%s\n" % item)
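Reading the list back later is just as simple; a usage sketch:

with open("words.txt") as wordfile:
    common_words = wordfile.read().splitlines()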
And thus we have a list of words ready for use, sorted by frequency from the most common word to the least.