I am trying to input an entire paragraph into my word processor to be split into sentences first and then into words.
I tried the following code, but it does not work:
#text is the paragraph input
sent_text = sent_tokenize(text)
tokenized_text = word_tokenize(sent_text.split)
tagged = nltk.pos_tag(tokenized_text)
You probably intended to loop over sent_text, tokenizing each sentence separately:
import nltk

sent_text = nltk.sent_tokenize(text)  # this gives us a list of sentences

# now loop over each sentence and tokenize it separately
for sentence in sent_text:
    tokenized_text = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokenized_text)
    print(tagged)