NLTK (Natural Language Toolkit) is a Python library for natural language processing that can be used for tasks such as text classification. Below are the basic steps for performing text classification with NLTK:
import nltk

# Download the required NLTK data packages
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('stopwords')
# Sample labeled documents as (text, label) pairs
documents = [
("This is a good movie", "positive"),
("I like this movie", "positive"),
("I hate this movie", "negative"),
("This is the worst movie ever", "negative")
]
def document_features(document):
    # Tokenize the document, lowercase it, and record which feature words it contains
    document_words = set(w.lower() for w in nltk.word_tokenize(document))
    features = {}
    for word in word_features:
        features['contains({})'.format(word)] = (word in document_words)
    return features
# Build a frequency distribution over all words appearing in the documents
all_words = nltk.FreqDist(w.lower() for (text, _) in documents for w in nltk.word_tokenize(text) if w.isalpha())
# Use the 100 most frequent words as candidate features
word_features = [w for (w, _) in all_words.most_common(100)]
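The stopwords corpus downloaded earlier is not strictly needed for this minimal example; if you want to drop common English function words from the feature list, one optional tweak (using nltk.corpus.stopwords) is:

from nltk.corpus import stopwords

# Remove common English stopwords from the candidate feature words
stop_words = set(stopwords.words('english'))
word_features = [w for w in word_features if w not in stop_words]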
# Extract features for every document and split into training and test sets
featuresets = [(document_features(d), c) for (d, c) in documents]
train_set, test_set = featuresets[:3], featuresets[3:]
# Train a Naive Bayes classifier and report its accuracy on the held-out test set
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
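To see which words the classifier treats as most indicative of each label, you can also call the Naive Bayes classifier's show_most_informative_features method:

classifier.show_most_informative_features(5)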
With the steps above, you can run a text classification task with NLTK and obtain its classification accuracy. You can also try other classifiers, such as SVM or decision trees, to see whether they give better results; a sketch of this is given below.
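As a minimal sketch of that suggestion (assuming scikit-learn is installed, and reusing the train_set and test_set built above), NLTK's SklearnClassifier wrapper lets you train scikit-learn estimators such as a linear SVM or a decision tree on the same feature sets:

from nltk.classify import SklearnClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Wrap a linear SVM in NLTK's classifier interface and evaluate it
svm_classifier = SklearnClassifier(LinearSVC()).train(train_set)
print(nltk.classify.accuracy(svm_classifier, test_set))

# The same pattern works for a decision tree
tree_classifier = SklearnClassifier(DecisionTreeClassifier()).train(train_set)
print(nltk.classify.accuracy(tree_classifier, test_set))

Note that with only four example documents the accuracy figures here are not meaningful; in practice you would train on a larger labeled corpus.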