Text Preprocessing Methods for Deep Learning

Deep learning has attracted enormous interest, especially in natural language processing (NLP). Not long ago, Kaggle hosted an NLP competition called the Quora Insincere Questions Classification challenge. It is a text classification problem, and the competition, together with the valuable kernels shared by Kaggle experts, makes the topic much easier to understand.

Let's start by explaining the text classification problem at the heart of the competition.

Text classification is a common NLP task that maps a text sequence of indefinite length to a text category. What is text classification useful for? It can be used to:

Understand the sentiment of a review

Find toxic comments on platforms such as Facebook

Find insincere questions on Quora, the subject of a competition currently running on Kaggle

Find fake reviews on websites

Determine whether a text ad will be clicked

Now, these problems all have something in common. From a machine learning perspective they are essentially the same problem; only the target labels change, nothing else. That said, adding business knowledge can help make these models more robust, and that is exactly what we want to include when preprocessing data for text classification.

Although the preprocessing pipeline discussed in this article is geared toward deep learning, most of it also applies to conventional machine learning models.

First, before walking through all the steps, let's look at the flow of a deep learning pipeline for text data to get a better sense of the whole process.

We usually start by cleaning the text data and performing basic exploratory data analysis (EDA). Here we try to improve data quality by cleaning the data, and we also try to improve the quality of the Word2Vec embeddings by removing out-of-vocabulary (OOV) words. There is usually no fixed order between these first two steps, and we often go back and forth between them.

Next, we create a representation of the text that can be fed into a deep learning model. We then build the models and train them. Finally, we evaluate the models with appropriate metrics and get approval from leadership to deploy them. If these terms don't mean much yet, don't worry; the rest of the article tries to explain them through the process itself.

Before that, let's talk a little about word embeddings, which must be kept in mind when preprocessing data for a deep learning model.

Getting Started with Word2Vec Embeddings

We need a way to represent the words in our vocabulary. One option is to use one-hot encoded word vectors, but this is not a great choice. A major reason is that one-hot word vectors cannot accurately express similarity between different words, such as cosine similarity.

Given the structure of one-hot encoded vectors, the similarity between any two different words is always 0. Another reason is that these one-hot encoded vectors become very large as the vocabulary grows.
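A tiny illustration (a toy sketch, not part of the original competition code) makes the first point concrete: the cosine similarity between any two distinct one-hot vectors is always zero.

Python

import numpy as np

# Two words from a toy 4-word vocabulary, one-hot encoded
king = np.array([1, 0, 0, 0])
queen = np.array([0, 1, 0, 0])

# Cosine similarity = dot product / product of norms
cosine = king.dot(queen) / (np.linalg.norm(king) * np.linalg.norm(queen))
print(cosine)  # 0.0 -- one-hot vectors encode no notion of word similarity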

Word2Vec overcomes these difficulties by providing a fixed-length vector representation for each word and by capturing similarities and analogy relationships between different words.

Word2vec word vectors are learned in a way that allows them to capture different analogies. They make it possible to do algebra on words that was previously impossible. For example: what is king - man + woman? The answer comes out as queen.

Word2Vec vectors also help us find similar words. If we look for words similar to "good", we find awesome, great, and so on. It is this property of word2vec that makes it so valuable for text classification: a deep learning network can now understand that "good" and "great" are words with essentially similar meanings.
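Both properties can be seen with a quick sketch, assuming the pre-trained GoogleNews word2vec vectors used later in this article have been downloaded and loaded into a gensim KeyedVectors object (this snippet is an illustration, not part of the article's own pipeline):

Python

import gensim

# Load pre-trained word2vec vectors (path assumed; same file as in the spell-checker section below)
model = gensim.models.KeyedVectors.load_word2vec_format(
    "../input/embeddings/GoogleNews-vectors-negative300/GoogleNews-vectors-negative300.bin",
    binary=True)

# Analogy: king - man + woman ~ queen
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# Words most similar to "good"
print(model.most_similar("good", topn=5))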

So, simply put, word2vec creates vectors for words: a d-dimensional vector for every word in the dictionary. We usually use pre-trained word vectors that others have trained on large text corpora such as Wikipedia or Twitter and made available. The most commonly used pre-trained word vectors are GloVe and FastText, with 300-dimensional word vectors. This article uses GloVe.

Basic Preprocessing Techniques for Text Data

In most cases, the text data we observe is not perfectly clean. Data from different sources has different characteristics, which makes text preprocessing one of the most important steps in the classification pipeline.

For example, text data from Twitter is completely different from text data on Quora or on news/blog platforms, and therefore needs to be treated differently. Conveniently, the techniques discussed in this article are general enough for just about any kind of data you might encounter in NLP.

(1) Cleaning special characters and removing punctuation

The preprocessing pipeline depends heavily on the word2vec embeddings that will be used for the classification task. In principle, our preprocessing should match the preprocessing used before training the word embeddings. Since most embeddings do not provide vectors for punctuation and other special characters, the first thing to do is remove the special characters from the text data. These are some of the special characters present in the Quora insincere questions data; we use the replace function to get rid of them.

# Some preprocessing that will be used in all of the text classification methods shown here.

Python

puncts = [",", ".", "\"", ":", ")", "(", "-", "!", "?", "|", ";", "'", "$", "&", "/", "[", "]",
          ">", "%", "=", "#", "*", "+", "\\", "~", "@", "·", "_", "{", "}", "^", "`", "<", "→",
          "°", "←", "×", "§", "″", "′", "█", "à", "…", "“", "★", "”", "–", "●", "↑", "±", "═",
          "║", "―", "▓", "—", "─", ":", "⊕", "▼", "■", "’", "¨", "▄", "☆", "é", "¤", "▲", "è",
          "‘", "∞", ")", "↓", "、", "│", "(", ",", "╩", "╚", "╦", "╣", "╔", "╗", "≤", "√"]

Python

def clean_text(x):
    x = str(x)
    for punct in puncts:
        if punct in x:
            x = x.replace(punct, "")
    return x

This could also be done with the help of a simple regular expression, but the approach above is often preferred because it makes it clear exactly which characters are being removed from the data.
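For reference, a rough regex equivalent might look like the sketch below; it builds a single pattern from the same puncts list defined above and strips all of those characters in one pass.

Python

import re

# One compiled pattern covering every character in puncts
punct_pattern = re.compile("|".join(re.escape(p) for p in puncts))

def clean_text_re(x):
    return punct_pattern.sub("", str(x))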

(2) Cleaning numbers

Why replace numbers with #s? Because most embeddings preprocessed their text this way.

A small Python trick: the code below uses an if statement to check in advance whether the text contains any digit at all. An if check is always faster than running the re.sub commands, and most of the text contains no numbers.

Python

def clean_numbers(x):
    if bool(re.search(r"\d", x)):
        x = re.sub("[0-9]{5,}", "#####", x)
        x = re.sub("[0-9]{4}", "####", x)
        x = re.sub("[0-9]{3}", "###", x)
        x = re.sub("[0-9]{2}", "##", x)
    return x

(3) Fixing misspellings

It is always helpful to find the misspellings in the data. Since no embeddings exist for those words in word2vec, we should replace them with the correct spellings to get better embedding coverage.

The following code is an adaptation of Peter Norvig's spell checker. It uses the word ranking from word2vec to approximate word probabilities, because Google's word2vec apparently orders words by descending frequency in its training corpus. It can be used to find some of the misspelled words present in our data.

The snippet below is taken from CPMP's script in the Quora question-similarity challenge.

Python

import re
from collections import Counter
import gensim
import heapq
from operator import itemgetter
from multiprocessing import Pool

model = gensim.models.KeyedVectors.load_word2vec_format(
    "../input/embeddings/GoogleNews-vectors-negative300/GoogleNews-vectors-negative300.bin", binary=True)
words = model.index2word

w_rank = {}
for i, word in enumerate(words):
    w_rank[word] = i

WORDS = w_rank

def words(text):
    return re.findall(r"\w+", text.lower())

def P(word):
    "Probability of `word`."
    # use inverse of rank as proxy
    # returns 0 if the word isn't in the dictionary
    return -WORDS.get(word, 0)

def correction(word):
    "Most probable spelling correction for word."
    return max(candidates(word), key=P)

def candidates(word):
    "Generate possible spelling corrections for word."
    return (known([word]) or known(edits1(word)) or known(edits2(word)) or [word])

def known(words):
    "The subset of `words` that appear in the dictionary of WORDS."
    return set(w for w in words if w in WORDS)

def edits1(word):
    "All edits that are one edit away from `word`."
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def edits2(word):
    "All edits that are two edits away from `word`."
    return (e2 for e1 in edits1(word) for e2 in edits1(e1))

def build_vocab(texts):
    sentences = texts.apply(lambda x: x.split()).values
    vocab = {}
    for sentence in sentences:
        for word in sentence:
            try:
                vocab[word] += 1
            except KeyError:
                vocab[word] = 1
    return vocab

vocab = build_vocab(train.question_text)
top_90k_words = dict(heapq.nlargest(90000, vocab.items(), key=itemgetter(1)))

pool = Pool(4)
corrected_words = pool.map(correction, list(top_90k_words.keys()))
for word, corrected_word in zip(top_90k_words, corrected_words):
    if word != corrected_word:
        print(word, ":", corrected_word)

Once the misspelled words have been found, the next step is to replace them using a misspelling mapping and a regex function, as shown below.

Python

mispell_dict = {"colour": "color", "centre": "center", "favourite": "favorite", "travelling": "traveling",
                "counselling": "counseling", "theatre": "theater", "cancelled": "canceled", "labour": "labor",
                "organisation": "organization", "wwii": "world war 2", "citicise": "criticize", "youtu ": "youtube ",
                "Qoura": "Quora", "sallary": "salary", "Whta": "What", "narcisist": "narcissist", "howdo": "how do",
                "whatare": "what are", "howcan": "how can", "howmuch": "how much", "howmany": "how many",
                "whydo": "why do", "doI": "do I", "theBest": "the best", "howdoes": "how does",
                "mastrubation": "masturbation", "mastrubate": "masturbate", "mastrubating": "masturbating",
                "pennis": "penis", "Etherium": "Ethereum", "narcissit": "narcissist", "bigdata": "big data",
                "2k17": "2017", "2k18": "2018", "qouta": "quota", "exboyfriend": "ex boyfriend",
                "airhostess": "air hostess", "whst": "what", "watsapp": "whatsapp",
                "demonitisation": "demonetization", "demonitization": "demonetization",
                "demonetisation": "demonetization"}

Python

def _get_mispell(mispell_dict):
    mispell_re = re.compile("(%s)" % "|".join(mispell_dict.keys()))
    return mispell_dict, mispell_re

mispellings, mispellings_re = _get_mispell(mispell_dict)

def replace_typical_misspell(text):
    def replace(match):
        return mispellings[match.group(0)]
    return mispellings_re.sub(replace, text)

# Usage
replace_typical_misspell("Whta is demonitisation")

(4) Removing contractions

Contractions are words written with an apostrophe, such as "ain't" or "aren't". Because we want to standardize the text, it makes sense to expand these contractions. Below, this is done with a contraction mapping and a regex function.

Python

contraction_dict = {"ain't": "is not", "aren't": "are not", "can't": "cannot", "'cause": "because",
    "could've": "could have", "couldn't": "could not", "didn't": "did not", "doesn't": "does not",
    "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not",
    "he'd": "he would", "he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you",
    "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will",
    "I'll've": "I will have", "I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have",
    "i'll": "i will", "i'll've": "i will have", "i'm": "i am", "i've": "i have", "isn't": "is not",
    "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have",
    "it's": "it is", "let's": "let us", "ma'am": "madam", "mayn't": "may not", "might've": "might have",
    "mightn't": "might not", "mightn't've": "might not have", "must've": "must have", "mustn't": "must not",
    "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have",
    "o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have",
    "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would",
    "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is",
    "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have",
    "so've": "so have", "so's": "so as", "this's": "this is", "that'd": "that would",
    "that'd've": "that would have", "that's": "that is", "there'd": "there would",
    "there'd've": "there would have", "there's": "there is", "here's": "here is", "they'd": "they would",
    "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have",
    "they're": "they are", "they've": "they have", "to've": "to have", "wasn't": "was not",
    "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have",
    "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will",
    "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have",
    "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is",
    "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is",
    "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have",
    "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not",
    "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would",
    "y'all'd've": "you all would have", "y'all're": "you all are", "y'all've": "you all have",
    "you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have",
    "you're": "you are", "you've": "you have"}

Python

def _get_contractions(contraction_dict):
    contraction_re = re.compile("(%s)" % "|".join(contraction_dict.keys()))
    return contraction_dict, contraction_re

contractions, contractions_re = _get_contractions(contraction_dict)

def replace_contractions(text):
    def replace(match):
        return contractions[match.group(0)]
    return contractions_re.sub(replace, text)

# Usage
replace_contractions("this's a text with contraction")

Beyond the techniques above, there are other text preprocessing techniques such as stemming, lemmatization, and stopword removal. Since those techniques are not used together with deep learning NLP models, they are not discussed here.

Representation: Sequence Creation

One of the reasons deep learning has become the go-to choice for NLP is that we don't really have to hand-engineer features from the text data. Deep learning algorithms take a sequence of text as input and learn its structure the way humans do. Since machines cannot understand words, they expect data in numerical form, so we want to represent the text data as a sequence of numbers.

To understand how this is done, some familiarity with the Keras Tokenizer is needed. Any other tokenizer can be used, but the Keras tokenizer is a popular choice.

(1) Tokenizer

Simply put, a tokenizer is a utility function that splits a sentence into words. keras.preprocessing.text.Tokenizer tokenizes (splits) the text into tokens (words), keeping only the most frequent words in the text corpus.

Python

# Signature:
Tokenizer(num_words=None, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n',
          lower=True, split=" ", char_level=False, oov_token=None,
          document_count=0, **kwargs)

The num_words parameter keeps only a pre-specified number of words in the text. This is helpful because we don't want the model to pick up a lot of noise from words that occur very rarely. In real-world data, most of the words that num_words filters out are usually misspellings. By default, the tokenizer also filters out some unwanted tokens and converts the text to lowercase.

Once fit on the data, the tokenizer also keeps a word index (a dictionary of words that can be used to assign a unique number to each word), which can be accessed via:

tokenizer.word_index

The words in the index dictionary are ordered by frequency.
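As a toy illustration (not from the original kernel), fitting a tokenizer on two short sentences shows this frequency ordering; the indices in the comment are what one would expect under that assumption.

Python

from keras.preprocessing.text import Tokenizer

toy_tokenizer = Tokenizer(num_words=10)
toy_tokenizer.fit_on_texts(["the cat sat", "the cat ran"])
print(toy_tokenizer.word_index)
# {'the': 1, 'cat': 2, 'sat': 3, 'ran': 4} -- the most frequent words get the smallest indices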

So the full code for using the tokenizer is as follows:

Python

from keras.preprocessing.text import Tokenizer

# Tokenize the sentences
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(train_X) + list(test_X))
train_X = tokenizer.texts_to_sequences(train_X)
test_X = tokenizer.texts_to_sequences(test_X)

where train_X and test_X are lists of documents in the corpus.

(2) Sequence preprocessing

Models normally expect every sequence (every training example) to have the same length (the same number of words/tokens). This can be controlled with the maxlen parameter of pad_sequences.

For example:

Python

from keras.preprocessing.sequence import pad_sequences

train_X = pad_sequences(train_X, maxlen=maxlen)
test_X = pad_sequences(test_X, maxlen=maxlen)

The training data now contains lists of numbers, each list of the same length. We also have the word_index, a dictionary of the most frequent words in the text corpus.
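Continuing the earlier toy example (a sketch, with maxlen assumed to be 4), this is what the data looks like after tokenization and padding:

Python

from keras.preprocessing.sequence import pad_sequences

seqs = toy_tokenizer.texts_to_sequences(["the cat sat", "ran"])
print(pad_sequences(seqs, maxlen=4))
# [[0 1 2 3]
#  [0 0 0 4]]  -- every row has the same length, zero-padded on the left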

(3) Embedding enrichment

As mentioned above, GloVe embeddings will be used to illustrate enrichment. The GloVe pre-trained vectors were trained on a Wikipedia corpus.

This means some of the words that appear in our data may not be present in the embeddings. How do we handle that? Let's first load the GloVe embeddings.

Python

import numpy as np

def load_glove_index():
    EMBEDDING_FILE = "../input/embeddings/glove.840B.300d/glove.840B.300d.txt"
    def get_coefs(word, *arr):
        return word, np.asarray(arr, dtype="float32")[:300]
    embeddings_index = dict(get_coefs(*o.split(" ")) for o in open(EMBEDDING_FILE))
    return embeddings_index

glove_embedding_index = load_glove_index()

Make sure EMBEDDING_FILE points to the folder where these GloVe vectors were downloaded.

What does glove_embedding_index contain? It is just a dictionary in which the key is a word and the value is the word vector, an np.array of length 300; the dictionary holds roughly two million words. Since we only need embeddings for the words in word_index, we will create a matrix that contains just the required embeddings.

Python

from tqdm import tqdm

def create_glove(word_index, embeddings_index):
    emb_mean, emb_std = -0.005838499, 0.48782197
    all_embs = np.stack(embeddings_index.values())
    embed_size = all_embs.shape[1]
    nb_words = min(max_features, len(word_index))
    # Initialize the matrix with random values drawn from the embedding distribution
    embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
    count_found = nb_words
    for word, i in tqdm(word_index.items()):
        if i >= max_features:
            continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
        else:
            count_found -= 1
    print("Got embedding for ", count_found, " words.")
    return embedding_matrix

The code above works fine, but is there a way to take advantage of the preprocessing GloVe itself performed?

Yes. When preprocessing for GloVe, its creators did not lowercase the words. That means it contains multiple variants of words such as "USA", "usa", and "Usa". It also means that in some cases, while a word like "Word" is present, its lowercase analogue "word" is not.

This situation can be handled with the code below.

Python

def create_glove(word_index, embeddings_index):
    emb_mean, emb_std = -0.005838499, 0.48782197
    all_embs = np.stack(embeddings_index.values())
    embed_size = all_embs.shape[1]
    nb_words = min(max_features, len(word_index))
    embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size))
    count_found = nb_words
    for word, i in tqdm(word_index.items()):
        if i >= max_features:
            continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
        else:
            if word.islower():
                # try to get the embedding of word in titlecase if lowercase is not present
                embedding_vector = embeddings_index.get(word.capitalize())
                if embedding_vector is not None:
                    embedding_matrix[i] = embedding_vector
                else:
                    count_found -= 1
            else:
                count_found -= 1
    print("Got embedding for ", count_found, " words.")
    return embedding_matrix

The above is just one example of how knowledge of the embeddings can be used to get better coverage. Sometimes, depending on the problem, one can also gain value by adding extra information to the embeddings using some domain knowledge and NLP skills.

For example, external knowledge can be added to the embeddings themselves by appending each word's polarity and subjectivity from the TextBlob package in Python.

Python

from textblob import TextBlob

word_sent = TextBlob("good").sentiment
print(word_sent.polarity, word_sent.subjectivity)
# 0.7 0.6

TextBlob can provide the polarity and subjectivity of any word, so we can try to add this extra information to the embeddings.

Python

def create_glove(word_index, embeddings_index):
    emb_mean, emb_std = -0.005838499, 0.48782197
    all_embs = np.stack(embeddings_index.values())
    embed_size = all_embs.shape[1]
    nb_words = min(max_features, len(word_index))
    # two extra dimensions for polarity and subjectivity
    embedding_matrix = np.random.normal(emb_mean, emb_std, (nb_words, embed_size + 2))
    count_found = nb_words
    for word, i in tqdm(word_index.items()):
        if i >= max_features:
            continue
        embedding_vector = embeddings_index.get(word)
        word_sent = TextBlob(word).sentiment
        # Extra information we are passing to our embeddings
        extra_embed = [word_sent.polarity, word_sent.subjectivity]
        if embedding_vector is not None:
            embedding_matrix[i] = np.append(embedding_vector, extra_embed)
        else:
            if word.islower():
                embedding_vector = embeddings_index.get(word.capitalize())
                if embedding_vector is not None:
                    embedding_matrix[i] = np.append(embedding_vector, extra_embed)
                else:
                    embedding_matrix[i, 300:] = extra_embed
                    count_found -= 1
            else:
                embedding_matrix[i, 300:] = extra_embed
                count_found -= 1
    print("Got embedding for ", count_found, " words.")
    return embedding_matrix

Engineering the embeddings is an important part of getting better performance from a deep learning model in the later stages. This part of the code is usually revisited multiple times over the course of a project while trying to improve the model further. A lot of creativity can be shown here, improving coverage of the word_index and including extra features in the embeddings.

More Engineered Features

[Figure: text preprocessing for the embedding matrix]

Sentence-specific features, such as sentence length or the number of unique words, can always be added as another input layer that gives the deep neural network extra information (see the sketch after the feature code below).

For example, the following extra features were created as part of the feature-engineering pipeline for the Quora Insincerity classification challenge.

Python

def add_features(df):
    # progress_apply requires tqdm.pandas() to have been called beforehand
    df["question_text"] = df["question_text"].progress_apply(lambda x: str(x))
    df["lower_question_text"] = df["question_text"].apply(lambda x: x.lower())
    df["total_length"] = df["question_text"].progress_apply(len)
    df["capitals"] = df["question_text"].progress_apply(
        lambda comment: sum(1 for c in comment if c.isupper()))
    df["caps_vs_length"] = df.progress_apply(
        lambda row: float(row["capitals"]) / float(row["total_length"]), axis=1)
    df["num_words"] = df.question_text.str.count(r"\S+")
    df["num_unique_words"] = df["question_text"].progress_apply(
        lambda comment: len(set(w for w in comment.split())))
    df["words_vs_unique"] = df["num_unique_words"] / df["num_words"]
    return df
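To make the "another input layer" idea concrete, here is a hedged sketch (not the model actually used in the article) of how such dense features could be fed in alongside the text branch with the Keras functional API; maxlen, max_features, and the choice of four features are assumptions for illustration.

Python

from keras.layers import Input, Embedding, LSTM, Dense, concatenate
from keras.models import Model

# Text branch: padded word-index sequences -> embedding -> LSTM
text_input = Input(shape=(maxlen,))
x = Embedding(max_features, 300)(text_input)
x = LSTM(64)(x)

# Feature branch: e.g. total_length, caps_vs_length, num_words, words_vs_unique
feats_input = Input(shape=(4,))

# Concatenate both branches and classify
merged = concatenate([x, feats_input])
out = Dense(1, activation="sigmoid")(merged)

model = Model(inputs=[text_input, feats_input], outputs=out)
model.compile(loss="binary_crossentropy", optimizer="adam")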

Conclusion

NLP remains a very interesting problem space in deep learning, so I hope more people run plenty of experiments to see what works and what doesn't. This article has tried to provide a useful perspective on the preprocessing steps for deep learning neural networks on any NLP problem.
