Concepts and classification of Chinese word segmentation
Introduction to common segmentation techniques (rule-based, statistical, and hybrid)
Jieba, an open-source Chinese segmentation tool
Hands-on segmentation: high-frequency word extraction
Rule-based segmentation
The earliest approach: a hand-built dictionary is matched against the text according to some scanning scheme. Simple and efficient, but it handles new words poorly;
Statistical segmentation
Copes well with new-word discovery, but depends heavily on corpus quality;
Hybrid segmentation
A combination of rule-based and statistical segmentation;
Definition
A mechanical segmentation method: a dictionary is maintained, and when a sentence is segmented, each candidate string in the sentence is matched against the word list one by one; matched strings are cut off as words, unmatched ones are not;
Classification
Forward maximum matching (MM)
Basic idea
Assuming the longest word in the segmentation dictionary has $i$ characters, take the first $i$ characters of the current string of the document being processed as the matching field and look it up in the dictionary;
Algorithm description
If the field is found, cut it off as a word; otherwise drop its last character and match again, until either a word matches or only a single character remains (which is then cut off on its own); continue from the remainder of the string until the document is fully segmented;
Reverse maximum matching (RMM)
Basic principle
Scan from the end of the document being processed, each time taking the last $i$ characters as the matching field; when matching fails, drop the first character of the matching field and match again;
Bi-directional maximum matching
Basic principle
Compare the results of forward and reverse maximum matching, then, following the maximum-matching principle, take the segmentation with fewer words as the final result (a runnable sketch follows the RMM code below);
Code
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @version : 1.0
# @Time : 2019-8-25 15:27
# @Author : cunyu
# @Email : cunyu1024@foxmail.com
# @Site : https://cunyu1943.github.io
# @File : mm.py
# @Software: PyCharm
# @Desc : Forward maximum matching segmentation

train_data = './data/train.txt'  # training corpus
test_data = './data/test.txt'  # test corpus
result_data = './data/test_sc_zhengxiang.txt'  # output file


# Read the training corpus and return the word dictionary
def get_dic(train_data):
    with open(train_data, 'r', encoding='utf-8') as f:
        file_content = f.read().split()
    return set(file_content)  # a set makes the "in dic" lookups fast


def MM(test_data, result_data, dic):
    max_length = 5  # longest word length to try
    h = open(result_data, 'w', encoding='utf-8')
    with open(test_data, 'r', encoding='utf-8') as f:
        lines = f.readlines()
    for line in lines:  # forward maximum matching, line by line
        my_list = []
        while len(line) > 0:
            tryWord = line[0:max_length]
            while tryWord not in dic:
                if len(tryWord) == 1:
                    break
                tryWord = tryWord[:-1]  # shrink the field from the right
            my_list.append(tryWord)
            line = line[len(tryWord):]
        for t in my_list:  # write the segmentation result
            if t == '\n':
                h.write('\n')
            else:
                h.write(t + " ")
    h.close()


if __name__ == '__main__':
    print('Loading dictionary')
    dic = get_dic(train_data)
    print('Matching')
    MM(test_data, result_data, dic)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @version : 1.0
# @Time : 2019-8-25 15:36
# @Author : cunyu
# @Email : cunyu1024@foxmail.com
# @Site : https://cunyu1943.github.io
# @File : rmm.py
# @Software: PyCharm
# @Desc : Reverse maximum matching segmentation

train_data = './data/train.txt'
test_data = './data/test.txt'
result_data = './data/test_sc.txt'


# Read the training corpus and return the word dictionary
def get_dic(train_data):
    with open(train_data, 'r', encoding='utf-8') as f:
        file_content = f.read().split()
    return set(file_content)


def RMM(test_data, result_data, dic):
    max_length = 5  # longest word length to try
    h = open(result_data, 'w', encoding='utf-8')
    with open(test_data, 'r', encoding='utf-8') as f:
        lines = f.readlines()
    for line in lines:  # reverse maximum matching, line by line
        my_stack = []
        while len(line) > 0:
            tryWord = line[-max_length:]
            while tryWord not in dic:
                if len(tryWord) == 1:
                    break
                tryWord = tryWord[1:]  # shrink the field from the left
            my_stack.append(tryWord)
            line = line[:len(line) - len(tryWord)]
        while len(my_stack):  # pop to restore left-to-right order
            t = my_stack.pop()
            if t == '\n':
                h.write('\n')
            else:
                h.write(t + " ")
    h.close()


if __name__ == '__main__':
    print('Loading dictionary')
    dic = get_dic(train_data)
    print('Matching...')
    RMM(test_data, result_data, dic)
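As referenced above, here is a minimal self-contained sketch of bi-directional maximum matching: mm_words and rmm_words are list-returning rewrites of the two scans above, and the dictionary and sentence are toy assumptions for illustration.

def mm_words(line, dic, max_length=5):
    # Forward scan: repeatedly cut the longest dictionary word off the front
    words = []
    while line:
        w = line[:max_length]
        while w not in dic and len(w) > 1:
            w = w[:-1]  # shrink from the right
        words.append(w)
        line = line[len(w):]
    return words

def rmm_words(line, dic, max_length=5):
    # Reverse scan: repeatedly cut the longest dictionary word off the end
    words = []
    while line:
        w = line[-max_length:]
        while w not in dic and len(w) > 1:
            w = w[1:]  # shrink from the left
        words.append(w)
        line = line[:len(line) - len(w)]
    return words[::-1]

def bimm_words(line, dic):
    mm, rmm = mm_words(line, dic), rmm_words(line, dic)
    # Maximum-matching principle: keep the segmentation with fewer words;
    # prefer RMM on a tie, a common convention for Chinese
    return mm if len(mm) < len(rmm) else rmm

dic = {'研究', '研究生', '生命', '命', '的', '起源'}
print(bimm_words('研究生命的起源', dic))  # ['研究', '生命', '的', '起源']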
Main operations
Statistical segmentation first builds a statistical language model, then splits each sentence into candidate words and scores the alternatives, keeping the most probable segmentation;
n-gram conditional probability
$$P(w_i \mid w_{i-(n-1)},\ldots,w_{i-1}) = \frac{count(w_{i-(n-1)},\ldots,w_{i-1},w_i)}{count(w_{i-(n-1)},\ldots,w_{i-1})}$$
where $count(w_{i-(n-1)},\ldots,w_{i-1})$ is the total number of times the word sequence $w_{i-(n-1)},\ldots,w_{i-1}$ occurs in the corpus;
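A minimal sketch of the bigram ($n=2$) case of this formula; the tiny pre-segmented corpus is an assumption for illustration:

from collections import Counter

# Toy pre-segmented corpus (each sentence is a list of words)
corpus = [['我', '爱', '自然', '语言'], ['我', '爱', '分词']]

unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))

def bigram_prob(prev, word):
    # count(w_{i-1}, w_i) / count(w_{i-1})
    return bigrams[(prev, word)] / unigrams[prev]

print(bigram_prob('我', '爱'))    # 2 / 2 = 1.0
print(bigram_prob('爱', '分词'))  # 1 / 2 = 0.5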
The HMM model
Implements segmentation as a sequence-labeling task over the characters of a string. Basic idea: every character occupies a fixed position (a word-formation tag) in the word it belongs to, with at most four tags per character: B (word beginning), M (word middle), E (word end), and S (single-character word);
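For illustration (the helper name is hypothetical), mapping a pre-segmented sentence to its tag sequence:

def bmes_tags(word):
    # A single-character word is tagged S; longer words get B, M..., E
    if len(word) == 1:
        return ['S']
    return ['B'] + ['M'] * (len(word) - 2) + ['E']

# The segmentation 中文/分词 corresponds to the tag sequence B E B E
print(bmes_tags('中文') + bmes_tags('分词'))  # ['B', 'E', 'B', 'E']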
Mathematical formulation
Let $\lambda=\lambda_1\lambda_2\ldots\lambda_n$ denote the input sentence, where $n$ is the sentence length and each $\lambda_i$ is a character, and let $o=o_1o_2\ldots o_n$ denote the output tag sequence. The ideal output is then:
$$\max = \max P(o_1o_2\ldots o_n \mid \lambda_1\lambda_2\ldots\lambda_n) = P(o_1\mid\lambda_1)\,P(o_2\mid\lambda_2)\ldots P(o_n\mid\lambda_n)$$
where the factorization assumes each tag depends only on its own character.
The standard way to solve this is the Viterbi algorithm, a dynamic-programming method whose core idea is: if the final optimal path passes through a node $o_i$, then the partial path from the start node to $o_{i-1}$ must itself be optimal, because each node $o_i$ only affects the two adjacent factors $P(o_{i-1}\mid o_i)$ and $P(o_i\mid o_{i+1})$;
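Written as a recurrence (standard Viterbi, in the notation of the implementation below, where $\pi$, $a$, and $b$ correspond to Pi_dic, A_dic, and B_dic):
$$V_1(y)=\pi_y\, b_y(\lambda_1),\qquad V_t(y)=\max_{y'} V_{t-1}(y')\, a_{y'y}\, b_y(\lambda_t)$$
The best tag sequence is recovered by taking the maximizing state at $t=n$ and backtracking.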
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @version : 1.0
# @Time : 2019-8-27 15:42
# @Author : cunyu
# @Email : cunyu1024@foxmail.com
# @Site : https://cunyu1943.github.io
# @File : HMM_model.py
# @Software: PyCharm
# @Desc : HMM-based segmentation

import os
import pickle


class HMM(object):
    def __init__(self):
        # Path for persisting intermediate results, so the model does not
        # have to be retrained on every run
        self.model_file = './data/models/hmm_model.pkl'
        # Set of hidden states
        self.state_list = ['B', 'M', 'E', 'S']
        # Flag marking whether the parameters still need to be loaded
        self.load_para = False

    # Load the precomputed parameters; when retraining, reset them instead
    def try_load_model(self, trained):
        if trained:
            with open(self.model_file, 'rb') as f:
                self.A_dic = pickle.load(f)
                self.B_dic = pickle.load(f)
                self.Pi_dic = pickle.load(f)
                self.load_para = True
        else:
            # Transition probabilities (state -> state)
            self.A_dic = {}
            # Emission probabilities (state -> character)
            self.B_dic = {}
            # Initial state probabilities
            self.Pi_dic = {}
            self.load_para = False

    # Estimate the transition, emission, and initial probabilities
    def train(self, path):
        # Reset the probability matrices
        self.try_load_model(False)
        # Occurrence count of each state, used for normalization
        Count_dic = {}

        # Initialize the parameters
        def init_parameters():
            for state in self.state_list:
                self.A_dic[state] = {s: 0.0 for s in self.state_list}
                self.Pi_dic[state] = 0.0
                self.B_dic[state] = {}
                Count_dic[state] = 0

        # Map one word to its B/M/E/S tag sequence
        def makeLabel(text):
            out_text = []
            if len(text) == 1:
                out_text.append('S')
            else:
                out_text += ['B'] + ['M'] * (len(text) - 2) + ['E']
            return out_text

        init_parameters()
        line_num = 0  # number of non-empty lines, for normalizing Pi_dic
        # Observation set: characters and punctuation
        words = set()
        with open(path, encoding='utf8') as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                line_num += 1
                word_list = [i for i in line if i != ' ']
                words |= set(word_list)  # update the character set
                linelist = line.split()
                line_state = []
                for w in linelist:
                    line_state.extend(makeLabel(w))
                assert len(word_list) == len(line_state)
                for k, v in enumerate(line_state):
                    Count_dic[v] += 1
                    if k == 0:
                        # State of the first character of each line,
                        # used for the initial state probabilities
                        self.Pi_dic[v] += 1
                    else:
                        # Transition counts
                        self.A_dic[line_state[k - 1]][v] += 1
                        # Emission counts
                        self.B_dic[line_state[k]][word_list[k]] = \
                            self.B_dic[line_state[k]].get(word_list[k], 0) + 1.0
        self.Pi_dic = {k: v * 1.0 / line_num for k, v in self.Pi_dic.items()}
        self.A_dic = {k: {k1: v1 / Count_dic[k] for k1, v1 in v.items()}
                      for k, v in self.A_dic.items()}
        # Add-one smoothing for the emission probabilities
        self.B_dic = {k: {k1: (v1 + 1) / Count_dic[k] for k1, v1 in v.items()}
                      for k, v in self.B_dic.items()}
        # Serialize the parameters
        with open(self.model_file, 'wb') as f:
            pickle.dump(self.A_dic, f)
            pickle.dump(self.B_dic, f)
            pickle.dump(self.Pi_dic, f)
        return self

    def viterbi(self, text, states, start_p, trans_p, emit_p):
        V = [{}]
        path = {}
        for y in states:
            V[0][y] = start_p[y] * emit_p[y].get(text[0], 0)
            path[y] = [y]
        for t in range(1, len(text)):
            V.append({})
            newpath = {}
            # Check whether the emission matrix has seen this character
            neverSeen = text[t] not in emit_p['S'].keys() and \
                text[t] not in emit_p['M'].keys() and \
                text[t] not in emit_p['E'].keys() and \
                text[t] not in emit_p['B'].keys()
            for y in states:
                # Treat unseen characters as single-character words
                emitP = emit_p[y].get(text[t], 0) if not neverSeen else 1.0
                (prob, state) = max(
                    [(V[t - 1][y0] * trans_p[y0].get(y, 0) * emitP, y0)
                     for y0 in states if V[t - 1][y0] > 0])
                V[t][y] = prob
                newpath[y] = path[state] + [y]
            path = newpath
        if emit_p['M'].get(text[-1], 0) > emit_p['S'].get(text[-1], 0):
            (prob, state) = max([(V[len(text) - 1][y], y) for y in ('E', 'M')])
        else:
            (prob, state) = max([(V[len(text) - 1][y], y) for y in states])
        return (prob, path[state])

    def cut(self, text):
        if not self.load_para:
            self.try_load_model(os.path.exists(self.model_file))
        prob, pos_list = self.viterbi(text, self.state_list, self.Pi_dic, self.A_dic, self.B_dic)
        begin, next_idx = 0, 0
        for i, char in enumerate(text):
            pos = pos_list[i]
            if pos == 'B':
                begin = i
            elif pos == 'E':
                yield text[begin: i + 1]
                next_idx = i + 1
            elif pos == 'S':
                yield char
                next_idx = i + 1
        if next_idx < len(text):
            yield text[next_idx:]


if __name__ == '__main__':
    hmm = HMM()
    hmm.train('./data/trainCorpus.txt_utf8')
    print('Training finished')
    text = input('Enter a sentence: ')
    result = hmm.cut(text)
    print(str(list(result)))
Other statistical segmentation algorithms
Advantages of using Jieba
Official repository: https://github.com/fxsjy/jieba
Overview
Jieba combines the rule-based and statistical approaches. It first scans the sentence against a prefix dictionary (a dictionary in which every word is stored together with all of its prefixes), which yields a layered containment structure and lets it quickly build a directed acyclic graph (DAG) containing every possible segmentation: "directed" because every path runs from the first character to the last, and "acyclic" because the nodes form no cycles. Using probabilities learned from annotated corpora, dynamic programming then finds the maximum-probability path through the graph, which becomes the final segmentation. For out-of-vocabulary (OOV) words, Jieba falls back on a character-based HMM and decodes it with the Viterbi algorithm.
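To make the word-graph scan concrete, here is a minimal sketch (not jieba's actual implementation; the toy dictionary is an assumption) of building the DAG of all candidate words: dag[i] lists every index j such that sentence[i:j+1] is a dictionary word.

def build_dag(sentence, dictionary):
    dag = {}
    n = len(sentence)
    for i in range(n):
        ends = [j for j in range(i, n) if sentence[i:j + 1] in dictionary]
        dag[i] = ends or [i]  # fall back to the single character
    return dag

word_dict = {'中', '中文', '文', '分', '分词', '词'}  # hypothetical toy dictionary
print(build_dag('中文分词', word_dict))
# {0: [0, 1], 1: [1], 2: [2, 3], 3: [3]}

Every path from node 0 to the last character is one candidate segmentation; the dynamic-programming step picks the most probable such path.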
Three segmentation modes: full mode, precise mode (the default), and search-engine mode, compared in the script below
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @version : 1.0
# @Time : 2019-8-26 21:33
# @Author : cunyu
# @Email : cunyu1024@foxmail.com
# @Site : https://cunyu1943.github.io
# @File : jieba_demo.py
# @Software: PyCharm
# @Desc : Basic jieba usage

# Comparing the three segmentation modes
import jieba

sent = '中文分词是文本处理中不可或缺的一步!'

seg_list = jieba.cut(sent, cut_all=True)
print('Full mode: ', '/'.join(seg_list))

seg_list = jieba.cut(sent, cut_all=False)
print('Precise mode: ', '/'.join(seg_list))

seg_list = jieba.cut_for_search(sent)
print('Search-engine mode: ', '/'.join(seg_list))
High-frequency word extraction
High-frequency words are the non-trivial words that appear most often in a document, and to some degree they indicate what the document is about. Extraction mainly has to deal with the following interference: punctuation marks, which carry no topical value, and stopwords such as 的, 是, and 了, which are frequent but meaningless; both must be filtered out:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @version : 1.0
# @Time : 2019-8-27 9:16
# @Author : cunyu
# @Email : cunyu1024@foxmail.com
# @Site : https://cunyu1943.github.io
# @File : keyWords.py
# @Software: PyCharm
# @Desc : High-frequency word extraction

import glob
import random
import jieba


# Read one file and return its text as a single string
def get_content(path):
    with open(path, 'r', encoding='gbk', errors='ignore') as f:
        content = ''
        for l in f:
            content += l.strip()
        return content


# Count word frequencies and return the topK most frequent words
def get_TF(words, topK=10):
    tf_dic = {}
    for w in words:
        tf_dic[w] = tf_dic.get(w, 0) + 1
    return sorted(tf_dic.items(), key=lambda x: x[1], reverse=True)[:topK]


# Load the stopword list
def stop_words(path):
    with open(path, encoding='utf-8') as f:
        return [l.strip() for l in f]


# Segment the samples and extract their high-frequency words
def main(path, stop_words_path):
    files = glob.glob(path)
    corpus = [get_content(x) for x in files[:5]]
    stopwords = stop_words(stop_words_path)  # load the stopword list once
    # Draw a random sample count (at most the corpus size, so indexing
    # stays in range) and process that many samples
    sample_num = random.randint(1, len(corpus))
    for sample_inx in range(sample_num):
        split_words = [x for x in jieba.cut(corpus[sample_inx]) if x not in stopwords]
        print('Sample: ' + corpus[sample_inx])
        print('Segmented sample: ' + '/ '.join(split_words))
        print('Top 10 words of the sample: ' + str(get_TF(split_words)))


if __name__ == '__main__':
    path = './data/news/C000010/*.txt'
    stop_words_path = './data/stop_words.utf8'
    main(path, stop_words_path)
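get_TF builds and sorts a plain dict by hand; the standard library's collections.Counter expresses the same computation more compactly. A minimal equivalent sketch:

from collections import Counter

def get_TF_counter(words, topK=10):
    # most_common returns the topK (word, count) pairs, just like get_TF
    return Counter(words).most_common(topK)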