I. Overview
Stanford CoreNLP is a powerful natural language processing (NLP) toolkit developed by Stanford University. It supports text processing in several languages, including Chinese. This article walks through using Stanford CoreNLP for Chinese word segmentation, part-of-speech tagging, named entity recognition, and syntactic parsing, with complete code examples and a configuration file.
II. Environment Setup
1. Maven dependencies
Add the following dependencies to the project's pom.xml:
<dependencies>
    <!-- Stanford CoreNLP -->
    <dependency>
        <groupId>edu.stanford.nlp</groupId>
        <artifactId>stanford-corenlp</artifactId>
        <version>${corenlp.version}</version>
    </dependency>
    <!-- Stanford CoreNLP Models -->
    <dependency>
        <groupId>edu.stanford.nlp</groupId>
        <artifactId>stanford-corenlp</artifactId>
        <version>${corenlp.version}</version>
        <classifier>models</classifier>
    </dependency>
    <!-- Chinese Models -->
    <dependency>
        <groupId>edu.stanford.nlp</groupId>
        <artifactId>stanford-corenlp</artifactId>
        <version>${corenlp.version}</version>
        <classifier>models-chinese</classifier>
    </dependency>
</dependencies>
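Note that `${corenlp.version}` is a Maven property placeholder; it assumes you declare the version once in a `<properties>` block. The version number below is illustrative only, so substitute the release you actually target:

```xml
<properties>
    <corenlp.version>4.5.5</corenlp.version>
</properties>
```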
2. Configuration file
Save the following as CoreNLP-chinese.properties and place it under the src/main/resources directory:
# Pipeline options - lemma is no-op for Chinese but currently needed because coref demands it (bad old requirements system)
annotators = tokenize, ssplit, pos, lemma, ner, parse, coref

# segment
tokenize.language = zh
segment.model = edu/stanford/nlp/models/segmenter/chinese/ctb.gz
segment.sighanCorporaDict = edu/stanford/nlp/models/segmenter/chinese
segment.serDictionary = edu/stanford/nlp/models/segmenter/chinese/dict-chris6.ser.gz
segment.sighanPostProcessing = true

# sentence split
ssplit.boundaryTokenRegex = [.\u3002]|[!?\uFF01\uFF1F]+

# pos
pos.model = edu/stanford/nlp/models/pos-tagger/chinese-distsim.tagger

# ner
ner.language = chinese
ner.model = edu/stanford/nlp/models/ner/chinese.misc.distsim.crf.ser.gz
ner.applyNumericClassifiers = true
ner.useSUTime = false

# regexner
ner.fine.regexner.mapping = edu/stanford/nlp/models/kbp/chinese/gazetteers/cn_regexner_mapping.tab
ner.fine.regexner.noDefaultOverwriteLabels = CITY,COUNTRY,STATE_OR_PROVINCE

# parse
parse.model = edu/stanford/nlp/models/srparser/chineseSR.ser.gz

# depparse
depparse.model = edu/stanford/nlp/models/parser/nndep/UD_Chinese.gz
depparse.language = chinese

# coref
coref.sieves = ChineseHeadMatch, ExactStringMatch, PreciseConstructs, StrictHeadMatch1, StrictHeadMatch2, StrictHeadMatch3, StrictHeadMatch4, PronounMatch
coref.input.type = raw
coref.postprocessing = true
coref.calculateFeatureImportance = false
coref.useConstituencyTree = true
coref.useSemantics = false
coref.algorithm = hybrid
coref.path.word2vec =
coref.language = zh
coref.defaultPronounAgreement = true
coref.zh.dict = edu/stanford/nlp/models/dcoref/zh-attributes.txt.gz
coref.print.md.log = false
coref.md.type = RULE
coref.md.liberalChineseMD = false

# kbp
kbp.semgrex = edu/stanford/nlp/models/kbp/chinese/semgrex
kbp.tokensregex = edu/stanford/nlp/models/kbp/chinese/tokensregex
kbp.language = zh
kbp.model = none

# entitylink
entitylink.wikidict = edu/stanford/nlp/models/kbp/chinese/wikidict_chinese.tsv.gz
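The ssplit.boundaryTokenRegex above treats both ASCII and full-width sentence-ending punctuation as sentence boundaries. As a quick sanity check of the pattern itself, here is a standalone test using plain java.util.regex, independent of CoreNLP (the class name is ours, for illustration only):

```java
import java.util.regex.Pattern;

public class BoundaryRegexCheck {
    // Same pattern as ssplit.boundaryTokenRegex in the properties file:
    // an ASCII or ideographic full stop, or a run of (full-width) !/? marks
    static final Pattern BOUNDARY = Pattern.compile("[.\u3002]|[!?\uFF01\uFF1F]+");

    static boolean isBoundary(String token) {
        return BOUNDARY.matcher(token).matches();
    }

    public static void main(String[] args) {
        System.out.println(isBoundary("。"));  // true  (ideographic full stop)
        System.out.println(isBoundary("!"));   // true  (full-width exclamation mark)
        System.out.println(isBoundary("!?"));  // true  (run of marks)
        System.out.println(isBoundary(","));   // false (comma is not a boundary)
    }
}
```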
III. Implementation
1. Initializing the Stanford CoreNLP pipeline
Create a CoreNLPHel class that initializes the Stanford CoreNLP pipeline as a singleton:
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class CoreNLPHel {
    private static CoreNLPHel instance = new CoreNLPHel();
    private StanfordCoreNLP pipeline;

    private CoreNLPHel() {
        // Configuration file path, resolved against the classpath
        String props = "CoreNLP-chinese.properties";
        pipeline = new StanfordCoreNLP(props);
    }

    public static CoreNLPHel getInstance() {
        return instance;
    }

    public StanfordCoreNLP getPipeline() {
        return pipeline;
    }
}
2. Word segmentation
Create a Segmentation class for Chinese word segmentation:
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

import java.util.List;

public class Segmentation {
    private String segtext;

    public String getSegtext() {
        return segtext;
    }

    public Segmentation(String text) {
        CoreNLPHel coreNLPHel = CoreNLPHel.getInstance();
        StanfordCoreNLP pipeline = coreNLPHel.getPipeline();
        Annotation annotation = new Annotation(text);
        pipeline.annotate(annotation);
        List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
        StringBuffer sb = new StringBuffer();
        for (CoreMap sentence : sentences) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                // Join segmented tokens with spaces
                String word = token.get(CoreAnnotations.TextAnnotation.class);
                sb.append(word).append(" ");
            }
        }
        segtext = sb.toString().trim();
    }
}
3. Sentence splitting
Create a SenSplit class for sentence splitting:
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

import java.util.ArrayList;
import java.util.List;

public class SenSplit {
    private ArrayList<String> sensRes = new ArrayList<>();

    public ArrayList<String> getSensRes() {
        return sensRes;
    }

    public SenSplit(String text) {
        CoreNLPHel coreNLPHel = CoreNLPHel.getInstance();
        StanfordCoreNLP pipeline = coreNLPHel.getPipeline();
        Annotation annotation = new Annotation(text);
        pipeline.annotate(annotation);
        List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
        for (CoreMap sentence : sentences) {
            // Collect the text of each sentence
            sensRes.add(sentence.get(CoreAnnotations.TextAnnotation.class));
        }
    }
}
4. Part-of-speech tagging
Create a PosTag class for part-of-speech tagging:
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

import java.util.List;

public class PosTag {
    private String postext;

    public String getPostext() {
        return postext;
    }

    public PosTag(String text) {
        CoreNLPHel coreNLPHel = CoreNLPHel.getInstance();
        StanfordCoreNLP pipeline = coreNLPHel.getPipeline();
        Annotation annotation = new Annotation(text);
        pipeline.annotate(annotation);
        List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
        StringBuffer sb = new StringBuffer();
        for (CoreMap sentence : sentences) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                // Emit each token as word/POS
                String word = token.get(CoreAnnotations.TextAnnotation.class);
                String pos = token.get(CoreAnnotations.PartOfSpeechAnnotation.class);
                sb.append(word).append("/").append(pos).append(" ");
            }
        }
        postext = sb.toString().trim();
    }
}
5. Named entity recognition
Create a NamedEntity class for named entity recognition:
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

import java.util.List;

public class NamedEntity {
    private String nertext;

    public String getNertext() {
        return nertext;
    }

    public NamedEntity(String text) {
        CoreNLPHel coreNLPHel = CoreNLPHel.getInstance();
        StanfordCoreNLP pipeline = coreNLPHel.getPipeline();
        Annotation annotation = new Annotation(text);
        pipeline.annotate(annotation);
        List<CoreMap> sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
        StringBuffer sb = new StringBuffer();
        for (CoreMap sentence : sentences) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                // Emit each token as word/NER-tag
                String word = token.get(CoreAnnotations.TextAnnotation.class);
                String ner = token.get(CoreAnnotations.NamedEntityTagAnnotation.class);
                sb.append(word).append("/").append(ner).append(" ");
            }
        }
        nertext = sb.toString().trim();
    }
}
6. Syntactic parsing
Create an SPTree class for syntactic parsing:
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.SemanticGraphCoreAnnotations;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

import java.util.List;

public class SPTree {
    private List<CoreMap> sentences;

    public SPTree(String text) {
        CoreNLPHel coreNLPHel = CoreNLPHel.getInstance();
        StanfordCoreNLP pipeline = coreNLPHel.getPipeline();
        Annotation annotation = new Annotation(text);
        pipeline.annotate(annotation);
        sentences = annotation.get(CoreAnnotations.SentencesAnnotation.class);
    }

    // Dependency graph of each sentence (dependency parsing)
    public String getDepprasetext() {
        StringBuffer sb2 = new StringBuffer();
        for (CoreMap sentence : sentences) {
            String sentext = sentence.get(CoreAnnotations.TextAnnotation.class);
            SemanticGraph graph = sentence.get(SemanticGraphCoreAnnotations.BasicDependenciesAnnotation.class);
            sb2.append(sentext).append("\n");
            sb2.append(graph.toString(SemanticGraph.OutputFormat.LIST)).append("\n");
        }
        return sb2.toString().trim();
    }

    // Constituency parse tree of each sentence
    public String getPrasetext() {
        StringBuffer sb1 = new StringBuffer();
        for (CoreMap sentence : sentences) {
            Tree tree = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
            String sentext = sentence.get(CoreAnnotations.TextAnnotation.class);
            sb1.append(sentext).append("/").append(tree.toString()).append("\n");
        }
        return sb1.toString().trim();
    }
}
IV. Test Code
1. Segmentation test
public class Test {
    public static void main(String[] args) {
        System.out.println(new Segmentation("这家酒店很好,我很喜欢。").getSegtext());
        System.out.println(new Segmentation("他和我在学校里常打桌球。").getSegtext());
        System.out.println(new Segmentation("貌似实际用的不是这几篇。").getSegtext());
        System.out.println(new Segmentation("硕士研究生产。").getSegtext());
        System.out.println(new Segmentation("我是中国人。").getSegtext());
    }
}
2. Sentence splitting test
import java.util.ArrayList;

public class Test1 {
    public static void main(String[] args) {
        String text = "巴拉克·奥巴马是美国总统。他在2008年当选?今年的美国总统是特朗普?普京的粉丝";
        ArrayList<String> sensRes = new SenSplit(text).getSensRes();
        for (String str : sensRes) {
            System.out.println(str);
        }
    }
}
3. Part-of-speech tagging test
public class Test2 {
    public static void main(String[] args) {
        String text = "巴拉克·奥巴马是美国总统。他在2008年当选?今年的美国总统是特朗普?普京的粉丝";
        System.out.println(new PosTag(text).getPostext());
    }
}
4. Named entity recognition test
public class Test3 {
    public static void main(String[] args) {
        String text = "巴拉克·奥巴马是美国总统。他在2008年当选?今年的美国总统是特朗普?普京的粉丝";
        System.out.println(new NamedEntity(text).getNertext());
    }
}
5. Syntactic parsing test
public class Test4 {
    public static void main(String[] args) {
        String text = "巴拉克·奥巴马是美国总统。他在2008年当选?今年的美国总统是特朗普?普京的粉丝";
        SPTree spTree = new SPTree(text);
        System.out.println(spTree.getPrasetext());
    }
}
V. Sample Output
1. Segmentation
这家 酒店 很好 , 我 很 喜欢 。
他 和 我 在 学校 里 常 打 桌球 。
貌似 实际 用 的 不 是 这几 篇 。
硕士 研究 生产 。
我 是 中国 人 。
2. Sentence splitting
巴拉克·奥巴马是美国总统。
他在2008年当选?
今年的美国总统是特朗普?
普京的粉丝
3. Part-of-speech tagging
巴拉克·奥巴马/NNP 是/VC 美国/NNP 总统/NN 。/PU 他/PRP 在/IN 2008年/CD 当选/VBN ?/PU 今年/CD 的/POS 美国/NNP 总统/NN 是/VBP 特朗普/NNP ?/PU 普京/NNP 的/POS 粉丝/NN
4. Named entity recognition
巴拉克·奥巴马/PERSON 是/OTHER 美国/LOC 总统/OTHER 。/OTHER 他/OTHER 在/OTHER 2008年/DATE 当选/OTHER ?/OTHER 今年/DATE 的/OTHER 美国/LOC 总统/OTHER 是/OTHER 特朗普/PERSON ?/OTHER 普京/PERSON 的/OTHER 粉丝/OTHER
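The token-level tags above can be collapsed into entity spans by merging consecutive tokens that share the same non-OTHER tag. A minimal post-processing sketch (this helper is our own illustration, not part of the CoreNLP API):

```java
import java.util.ArrayList;
import java.util.List;

public class NerMerge {
    // Merge consecutive word/TAG tokens sharing the same non-OTHER tag into one span.
    static List<String> mergeEntities(String tagged) {
        List<String> spans = new ArrayList<>();
        String curTag = null;
        StringBuilder cur = new StringBuilder();
        for (String tok : tagged.split(" ")) {
            int slash = tok.lastIndexOf('/');
            String word = tok.substring(0, slash);
            String tag = tok.substring(slash + 1);
            if (tag.equals(curTag)) {
                cur.append(word);          // extend the current span
            } else {
                if (curTag != null && !curTag.equals("OTHER")) {
                    spans.add(cur + "/" + curTag);  // close the previous span
                }
                cur = new StringBuilder(word);
                curTag = tag;
            }
        }
        if (curTag != null && !curTag.equals("OTHER")) {
            spans.add(cur + "/" + curTag);
        }
        return spans;
    }

    public static void main(String[] args) {
        // Prints [巴拉克·奥巴马/PERSON, 美国/LOC]
        System.out.println(mergeEntities("巴拉克·奥巴马/PERSON 是/OTHER 美国/LOC 总统/OTHER"));
    }
}
```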
5. Syntactic parsing
巴拉克·奥巴马是美国总统。/(ROOT(S(NP (NNP 巴拉克·奥巴马))(VP (VC 是)(NP (NNP 美国) (NN 总统)))(. 。)))
他在2008年当选?/(ROOT(S(NP (PRP 他))(VP (IN 在)(NP (CD 2008年))(VP (VBN 当选)))(? ?)))
今年的美国总统是特朗普?/(ROOT(S(NP (CD 今年) (DEG 的) (NNP 美国) (NN 总统))(VP (VBP 是)(NP (NNP 特朗普)))(? ?)))
普京的粉丝/ROOT(S(NP (NNP 普京) (DEG 的) (NN 粉丝)))
VI. Summary
This article showed how to use Stanford CoreNLP for Chinese word segmentation, sentence splitting, part-of-speech tagging, named entity recognition, and syntactic parsing. With the configuration file and the classes above, Chinese text can be processed and analyzed with little effort. These capabilities underpin many applications in natural language processing, such as text classification, sentiment analysis, and machine translation.