Page 196 - Journal of Library Science in China, Vol.47, 2021
ZHANG Wei, WANG Hao, DENG Sanhong & ZHANG Baolong / Sentiment term extraction and application of Chinese ancient poetry text for digital humanities
Figure 6. Emotional term extraction results based on the {“忄”, “心”} radical-set baseline with incremental extensions. (a) Original term extraction results; (b) Discriminative term extraction results.
Figure 6 shows the calculation results with the radical set {“忄”, “心”} as the baseline, where “+ radical” denotes the experimental result of each incremental extension; the F1 values are arranged counterclockwise in the figure. The analysis shows that: 1) every incremental extension of the radical features raises R and R_distinct above the baseline; 2) in terms of F1, the top five incremental radicals are “一” > “大” > “目” > “宀” > “讠”, with “一”, “大” and “目” exceeding the baseline; 3) the top five by F1_distinct are “目” > “人” > “宀” > “氵” > “大”, with only “目” outperforming the baseline; 4) overall, the emotional term extraction experiment using {“忄”, “心”} as the radical set still performs best.
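As a minimal sketch of the evaluation behind this ranking, the incremental radical extensions can be scored and ordered by set-level F1 against a gold term list. All extraction results and gold terms below are hypothetical, chosen only to illustrate the comparison:

```python
# Minimal sketch (hypothetical data): scoring each incremental radical
# extension by set-level precision, recall, and F1 against a gold term list.

def f1_score(extracted, gold):
    """Set-level P/R/F1 for extracted sentiment terms vs. a gold set."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)  # true positives: terms found in both sets
    if tp == 0:
        return 0.0, 0.0, 0.0
    p = tp / len(extracted)     # precision
    r = tp / len(gold)          # recall
    return p, r, 2 * p * r / (p + r)

# Hypothetical extraction results for the baseline and two extensions.
gold = {"悲", "愁", "思", "怨", "喜"}
runs = {
    "baseline {忄, 心}": {"悲", "愁", "思", "怨"},
    "+一": {"悲", "愁", "思", "山"},
    "+大": {"悲", "愁"},
}

# Rank the runs by F1, highest first.
ranked = sorted(runs, key=lambda k: f1_score(runs[k], gold)[2], reverse=True)
for name in ranked:
    p, r, f1 = f1_score(runs[name], gold)
    print(f"{name}: P={p:.2f} R={r:.2f} F1={f1:.2f}")
```

With this toy data the baseline set ranks first, mirroring the paper's observation that the {“忄”, “心”} configuration remains strongest overall.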
(3) Multi-feature extension. The single features that outperform the baseline are stacked incrementally to obtain multi-feature sequences covering E/F/P/B. Among them, E_P is selected as the pinyin feature for its better performance among the constrained features, and {“忄”, “心”} is selected as the set-constrained radical feature. Experiments were conducted on six groups of multi-features built from the above features, and none of them exceeded the baseline. We therefore infer that introducing multiple features is not always conducive to optimizing the model; the effect depends on the characteristics of the specific domain corpus.
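The stacking step described above can be sketched as concatenating per-character feature vectors. The encoders below (`char_embedding`, `pinyin_feature`, `radical_feature`) are hypothetical stand-ins for the paper's E/P/B features, with toy lookup tables, and only illustrate how single features combine into one multi-feature sequence:

```python
# Minimal sketch (hypothetical encoders): stacking single features into a
# multi-feature sequence by concatenating per-character vectors.
import numpy as np

RADICAL_SET = {"忄", "心"}  # set-constrained radical feature


def radical_feature(ch):
    # 1.0 if the character's radical is in the constrained set (toy lookup).
    toy_radicals = {"悲": "心", "愁": "心", "怕": "忄", "山": "山"}
    return [1.0 if toy_radicals.get(ch) in RADICAL_SET else 0.0]


def pinyin_feature(ch):
    # Stand-in for a pinyin feature such as E_P: a toy 2-dim code per char.
    toy = {"悲": [0.1, 0.9], "愁": [0.2, 0.8], "怕": [0.3, 0.7]}
    return toy.get(ch, [0.0, 0.0])


def char_embedding(ch, dim=4):
    # Deterministic stand-in for a learned character embedding.
    rng = np.random.default_rng(sum(ord(x) for x in ch))
    return rng.standard_normal(dim).tolist()


def encode(sentence):
    """Concatenate all single-feature vectors for each character."""
    return np.array(
        [char_embedding(c) + pinyin_feature(c) + radical_feature(c)
         for c in sentence]
    )


X = encode("悲愁山")
print(X.shape)  # (3, 7): 4-dim embedding + 2-dim pinyin + 1-dim radical
```

Each added feature widens the per-character vector, so a model consuming such sequences must relearn weights over the larger input; this is one way the paper's finding can arise, since extra dimensions do not guarantee better fit on a small domain corpus.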
3.2 Results of emotional term extraction based on transfer learning
Compared with the baseline, the improvement from Chinese character feature expansion is limited. In this paper, we therefore further build a BERT-BiLSTM-CRFs deep learning model. Considering the transferable nature of the BERT model, in order to prevent underfitting or