A platform for massive agent-based simulation and its evaluation
Gaku Yamamoto, Hideki Tai, et al.
AAMAS 2008
Given the lack of word delimiters in written Japanese, word segmentation is generally considered a crucial first step in processing Japanese texts. Typical Japanese segmentation algorithms rely either on a lexicon and syntactic analysis or on pre-segmented data; both resources are labor-intensive to build, and the lexico-syntactic techniques are vulnerable to the unknown-word problem. In contrast, we introduce a novel, more robust statistical method utilizing unsegmented training data. Despite its simplicity, the algorithm yields performance on long kanji sequences comparable to and sometimes surpassing that of state-of-the-art morphological analyzers over a variety of error metrics. The algorithm also outperforms another mostly-unsupervised statistical algorithm previously proposed for Chinese. Additionally, we present a two-level annotation scheme for Japanese to incorporate multiple segmentation granularities, and introduce two novel evaluation metrics, both based on the notion of a compatible bracket, that can account for multiple granularities simultaneously.
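The abstract does not spell out the method itself; the following is only a rough, hypothetical Python sketch of the general kind of mostly-unsupervised approach it describes, i.e. proposing word boundaries by comparing raw character n-gram counts from unsegmented text. The helper names (ngram_counts, segment), the n-gram orders, and the 0.5 voting threshold are illustrative assumptions, not the paper's actual procedure.

from collections import Counter

def ngram_counts(corpus, n_values=(2, 3, 4)):
    # Count character n-grams over raw, unsegmented text.
    counts = Counter()
    for text in corpus:
        for n in n_values:
            for i in range(len(text) - n + 1):
                counts[text[i:i + n]] += 1
    return counts

def segment(text, counts, n_values=(2, 3, 4), threshold=0.5):
    # Propose a boundary before position k when, across n-gram orders,
    # n-grams lying entirely on one side of the gap tend to be more
    # frequent in the raw corpus than n-grams straddling the gap.
    boundaries = []
    for k in range(1, len(text)):
        votes = total = 0
        for n in n_values:
            sides = []
            if k >= n:
                sides.append(text[k - n:k])      # ends exactly at the gap
            if k + n <= len(text):
                sides.append(text[k:k + n])      # starts exactly at the gap
            straddling = [text[j:j + n]
                          for j in range(k - n + 1, k)
                          if j >= 0 and j + n <= len(text)]
            for side in sides:
                for strad in straddling:
                    total += 1
                    if counts[side] > counts[strad]:
                        votes += 1
        if total and votes / total >= threshold:
            boundaries.append(k)
    # Split the input at the accepted boundary positions.
    words, prev = [], 0
    for b in boundaries + [len(text)]:
        words.append(text[prev:b])
        prev = b
    return words

# Usage sketch: build counts from unsegmented text, then segment a new string.
# counts = ngram_counts(["首都圏の高速道路", ...])
# print(segment("高速道路", counts))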
Xiaoxiao Guo, Shiyu Chang, et al.
AAAI 2019
Wang Zhang, Subhro Das, et al.
ICASSP 2025
Khalid Abdulla, Andrew Wirth, et al.
ICIAfS 2014