Quoting xujiajin's post on 2005-8-2 23:40:19:
Sigh
Quoting superyangt's post on 2005-8-9 20:45:56:
Does anyone know how to process Chinese texts with a concordancer?
Thanks.
Quoting xiaoz's post on 2005-7-12 22:04:50:
I haven't tried Concordance, but if it is Unicode-based as 动态语法 suggested, there will be no problem with this tool.
As for WordSmith 3, only Concord works on segmented Chinese texts; Wordlist, and consequently Cluster and Keyword, do not work.
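To make the segmentation point concrete: word-based tools such as Concord and Wordlist treat whitespace-delimited strings as tokens, so Chinese text must first be segmented (by a tool like ICTCLAS) into space-separated words before a concordancer can index or count them. The sketch below is illustrative only, not how WordSmith works internally: a minimal keyword-in-context (KWIC) lookup and word-frequency count over a pre-segmented sentence, using just the Python standard library. The sample sentence and the `kwic` helper are my own assumptions for demonstration.

```python
from collections import Counter

def kwic(tokens, keyword, width=2):
    """Return keyword-in-context lines with `width` tokens of
    left and right context around each hit."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left} [{tok}] {right}")
    return lines

# A pre-segmented Chinese sentence: words separated by spaces, as a
# segmenter such as ICTCLAS would output. Without these spaces, a
# token-based tool would see the whole sentence as one "word".
text = "我 喜欢 语料库 语言学 因为 语料库 很 有用"
tokens = text.split()

print(kwic(tokens, "语料库"))
print(Counter(tokens).most_common(1))
```

Run on unsegmented text (no spaces), `split()` yields a single token, which is exactly why Wordlist-style frequency counts fail on raw Chinese.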
Quoting 清风出袖's post on 2006-3-23 20:21:48:
WordSmith 3 can concordance segmented Chinese text, but it cannot compute word frequencies or other statistics over Chinese words, as noted above. Those Chinese word-counting and frequency features are only available in WordSmith 4.
Quoting 清风出袖's post on 2006-3-23 8:29:59:
I find that a Chinese file tokenized by segtag, as mentioned at http://www.corpus4u.com/forum_view.asp?forum_id=8&view_id=557&page=2, doesn't work with WordSmith 3 when I try concordancing, while the same file tokenized by ICTCLAS is recognized by WordSmith 3 for concordancing. Why? Is there a specific reason for this?