Hi, is it possible to run simplified/traditional Chinese conversion first and then tokenize with IK?
The conversion is already finished in the char_filter stage, so can't you simply configure IK as the tokenizer?
After setting up the converter, the tokenization is not ideal: searching for 清華 returns no results, while searching for 清華大學 does. My settings are:

```json
{
  "analysis": {
    "char_filter": {
      "tsconvert": {
        "type": "stconvert",
        "convert_type": "t2s"
      }
    },
    "analyzer": {
      "my_analyzer": {
        "type": "custom",
        "char_filter": ["tsconvert"],
        "tokenizer": "ik_smart",
        "filter": ["lowercase"]
      }
    }
  }
}
```

Any suggestions?
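One way to debug this (a sketch; it assumes the settings above are applied to an index named `my_index` — substitute your own index name) is to run the indexed text and the query text through the `_analyze` API and compare the tokens each produces:

```json
GET my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "清華大學"
}
```

If `ik_smart` emits 清华大学 as a single coarse-grained token, a query for 清华 alone will not match it; trying `ik_max_word` as the tokenizer (it also emits the finer-grained sub-words) is one thing worth checking.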