Error on the second question #50

Open
wanghaicn opened this issue Jul 21, 2024 · 0 comments
As shown below, the error occurs on the second question:
Welcome to Fudan DISC-LawLLM. Type to start a conversation; enter clear to clear the history, press Ctrl+C to interrupt generation, enter stream to toggle streaming output, and enter exit to quit.

User: My client terminated the engagement without ever signing a contract with me, and now refuses to pay for my services. How can I get my wages back?

DISC-LawLLM: Under China's Labor Law, an employer must conclude a written labor contract with the worker. Since your client never signed a contract, you can file a complaint with the local labor inspection authority and demand that the client fulfill its legal obligations and pay your wages.

You can also pursue the unpaid wages through legal channels. Under the Law of the People's Republic of China on Labor Dispute Mediation and Arbitration, you may apply to a labor dispute arbitration committee for arbitration, or file a lawsuit directly with the people's court. When filing suit, you will need to provide evidence proving that a labor relationship exists between the two parties and the amount of wages owed.

We recommend consulting a professional lawyer for more detailed and specific legal advice so that you can better protect your legitimate rights and interests.

User: Can labor dispute arbitration only handle labor disputes, or can it also handle labor-service disputes?

DISC-LawLLM: Traceback (most recent call last):
  File "/root/cli_demo.py", line 80, in <module>
    main()
  File "/root/cli_demo.py", line 61, in main
    for response in model.chat(tokenizer, messages, stream=True):
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/DISC-LawLLM/modeling_baichuan.py", line 589, in chat
    input_ids = self._build_chat_input(tokenizer, messages, generation_config.max_new_tokens)
  File "/root/.cache/huggingface/modules/transformers_modules/DISC-LawLLM/modeling_baichuan.py", line 561, in _build_chat_input
    content_tokens = tokenizer.encode(message['content'])
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2319, in encode
    encoded_inputs = self.encode_plus(
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2727, in encode_plus
    return self._encode_plus(
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 649, in _encode_plus
    first_ids = get_input_ids(text)
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 616, in get_input_ids
    tokens = self.tokenize(text, **kwargs)
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/transformers/tokenization_utils.py", line 547, in tokenize
    tokenized_text.extend(self._tokenize(token))
  File "/root/.cache/huggingface/modules/transformers_modules/DISC-LawLLM/tokenization_baichuan.py", line 95, in _tokenize
    return self.sp_model.encode(text, out_type=str)
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/sentencepiece/__init__.py", line 552, in Encode
    return self._EncodeAsPieces(input, enable_sampling, nbest_size,
  File "/root/miniconda3/envs/disc_law1/lib/python3.9/site-packages/sentencepiece/__init__.py", line 322, in _EncodeAsPieces
    return _sentencepiece.SentencePieceProcessor__EncodeAsPieces(self, text, enable_sampling, nbest_size, alpha, add_bos, add_eos, reverse, emit_unk_piece)
TypeError: not a string
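
The failure happens in tokenizer.encode(message['content']) while rebuilding the chat input, which suggests that after the first streamed turn some entry in the history is no longer a str. A minimal sketch to check this, assuming model and tokenizer are loaded the same way as in cli_demo.py (the variable names and message structure here are illustrative, mirroring what the demo maintains):

```python
# Diagnostic sketch (assumption: `model` and `tokenizer` loaded as in cli_demo.py).
messages = [{"role": "user", "content": "第一个问题"}]

response = None
for chunk in model.chat(tokenizer, messages, stream=True):
    response = chunk  # each yield is the accumulated reply so far

# If `response` ends up as None (or any non-str), appending it to the
# history makes the *next* turn crash inside SentencePiece with
# "TypeError: not a string", exactly as in the traceback above.
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "第二个问题"})

for m in messages:
    print(m["role"], type(m["content"]))
```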

Likewise, the error occurs on the second question:
Welcome to Fudan DISC-LawLLM. Type to start a conversation; enter clear to clear the history, press Ctrl+C to interrupt generation, enter stream to toggle streaming output, and enter exit to quit.

User: My car has gone missing. What should I do?

DISC-LawLLM: If your vehicle has gone missing, you can take the following steps:

1. Report it to the police: immediately file a report with the local police. Provide detailed information, including the vehicle model, license plate number, color, and any other relevant details.

2. Report the vehicle registration certificate as lost: contact the car manufacturer or dealer to report the registration certificate as lost, so that no one can use your vehicle in illegal transactions.

3. File an insurance claim: if you purchased vehicle insurance, contact your insurer to claim compensation. Depending on your policy, you may need to provide relevant evidence and documents to support the claim.

4. Look for witnesses: ask people nearby whether they saw your vehicle or have any leads on the theft. This can help you recover the vehicle sooner.

5. Ask for help on social media: post a notice about the missing vehicle, with photos and other key information, on social media. This way more potential witnesses will see your post and help you recover the vehicle sooner.

User: Someone has taken my parking space. What should I do?

DISC-LawLLM: (crashes with the identical "TypeError: not a string" traceback as in the first session above)
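
Until the root cause is pinned down, a possible local workaround (my own guess, not a fix from the maintainers) is to make sure every history entry handed to the tokenizer is a plain str before calling model.chat:

```python
# Hypothetical guard, illustrative only: sanitize the chat history so
# tokenizer.encode always receives a str.
def sanitize_messages(messages):
    for m in messages:
        content = m.get("content")
        if not isinstance(content, str):
            # e.g. None left over from a failed or interrupted streamed turn
            m["content"] = "" if content is None else str(content)
    return messages

for response in model.chat(tokenizer, sanitize_messages(messages), stream=True):
    ...  # stream handling as in the original demo
```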
