Same error type as bug #1880; the fix there was not completed.
Describe the bug/ 问题描述 (Mandatory / 必填)
A clear and concise description of what the bug is.
When calling the following function:
import mindspore as ms

def get_code_completion(prompt: str, model, tokenizer, temperature: float) -> str:
    """Generate code completion for a given prompt"""
    try:
        model.eval()
        # Tokenize the prompt into MindSpore-format tensors
        tokenized_input = tokenizer(prompt, return_tensors="ms")
        outputs = model.generate(
            input_ids=tokenized_input["input_ids"],
            max_new_tokens=MAX_NEW_TOKENS,
            temperature=temperature,
            top_k=TOP_K,
            top_p=TOP_P,
            do_sample=True,
            no_repeat_ngram_size=NO_REPEAT_NGRAM_SIZE,
            repetition_penalty=REPETITION_PENALTY,
        )
        ms.ms_memory_recycle()  # release cached device memory
        return tokenizer.batch_decode(outputs, skip_special_tokens=False)[0]
    except Exception as e:
        print(f"Error during code generation: {str(e)}")  # print the error message as a string
        raise
I receive the error shown in the screenshot below, raised by the llama model code under transformers' models, indicating that split does not have a dim parameter.
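For context, MindSpore names this parameter axis rather than dim, so a torch-style split(..., dim=...) call inside the ported modeling code would produce an error of this kind. A minimal sketch of the mismatch (not the actual llama modeling code), assuming MindSpore 2.x:

import mindspore as ms
from mindspore import ops

x = ops.ones((2, 8), ms.float32)
# Torch-style keyword is rejected on a MindSpore tensor:
#   x.split(4, dim=-1)   # fails: split() has no 'dim' parameter, as reported above
# The MindSpore call uses axis instead of dim:
parts = ops.split(x, 4, axis=-1)  # yields two tensors of shape (2, 4)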
Hardware Environment(Ascend/GPU/CPU) / 硬件环境:
Ascend
Software Environment / 软件环境 (Mandatory / 必填):
The error occurs on both Huawei ModelArts and OpenI.
-- MindSpore version (e.g., 1.7.0.Bxxx): 2.4.0
-- Python version (e.g., Python 3.7.5): 3.9
-- Image: mindspore_2.2.0-cann_7.0.1-py_3.9-euler_2.10.7-aarch64-snt9b
Execute Mode / 执行模式 (Mandatory / 必填) (PyNative/Graph): Graph
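(The execution mode and device target are presumably configured along these lines; the exact setup call is not shown in the report:)

import mindspore as ms

# Assumed configuration for Graph mode on Ascend; not taken from the report.
ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")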
Expected behavior / 预期结果 (Mandatory / 必填)
A clear and concise description of what you expected to happen.
The Q&A should return the result of tokenizer.batch_decode(outputs, skip_special_tokens=False)[0] produced by the fine-tuned llama model.
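For illustration, a minimal sketch of the expected call; the prompt string and temperature value here are hypothetical, and model/tokenizer are the fine-tuned llama checkpoint and tokenizer loaded earlier in the script:

# Hypothetical invocation of get_code_completion defined above.
prompt = "def quick_sort(arr):"  # hypothetical prompt
completion = get_code_completion(prompt, model, tokenizer, temperature=0.8)
# Expected: the decoded string from tokenizer.batch_decode(outputs, skip_special_tokens=False)[0]
print(completion)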