Why, with the same settings, are the results I get from ollama's qwen14b so much worse than gpt3.5's? Where can I see the prompts used for parsing documents and the results the model returns?
The first two screenshots are the results parsed with gpt.
The next two screenshots are qwen14b's results under the same parameters.
The parsing settings are shown in the screenshots above. I don't believe qwen14b's capability is really this poor, so is something going wrong somewhere? Where can I look at the details of the conversation with the LLM?
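One way to inspect the exact prompts and responses, assuming RAGFlow talks to a local Ollama server over its standard HTTP API (default http://localhost:11434) and lets you set a custom base URL for the Ollama model provider, is to place a small logging proxy between the two. This is only a minimal sketch under those assumptions, not a RAGFlow feature; the port 11435 and the buffering behavior are arbitrary choices.

```python
# Minimal logging proxy, assuming RAGFlow is configured to call Ollama at
# http://localhost:11435 instead of the real server at :11434. Every request
# body (the prompt) and response (the model output) is printed to the console.
# Note: streaming responses are buffered and returned in one piece, which may
# not work if the client insists on streaming.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

OLLAMA_URL = "http://localhost:11434"  # the real Ollama server


class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Log the outgoing prompt (chat/generate payloads are JSON).
        try:
            print("PROMPT >>>", json.dumps(json.loads(body), ensure_ascii=False, indent=2))
        except ValueError:
            print("PROMPT >>>", body[:2000])
        # Forward to the real Ollama server and log its reply.
        resp = requests.post(OLLAMA_URL + self.path, data=body,
                             headers={"Content-Type": "application/json"},
                             timeout=600)
        print("RESPONSE <<<", resp.text[:2000])
        self.send_response(resp.status_code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(resp.content)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 11435), LoggingProxy).serve_forever()
```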
Normal. Knowledge graph extraction mostly depends on the capability of the LLM.
I am using qwen2.5:32b with the token limit set to 512, but no graph is displayed at all, and I don't know why.
Why can your ollama setup still generate graphs? I am using the 0.14.1 image. Is it a version problem?
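One way to separate the model from the version question is to call Ollama directly and check whether qwen2.5:32b produces any entity/relation output when the output budget is raised well above 512 tokens. This is just a sanity-check sketch against Ollama's /api/generate endpoint; the prompt below is an illustrative stand-in, not RAGFlow's actual extraction prompt.

```python
# Sanity check, assuming a local Ollama server on the default port: ask
# qwen2.5:32b to extract triples from a short sample passage with a generous
# num_predict, and see whether it returns anything graph-like at all.
import requests

SAMPLE = "Marie Curie won the Nobel Prize in Physics in 1903 together with Pierre Curie."

payload = {
    "model": "qwen2.5:32b",
    "prompt": (
        "Extract entities and relations from the text below as JSON triples "
        "of the form [subject, relation, object].\n\n" + SAMPLE
    ),
    "stream": False,
    "options": {"num_predict": 2048},  # well above the 512-token limit in question
}

resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["response"])
```

If the model returns sensible triples here but RAGFlow still shows no graph, the problem is more likely the token limit or the 0.14.1 image than the model itself.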