
[Question]: Why is there such a big difference in the results of different models using knowledge graphs for analysis? #3101

Open
JidaDiao opened this issue Oct 30, 2024 · 3 comments
Labels
question Further information is requested

Comments


JidaDiao commented Oct 30, 2024

Describe your problem

Why do the results from Ollama's qwen14b and GPT-3.5 differ so much under the same settings? Where can I see the prompts used to parse the documents and the responses returned by the models?
[Screenshots: SCR-20241030-mtpt, SCR-20241030-mtvi]

The two screenshots above are the results parsed by GPT.

[Screenshots: SCR-20241030-mugc, SCR-20241030-muim]

These two are qwen14b's results under the same parameters.

[Screenshot: SCR-20241030-mutx]

The parsing settings are shown in the screenshot above. I don't believe qwen14b's extraction is really this poor; did something go wrong somewhere? Where can I view the details of the LLM conversations?

@JidaDiao added the question (Further information is requested) label on Oct 30, 2024
@KevinHuSh (Collaborator)

That's normal. Knowledge graph extraction depends mostly on the capability of the LLM.
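
One way to see the gap directly is to send the same extraction prompt to each model and compare the raw responses outside RAGFlow. Below is a minimal sketch against Ollama's HTTP API; the prompt, sample text, and model names are placeholders, not RAGFlow's actual extraction prompt (the real prompts ship with RAGFlow's source code):

```python
import requests

# Placeholder extraction prompt -- RAGFlow's real prompt is far more elaborate.
PROMPT = """Extract all entities and relationships from the text below.
Return JSON: {"entities": [...], "relations": [["head", "relation", "tail"], ...]}

Text:
Marie Curie discovered polonium and radium in Paris with Pierre Curie.
"""

def extract(model: str) -> str:
    """Send the prompt to a local Ollama server and return the raw completion."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": PROMPT, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

for model in ("qwen2.5:14b", "llama3.1:8b"):  # any locally pulled models
    print(f"--- {model} ---")
    print(extract(model))
```

If the smaller model returns sparse or malformed JSON here too, the difference is model capability rather than a RAGFlow bug.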

@dadastory

I am using qwen2.5:32b with the max token set to 512, but no graph is displayed at all. I don't know why.
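
A max-token limit of 512 can cut the extraction output off before the JSON describing the graph is complete, in which case nothing parses and no graph is rendered. A quick way to check whether truncation is the culprit (a sketch against Ollama's /api/generate, assuming a local Ollama server; the prompt is a placeholder):

```python
import requests

def check_truncation(model: str, prompt: str, max_tokens: int = 512) -> None:
    """Generate with a hard token cap and report whether the cap was hit."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False,
            # num_predict is Ollama's max-output-tokens option.
            "options": {"num_predict": max_tokens},
        },
        timeout=300,
    ).json()
    # eval_count is the number of tokens actually generated; reaching the
    # cap exactly is a strong hint the answer was truncated mid-JSON.
    generated = resp.get("eval_count", 0)
    print(f"{model}: generated {generated}/{max_tokens} tokens")
    if generated >= max_tokens:
        print("Output likely truncated -- raise the token limit.")

check_truncation("qwen2.5:32b", "Extract entities from: ...")  # placeholder prompt
```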

@dadastory

Why can your Ollama setup still generate graphs? I am using the 0.14.1 image. Could this be a version problem?
