[Bug] Groq's DeepSeek R1 thinking process renders incorrectly #5638
Comments
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
📦 Deployment environment: Docker
📌 Software version: latest
💻 System environment: Windows
🌐 Browser: Chrome
🐛 Problem description: No response
📷 Reproduction steps: No response
🚦 Expected result: Show the thinking time like DeepSeek's official website, and allow collapsing the thinking process
📝 Supplementary information: No response
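Providers such as Groq stream DeepSeek-R1's chain of thought inline with the answer, conventionally wrapped in `<think>` tags, so a client has to split it out before it can render the reasoning as a collapsible block. A minimal sketch of that split (the tag name and the function below are assumptions for illustration, not lobe-chat's actual implementation):

```typescript
// Hypothetical helper: separate a DeepSeek-R1 style response into the
// reasoning block and the final answer, so the UI can show the reasoning
// in a collapsible panel. Assumes the provider wraps reasoning in
// <think>...</think>, which is a convention, not a guarantee.
interface ParsedResponse {
  reasoning: string | null; // content inside <think>…</think>, if present
  answer: string;           // the response with the reasoning block removed
}

function splitThinkBlock(raw: string): ParsedResponse {
  // Non-greedy match so only the first reasoning block is captured.
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    return { reasoning: null, answer: raw.trim() };
  }
  return {
    reasoning: match[1].trim(),
    answer: raw.replace(match[0], '').trim(),
  };
}
```

With the two parts separated, the reasoning string can be fed to a collapsible component and timed independently of the answer stream.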
Hey there, @xingpingcn! I'm here to help you with any bugs, questions, or contributions. Let's work together to resolve your issue.

The issue you're experiencing with the "deepseek-r1-distill-llama-70b" model in the lobe-chat framework is a known problem. Specifically, when using the DeepSeek-R1 model with Ollama deployment in Lobe Chat (versions v1.47.9 and v1.48.0), the thinking process is rendered incorrectly. To address this, you might want to check if there are any updates or patches available for the lobe-chat framework that resolve this issue. Additionally, reviewing the relevant implementation may help.

To continue talking to Dosu, mention @dosu.
This issue is closed. If you have any questions, you can comment and reply.