Comparing changes

Repository: chatpire/chatgpt-web-share
base: v0.4.5
compare: v0.4.6

Commits on Jan 1, 2024

  1. e9225ee

Commits on Jan 10, 2024

  1. 6c7c0d1

Commits on Jan 20, 2024

  1. e4aa754

Commits on Jan 22, 2024

  1. 1bef010
  2. 1f2bb0b
  3. Add OpenAI web team account support (a2771be)

     Signed-off-by: yongman <yming0221@gmail.com>
     yongman committed Jan 22, 2024

Commits on Jan 23, 2024

  1. fix complete args (a572f17)

     Signed-off-by: yongman <yming0221@gmail.com>
     yongman committed Jan 23, 2024
  2. move is_team_user to user web setting (a70fbf5)

     Signed-off-by: yongman <yming0221@gmail.com>
     yongman committed Jan 23, 2024

Commits on Jan 24, 2024

  1. Merge pull request #345 from yongman/add-team-account-support (48a306c, verified)

     [WIP] Add OpenAI web team account support
     moeakwak authored Jan 24, 2024
     This commit was created on GitHub.com and signed with GitHub's verified signature.

Commits on Feb 2, 2024

  1. use wss protocol to get response (51121a1)

     Signed-off-by: yongman <yming0221@gmail.com>
     yongman committed Feb 2, 2024
  2. Merge upstream (b702ba0)

     Signed-off-by: yongman <yming0221@gmail.com>
     yongman committed Feb 2, 2024
  3. Merge pull request #361 from yongman/support-wss (3c154c4, verified)

     Use wss protocol to get response
     moeakwak authored Feb 2, 2024
  4. wss: make more reliable by using subprotocols with ack (929e328)

     Signed-off-by: yongman <yming0221@gmail.com>
     yongman committed Feb 2, 2024

Commits on Feb 3, 2024

  1. b9aa278
  2. 624cb3b (verified)
  3. Merge pull request #362 from yongman/add-wss-subprotocol (ecb4047, verified)

     wss: make it more reliable by using subprotocols with ack
     moeakwak authored Feb 3, 2024
  4. e97a5a1
  5. 3450f62
  6. 11e8674
  7. 3f96850
  8. fix websocket timeout when receive response (f03492e)

     Signed-off-by: yongman <yming0221@gmail.com>
     yongman committed Feb 3, 2024
  9. Merge pull request #363 from yongman/fix-wss-timeout (5f1a6a2, verified)

     fix websocket timeout when receive response
     moeakwak authored Feb 3, 2024
  10. 6b2f3a2
  11. 02a481f
  12. d488a7c
  13. deps: solve bcrypt warning (89692f3)

     moeakwak committed Feb 3, 2024
  14. e9e68d4
Showing with 2,809 additions and 1,227 deletions.
  1. +2 −0 backend/.gitignore
  2. +28 −0 backend/alembic/versions/333722b0921e_add_source_id_for_baseconversation.py
  3. +1 −1 backend/api/conf/base_config.py
  4. +6 −4 backend/api/conf/config.py
  5. +2 −3 backend/api/conf/credentials.py
  6. +1 −1 backend/api/database/custom_types/pydantic_type.py
  7. +3 −3 backend/api/middlewares/asgi_logger/middleware.py
  8. +1 −0 backend/api/models/db.py
  9. +3 −0 backend/api/models/doc/__init__.py
  10. +0 −1 backend/api/models/json.py
  11. +2 −7 backend/api/response.py
  12. +73 −35 backend/api/routers/chat.py
  13. +26 −13 backend/api/routers/conv.py
  14. +7 −5 backend/api/routers/files.py
  15. +8 −0 backend/api/routers/system.py
  16. +6 −4 backend/api/routers/users.py
  17. +1 −0 backend/api/schemas/conversation_schemas.py
  18. +38 −0 backend/api/schemas/openai_schemas.py
  19. +13 −6 backend/api/schemas/user_schemas.py
  20. +10 −12 backend/api/sources/openai_api.py
  21. +175 −72 backend/api/sources/openai_web.py
  22. +1 −1 backend/api/users.py
  23. +4 −1 backend/config_templates/config.yaml
  24. +953 −862 backend/poetry.lock
  25. +6 −3 backend/pyproject.toml
  26. +67 −68 backend/requirements.txt
  27. +65 −39 backend/utils/admin/sync_conv.py
  28. +10 −11 backend/utils/common.py
  29. +1 −1 frontend/package.json
  30. +1 −1 frontend/scripts/updateapi.sh
  31. +1 −1 frontend/src/api/chat.ts
  32. +5 −0 frontend/src/api/system.ts
  33. +2 −0 frontend/src/api/url.ts
  34. +16 −5 frontend/src/locales/en-US.json
  35. +3 −1 frontend/src/locales/ms-MY.json
  36. +16 −5 frontend/src/locales/zh-CN.json
  37. +28 −2 frontend/src/types/json/config_schema.json
  38. +1 −1 frontend/src/types/json/openapi.json
  39. +963 −26 frontend/src/types/json/schemas.json
  40. +1 −1 frontend/src/types/json_schema.ts
  41. +116 −4 frontend/src/types/openapi.ts
  42. +4 −3 frontend/src/types/schema.ts
  43. +1 −1 frontend/src/utils/user.ts
  44. +1 −1 frontend/src/views/admin/components/charts/UserUsageChart.vue
  45. +1 −1 frontend/src/views/admin/components/inputs/CountNumberInputWithAdd.vue
  46. +1 −1 frontend/src/views/admin/components/inputs/ModelDictField.vue
  47. +102 −5 frontend/src/views/admin/pages/config_manager.vue
  48. +22 −2 frontend/src/views/admin/pages/conversation_manager.vue
  49. +2 −2 frontend/src/views/conversation/components/FileUploadRegion.vue
  50. +0 −1 frontend/src/views/conversation/components/InputRegion.vue
  51. +1 −1 frontend/src/views/conversation/components/MessageRowMultimodalTextDisplay.vue
  52. +4 −4 frontend/src/views/conversation/components/NewConversationFormPluginSelectionLabel.vue
  53. +2 −3 frontend/src/views/conversation/index.vue
  54. +1 −1 frontend/src/views/conversation/utils/export.ts
  55. +1 −1 frontend/src/views/conversation/utils/message.ts
2 changes: 2 additions & 0 deletions backend/.gitignore
@@ -8,3 +8,5 @@ logs
 ChatGPT-Proxy-V4
 *.json
 data
+build
+dist
@@ -0,0 +1,28 @@
"""Add source_id for BaseConversation
Revision ID: 333722b0921e
Revises: 7d94b5503088
Create Date: 2024-02-03 12:29:17.095124
"""
from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision = '333722b0921e'
down_revision = '7d94b5503088'
branch_labels = None
depends_on = None


def upgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('conversation', sa.Column('source_id', sa.String(length=256), nullable=True, comment='对话来源id'))
# ### end Alembic commands ###


def downgrade() -> None:
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('conversation', 'source_id')
# ### end Alembic commands ###
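The upgrade step of this migration only adds one nullable column. A toy sketch of its effect, run against an in-memory SQLite database (the table layout here is illustrative; the real `conversation` table has many more columns):

```python
import sqlite3

# Minimal stand-in for the 'conversation' table before the migration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversation (id INTEGER PRIMARY KEY, source VARCHAR(32))")

# upgrade(): add the nullable source_id column, as op.add_column does
conn.execute("ALTER TABLE conversation ADD COLUMN source_id VARCHAR(256)")

# Inspect the resulting schema; row[1] is the column name
cols = [row[1] for row in conn.execute("PRAGMA table_info(conversation)")]
print(cols)  # ['id', 'source', 'source_id']
```

The downgrade step is the mirror image: dropping `source_id` restores the original schema.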
2 changes: 1 addition & 1 deletion backend/api/conf/base_config.py
@@ -57,7 +57,7 @@ def load(self):
             raise ConfigException(f"Cannot read config ({self._config_path}), error: {str(e)}")

     def save(self):
-        config_dict = jsonable_encoder(self._model.dict())
+        config_dict = jsonable_encoder(self._model.model_dump())
         # make a backup copy of self._config_path
         config_dir = os.path.dirname(self._config_path)
         if not os.path.exists(config_dir):
10 changes: 6 additions & 4 deletions backend/api/conf/config.py
@@ -5,7 +5,7 @@
 from api.conf.base_config import BaseConfig
 from api.enums import OpenaiWebChatModels, OpenaiApiChatModels
 from api.enums.options import OpenaiWebFileUploadStrategyOption
-from utils.common import singleton_with_lock
+from utils.common import SingletonMeta

 _TYPE_CHECKING = False

@@ -47,7 +47,7 @@ class DataSetting(BaseModel):
     database_url: str = 'sqlite+aiosqlite:///data/database.db'
     mongodb_url: str = 'mongodb://cws:password@mongo:27017'
     mongodb_db_name: str = 'cws'
-    run_migration: bool = False
+    run_migration: bool = True
     max_file_upload_size: int = Field(100 * 1024 * 1024, ge=0)

     @field_validator("database_url")
@@ -68,6 +68,8 @@ class AuthSetting(BaseModel):
 class OpenaiWebChatGPTSetting(BaseModel):
     enabled: bool = True
     is_plus_account: bool = True
+    enable_team_subscription: bool = False
+    team_account_id: Optional[str] = None
     chatgpt_base_url: Optional[str] = None
     proxy: Optional[str] = None
     common_timeout: int = Field(20, ge=1,
@@ -79,6 +81,7 @@ class OpenaiWebChatGPTSetting(BaseModel):
     enabled_models: list[OpenaiWebChatModels] = ["gpt_3_5", "gpt_4", "gpt_4_plugins"]
     model_code_mapping: dict[OpenaiWebChatModels, str] = default_openai_web_model_code_mapping
     file_upload_strategy: OpenaiWebFileUploadStrategyOption = OpenaiWebFileUploadStrategyOption.browser_upload_only
+    max_completion_concurrency: int = Field(1, ge=1)
     disable_uploading: bool = False

     @field_validator("chatgpt_base_url")
@@ -134,8 +137,7 @@ class ConfigModel(BaseModel):
     log: LogSetting = LogSetting()


-@singleton_with_lock
-class Config(BaseConfig[ConfigModel]):
+class Config(BaseConfig[ConfigModel], metaclass=SingletonMeta):
     if _TYPE_CHECKING:
         openai_web: OpenaiWebChatGPTSetting = OpenaiWebChatGPTSetting()
         openai_api: OpenaiApiSetting = OpenaiApiSetting()
5 changes: 2 additions & 3 deletions backend/api/conf/credentials.py
@@ -3,7 +3,7 @@
 from pydantic import BaseModel

 from api.conf.base_config import BaseConfig
-from utils.common import singleton_with_lock
+from utils.common import SingletonMeta

 _TYPE_CHECKING = False

@@ -15,8 +15,7 @@ class CredentialsModel(BaseModel):
     openai_api_key: Optional[str] = None


-@singleton_with_lock
-class Credentials(BaseConfig[CredentialsModel]):
+class Credentials(BaseConfig[CredentialsModel], metaclass=SingletonMeta):
     if _TYPE_CHECKING:
         openai_web_access_token: Optional[str]
         # chatgpt_account_username: Optional[str]
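Both `Config` and `Credentials` switch from the `singleton_with_lock` decorator to a metaclass-based singleton. The metaclass itself lives in `backend/utils/common.py` (changed in this diff but not shown here); a minimal thread-safe sketch of the pattern, which may differ in detail from the repo's actual `SingletonMeta`:

```python
import threading

class SingletonMeta(type):
    """Thread-safe singleton metaclass (illustrative sketch)."""
    _instances = {}          # one shared instance per class
    _lock = threading.Lock()

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            with cls._lock:
                # double-checked locking: re-test once the lock is held
                if cls not in cls._instances:
                    cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class Config(metaclass=SingletonMeta):
    def __init__(self):
        self.value = 42


print(Config() is Config())  # True: every call returns the same instance
```

Compared with a decorator, the metaclass keeps `Config` a real class, so `BaseConfig[ConfigModel]` generics and `isinstance` checks keep working.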
2 changes: 1 addition & 1 deletion backend/api/database/custom_types/pydantic_type.py
@@ -3,7 +3,7 @@
 import sqlalchemy
 from fastapi.encoders import jsonable_encoder
-from pydantic import BaseModel, parse_obj_as
+from pydantic import BaseModel
 from sqlalchemy import Dialect
 from sqlalchemy.dialects.postgresql import JSONB
 from sqlalchemy.sql.type_api import _T
6 changes: 3 additions & 3 deletions backend/api/middlewares/asgi_logger/middleware.py
@@ -115,9 +115,9 @@ def __init__(self, scope: HTTPScope, info: AccessInfo) -> None:
             "b": self.get("{Content-Length}o", "-"),
             "f": self["{Referer}i"],
             "a": self["{User-Agent}i"],
-            "T": int(request_time),
-            "M": int(request_time * 1_000),
-            "D": int(request_time * 1_000_000),
+            "T": str(round(request_time)),
+            "M": str(round(request_time * 1_000)),
+            "D": str(round(request_time * 1_000_000)),
             "L": f"{request_time:.6f}",
             "p": f"<{os.getpid()}>",
         }
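The logger change swaps truncation for rounding when formatting request times, and renders the result as a string for the log line. The difference in one line (illustrative value):

```python
elapsed_ms = 1.9  # request time in milliseconds, illustrative

old_value = int(elapsed_ms)         # int() truncates toward zero -> 1
new_value = str(round(elapsed_ms))  # round() to nearest, as a string -> '2'

print(old_value, new_value)  # 1 2
```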
1 change: 1 addition & 0 deletions backend/api/models/db.py
@@ -72,6 +72,7 @@ class BaseConversation(Base):
     id: Mapped[int] = mapped_column(Integer, primary_key=True)
     source: Mapped[ChatSourceTypes] = mapped_column(Enum(ChatSourceTypes), comment="对话类型")
+    source_id: Mapped[Optional[str]] = mapped_column(String(256), comment="对话来源id")
     conversation_id: Mapped[uuid.UUID] = mapped_column(GUID, index=True, unique=True, comment="uuid")
     current_model: Mapped[Optional[str]] = mapped_column(default=None, use_existing_column=True)
     title: Mapped[Optional[str]] = mapped_column(comment="对话标题")
3 changes: 3 additions & 0 deletions backend/api/models/doc/__init__.py
@@ -204,6 +204,9 @@ class OpenaiWebConversationHistoryMeta(BaseModel):
     source: Literal["openai_web"]
     moderation_results: Optional[list[Any]] = None
     plugin_ids: Optional[list[str]] = None
+    gizmo_id: Optional[str] = None
+    is_archived: Optional[bool] = None
+    conversation_template_id: Optional[str] = None


 class OpenaiApiConversationHistoryMeta(BaseModel):
1 change: 0 additions & 1 deletion backend/api/models/json.py
@@ -2,7 +2,6 @@
 from typing import Optional, Generic, TypeVar, get_args, Literal

 from pydantic import model_validator, BaseModel, Field, create_model, RootModel
-from pydantic.generics import GenericModel

 from api.enums import OpenaiWebChatModels, OpenaiApiChatModels
9 changes: 2 additions & 7 deletions backend/api/response.py
@@ -32,12 +32,6 @@ class ResponseWrapper(BaseModel, Generic[T]):
     message: str = ""
     result: Optional[T | Any] = None

-    def to_dict(self):
-        return jsonable_encoder(self)
-
-    def to_json(self):
-        return json.dumps(self.to_dict(), ensure_ascii=False)
-

 class CustomJSONResponse(Response):
     media_type = "application/json"
@@ -55,7 +49,8 @@ def __init__(
     def render(self, content: typing.Any) -> bytes:
         if not isinstance(content, ResponseWrapper):
             content = ResponseWrapper(code=self.status_code, message=get_http_message(self.status_code), result=content)
-        return content.to_json().encode("utf-8")
+        result = json.dumps(jsonable_encoder(content), ensure_ascii=False)
+        return result.encode("utf-8")


 class PrettyJSONResponse(Response):
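The `render` rewrite inlines the removed `to_dict`/`to_json` helpers into a single `json.dumps(jsonable_encoder(...), ensure_ascii=False)` call. The `ensure_ascii=False` part matters for this project's bilingual UI: non-ASCII text stays readable in the response body instead of being escaped. A toy dict standing in for the encoded `ResponseWrapper` (field values illustrative):

```python
import json

payload = {"code": 200, "message": "成功", "result": None}

# ensure_ascii=False keeps non-ASCII text as-is instead of \uXXXX escapes
body = json.dumps(payload, ensure_ascii=False).encode("utf-8")
print(body.decode("utf-8"))  # {"code": 200, "message": "成功", "result": null}
```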
108 changes: 73 additions & 35 deletions backend/api/routers/chat.py
@@ -10,7 +10,7 @@
 from fastapi.encoders import jsonable_encoder
 from fastapi_cache.decorator import cache
 from httpx import HTTPError
-from pydantic import ValidationError
+from pydantic import ValidationError, BaseModel
 from sqlalchemy import select, func, and_
 from starlette.websockets import WebSocket, WebSocketState
 from websockets.exceptions import ConnectionClosed
@@ -39,25 +39,39 @@
 config = Config()

 INSTALLED_PLUGINS_CACHE_FILE_PATH = os.path.join(config.data.data_dir, "installed_plugin_manifests.json")
-INSTALLED_PLUGINS_CACHE_EXPIRE = 3600 * 24
+INSTALLED_PLUGINS_TEAM_CACHE_FILE_PATH = os.path.join(config.data.data_dir, "installed_plugin_manifests_team.json")
+CACHE_EXPIRE_DURATION = 3600 * 24

-_installed_plugins: OpenaiChatPluginListResponse | None = None
-_installed_plugins_map: dict[str, OpenaiChatPlugin] | None = None
-_installed_plugins_last_update_time = None
+# TODO: improve plugin cache handling; isolate plugins from different sources

-def _load_installed_plugins_from_cache():
-    global _installed_plugins, _installed_plugins_map, _installed_plugins_last_update_time
-    if os.path.exists(INSTALLED_PLUGINS_CACHE_FILE_PATH):
-        with open(INSTALLED_PLUGINS_CACHE_FILE_PATH, "r") as f:
-            data = json.load(f)
-            _installed_plugins = OpenaiChatPluginListResponse.model_validate(data["installed_plugins"])
-            _installed_plugins_map = {plugin.id: plugin for plugin in _installed_plugins.items}
-            _installed_plugins_last_update_time = data["installed_plugins_last_update_time"]
+class PluginsCache(BaseModel):
+    response: Optional[OpenaiChatPluginListResponse] = None
+    map: Optional[dict[str, OpenaiChatPlugin]] = None
+    last_update_time: Optional[float] = None
+
+
+_cache_by_use_team = {
+    False: PluginsCache(),
+    True: PluginsCache()
+}


-def _save_installed_plugins_to_cache(installed_plugins, installed_plugins_last_update_time):
-    with open(INSTALLED_PLUGINS_CACHE_FILE_PATH, "w") as f:
+def _load_installed_plugins_from_cache():
+    global _cache_by_use_team
+    for use_team in [False, True]:
+        _cache = _cache_by_use_team[use_team]
+        path = INSTALLED_PLUGINS_CACHE_FILE_PATH if not use_team else INSTALLED_PLUGINS_TEAM_CACHE_FILE_PATH
+        if os.path.exists(path):
+            with open(path, "r") as f:
+                data = json.load(f)
+                _cache.response = OpenaiChatPluginListResponse.model_validate(data["installed_plugins"])
+                _cache.map = {plugin.id: plugin for plugin in _cache.response.items}
+                _cache.last_update_time = data["installed_plugins_last_update_time"]
+
+
+def _save_installed_plugins_to_cache(installed_plugins, installed_plugins_last_update_time, dest_path: str):
+    with open(dest_path, "w") as f:
         json.dump(jsonable_encoder({
             "installed_plugins": installed_plugins,
             "installed_plugins_last_update_time": installed_plugins_last_update_time
@@ -67,34 +81,41 @@ def _save_installed_plugins_to_cache(installed_plugins, installed_plugins_last_u
 _load_installed_plugins_from_cache()


-async def _refresh_installed_plugins():
-    global _installed_plugins, _installed_plugins_map, _installed_plugins_last_update_time
-    if _installed_plugins is None or time.time() - _installed_plugins_last_update_time > INSTALLED_PLUGINS_CACHE_EXPIRE:
-        _installed_plugins = await openai_web_manager.get_installed_plugin_manifests()
-        _installed_plugins_map = {plugin.id: plugin for plugin in _installed_plugins.items}
-        _installed_plugins_last_update_time = time.time()
-        _save_installed_plugins_to_cache(_installed_plugins, _installed_plugins_last_update_time)
-    return _installed_plugins
+async def _refresh_installed_plugins(use_team: bool = False):
+    global _cache_by_use_team
+
+    _cache = _cache_by_use_team[use_team]
+    if _cache.response is None or time.time() - _cache.last_update_time > CACHE_EXPIRE_DURATION:
+        _cache.response = await openai_web_manager.get_installed_plugin_manifests(use_team=use_team)
+        _cache.map = {plugin.id: plugin for plugin in _cache.response.items}
+        _cache.last_update_time = time.time()
+        _save_installed_plugins_to_cache(_cache.response, _cache.last_update_time,
+                                         INSTALLED_PLUGINS_TEAM_CACHE_FILE_PATH if use_team else INSTALLED_PLUGINS_CACHE_FILE_PATH)
+
+    return _cache.response


 @router.get("/chat/openai-plugins", tags=["chat"], response_model=OpenaiChatPluginListResponse)
 @cache(expire=60 * 60 * 24)
 async def get_openai_web_chat_plugins(offset: int = 0, limit: int = 0, category: str = "", search: str = "",
-                                      _user: User = Depends(current_active_user)):
-    plugins = await openai_web_manager.get_plugin_manifests(offset, limit, category, search)
+                                      user: User = Depends(current_active_user)):
+    plugins = await openai_web_manager.get_plugin_manifests(offset, limit, category, search,
+                                                            user.setting.openai_web.use_team)
     return plugins


 @router.get("/chat/openai-plugins/installed", tags=["chat"], response_model=OpenaiChatPluginListResponse)
-async def get_installed_openai_web_chat_plugins(_user: User = Depends(current_active_user)):
+async def get_installed_openai_web_chat_plugins(user: User = Depends(current_active_user)):
     plugins = await _refresh_installed_plugins()
     return plugins


 @router.get("/chat/openai-plugins/installed/{plugin_id}", tags=["chat"], response_model=OpenaiChatPlugin)
-async def get_installed_openai_web_plugin(plugin_id: str, _user: User = Depends(current_active_user)):
-    await _refresh_installed_plugins()
-    global _installed_plugins_map
+async def get_installed_openai_web_plugin(plugin_id: str, user: User = Depends(current_active_user)):
+    use_team = user.setting.openai_web.use_team
+    await _refresh_installed_plugins(use_team)
+    global _cache_by_use_team
+    _installed_plugins_map = _cache_by_use_team[use_team].map
     if plugin_id in _installed_plugins_map:
         return _installed_plugins_map[plugin_id]
     else:
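The refactor above replaces three module-level globals with one `PluginsCache` slot per `use_team` flag, each refreshed when older than the TTL. A self-contained sketch of that pattern (the `fetch` callable stands in for the real `openai_web_manager` API call):

```python
import time
from dataclasses import dataclass
from typing import Optional

CACHE_EXPIRE_DURATION = 3600 * 24  # one day, as in the diff

@dataclass
class PluginsCache:
    response: Optional[list] = None
    last_update_time: Optional[float] = None

# one cache slot per use_team flag, mirroring _cache_by_use_team
_cache_by_use_team = {False: PluginsCache(), True: PluginsCache()}

def refresh(use_team, fetch):
    """fetch stands in for the real manifest-fetching coroutine (assumption)."""
    cache = _cache_by_use_team[use_team]
    if cache.response is None or time.time() - cache.last_update_time > CACHE_EXPIRE_DURATION:
        cache.response = fetch(use_team)
        cache.last_update_time = time.time()
    return cache.response

calls = []
def fake_fetch(use_team):
    calls.append(use_team)
    return ["team-plugin"] if use_team else ["personal-plugin"]

refresh(False, fake_fetch)
refresh(False, fake_fetch)  # within TTL: served from cache, no second fetch
refresh(True, fake_fetch)   # separate slot for team accounts
print(calls)  # [False, True]
```

Keying the cache by the flag keeps personal and team plugin lists isolated, which is exactly what the `TODO` in the diff asks for.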
@@ -103,12 +124,13 @@ async def get_installed_openai_web_plugin(plugin_id: str, _user: User = Depends(

 @router.patch("/chat/openai-plugins/{plugin_id}/user-settings", tags=["chat"], response_model=OpenaiChatPlugin)
 async def update_chat_plugin_user_settings(plugin_id: str, settings: OpenaiChatPluginUserSettings,
+                                           use_team: Optional[bool] = config.openai_web.enable_team_subscription,
                                            _user: User = Depends(current_super_user)):
     if settings.is_authenticated is not None:
         raise InvalidParamsException("can not set is_authenticated")
-    result = await openai_web_manager.change_plugin_user_settings(plugin_id, settings)
+    result = await openai_web_manager.change_plugin_user_settings(plugin_id, settings, use_team)
     assert isinstance(result, OpenaiChatPlugin)
-    await _refresh_installed_plugins()
+    await _refresh_installed_plugins(use_team)
     return result


@@ -245,6 +267,8 @@ async def reply(response: AskResponse):

     params = await websocket.receive_json()

+    use_team = user.setting.openai_web.use_team and config.openai_web.enable_team_subscription
+
     try:
         ask_request = AskRequest.model_validate(params)
     except ValidationError as e:
@@ -269,6 +293,13 @@ async def reply(response: AskResponse):
         conversation_id = ask_request.conversation_id
         conversation = await _get_conversation_by_id(ask_request.conversation_id, user_db)

+        # check whether this team conversation may be used
+        if conversation is not None and conversation.source_id is not None and use_team == False:
+            e = WebsocketException(1008, "errors.teamConversationNotAllowed")
+            await reply(AskResponse(type=AskResponseType.error, tip=e.tip, error_detail=e.error_detail))
+            await websocket.close(e.code, e.tip)
+            return
+
     request_start_time = datetime.now()

     websocket_code = 1001
@@ -303,7 +334,6 @@ async def reply(response: AskResponse):
     message = None

     try:
-        # rev: set status to asking
         if ask_request.source == ChatSourceTypes.openai_web:
             await change_user_chat_status(user.id, OpenaiWebChatStatus.asking)

@@ -323,10 +353,11 @@ async def reply(response: AskResponse):
             model = OpenaiApiChatModels(ask_request.model)

         # stream the response
-        async for data in manager.complete(text_content=ask_request.text_content,
+        async for data in manager.complete(model=model,
+                                           text_content=ask_request.text_content,
+                                           use_team=use_team,
                                            conversation_id=ask_request.conversation_id,
                                            parent_message_id=ask_request.parent,
-                                           model=model,
                                            plugin_ids=ask_request.openai_web_plugin_ids if ask_request.new_conversation else None,
                                            attachments=ask_request.openai_web_attachments,
                                            multimodal_image_parts=ask_request.openai_web_multimodal_image_parts,
@@ -370,6 +401,7 @@ async def reply(response: AskResponse):
     except OpenaiException as e:
         logger.error(str(e))
         error_detail_map = {
+            400: "errors.openai.400",
             401: "errors.openai.401",
             403: "errors.openai.403",
             404: "errors.openai.404",
@@ -513,7 +545,11 @@ async def reply(response: AskResponse):
         if ask_request.source == ChatSourceTypes.openai_web and ask_request.new_title is not None and \
                 ask_request.new_title.strip() != "":
             try:
-                await openai_web_manager.set_conversation_title(str(conversation_id), ask_request.new_title)
+                source_id = None
+                if use_team:
+                    source_id = config.openai_web.team_account_id
+                await openai_web_manager.set_conversation_title(str(conversation_id), ask_request.new_title,
+                                                                source_id=source_id)
             except Exception as e:
                 logger.warning(f"set_conversation_title error {e.__class__.__name__}: {str(e)}")

@@ -528,6 +564,8 @@ async def reply(response: AskResponse):
                     create_time=current_time,
                     update_time=current_time
                 )
+                if use_team:
+                    new_conv.source_id = config.openai_web.team_account_id
                 conversation = BaseConversation(**new_conv.model_dump(exclude_unset=True))
                 session.add(conversation)
