perf: optimize hint rendering and add default temperature options for some provider types #2321
base: dev
Conversation
* add ModelScope API support
* update
Hey @RC-CHN - I've reviewed your changes - here's some feedback:
- Extract the repeated default temperature (0.4) into a shared constant or helper in default.py to reduce duplication and simplify future updates.
- Confirm that the new hint rendering component in AstrBotConfig.vue correctly displays and hides provider hints when toggling `isEditing`, especially for entries with missing or empty `hint` values.
- Update the fallback assignment in plugin.py to handle cases where both `desc` and `description` exist or neither is present, potentially issuing a warning or error to avoid silent overrides.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Extract the repeated default temperature (0.4) into a shared constant or helper in default.py to reduce duplication and simplify future updates.
- Confirm that the new hint rendering component in AstrBotConfig.vue correctly displays and hides provider hints when toggling `isEditing`, especially for entries with missing or empty `hint` values.
- Update the fallback assignment in plugin.py to handle cases where both `desc` and `description` exist or neither is present, potentially issuing a warning or error to avoid silent overrides.
## Individual Comments
### Comment 1
<location> `astrbot/core/config/default.py:602` </location>
<code_context>
"timeout": 120,
"model_config": {
"model": "gpt-4o-mini",
+ "temperature": 0.4
},
+ "hint": "也兼容所有与OpenAI API兼容的服务。"
</code_context>
<issue_to_address>
Consider centralizing default values like temperature and hint strings in constants and helper functions to avoid repetition and improve maintainability.
```python
# config_defaults.py
DEFAULT_TEMPERATURE = 0.4
DEFAULT_HINTS = {
"openai": "也兼容所有与OpenAI API兼容的服务。",
"anthropic": "注意Claude系列模型的温度调节范围为0到1.0,超出可能导致报错",
"ollama": "启用前请确保已正确安装并运行 Ollama 服务端,Ollama默认不带鉴权,无需修改key",
# … add any other per-provider hint overrides here
}
DEFAULT_CHAT_HINT = DEFAULT_HINTS["openai"]
```
```python
# config_build.py
from config_defaults import DEFAULT_TEMPERATURE, DEFAULT_CHAT_HINT, DEFAULT_HINTS
def make_chat_provider(id, provider, model, api_base, **extra):
cfg = {
"id": id,
"provider": provider,
"type": "openai_chat_completion",
"provider_type": "chat_completion",
"enable": True,
"key": [],
"api_base": api_base,
"timeout": 120,
"model_config": {
"model": model,
"temperature": DEFAULT_TEMPERATURE,
},
"hint": DEFAULT_HINTS.get(provider, DEFAULT_CHAT_HINT),
}
cfg.update(extra)
return cfg
CONFIG = {
"openai": make_chat_provider(
"openai", "openai", "gpt-4o-mini",
"https://api.openai.com/v1"
),
"azure": make_chat_provider(
"azure", "azure", "gpt-4o-mini",
api_base="", api_version="2024-05-01-preview"
),
"xAI": make_chat_provider(
"xai", "xai", "grok-2-latest",
"https://api.x.ai/v1"
),
"Anthropic": make_chat_provider(
"claude", "anthropic", "claude-3-5-sonnet-latest",
"https://api.anthropic.com/v1",
max_tokens=4096
),
# … other providers …
}
```
Then drop all the in-place `"temperature": 0.4` and repeated `"hint"` lines from your big dict.
This keeps behaviour identical, centralizes defaults, and makes per-provider overrides explicit and minimal.
</issue_to_address>
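As a follow-up to the suggestion above, a hypothetical spot check that the helper reproduces the current hard-coded defaults. The module and key names (`config_build`, `config_defaults`) come from the sketch in the review and are not existing AstrBot modules:

```python
# Hypothetical check that the refactor keeps behaviour identical to the old dict.
from config_build import CONFIG
from config_defaults import DEFAULT_TEMPERATURE, DEFAULT_CHAT_HINT

assert CONFIG["openai"]["model_config"]["temperature"] == DEFAULT_TEMPERATURE
assert CONFIG["openai"]["hint"] == DEFAULT_CHAT_HINT
assert CONFIG["Anthropic"]["max_tokens"] == 4096  # per-provider override preserved
```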
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
@@ -599,7 +599,9 @@
"timeout": 120,
"model_config": {
"model": "gpt-4o-mini",
+ "temperature": 0.4
Motivation
Most providers currently have no default temperature setting, and many users have asked to be able to configure the temperature directly in the WebUI. In addition, the default hints for some providers reuse the editable-item rendering, which I feel may not be prominent or readable enough.
Modifications
Add a default temperature option for some model providers, update the hint text for several providers, and refactor how hints are displayed in the WebUI into a separate rendering path.
Check
- Update the corresponding locations in `requirements.txt` and `pyproject.toml`.
Summary by Sourcery
Optimize provider configuration by introducing default temperatures and enhanced hint display, add ModelScope support, improve plugin metadata handling and OpenAI parsing, and update documentation with new deployment and template guidance.
New Features:
Bug Fixes:
Enhancements:
Documentation: