Is calling models deployed locally with Ollama not supported? #607
Unanswered
swy755565055 asked this question in Q&A
I deployed the qwen2.5 model locally with Ollama and configured it as follows:

```python
llm_config = {
    'model': 'qwen2.5:7b',
    'model_server': 'http://127.0.0.1:11434/v1/',  # base_url, also known as api_base
    'api_key': 'ollama',
}
```
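For context, the config is consumed roughly like this (a minimal sketch assuming a direct get_chat_model call; in practice the dict may be passed through an agent constructor, and the import path may differ by version):

```python
# Minimal sketch, assuming the config is handed to get_chat_model directly;
# the import path below is my assumption and may differ across versions.
from modelscope_agent.llm import get_chat_model

llm_config = {
    'model': 'qwen2.5:7b',
    'model_server': 'http://127.0.0.1:11434/v1/',  # base_url / api_base
    'api_key': 'ollama',
}

llm = get_chat_model(**llm_config)  # this is where the error surfaces
```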
However, every startup fails with NotImplementedError. I traced it to the source in modelscope_agent\llm\__init__.py:
```python
def get_chat_model(model: str, model_server: str, **kwargs) -> BaseChatModel:
    """
    model: the model name: such as qwen-max, gpt-4 ...
    model_server: the source of model, such as dashscope, openai, modelscope ...
    **kwargs: more parameters, such as api_key, api_base
    """
    model_type = re.split(r'[-/]', model)[0]  # parser qwen / gpt / ...
    registered_model_id = f'{model_server}{model_type}'
    if registered_model_id in LLM_REGISTRY:  # specific model from specific source
        return LLM_REGISTRY[registered_model_id](model, model_server, **kwargs)
    elif model_server in LLM_REGISTRY:  # specific source
        return LLM_REGISTRY[model_server](model, model_server, **kwargs)
    else:
        raise NotImplementedError
```
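Tracing that lookup with the values from my config shows why the final branch is reached:

```python
# Manual trace of get_chat_model() using the config above (values are mine,
# not from the library).
import re

model = 'qwen2.5:7b'
model_server = 'http://127.0.0.1:11434/v1/'

model_type = re.split(r'[-/]', model)[0]      # 'qwen2.5:7b' (no '-' or '/' to split on)
registered_model_id = f'{model_server}{model_type}'
# -> 'http://127.0.0.1:11434/v1/qwen2.5:7b'

# Neither registered_model_id nor the raw URL is a key in LLM_REGISTRY,
# so execution falls through to `raise NotImplementedError`.
```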
A local model_server given as a URL is not registered in LLM_REGISTRY. So does that mean calling models from any source other than DashScope is currently unsupported?
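If the registry does happen to include an OpenAI-compatible backend under a short key such as 'openai' (I have not verified this), a config along these lines might route requests to Ollama's OpenAI-compatible endpoint instead of failing the lookup; this is an untested sketch, not a confirmed workaround:

```python
# Untested sketch: assumes an OpenAI-compatible class is registered in
# LLM_REGISTRY under the key 'openai', which I have not verified.
llm_config = {
    'model': 'qwen2.5:7b',
    'model_server': 'openai',                   # registry key instead of a URL
    'api_base': 'http://127.0.0.1:11434/v1/',   # Ollama's OpenAI-compatible endpoint
    'api_key': 'ollama',
}
# With this, get_chat_model() would hit the `elif model_server in LLM_REGISTRY`
# branch rather than raising NotImplementedError.
```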