Replies: 25 comments 4 replies
-
I see that in the API config file you wrote yourself the tool name contains a "/"; also, as far as I can tell your parameters still include a file, and the required array doesn't list any required parameters. For reference, here is my own setup:

tools = [
    {
        "name": "get_current_weather_by_location",
        "description": "根据城市获取当前天气",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "description": "城市名称 e.g. 北京,上海,武汉"
                }
            },
            "required": ['location']
        }
    },
    {
        "name": "recommend_dress_by_weather",
        "description": "根据天气信息推荐穿衣风格",
        "parameters": {
            "type": "object",
            "properties": {
                "temp": {
                    "description": "温度(摄氏度) e.g. 36.4,37.8"
                },
                "wind_scale": {
                    "description": "风力大小 e.g. 1,2,3"
                }
            },
            "required": ['temp']
        }
    },
    {
        "name": "cocktail_list",
        "description": "获取流行的鸡尾酒清单",
        "parameters": {}
    },
    {
        "name": "cocktail_recipe_suggestions",
        "description": "获取鸡尾酒配方建议",
        "parameters": {}
    },
    {
        "name": "get_news_about_birthday",
        "description": "获取生日庆祝相关的新闻文章",
        "parameters": {}
    }
]
system_info = {"role": "system",
               "content": "Answer the following questions as best as you can. You have access to the following tools:",
               "tools": tools}

Tool implementations (dummy data here):

# Get the current weather for a location
def get_current_weather_by_location(location: str):
    print("weather function called")
    if location == "北京":
        return {
            "location": location,
            "temp": "24",
            "text": "多云",
            "windDir": "东南风",
            "windScale": "1"
        }
    elif location == "武汉":
        return {
            "location": location,
            "temp": "26",
            "text": "晴",
            "windDir": "西北风",
            "windScale": "2"
        }

# Recommend what to wear based on temperature and wind scale
def recommend_dress_by_weather(temp, wind_scale=None):  # wind_scale is not in the required list, so give it a default
    print(f"recommend function called with temp={temp}, wind_scale={wind_scale}")
    if float(temp) < 24:
        return {
            "style": "长袖,外套"
        }
    else:
        return {
            "style": "短袖"
        }

# Get a list of popular cocktails
def cocktail_list():
    return {
        "cocktail_list": [
            {"name": "莫吉托(Mojito)"},
            {"name": "马丁尼(Martini)"},
            {"name": "辛普尔(Sazerac)"},
            {"name": "白俄罗斯(White Russian)"},
            {"name": "莫斯科骡子(Moscow Mule)"},
            {"name": "青蛙跳(Long Island Iced Tea)"},
            {"name": "柯尔克特尾巴(Kir Royale)"},
            {"name": "毛利人(Mai Tai)"},
            {"name": "哈瓦那特(Havana Special)"},
            {"name": "草地边(Grasshopper)"}
        ]
    }

# Get cocktail recipe suggestions
def cocktail_recipe_suggestions():
    return {
        "recipe": [
            {"name": "白朗姆酒"},
            {"name": "薄荷叶"},
            {"name": "糖"},
            {"name": "青柠汁"},
            {"name": "苏打水"},
            {"name": "伏特加或琴酒"},
            {"name": "干苦艾酒"},
            {"name": "咖啡利口酒"},
            {"name": "伏特加"},
            {"name": "龙舌兰酒"}
        ]
    }

# Get news articles about birthday celebrations
def get_news_about_birthday():
    return {
        "content": """
        参加朋友的生日会,你也一起出主意、构想庆祝内容,是不是会更有参与感、更难忘?
        学前儿童刊物《小小拇指》为庆祝创刊10周年,将在6月至8月的巡回展上,举办多场免费的华语讲故事活动。这次讲故事活动的最大特色是将以参与式剧场的形式,带领孩童和家长一同参加“小拇指的生日会”。这种形式鼓励参与者积极发挥创意、贡献点子,每个人都是故事的创建者,参与越多,融入感就越强。
        主讲者符妙娟(39岁,戏剧工作者)与同伴林慈暄(协助执行)用了一个月的时间,构思故事脚本和互动方式。故事带领小朋友回顾10年来《小小拇指》的重要内容,比如认识新加坡、各种新鲜趣闻,以及朗朗上口的本土儿歌,过程中会穿插各种想法交流和亲自动手的环节。
        """
    }

I didn't use the official cli_demo; I wrote my own. It has no boundary handling or exception handling yet, which you can improve later.

import json
from transformers import AutoTokenizer, AutoModel
from tools import system_info
tokenizer = AutoTokenizer.from_pretrained("/data/models/llm/chatglm3-6b/", trust_remote_code=True)
model = AutoModel.from_pretrained("/data/models/llm/chatglm3-6b/", trust_remote_code=True).cuda()
model = model.eval()
def model_chat(task_query):
    model_history = [system_info]
    model_response, model_history = model.chat(tokenizer, task_query, history=model_history)
    return run_task(model_response, model_history)

def run_task(model_response, model_history):
    if isinstance(model_response, dict):
        import function_map
        func = getattr(function_map, model_response.get("name"))
        param = model_response.get("parameters")
        func_response = func(**param)
        result = json.dumps(func_response, ensure_ascii=False)
        model_response, model_history = model.chat(tokenizer, result, history=model_history, role="observation")
        model_response, model_history = run_task(model_response, model_history)
        return model_response, model_history
    else:
        return model_response, model_history
if __name__ == '__main__':
    query = """
    今天下午我要去北京出差,请帮我查询一下当地的天气,另外我应该采取什么样的穿衣风格?
    """
    response, _ = model_chat(query)
    print(response)
    # while True:
    #     query = input("query:")
    #     response, _ = model_chat(query)
    #     print(response)
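Since the driver above deliberately skips boundary and exception handling, here is one possible sketch of what that could look like. It reuses the model, tokenizer, json and function_map objects from the code above; feeding the error text back through the observation role is just one convention I chose, not an official one:

def run_task_safe(model_response, model_history):
    # Plain-text answer: nothing to dispatch, just return it.
    if not isinstance(model_response, dict):
        return model_response, model_history
    import function_map
    try:
        func = getattr(function_map, model_response.get("name"))
        param = model_response.get("parameters") or {}
        func_response = func(**param)
    except (AttributeError, TypeError) as e:
        # Unknown tool name or missing/extra arguments: report the error back
        # to the model as the observation instead of crashing the loop.
        func_response = {"error": str(e)}
    result = json.dumps(func_response, ensure_ascii=False)
    model_response, model_history = model.chat(
        tokenizer, result, history=model_history, role="observation")
    return run_task_safe(model_response, model_history)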
-
I've noticed this too; the model doesn't seem to have the ability to decide on its own whether a tool is needed, and keeps forcing tool calls on all kinds of questions.
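One engineering-side workaround (it does not fix the model itself) is to keep a tool-free system prompt around and retry with it when the model emits a tool call for a query that your own check says needs none. A rough sketch, reusing the model, tokenizer and system_info objects from the code above; the plain_system wording and the query_needs_tools predicate are assumptions you would supply yourself:

plain_system = {"role": "system",
                "content": "Answer the following questions as best as you can."}

def chat_with_fallback(query, query_needs_tools):
    # First pass: expose the tools as usual.
    response, history = model.chat(tokenizer, query, history=[system_info])
    if isinstance(response, dict) and not query_needs_tools(query):
        # The model forced a tool call on a query our own check says does not
        # need one, so ask again with the tool-free system prompt.
        response, history = model.chat(tokenizer, query, history=[plain_system])
    return response, history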
-
Hmm, please read the tool-calling documentation carefully; it feels like you haven't quite understood how this should be written. You can drop my example above into the project and test it, it works well.
-
That said, function_call does have the matching problem described in #56.
-
The problem you describe may well exist, since I've done very little testing so far; tomorrow I'll test with some other APIs. I ran the code above 5-6 times locally and never hit the case you mention where the wind scale can't be obtained. You can refer to here and here.
-
I changed the tool descriptions to follow your config file, but when asking questions unrelated to the tools the model still tends to call a tool. Testing also showed that for some questions, even when the observation returns False, the model can still answer.
-
The main problem right now is that this is hard to generalize to other models. Previously I used langchain to get JSON generation through the prompt, but that approach can't do the observation step; and if I use the method ChatGLM3 provides, other open-source models can't use it.
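If portability to other open-source models matters more than ChatGLM3's native format, one common pattern is to do the whole loop with plain prompts: ask the model to emit a JSON tool call, parse it, run the function, and feed the result back as ordinary text standing in for the observation. A rough sketch under those assumptions; chat is any messages-to-text function you provide, the prompt wording and the "Observation:" convention are arbitrary choices, and function_map is the same module used above:

import json
import re

def generic_tool_loop(chat, user_query, tools):
    # `chat(messages) -> str` can be backed by any chat model.
    messages = [
        {"role": "system",
         "content": ("You may call one of these tools by replying ONLY with "
                     'JSON like {"name": ..., "parameters": {...}}. Tools: '
                     + json.dumps(tools, ensure_ascii=False))},
        {"role": "user", "content": user_query},
    ]
    reply = chat(messages)
    match = re.search(r"\{.*\}", reply, re.S)
    if match is None:
        return reply  # the model answered directly, no tool call to run
    call = json.loads(match.group(0))
    import function_map
    result = getattr(function_map, call["name"])(**call.get("parameters", {}))
    # Feed the tool result back as plain text playing the role of the observation.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user",
                     "content": "Observation: " + json.dumps(result, ensure_ascii=False)})
    return chat(messages)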
-
I wrote the tools following the official docs and put the sysinfo into messages, which is equivalent to inserting it into the history. These are the final messages: I expected it to answer directly without using any tool, but instead it output the result of using the 'text-to-speech' tool.
-
@deepslit

tools = [
    {
        "name": "track",
        "description": "追踪指定股票的实时价格",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {
                    "description": "需要追踪的股票代码"
                }
            },
            "required": ['symbol']
        }
    },
    {
        "name": "text-to-speech",
        "description": "将文本转换为语音",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {
                    "description": "需要转换成语音的文本"
                },
                "voice": {
                    "description": "要使用的语音类型(男声、女声等)"
                },
                "speed": {
                    "description": "语音的速度(快、中等、慢等)"
                }
            },
            "required": ['text']
        }
    }
]
system_info = {"role": "system",
               "content": "Answer the following questions as best as you can. You have access to the following tools:",
               "tools": tools}

import json
from transformers import AutoTokenizer, AutoModel
from tools import system_info
tokenizer = AutoTokenizer.from_pretrained("/data/models/llm/chatglm3-6b/", trust_remote_code=True)
model = AutoModel.from_pretrained("/data/models/llm/chatglm3-6b/", trust_remote_code=True).cuda()
model = model.eval()
def model_chat(task_query):
    model_history = [system_info]
    model_response, model_history = model.chat(tokenizer, task_query, history=model_history)
    return run_task(model_response, model_history)

def run_task(model_response, model_history):
    if isinstance(model_response, dict):
        import function_map
        func = getattr(function_map, model_response.get("name"))
        param = model_response.get("parameters")
        func_response = func(**param)
        result = json.dumps(func_response, ensure_ascii=False)
        model_response, model_history = model.chat(tokenizer, result, history=model_history, role="observation")
        model_response, model_history = run_task(model_response, model_history)
        return model_response, model_history
    else:
        return model_response, model_history
if __name__ == '__main__':
    query = """
    你是谁你会做什么?
    """
    response, _ = model_chat(query)
    print(response)
    # while True:
    #     query = input("query:")
    #     response, _ = model_chat(query)
    #     print(response)
-
Under this condition:

But I suspect it's a problem with how this text-to-speech tool is set up: without the question mark the input could simply be judged as a sentence that needs converting to speech, so I won't dwell on it. With your tools it behaves normally, so the capability seems decent; I just don't know whether it or Qwen-7b-chat, which has also been aligned for tool use, is the better one.
-
@deepslit
-
@deepslit
-
I experimented on my side as well: adding "\n" does solve the problem, but this should count as a bug, since a normal user isn't going to type a newline first.
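For reference, the workaround amounts to padding the raw user input before it reaches model.chat; applied to the model_chat function from the code above, it could look like this (purely a stopgap, not a fix for the underlying behaviour):

def model_chat(task_query):
    # Pad the user input with newlines; per the discussion above this makes the
    # model noticeably less eager to force a tool call.
    padded_query = "\n" + task_query.strip() + "\n"
    model_history = [system_info]
    model_response, model_history = model.chat(tokenizer, padded_query, history=model_history)
    return run_task(model_response, model_history)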
-
@hongyix
-
Look at this history: it doesn't contain the text-to-speech tool at all, yet the get_current_weather_by_location API was still called, and this question already had the "\n" added.

[{'role': 'system', 'content': 'Answer the following questions as best as you can. You have access to the following tools:', 'tools': [{'name': 'get_current_weather_by_location', 'description': '根据城市获取当前天气', 'parameters': {'type': 'object', 'properties': {'location': {'description': '城市名称 e.g. 北京,上海,武汉'}}, 'required': ['location']}}, {'name': 'recommend_dress_by_weather', 'description': '根据天气信息推荐穿衣风格', 'parameters': {'type': 'object', 'properties': {'temp': {'description': '温度(摄氏度) e.g. 36.4,37.8'}, 'wind_scale': {'description': '风力大小 e.g. 1,2,3'}}, 'required': ['temp']}}, {'name': 'cocktail_list', 'description': '获取流行的鸡尾酒清单', 'parameters': {}}, {'name': 'cocktail_recipe_suggestions', 'description': '获取鸡尾酒配方建议', 'parameters': {}}, {'name': 'get_news_about_birthday', 'description': '获取生日庆祝相关的新闻文章', 'parameters': {}}]}, {'role': 'user', 'content': '\n上海有什么好吃的\n'}, {'role': 'assistant', 'metadata': 'get_current_weather_by_location', 'content': "

And here's another example: without the '\n' the model still goes off and calls a tool, but with it added it answers normally.

[{'role': 'system', 'content': 'Answer the following questions as best as you can. You have access to the following tools:', 'tools': [{'name': 'get_current_weather_by_location', 'description': '根据城市获取当前天气', 'parameters': {'type': 'object', 'properties': {'location': {'description': '城市名称 e.g. 北京,上海,武汉'}}, 'required': ['location']}}, {'name': 'recommend_dress_by_weather', 'description': '根据天气信息推荐穿衣风格', 'parameters': {'type': 'object', 'properties': {'temp': {'description': '温度(摄氏度) e.g. 36.4,37.8'}, 'wind_scale': {'description': '风力大小 e.g. 1,2,3'}}, 'required': ['temp']}}, {'name': 'cocktail_list', 'description': '获取流行的鸡尾酒清单', 'parameters': {}}, {'name': 'cocktail_recipe_suggestions', 'description': '获取鸡尾酒配方建议', 'parameters': {}}, {'name': 'get_news_about_birthday', 'description': '获取生日庆祝相关的新闻文章', 'parameters': {}}]}, {'role': 'user', 'content': '扑克牌有什么玩法?'}, {'role': 'assistant', 'metadata': 'cocktail_list', 'content': '
-
May I ask whether you used quantization or TensorRT acceleration when deploying this to production?
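(For reference: the ChatGLM repos document a built-in quantize() helper, so 4-bit loading can look roughly like the sketch below; verify the exact call against the README for your model version, and note that TensorRT-LLM would be a separate integration path.)

from transformers import AutoTokenizer, AutoModel

model_path = "/data/models/llm/chatglm3-6b/"  # same local path as in the code above
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# Quantize the weights to 4 bits before moving the model to the GPU; this trades
# a little accuracy for a much smaller memory footprint.
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).quantize(4).cuda()
model = model.eval()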
-
In regular conversation, calculation questions also end up calling the interpreter and causing errors; my guess is that calculation questions were all trained as tool calls.

For broader cases of scenario confusion and the model mixing up tools, they can all be discussed here; we will improve this in future models.
-
If the user doesn't provide a parameter, this tool call just makes one up at random. What's going on there?
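One engineering-side guard against invented arguments is to validate the model's tool call against the tool schema before executing anything, and ask the user for missing values instead of running the function. A minimal sketch using the tools list defined earlier in this thread; validate_tool_call is a hypothetical helper name:

def validate_tool_call(tools, call):
    # Find the schema for the tool the model wants to call.
    schema = next((t for t in tools if t["name"] == call.get("name")), None)
    if schema is None:
        return False, f"unknown tool: {call.get('name')}"
    params = call.get("parameters") or {}
    spec = schema.get("parameters") or {}
    required = spec.get("required", [])
    allowed = spec.get("properties", {})
    missing = [k for k in required if k not in params]
    unexpected = [k for k in params if allowed and k not in allowed]
    if missing or unexpected:
        return False, f"missing: {missing}, unexpected: {unexpected}"
    return True, ""

If validation fails, the caller can prompt the user for the missing parameters rather than letting the model make them up.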
-
The OpenAI-format API can fail to handle the model output correctly when processing function calls, for example if the model outputs the text shown in the repository's function-call example.
In openai_api_demo/utils.py, the
does not have, at the very beginning,
-
This output is indeed wrong; the correct one should be
In the langchain adaptation I also work around this shortcoming of the model's output on the engineering side.
-
The current version of the model can't handle this for now; it will have to wait for the next generation.
-
Just curious: in tool mode the model decides which tool to call based on the defined templates, extracts the parameters, and produces customized output. Is that because it was trained for this?
-
I tested many different questions and the model always tries to call a tool, and both the tool it calls and the arguments it passes are wrong.