When calling Zhipu's GLM online service, the response reports the token usage:
- prompt_tokens
- completion_tokens
- total_tokens
Completion(model='glm-4v-plus', created=1731402350, choices=[CompletionChoice(index=0, finish_reason='stop', message=CompletionMessage(content='', role='assistant', tool_calls=None))], request_id='20241112170540c45eaf5a3144487f', id='20241112170540c45eaf5a3144487f', usage=CompletionUsage(prompt_tokens=0, completion_tokens=0, total_tokens=0))
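For reference, with the zhipuai Python SDK those usage fields can be read straight off the response object. A minimal sketch (the api key and message content are placeholders):

from zhipuai import ZhipuAI

client = ZhipuAI(api_key='your-api-key')  # placeholder
response = client.chat.completions.create(
    model='glm-4v-plus',
    messages=[{'role': 'user', 'content': 'What is in the image?'}],
)
# The service counts tokens server-side and returns them with the response.
print(response.usage.prompt_tokens,
      response.usage.completion_tokens,
      response.usage.total_tokens)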
But if I deploy models such as chatglm, minicpm, or qwen locally myself, how do I count tokens?
Example code:
# test.py
import torch
from PIL import Image
from modelscope import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('OpenBMB/MiniCPM-V-2_6', trust_remote_code=True,
                                  attn_implementation='sdpa', torch_dtype=torch.bfloat16)  # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('OpenBMB/MiniCPM-V-2_6', trust_remote_code=True)

image = Image.open('image.png').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': [image, question]}]

res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer
)
print(res)

## if you want to use streaming, please make sure sampling=True and stream=True
## the model.chat will return a generator
res = model.chat(
    image=None,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    stream=True
)

generated_text = ""
for new_text in res:
    generated_text += new_text
    print(new_text, flush=True, end='')
Here the returned res is already a plain string, not a structured object, so there is no usage field to read.
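Since no usage object comes back, one option is to count tokens yourself by running the model's own tokenizer over the text. A minimal sketch for the non-streaming call above (text only: the image tokens a multimodal model adds come from its processor, so the prompt count is an approximation):

# Count tokens by encoding the prompt and the generated text with the
# model's own tokenizer. For multimodal inputs this misses the image
# tokens, so prompt_tokens is approximate.
prompt_tokens = len(tokenizer.encode(question))
completion_tokens = len(tokenizer.encode(res))
usage = {
    'prompt_tokens': prompt_tokens,
    'completion_tokens': completion_tokens,
    'total_tokens': prompt_tokens + completion_tokens,
}
print(usage)

For the streaming call, encode generated_text after the loop finishes instead of res.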
Is this way of counting tokens the same for every LLM?
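The recipe itself (encode the text, count the ids) is the same everywhere, but each model family ships its own tokenizer, so the same string can map to a different number of tokens, and chat templates add different special tokens. A quick sketch comparing two tokenizers (the model names are just examples; any two distinct tokenizers will do):

from transformers import AutoTokenizer

text = 'What is in the image?'
for name in ['Qwen/Qwen2-7B-Instruct', 'THUDM/chatglm3-6b']:
    tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
    # The counts usually differ because each model uses its own vocabulary.
    print(name, len(tok.encode(text)))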