
vLLM does not support Functionary Tokenizer #198

Open
ysiaj33802 opened this issue May 29, 2024 · 7 comments

@ysiaj33802

When running the vLLM server for Functionary v2.5 small, vLLM throws an error because it does not support the Functionary tokenizer. I've reverted to v2.4 for now, but thought I should bring this issue up.

ValueError: Model architectures ['FunctionaryForCausalLM'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LlavaForConditionalGeneration', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'OLMoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PhiForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'XverseForCausalLM']

@jeffreymeetkai
Collaborator

Hi, thank you for pointing this out. I have fixed this problem by updating the model repo on Hugging Face. Please delete the existing Functionary model folder cached on your machine and try running the server again.
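
For anyone unsure where that cache lives, here is a minimal sketch of deleting it, assuming the default Hugging Face hub cache location and the functionary-small-v2.5 model id (adjust both to your setup):

# Sketch: remove the cached Functionary snapshot so the updated repo files are re-downloaded.
# Assumes the default HF hub cache layout and the meetkai/functionary-small-v2.5 model id.
import shutil
from pathlib import Path

model_dir = Path.home() / ".cache" / "huggingface" / "hub" / "models--meetkai--functionary-small-v2.5"
if model_dir.exists():
    shutil.rmtree(model_dir)  # the model is re-downloaded the next time the server loads it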

@l4b4r4b4b4
Contributor

l4b4r4b4b4 commented Aug 9, 2024

Hmm, I am still getting this error. Same for v3.1.

@jeffreymeetkai
Collaborator

@l4b4r4b4b4 Can you try again after updating the dependencies to the latest versions and re-downloading the model files (by deleting the existing cache and running server_vllm.py)? I can load functionary-small-v3.1 and functionary-small-v3.2 on my machine and perform inference without any problems right now.

cd functionary
pip install -r requirements.txt

Do let me know if the problem still occurs after performing the corrective actions.

@l4b4r4b4b4
Contributor

l4b4r4b4b4 commented Aug 9, 2024

I actually made a fork with updated transformers (0.44.0) and vllm (0.5.6) dependencies and packed everything into a Dockerfile.

Now Functionary seems to work with my Functionary AWQ quants when specifying the Hugging Face model_path.

The only remaining issue: served_model_name gets picked up by vLLM, but I get a 404 if I try to set it by hand.
I also had to set trust_remote_code=True for the tokenizer.

I will make a PR after some cleanup, if you like.
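
For reference, a minimal sketch of the tokenizer adjustment described above, assuming the tokenizer is loaded through transformers' AutoTokenizer and that the model path points at a Functionary checkpoint (the model id below is illustrative):

# Sketch: load the Functionary tokenizer with trust_remote_code=True so the custom
# tokenization_functionary.py shipped in the model repo is picked up.
from transformers import AutoTokenizer

model_path = "meetkai/functionary-small-v3.1"  # illustrative model id
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)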

@jeffrey-fong
Contributor

Did you git pull from the latest main branch? I remember encountering the problem with served_model_name previously and fixing it. You could be on an older commit of Functionary.

@l4b4r4b4b4
Contributor

l4b4r4b4b4 commented Aug 9, 2024

Yes, I pulled from functionary/main. Here is my fork.

@l4b4r4b4b4
Contributor

Also, I noticed when making AWQ quants with AutoAWQ that tokenization_functionary.py is not copied over into the resulting model. Any idea why? Probably something that needs fixing in AutoAWQ, I guess...
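
As a stopgap until AutoAWQ handles it, a sketch of copying the custom tokenizer module into the quantized output by hand; both paths are assumptions for illustration:

# Sketch: copy tokenization_functionary.py from the original snapshot into the AWQ output
# directory so that loading with trust_remote_code=True still finds it. Paths are illustrative.
import shutil
from pathlib import Path

source_repo = Path("functionary-small-v3.1")        # unquantized model snapshot
quantized_dir = Path("functionary-small-v3.1-awq")  # AutoAWQ output directory
shutil.copy(source_repo / "tokenization_functionary.py", quantized_dir)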
