This repository has been archived by the owner on Sep 30, 2023. It is now read-only.

Need Elixir Language Support #22

Open
neel-desh opened this issue May 5, 2023 · 3 comments

Comments

@neel-desh

Hi @ravenscroftj

I'm a complete newbie, and I would love to add Elixir language support to this project. How can we go about it?

I'm ready to dedicate all my free time to this!

@ravenscroftj
Owner

ravenscroftj commented May 10, 2023

Thanks for taking the time to open this ticket @neel-desh - sorry for the slow response.

In order to add new languages, the model needs to be fine-tuned and updated, as I mentioned in #16.

It's currently not possible to train or fine-tune the quantized models, so we'd need to take the original unquantized models and fine-tune those.

Of course the other challenge is finding an appropriate corpus to fine-tune the model on. You'd need a suitably large set of Elixir lang repos with which to train it and then apply the training procedure.

I'm definitely interested in helping to support this if you're interested in trying to build parts of it yourself. I am also very interested in supporting LoRA, as I can see a use case for allowing users (or even companies) to fine-tune TurboPilot on their own historical code repos and have it produce suggestions in their style.

@neel-desh
Author

Yes, I'm so sorry I missed the email notification.
I'm free and I'll be more than happy to contribute, not only for Elixir but for any other languages that are required.

@ravenscroftj
Owner

Hey @neel-desh - sorry, I've been pretty busy recently too and only just remembered that you replied.

Since you opened this thread, a few new models have come out that I want to add support for in TurboPilot, but the only one I can find that already supports Elixir is StarCoder, which requires at least 20GB of RAM to run in PyTorch and would probably still need ~10-12GB of RAM to run as a ggml model.

Therefore, I still think there is some merit in trying to add support for Elixir to smaller models and using that as a learning exercise for adding other languages.

I'm going to have a go at scoping out all the necessary steps here. I still think that a LoRA adapter is probably the way to go (basically we plug in some additional layers to the model to give it the ability to deal with a specific language).
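
To make the "plug in some additional layers" idea a bit more concrete, here's a rough, purely illustrative sketch of the low-rank update a LoRA adapter learns. The shapes, rank and scaling factor are all made-up values, not anything tied to a specific TurboPilot model:

```python
# Illustrative sketch of the LoRA idea only: the frozen weight W stays
# untouched and a low-rank correction B @ A is trained instead.
# All names and shapes here are hypothetical.
import torch

d_model, rank, alpha = 1024, 8, 16       # rank r << d_model keeps the adapter tiny
W = torch.randn(d_model, d_model)        # frozen pretrained projection weight
A = torch.randn(rank, d_model) * 0.01    # trainable "down" projection
B = torch.zeros(d_model, rank)           # trainable "up" projection, starts at zero

def lora_forward(x: torch.Tensor) -> torch.Tensor:
    # Base output plus the scaled low-rank correction; at initialisation the
    # correction is zero, so the adapted model behaves exactly like the original.
    return x @ W.T + (alpha / rank) * ((x @ A.T) @ B.T)
```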

In order to train a LoRA adapter we need to gather training data - you'll need a whole bunch of Elixir code. The training set that StarCoder was trained on is The Stack, which is a huge dataset covering 300+ different languages. I have just verified that it also has a subset of Elixir files in it, so you could try extracting that as a first step, although you will need to pay attention to the license notices when you first log in.
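
As a possible first step, something like the following untested sketch could stream just the Elixir subset with the Hugging Face `datasets` library. The `data_dir="data/elixir"` path and the `content` field name are assumptions based on how the dataset appears to be organised per language, and the dataset is gated, so you have to accept its terms on the Hub before this will work:

```python
# Rough sketch (untested): stream the assumed Elixir subset of The Stack
# and dump file contents into a local corpus file.
from datasets import load_dataset

ds = load_dataset(
    "bigcode/the-stack",
    data_dir="data/elixir",   # assumed per-language directory layout
    split="train",
    streaming=True,           # avoids downloading the full multi-TB dataset
)

with open("elixir_corpus.txt", "w") as f:
    for i, example in enumerate(ds):
        f.write(example["content"] + "\n")  # "content" assumed to hold the raw source
        if i >= 100_000:                    # arbitrary cap for a first experiment
            break
```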

The next steps will be to train a LoRA adapter model, add support for LoRA models to TurboPilot based on the existing Llama support for them, and then quantize and integrate the two models. I'll do some more research on what is involved.
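
For the adapter training step, something along these lines might be a starting point. This is a hedged sketch using Hugging Face PEFT, not a tested recipe: the base model name `bigcode/santacoder`, the `c_attn` target module and all hyperparameters are assumptions for illustration only.

```python
# Hedged sketch: attach a LoRA adapter to a smaller code model with PEFT.
# Model name, target modules and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "bigcode/santacoder"                      # assumed smaller code model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],                   # assumed attention projection name
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()               # only the adapter weights train

# The tokenised Elixir corpus gathered above would then be fed to a standard
# transformers Trainer run, and the adapter saved on its own, e.g.:
# model.save_pretrained("elixir-lora")           # saves adapter weights only
```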
