
Feature: Ollama integration #256

Open
jaroslaw-weber opened this issue Nov 5, 2023 · 6 comments
Labels: feature (New feature or request), pending triage

Comments

@jaroslaw-weber

Feature request

I think it would be nice to have an option to run this with a local model through Ollama.
I could implement it, but there seems to be no recent activity in the pull requests, and I'm worried my PR would get rejected. Please let me know if you greenlight this feature.

Why?

GPT is paid; it would be nice to have a free option.
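For context, a sketch of what talking to Ollama's local server looks like. Assumptions: Ollama listens on localhost:11434 and exposes /api/generate; the model name and prompt here are purely illustrative, not aicommits' actual prompt.

```shell
# Build a generation request for Ollama's local HTTP API.
# Assumes `ollama serve` is running and a model (e.g. llama2) has been pulled.
cat > request.json <<'EOF'
{
  "model": "llama2",
  "prompt": "Write a concise one-line git commit message for this diff:\n<diff>",
  "stream": false
}
EOF

# With the server up, this returns JSON with the text under "response":
# curl -s http://localhost:11434/api/generate -d @request.json
cat request.json
```

An integration would essentially point aicommits at this endpoint instead of the OpenAI API.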

Alternatives

No response

Additional context

No response

@jaroslaw-weber added the "feature (New feature or request)" and "pending triage" labels on Nov 5, 2023
@MikeBirdTech

This would be a massively beneficial feature. I would gladly offer my help with implementing it.

@Nutlope

@mtompkins

Echoing the support here. This would have a material impact on uptake as well.

@AntouanK

If it's of interest, I have used it locally with LM Studio (it supports any open-source LLM): https://lmstudio.ai/

Caveat: most open-source models have short context windows (32k at best), so large diffs won't work.

The process I used:

  • Install and run LM Studio (get the model of your choice and start the local server).
  • Find where the aicommits binary is (it's basically a JS script); you can run which aicommits to find the path.
  • Edit that file and replace api.openai.com with localhost.
  • Replace import Tn from "https"; with import Tn from "http";.
  • Replace port 443 with 1234 (or whatever port you set in LM Studio).

That should be it.
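The three edits above can be scripted with sed. A minimal sketch, demonstrated on a stand-in file rather than the real binary; the minified identifier Tn and exact quoting differ between builds, so check your actual cli.mjs first.

```shell
set -e
# Stand-in for the relevant lines of the bundled aicommits cli.mjs
# (the real file is at the path printed by `which aicommits`).
cat > cli-local.mjs <<'EOF'
import Tn from "https";
const hostname = "api.openai.com";
const port = 443;
EOF

# The three substitutions described in the steps above:
sed -i \
  -e 's|"https"|"http"|' \
  -e 's|api\.openai\.com|localhost|' \
  -e 's|443|1234|' \
  cli-local.mjs

cat cli-local.mjs
```

Running the same substitutions against the real file should produce the edits shown in the screenshots.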

You can set any random OpenAI key just to get it going (for example: aicommits config set OPENAI_KEY=sk-111).

It works for me, but only for small commits.


I hope it gets proper support via flags. And hopefully we'll get bigger context windows on open-source LLMs soon.

Enjoy.

@newptcai

I also made this work. Note that aicommits is actually a link to aicommits/dist/cli.mjs. If you do not want to overwrite the original cli.mjs, make a copy of it in the same location, e.g. aicommits/dist/cli-local.mjs. You can then create a symbolic link to the copy and call it something like aicommits-local. That gives you the option to use both OpenAI and your local LLM.
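A sketch of that copy-and-symlink setup. The install layout is mocked in a scratch directory here; substitute your real .../aicommits/dist path, which depends on how you installed the package.

```shell
set -e
# Mock the install layout (replace with your real aicommits/dist path).
DIST="aicommits-demo/aicommits/dist"
BIN="aicommits-demo/bin"
rm -rf aicommits-demo
mkdir -p "$DIST" "$BIN"
printf 'console.log("openai build");\n' > "$DIST/cli.mjs"

# Copy first, so the OpenAI-backed original stays intact...
cp "$DIST/cli.mjs" "$DIST/cli-local.mjs"
# ...then patch only the copy (stand-in for the localhost edits above).
sed -i 's/openai build/local build/' "$DIST/cli-local.mjs"

# Expose the patched copy under a second command name.
ln -s "$(pwd)/$DIST/cli-local.mjs" "$BIN/aicommits-local"
ls "$BIN"
```

With the real paths, putting the symlink somewhere on your PATH gives you aicommits for OpenAI and aicommits-local for the local server.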

@newptcai

I have to say, GPT-3.5 works a lot better than a local LLM.

@firstnapat


This works for me. Thanks a lot 👍🏼👍🏼


6 participants