Version
sourcegraph.cody-ai: v1.35.1726240556 (pre-release)
VS Code: 1.94.0-insider (user setup)
ollama: v0.3.10
Describe the bug
When using the following settings:
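Roughly, the relevant settings.json entries looked like the sketch below. This is a minimal sketch, assuming the documented `cody.autocomplete.advanced.provider` and `cody.autocomplete.experimental.ollamaOptions` keys; the URL is Ollama's default endpoint and the model is one of those tested below.

```jsonc
{
  // Minimal sketch, not the exact settings: assumes Cody's documented keys
  // for the experimental Ollama autocomplete provider.
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    "url": "http://localhost:11434",          // Ollama's default endpoint
    "model": "granite-code:8b-base-q8_0"      // one of the models tested
  }
}
```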
Autocomplete behavior with Cody's experimental Ollama provider does not work as expected across all indentation levels. For example:

- Using granite-code:8b-base-q8_0 with Cody, suggestions appear only at the root indentation level. If the cursor is at an inner indentation level (e.g., within a function body), no suggestions are provided unless I hit backspace to move the cursor back to the root level.
- Using the yi-coder:9b-base-q8_0 and deepseek-coder-v2:16b-lite-base-q6_K models with Cody's autocomplete gives inconsistent results: there is a ~50% chance of getting a suggestion at the correct indentation level, but only the first token is suggested rather than a complete line of code. Moving back to the root indentation level yields no suggestions at all, unlike with the Granite model.

In comparison, using these models with the Continue.continue extension provides correct autocomplete behavior, including suggestions at all indentation levels. I made sure that no other extensions offering autosuggestions were active.
Expected behavior
Autocomplete should function consistently at all indentation levels, providing suggestions regardless of cursor placement in the code. Additionally, models such as yi-coder:9b-base-q8_0 and deepseek-coder-v2:16b-lite-base-q6_K should provide complete code suggestions rather than partial tokens, and models like granite-code:8b-base-q8_0 should work as they do in the Continue.continue extension, offering suggestions at any indentation level without needing manual cursor adjustment.
Additional context
The issues were experienced while running Cody's experimental Ollama provider with the versions listed above.
Both the Continue extension and Cody's experimental autocomplete are using the same model configurations.
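For comparison, here is a minimal sketch of the matching Continue setup, assuming Continue's documented `tabAutocompleteModel` block in config.json with the `ollama` provider; the title is arbitrary, and the model matches the one used with Cody above.

```jsonc
{
  // Minimal sketch of ~/.continue/config.json: assumes Continue's documented
  // "tabAutocompleteModel" block with the "ollama" provider.
  "tabAutocompleteModel": {
    "title": "granite-code",                  // arbitrary display name
    "provider": "ollama",
    "model": "granite-code:8b-base-q8_0"      // same model as in Cody
  }
}
```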