Releases: drorm/gish
Upgrade to V4 of the OpenAI API
Fix for bug #18, where an "output" directory is created.
V0.4.0
- Show history of requests
- Resume a chat to requests prior to the last one using number in history
- Support the --extra option for additional OpenAI flags such as temperature and max_tokens
- Make the prompt work when not streamed
- Show the prompt on dry run
- Make -c and -e work together
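As an illustration, the new options above might be combined in a single invocation. Note that the key=value format shown for --extra is an assumption, not something the release notes specify:

```shell
# Resume the previous chat (-c), open the result in an editor (-e),
# and pass extra OpenAI parameters via --extra.
# The "temperature=0.2,max_tokens=512" value format is hypothetical.
gish -c -e --extra "temperature=0.2,max_tokens=512" "Shorten this function"
```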
V0.3.0
- Support for generating an app: handle multiple files and save them in a directory.
- Added examples for the above:
- webapp.txt prompt to generate web-applications
- nodejs.txt prompt to generate node.js apps
- Add support for ignoring comment lines that start with #
- Fix token cost format.
Release V0.2.1
Fix for #9
Release V0.2
- New chat functionality in interactive mode with a new "chat" command.
- New chat functionality in CLI mode: new option "-c --chat" to treat the input as a chat and add the previous request to the history
- History file is now in ~/.gish/history.json and supports chats.
- New examples in the example directory for using gish: automatic git commit messages, coding new features, error handling, creating unit tests, editing README.md, generating a TOC, and making git commits
- New user settings file in ~/.gish/settings.json where you can customize your settings
- Disable the spinner when stdout is not a TTY, enabling "gish > file" or "gish | command"
- "help" command to display available commands in interactive mode.
- Add version option to show current version.
- Now available as an NPM package
Initial Release 0.1
This is the initial release. Features include:
- Command-line, piped, and interactive modes
- Incorporating files into prompts with the #import statement
- Easy diffing of generated files against the original using the #diff statement or the -d flag
- Local history and review of previous questions and prompts
- The ability to save responses
- The option to stream results or receive them all at once
- Stats for each request: the number of tokens used, the cost, and the elapsed time
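A minimal prompt file using the #import and #diff statements mentioned above might look like the sketch below. The file name and the exact placement of #diff are assumptions for illustration, not taken from the release notes:

```text
#import src/utils.js
Rewrite the function above to add input validation.
#diff src/utils.js
```

Here #import pulls the referenced file into the prompt, and #diff asks gish to diff the generated output against the original file.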