Show HN: Cocommit – A copilot for git commit

Original link: https://github.com/andrewromanenco/cocommit

Cocommit is a command-line tool that uses a large language model (LLM) to improve the quality of your Git commit messages. It analyzes your most recent commit and suggests improvements, pointing out both strengths and areas to improve, such as where more specificity is needed. Built on LangChain, it supports multiple LLM providers, including OpenAI, Groq, and Amazon Bedrock. You specify the LLM with command-line arguments that correspond to LangChain's `init_chat_model` parameters, and shortcuts simplify common configurations (e.g., `-s bedrock-claude37`). Dependencies are installed on demand, so only the packages you actually need are pulled in. Cocommit also offers debugging options for viewing the raw LLM prompt and response. Contributions are welcome; the project is MIT-licensed, and version 2, which will add more features, is in development.

AndrewRDev presented his project, Cocommit, on Hacker News: a lightweight Git commit assistant that supports multiple LLMs through LangChain. He has tested it with Claude 3.7 (via Bedrock) and OpenAI's GPT-4o, and invites the community to try it with other LangChain-supported LLMs and report how it performs, preferably via GitHub Issues. The comments sparked a discussion about how detailed commit messages need to be: andrewfromx argued that concise commit messages suffice, since `git log -p` supplies the detailed context; floodle countered that commit messages should explain the "why" behind a change; artursapek agreed that context is helpful but suggested that, when the code itself is unclear, code comments are the more durable solution.

Original text


Cocommit is a command-line tool that works with your HEAD commit and leverages an LLM of your choice to enhance commit quality.

A good commit consists of multiple elements, but at a minimum, it should have a well-crafted commit message. Cocommit analyzes the message from the last (HEAD) commit and suggests improvements, highlighting both strengths and areas for enhancement.

Cocommit v2 is currently in development and will introduce many new features—see the v2 documentation for details.

Cocommit utilizes LangChain as an abstraction layer to access various Large Language Models (LLMs).

  • Works with the LLM of your choice
  • Provides a simple foundation for building custom AI-powered tools
  • Easily integrates into your development workflow
  • Allows customization of the LLM experience to fit your project needs
git add .
git commit
cocommit -s bedrock-claude37
Calling with: --model_provider bedrock --model us.anthropic.claude-3-7-sonnet-20250219-v1:0 --region_name us-east-1
Calling LLM....
Done in 9.6 seconds.

About your commit:
This is a good quality commit message that follows most best practices. It has a clear, concise title in imperative mood and a brief explanation of why the change was made.

Strengths:
  - Concise first line under 80 characters
  - Uses imperative mood correctly ("Add" not "Added")
  - Provides context in the description about why the change was made
  - Clearly specifies what was changed
  - Follows proper structure with title, blank line, and description

Improvements:
  - The description could be slightly more specific about which Python versions are now supported

********** Proposed Git Message: ********************

Add Python version classifiers to project metadata

This change lists supported Python versions for better visibility on PyPI.

****************************************************

Fixes:
  - Consider specifying which Python versions are now supported in the description for more detail

Amend the commit message? [Y/n]: y
********** Previous Message **********
Add Python version classifiers metadata

Lists supported Python versions for PyPI.
**************************************
Amend ... done!

To install Cocommit, run:

pip install cocommit

You can also install Cocommit from the source code, which allows for customization. For details, see the Contributing Guide.

Handling On-Demand Dependencies

When calling a specific LLM, you may encounter a message like this:

cocommit --model llama3-8b-8192 --model_provider groq
Unable to import langchain_groq. Please install with `pip install -U langchain-groq`

To resolve this, execute the suggested pip install command. Since Cocommit supports a wide variety of LLMs, dependencies are installed only when needed.
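The on-demand pattern described above can be sketched in Python as follows. This is an illustrative sketch, not Cocommit's actual implementation; the function name and error-handling details are assumptions.

```python
import importlib


def load_provider_module(module_name: str, pip_name: str):
    """Import a provider package on demand.

    If the package is missing, exit with a message naming the exact
    pip package to install, instead of a bare ImportError.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        raise SystemExit(
            f"Unable to import {module_name}. "
            f"Please install with `pip install -U {pip_name}`"
        )


# Only users who select a Groq model pay this dependency cost:
# load_provider_module("langchain_groq", "langchain-groq")
```

The benefit of this design is that a user who only ever talks to OpenAI never has to install the Bedrock or Groq client libraries.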

Cocommit interacts with Large Language Models (LLMs) through an abstraction layer provided by LangChain. This allows you to use any LLM provider supported by LangChain’s init_chat_model function.

To use OpenAI, set your API key as the OPENAI_API_KEY environment variable. Alternatively, you can provide the key via the command line using the --api_key <your key> option:

cocommit --model_provider openai --model gpt-4o

Note: On the first run, you may need to install additional dependencies by running: pip install -U langchain-openai.

Simplifying OpenAI Usage with Shortcuts

Cocommit provides shortcuts for common LLM providers and models, allowing you to avoid specifying every parameter manually.

Ensure your chosen LLM provider is authorized (e.g., OpenAI API key or AWS credentials).

Run cocommit with the -s flag followed by a shortcut name (use cocommit --show-shortcuts to list the available names).

For example, if OPENAI_API_KEY is set, the GPT-4o shortcut is equivalent to the full OpenAI GPT-4o command above.

Example: Using Claude 3.7 on Bedrock

Before using Amazon Bedrock, ensure that:

  • The Claude 3.7 model is enabled in your AWS account (region: us-east-1)
  • You have valid credentials to access the model (typically configured in ~/.aws/credentials or via environment variables; refer to AWS documentation for details)

To analyze and enhance the last commit message in a Git repository, run:

cocommit --model_provider bedrock --model us.anthropic.claude-3-7-sonnet-20250219-v1:0 --region_name us-east-1

To simplify execution for future use, save the above command in a shell script.

Simplifying Bedrock Claude 3.7 Usage with Shortcuts

If AWS credentials are configured and Claude 3.7 is enabled in us-east-1, run:

cocommit -s bedrock-claude37

This command is equivalent to the previous Bedrock example.

Viewing Available Shortcuts

To see all available shortcuts, run:

cocommit --show-shortcuts

If a shortcut you need is missing, consider contributing! See the Contributing Guide for details.

Cocommit leverages LangChain’s init_chat_model function to call different LLMs. Command-line arguments map directly to its parameters.

  • The --model parameter corresponds to the model argument in init_chat_model. It's the first positional parameter in LangChain's documentation.
  • Other command-line parameters are passed directly to init_chat_model (use underscores (_) rather than hyphens (-) in parameter names, as in LangChain’s documentation).

Recommended usage: Always specify --model and --model_provider explicitly, unless using a shortcut.
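The argument mapping described above can be sketched as follows. This is a hypothetical Python sketch of the idea, not Cocommit's actual parser; the function name is invented for illustration.

```python
def cli_args_to_init_chat_model_kwargs(argv):
    """Turn `--key value` pairs into arguments for LangChain's
    init_chat_model: --model becomes the positional `model` argument,
    and every other flag passes through as a keyword argument
    (flag names already use underscores, as init_chat_model expects).
    """
    pairs = dict(zip(argv[::2], argv[1::2]))
    model = pairs.pop("--model")
    kwargs = {k.lstrip("-"): v for k, v in pairs.items()}
    return model, kwargs


# The Bedrock example from the transcript above:
argv = ["--model_provider", "bedrock",
        "--model", "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        "--region_name", "us-east-1"]
model, kwargs = cli_args_to_init_chat_model_kwargs(argv)
# init_chat_model(model, **kwargs) would then construct the chat model.
```

This shows why the underscore convention matters: each flag name must match an init_chat_model keyword exactly to be forwarded correctly.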

Cocommit supports debugging features, including:

  • Viewing the raw LLM prompt
  • Displaying the raw LLM response

To explore available options, run:

cocommit --help

Contributions are appreciated! If you'd like to get started, please review the contributing guidelines.

This project is licensed under the MIT License.
