(Comments)

Original link: https://news.ycombinator.com/item?id=43708025

This Hacker News thread discusses OpenAI's new Codex CLI, a "lightweight coding agent." Some commenters compare it to Anthropic's Claude Code and note that OpenAI's offering is open source. One commenter suggests Plandex v2 as a more capable, efficient, and cost-effective open-source alternative that supports multiple providers and can index large projects. Another user praises Codex CLI's screenshot-to-app prototyping. There is some concern about exposing the OpenAI API key through an environment variable, though one reply counters that running untrusted code is the bigger risk. Some users want broad model support like Aider's, and an autonomous mode in a standalone tool. The thread also touches on the rising trend of terminal-based AI tools, mentioning Aider, Parllama, and the `llm` utility. One commenter notes the demo GIF runs too fast to be informative, while another jokes that a tool requiring 4-8 GB of RAM can hardly be called "lightweight."


Original text
OpenAI Codex CLI: Lightweight coding agent that runs in your terminal (github.com/openai)
61 points by mfiguiere | 14 comments

Hi folks! If you're interested in an open source alternative to this/Claude Code that is actually more capable, efficient, and cost-effective than what can be achieved with a single provider's models, give Plandex v2 [1] a try.

I launched v2 a few weeks ago and it's now running well: very stable and reliable overall. Benchmarks are on my todo list, but I honestly think it's SOTA-tier for agentic AI codegen on large projects and complex tasks. It's more agentic than aider, more configurable and tightly controlled than Devin, and more provider-agnostic/multi-provider/open-source than Claude Code or this new competitor from OpenAI.

I'm still working on getting the very latest models integrated. Gemini 2.5 Pro and these new OpenAI models will be in by the end of the week, I hope. By default it currently supports 2M tokens of context directly and can index massive projects of 20M tokens and beyond.

Very interested to hear HN’s thoughts and feedback. I’m planning a Show HN within the next few days.

1 - https://github.com/plandex-ai/plandex



This is pretty neat! I was able to use it for a few use cases where it got things right the first time. The ability to use a screenshot to create an application is nice for rapid prototyping. And it's good to see them open-sourcing it, unlike Claude Code.


Next, set your OpenAI API key as an environment variable:

export OPENAI_API_KEY="your-api-key-here"

Note: This command sets the key only for your current terminal session. To make it permanent, add the export line to your shell's configuration file (e.g., ~/.zshrc).
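For example, a minimal sketch of making it permanent for zsh users (adjust the config file for your shell; the placeholder key is from the snippet above):

    # Append the export to your zsh config so new sessions pick it up.
    echo 'export OPENAI_API_KEY="your-api-key-here"' >> ~/.zshrc

    # Apply it to the current session as well.
    source ~/.zshrc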

Can't any 3rd party utility running in the same shell session phone home with the API key? I'd ideally want only codex to be able to access this var



If you run malicious 3rd party code as your main user then you have bigger problems than an OpenAI API key leaking.
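That said, a common shell-level mitigation, independent of Codex itself, is to set the variable for a single invocation rather than exporting it session-wide. A sketch:

    # The key is visible only to this one codex process (and anything
    # it spawns), not to the rest of the shell session.
    OPENAI_API_KEY="your-api-key-here" codex

    # To also keep the key out of shell history and dotfiles, read it
    # from a protected file at call time (hypothetical path):
    OPENAI_API_KEY="$(cat ~/.config/openai/key)" codex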


If one of these tools had broad model support (like aider), it would be a game changer.


If aider's creator sees this: any plans to implement an agentic mode, or something more autonomous like the way Claude Code works? I'd love to have an independent tool doing that.


It's wild that both OpenAI and Anthropic are releasing tools that run in the terminal, especially with a TUI, which is what we showcase.

aider was one of the first we listed as terminal tool of the week (0) last year (1).

And we recently listed parllama (2) if you like to run offline and online models in the terminal with a full TUI.

(0) https://terminaltrove.com/tool-of-the-week/

(1) https://terminaltrove.com/aider/

(2) https://terminaltrove.com/parllama/



related demo/intro video: https://x.com/OpenAIDevs/status/1912556874211422572

this is a direct answer to Claude Code, which has been shipping furiously: https://x.com/_catwu/status/1903130881205977320

and is not open source; there are unverified comments that they have DMCA'ed decompilations https://x.com/vikhyatk/status/1899997417736724858?s=46

by total coincidence we're releasing our Claude Code interview later this week; it touches on a lot of these points, plus why code agent CLIs are an underrated corner of the SWE design space

(TLDR you can use it like a linux utility - similar to @simonw's `llm` - to sprinkle intelligence in all sorts of things like CI/PR review without the overhead of buying a Devin or a Copilot SaaS)
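A rough sketch of that pattern using simonw's `llm` CLI, which takes a prompt as an argument and reads piped input from stdin (the review prompt here is made up):

    # Pipe a branch diff into a model from a CI job for a quick review note.
    # Assumes `llm` is installed and configured with an API key.
    git diff origin/main...HEAD | llm "Review this diff and flag bugs or risky changes"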

if you are a Claude Code (and now OAI Codex) power user we want to hear use cases - CFP closing soon, apply here https://sessionize.com/ai-engineer-worlds-fair-2025



  RAM  4‑GB minimum (8‑GB recommended)
It's a CLI...


Possibly the heaviest "lightweight" CLI tool ever made haha.


What's the point of making the gif run so fast you can't even see shit?


People somehow seem to be averse to making the shift from GIF to H.264.


That's the model speed :)


Not really, they don't even give you a second to read the output before it loops back again.





