If you think this is expensive, wait until you start playing with Claude Opus. Sooner or later I will declare bankruptcy. Nice product, BTW. I really liked the UI; it's very polished.

---

This looked cool, and I was excited to try it until I realized that I either need a subscription or need to set up a server. Why does this need a server, when Aider just works via the CLI?

---

Agreed on providing a Docker image. I made an issue to track it here: https://github.com/plandex-ai/plandex/issues/78

I do want to make self-hosting as easy as possible. In my experience, there will still be enough folks who prefer cloud to make it work :)

There's a local .plandex directory in the project which just stores the project id, and a $HOME/.plandex-home directory that stores some local metadata on each project--so far just the current plan and current branch.

---

I see, thanks for the explanation! If you're only storing a bit of data, removing the requirement for a local directory would make deployment easier; that metadata could just go into the database.

---

How do you situate changes in a file? That seems like the hard part to me, since the LLM can't necessarily count well enough to output a patch with line numbers.

---

It does use line numbers, which definitely aren't infallible. That's why a `plandex changes` TUI is included to review changes before applying. Unfortunately no one has figured out a file update strategy yet that doesn't make occasional mistakes--probably we'll need either next-gen models or fine-tuning to get there.

That said, counting isn't necessarily required to use line numbers. If line numbers are included in the file when it's sent to the model, it becomes a text analysis task rather than a counting task. Here are the relevant prompts: https://github.com/plandex-ai/plandex/blob/main/app/server/m...
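
To make the "text analysis task" point concrete, here is a minimal sketch (my own illustration, not Plandex's actual code) of numbering a file before it goes to the model, then applying a line-range replacement the model sends back:

```python
# Illustrative sketch only -- not taken from Plandex.
# Numbering each line before sending the file lets the model quote
# line numbers it has literally seen, instead of counting lines itself.

def number_lines(text: str) -> str:
    """Prefix every line with its 1-based line number."""
    return "\n".join(
        f"{i}: {line}" for i, line in enumerate(text.splitlines(), start=1)
    )

def apply_replacement(text: str, start: int, end: int, new_lines: list[str]) -> str:
    """Replace lines start..end (1-based, inclusive) with new_lines."""
    lines = text.splitlines()
    lines[start - 1:end] = new_lines
    return "\n".join(lines)

original = "def add(a, b):\n    return a - b"
print(number_lines(original))
# 1: def add(a, b):
# 2:     return a - b

# Suppose the model replies: replace lines 2-2 with "    return a + b"
print(apply_replacement(original, 2, 2, ["    return a + b"]))
```

The model never has to count; it only has to echo back numbers it can see in the prompt, which is the "text analysis" framing described above.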

---

You could do this with Plandex (or Aider... or ChatGPT) by having it output a shell script, then `chmod +x` it and run it.

I experimented early on with doing script execution like this in Plandex, but decided to just focus on writing and updating files, as it seemed questionable whether execution could be made reliable enough to be worthwhile without significant model advances. That said, I'd like to revisit it eventually. Some more constrained tasks, like copying and moving files around, are likely doable without full-on shell script execution, though some scary failure cases are possible here if the model gets the paths wrong in a really bad way.

OpenInterpreter is another project you could check out that is more focused on code/script execution: https://github.com/OpenInterpreter/open-interpreter
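
For illustration, the "output a shell script, then run it" flow might look like this (my sketch; `model_output` stands in for whatever the LLM actually returned, so review it before the last step):

```python
# Sketch of the "have the model write a script, then execute it" flow.
import os
import stat
import subprocess

# Assume this came back from Plandex/Aider/ChatGPT.
model_output = "#!/bin/sh\nmkdir -p src/utils\nmv helpers.py src/utils/\n"

script_path = "ai_task.sh"
with open(script_path, "w") as f:
    f.write(model_output)

# Equivalent of `chmod +x`.
os.chmod(script_path, os.stat(script_path).st_mode | stat.S_IXUSR)

# A wrong path in a move or delete is exactly the scary failure case
# mentioned above, so read the script before running it.
subprocess.run(["./" + script_path], check=True)
```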

---

> Nah these things are all stupid as hell. Any back and forth between a human and an LLM in terms of problem solving coding tasks is an absolute disaster.

I actually agree in the general case, but for specific applications these tools can be seriously awesome. Case in point - this repo of mine, which I think it's fair to say was 80% written by GPT-4 via Aider: https://github.com/epiccoleman/scrapio

Now of course this is a very simple project, which is obviously going to have better results. And if you read through the commit history [1], you can see that I had to have a pretty good idea of what had to be done to get useful output from the LLM. There are places where I had to figure out something that the LLM was never going to get on its own, places where I made manual changes because directing the AI to do it would have been more trouble than it was worth, etc.

But to me, the cool thing about this project was that I just wouldn't have bothered to do it if I had to do all the work myself. Realistically I just wanted to download and process a list of like 15 URLs, and I don't think the time invested in writing a scraper would have made sense for the amount of time I would have saved if I had to figure it all out myself. But because I knew specifically what needed to happen, and was able to provide detailed requirements, I saved a ton of time and labor and wound up with something useful.

I've tried to use these sorts of tools for tasks in bigger and more complicated repos, and I agree that in those cases they really tend to swing and miss more often than not. But if you're smart enough to use it as the tool it is and recognize the limitations, LLM-aided dev can be seriously great.

[1]: https://github.com/epiccoleman/scrapio/commits/master/?befor...

---

I've been using https://cursor.sh/ heavily for about 2 months and I'm pretty happy. Cursor is a fork of VSCode focused on AI. I'd prefer to use something totally open-source, but Cursor is free, gets regular updates, and I can use my OpenAI API key.

The diff view works well with AI coding assistants, and I end up parallelizing more: I let Cursor do its thing while I'm already looking at the next file.

I love aider too! I've used it to automate things such as maintaining a translated version of a page in a git pre-commit hook.
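
A pre-commit hook along those lines might look roughly like the sketch below. This is my reconstruction, not the commenter's actual hook: the file names are hypothetical, and it assumes aider's `--message` and `--yes` flags for non-interactive runs.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit: keep a translated page in sync.
# File names (index.md, index.fr.md) are made up for illustration.
import subprocess

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.split()

if "index.md" in staged:
    # Ask aider to refresh the translation without prompting.
    subprocess.run(
        ["aider", "--yes", "--message",
         "Update index.fr.md to match the latest index.md (French translation).",
         "index.md", "index.fr.md"],
        check=True,
    )
    # Stage the refreshed translation so it lands in the same commit.
    subprocess.run(["git", "add", "index.fr.md"], check=True)
```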

---

In my experience, Supermaven makes Copilot look like a joke, and they've just released a JetBrains plugin. YMMV. It's just code suggestions though, no chat box.

---

I'll be trying this today, thank you for the suggestion. A 300K context window is awesome. I actually like completions more; they feel more natural. I'm fine going to ChatGPT/Opus to chat if needed.

---

Aider works a little differently, in that it doesn't just autocomplete or work at the function level. It can solve much bigger problems.

---

I just tried it and it's amazingly cool, but the quality of the output just isn't there for me yet. It makes too many subtle errors to be as useful as the screenshots and GIFs make it look.

---

Hi, for somewhere between GitHub Copilot and aider, you can try the desktop app 16x Prompt. I have been using it daily for the past few months and it suits my working style nicely.

It is capable of handling complex tasks like feature development and refactoring across multiple files, but it doesn't try to generate diffs and apply them automatically. Instead, you get a response from the LLM that is easy to read and allows you, as a developer, to quickly apply it to your existing codebase.

You can check it out here: https://prompt.16x.engineer/

---

I’ve used aider to understand new codebases using technologies I don’t know and it did a fantastic job; much faster than grep/find + google.

---

If Emacs hard-blocks on LSP requests, that may be on Emacs as well.
I recommend you try ruff-lsp. Although it doesn't cover everything and is more for linting, it's high quality.

---

I'm still waiting for that bastard Devin to write my killer app. Now you want me to code my own killer app with an AI micromanaging me?

---

> - context aware autocomplete for real

If the AI tool can fetch related classes for the code that I'm working with, that would be so helpful!
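
One cheap approximation, sketched here with just the Python standard library (my illustration; `app/views.py` is a hypothetical path): statically collect the names a file references, then scan the repo for matching class definitions to pull into the model's context.

```python
# Sketch: find classes a file references, then locate their definitions
# elsewhere in the repo so they can be added to the LLM's context.
import ast
import pathlib

def referenced_names(path: str) -> set[str]:
    """All bare names referenced in the file (a superset of class usages)."""
    tree = ast.parse(pathlib.Path(path).read_text())
    return {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}

def find_class_defs(repo: str, names: set[str]) -> dict[str, str]:
    """Map each referenced class name to the file that defines it."""
    defs: dict[str, str] = {}
    for py in pathlib.Path(repo).rglob("*.py"):
        try:
            tree = ast.parse(py.read_text())
        except SyntaxError:
            continue  # skip files that don't parse
        for node in ast.walk(tree):
            if isinstance(node, ast.ClassDef) and node.name in names:
                defs[node.name] = str(py)
    return defs

# Feed the matching definitions into the prompt alongside the file being edited.
print(find_class_defs(".", referenced_names("app/views.py")))
```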

---

It gets commit messages wrong. Commit messages should signal intent, not what the patch does. "changing an enum" is a horrible commit message.

---

Exactly. How do Python people put up with that? Have none of them tried to run multiple Python projects over time on the same Linux install, with command-line access to all of them at once?

---

I use synthetic aperture pair programming. I write some code and get it working. Then I get pulled off it for a few months and come back to it. Me + 3 months: da fuk.

---

I have used this technique for months and it's great: https://x.com/arjie/status/1575201117595926530

I just have Copilot in my editor and switch into my editor with C-x C-e for AI completion. I use neovim like in the example, but you can use whatever you like.

EDIT: Oh, never mind. I see what it is now. It's a terminal-based flow for editing code. Mine is for writing command lines live.

---

That's just not the reality anymore. You can run a decent open source coding language model on local hardware. Just needs a bit of work and it's not quite as seamless.

---

If there is a good library for it within the domain, ideally at some point the AI will suggest it. Can't wait until the AI writes its own library that it will then reference in future answers.

---

Well, regular search means switching to a different application, with an implied context switch. For many things, it definitely takes longer than just using GitHub Copilot.

---

Yea, that's where I've landed. Telling it what to do is time consuming. Telling it what I want to do in broader terms and asking for code examples is a lot better, especially for something I don't know how to do. Otherwise, the autocomplete/suggestions in the editor are great for the minutiae and tedious crap and utility functions. Probably saves me about 20% of my typing, which is great on hands that have been typing for 20-odd years.

It's also good for finding tools and libraries (when it doesn't hallucinate) since https://libs.garden disappeared inexplicably (dunno what to do on Friday nights now that I can't browse through that wonderful site till 2am).

---

I studied Aider's code and prompts quite a bit in the early stages of building Plandex [1]. I'm grateful to Paul for building it and making it open source.
1 - https://github.com/plandex-ai/plandex