(comments)

Original link: https://news.ycombinator.com/item?id=41353835

Microsoft's AI development tooling promises a seamless user experience by combining various Microsoft services such as Visual Studio, C#, and Azure. While it may offer a decent bundle of features, its overall quality still lags slightly behind competitors, and getting the most out of it requires committing fully to the Microsoft ecosystem. Free open-source alternatives are expected soon, and self-hosted solutions keep improving. The main strength lies in intelligent autocomplete, which understands context more accurately than existing tools and suggests appropriate functionality; for example, once a variable is set, it can correctly suggest retrieving a value from a map. Despite some issues, the multi-file editing feature is impressive, especially for less experienced developers working on small projects. However, it struggles with conflicting identifiers, which requires programming expertise to resolve. Notably, the author of the competing tool Avante.nvim praised the offering, expressing interest in adopting some aspects while avoiding others to improve user convenience.

Related articles

Original article


I really like Cursor; however, I think a good open-source alternative will likely overtake it soon.

Keep in mind Cursor is just a fork of VSCode, with AI features that are pretty much just embedded extensions. Their product is great, but many users would prefer a bring-your-own-key setup where they select their own model providers.



> Keep in mind Cursor is just a fork of VSCode, with AI features that are pretty much just embedded extensions

Sounds to me like Rabbit R1. A company picks up existing open source tools, builds their own extension/UI on top, and ships it as something entirely new and novel. It'll grab a lot of attention short term, but others will quickly figure out how to make their own implementation that runs directly on the existing open source tools.



That's what we are doing at Aide (shameless plug since I work on Aide)

I think using the LSP is not just a trivial task of grabbing definitions; there's also context management and the speed at which inference works. On top of that, we also have to grab similar snippets from the surrounding code (open files) so the code we generate actually belongs to your codebase.

Lots of interesting challenges, but it's a really fun space to work on.

https://aide.dev/



Continue (YC) is an open-source VSCode extension. The best thing about Cursor is their autocomplete feature, their own fine-tuned model. It will be a while before others build something close to it.



Not sure about continue, but I use Cursor for work, and it's really good at predicting simple operations. I rarely use it to actually generate code but it's pretty good at completing the code I'm already writing / the thing I'm already doing.



>>Their product is great, but many users would prefer a bring-your-own-key setup where they select their own model providers.

On the contrary. Most enterprise users will prefer one package they can buy rather than buying the thing piecemeal.

A big reason VSCode won was that it provided a lot of things out of the box, saving the user a round trip through the config-hell rabbit hole that modern vim/emacs ecosystems are.

If you want to sell developer tooling products, provide as much as possible out of the box.

People want to solve their problems using your tool, not fix bugs or build missing features in your tool.



>saving the user a round trip through the config-hell rabbit hole that modern vim/emacs ecosystems are

That used to be a valid problem, but times have changed. For instance, Neovim has things now like kickstart.nvim and lazy.nvim that solve this problem. I've been test-driving LazyVim for the past month or so and I don't have to config anything anymore because the updates and plugins are sane choices.



vi(m) is still the best command-line editing tool out there, and I use it almost a dozen times every day.

But the IDE ship sailed long ago. It's just that modern IDEs simply do a lot out of the box that isn't possible to configure quickly, or well enough, with external packages. Most of the time those packages don't work well with each other.

For example, the Python formatter often interferes with vim's modal editing, etc.

With AI, this gap will only widen further.



With that argument, it would be reasonable to assume Microsoft will just clone the key features (Composer etc) and bake them into the next generation of Copilot on VSCode.

Microsoft has its top-tier distribution advantages, plus they can build native integrations with GitHub/Azure, etc., to take things to a new level - think one-click deployments built into VSCode.

In fact, given the rising popularity of Replit/Vercel/etc I wouldn't be surprised if Microsoft is cooking as we speak.



That relies on MS's ability to deliver something complete and comprehensive that also has a good UX.

Most of the time they can deliver on...some fraction of some of those.

This is just the AI version of "oh they have Visual Studio + C# + Azure, C# can do FE + BE with ASP.net etc etc so why would anyone ever use anything else?"...and yet, here we are.

They'll deliver some janky combo of features, it'll be stapled on top of VS Code, it'll be about 65% as good as everything else, but you've got to be all-in on MS to really get the value out of it, which will be great for a few enterprise clients, and middling to average for everyone else.

Open-source versions of this will be available soon enough, self-hosted models are only getting better, and many orgs are loath to spend any more than the absolute minimum on dev tools (why would they pay for fancy ML stuff that devs will want to run personal versions of anyway?), so what's the real moat or value prop here?



A big reason Copilot spread so fast is that people already trust GitHub with their code - enabling AI doesn't really change the risk. If GH wanted to break its TOS and train on your code, it could, even without Copilot, if you're using GH for private repos.

Any other third party needs to get vetted/trusted - I would be the first to doubt an AI startup.



As more and more people use Copilot or even Cursor, there is likely to be a mad increase in productivity.

But also note, there is likely to be a situation like the early days of the internet, where people write lots of code, but also bug-ridden and unreadable code.

It will take a few years for things to return to routine.



Meh - personally I see Copilot as a mild boost/autocomplete++ - it helps me type out stuff that's obvious in context but needs to be typed out anyway.

I've had it disabled when switching between environments and sometimes didn't notice for a day, depending on what I'm doing.



I've explored both Zed and Cursor recently and have ended up preferring Zed by a fair margin. Unfortunately their documentation is lacking, but the tool has a pretty coherent design so it's not too bad to figure out. This blog post was the most useful resource I could find to understand the tool: https://zed.dev/blog/zed-ai

For me, the collaboration with Anthropic mentioned there is significant too; it seems auspicious.



The lack of a Windows option makes it harder to justify when alacritty + nvim achieves great speed as well, with all the customizability and whatnot.

Can anyone chime in on whether using Zed on WSL is viable, or whether it loses all the speed benefits?



Does anyone know offhand: if you bring your own key (Anthropic, OpenAI, etc.), does it hit the AI providers directly, or does it pass through Zed's servers first?



For old-schoolers who have been living under a rock for the past few weeks :) how is this different from using Copilot/Copilot-chat?



- Copilot would only predict after the cursor, whereas Cursor predicts nearby edits, which is quite helpful

- Copilot Chat was just a chat sidebar last time I used it; you still had to manually apply any code suggestions. Cursor will apply changes for you. It's very helpful to have a diff of what the AI wants to change.

It's been a while since I've used Copilot though, so Copilot Chat might be more advanced than I'm remembering.

edit: formatting



Thanks! Does "nearby edits" mean edits in the same file or the whole workspace?

I test Copilot Workspace from time to time; it's still far from perfect, but it can already make large-scale changes across multiple files in a repository. Ultimately that's what I want from an AI assistant on my machine: give a prompt and see changes across the whole repo, not just the current file.



> Thanks! Does "nearby edits" mean edits in the same file or the whole workspace?

For the autocomplete, in the same file. So proposing adding more logging when you add a few statements, changing an error check, adding something to the class def or constructor.

They do have a multi-file editing thing called "composer" I think, which I used to make larger changes to an app (e.g. add a new page that lists all the X, and it creates that and the links to it in the other pages).

You might also be interested in aider https://github.com/paul-gauthier/aider for larger changes.



Thanks! Yes, Aider is a good attempt. I tried it a couple of times and ran into a number of issues, but I should give it another try. Integration with an editor (I use nvim) is crucial, though.



> It's been a while since I've used Copilot though, so Copilot Chat might be more advanced than I'm remembering.

Copilot is still surprisingly basic but I've heard rumours that they are working on a version with a lot more features?



I think it's having an agile team focused on this. In the past it was because Cursor indexes your code (vector search), so any question you ask the LLM has the context of your code. Now it's the autocomplete feature (their own model). Next, I think it will be Composer (multi-file editing, still in beta).



I’ve heard great things about Cursor and Claude but haven’t tried them yet. I just feel like: how do I even get started?

To me it feels like trying to explain something (for an LLM) is harder than writing the actual code. Either I know what I want to do, and describing things like iteration in English is more verbose than just writing it; or I don’t know what I want to do, but then can’t coherently explain it. This is related to the “rubber duck method”: trying to explain an idea actually makes one either properly understand it or find out it doesn’t make sense / isn’t worthwhile.

For people who experience the same, do tools like Cursor make you code faster? And how does the LLM handle things you overlook in the explanation: both things you overlooked in general, and things you thought were obvious or simply forgot to include? (Does it typically fill in the missing information correctly, incorrectly but it gets caught early, or incorrectly but convincing-enough that it gets overlooked as well, leading to wasted time spent debugging later?)



At its core, it's just VSCode, so I'm not stuck unable to write code.

In general, it's like autocomplete that understands what you're doing better. If I've added a couple of console.logs and I start writing another after some new variable has been set or whatever, it'll quickly complete that with the obvious thing to add. It'll also start guessing where to move the cursor next as an autocomplete action, so it'll quickly move me back and forth from adding a new var in a class to where I'm using it, for example.

As a quick example, I just added something to look up a value from a map, and the autocomplete suggestion was to properly get it from the map (after 'const thing = ' it added 'const thing = this.things.get(...)'), and then it added a check for whether there was a result, throwing an error if not.
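
Roughly what that completion looked like, as a sketch (the class and names here are made up for illustration, not my actual code):

    // Hypothetical registry class, just to illustrate the completion
    // pattern described above (map lookup plus a missing-value guard).
    interface Thing {
      id: string;
      name: string;
    }

    class ThingRegistry {
      private things = new Map<string, Thing>();

      getThing(id: string): Thing {
        // After typing `const thing = `, the suggested completion was the
        // map lookup, followed by the check-and-throw below.
        const thing = this.things.get(id);
        if (!thing) {
          throw new Error(`No thing found for id: ${id}`);
        }
        return thing;
      }
    }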

It's not perfect. It's much better than I expected.

For larger work, I recently tried their multi-file editing. I am writing a small app to track bouldering attempts, and I don't know React or other things like that so well. I explained the basic setup I needed and it made it. "Let's add a problem page that lists all current problems", "each attempt needs a delete button", "I need it to scan QR codes", "Here's the error message". I mostly just wrote these things and clicked apply-all. I'm not explaining exactly how or what to do; if I were, I'd just do it myself.

I'm surprised at how much it gets right the first time. The only problem it got stuck on that wouldn't be obvious to a novice/non-developer was using "id" somewhere, which clashed with an expected use and caused a weird error. That's where experience helps, having caused very similar kinds of problems before.

Sometimes I think as programmers we like to think of ourselves doing groundbreaking brand new work, but huge amounts of what we do is pretty obvious.



With an LLM integrated into your IDE like Cursor or Copilot, oftentimes the LLM autocompletes the correct code faster than I can think about what must be done next. I’ve been coding for 15 years.



Two answers here:

In languages I know well, I use Copilot like a smart autocomplete. I already know what I want to write and just start typing. Copilot can usually infer very well what I'm going to write for a few lines of code, and it saves time.

In languages I don't know well, where I don't fully know the various standard library and dependency APIs, I write a quick explanation to get the basic code generated and then tweak manually.



The fact that this was created so quickly implies to me that having AI assistance embedded in your editor is not a competitive moat/differentiator.

Curious to see how all this VC money going into editors ends up.



I'm convinced the $60M Cursor round was a blunder. Tools like this and Aider being open source, along with VS Code/Vim/Emacs/IntelliJ's robust plugin support, means they have basically no moat.



The Cody plugin is a great alternative if you prefer JetBrains IDEs. I've tried Cursor several times and the AI integration is fantastic, but the plugin quality is low, navigation and refactorings are worse for me, and I'm struggling to configure it the way I like :(



Btw, if anyone is trying out a move from JetBrains IDEs to Cursor (or the VSCode base), I found it essential to select the JetBrains mapping in the VSCode keyboard config. Many of the refactoring / diff-jumping / commit shortcuts are supported out of the box, and it's a much smoother transition when you don't need to retrain muscle memory or look up whether a given feature is supported while learning the new editor.



I get that it's still early stage, but the dependencies already look like a mess to me. No way I'm installing nui.nvim just to rock this plug-in.



Hello, I am the author of avante.nvim. Thank you for your suggestion, it's very helpful for avante.nvim!

I plan to abandon nui.nvim for implementing the UI (actually, we only use nui's Split now, so it's exceptionally simple to abandon). Regarding the tiktoken_core issue, everything we did was just to make installation easier for users. However, the problem you mentioned is indeed an issue. I plan to revert to our previous approach: only providing installation documentation for tiktoken_core instead of automatically installing it for users.

As for why avante.nvim must depend on tiktoken_core: it's because I've used the powerful prompt caching feature recently introduced by the Anthropic API. This feature can greatly help users save on tokens and significantly improve response speed. However, it requires relatively accurate token counting, as caching only takes effect for more than 1024 tokens; otherwise, adding the caching parameter will result in an error.
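
To sketch the gating logic (shown in TypeScript rather than the plugin's Lua, with a hypothetical countTokens standing in for tiktoken_core; the field shape follows Anthropic's prompt caching docs, so treat it as an illustration, not avante.nvim's actual code):

    // Sketch: only attach the cache_control field when the prompt is long
    // enough to be cacheable. countTokens is a hypothetical stand-in for
    // tiktoken_core (avante.nvim does the real count in Lua).
    type TextBlock = {
      type: "text";
      text: string;
      cache_control?: { type: "ephemeral" };
    };

    declare function countTokens(text: string): number;

    const MIN_CACHEABLE_TOKENS = 1024; // threshold mentioned above

    function buildSystemBlocks(systemPrompt: string): TextBlock[] {
      const block: TextBlock = { type: "text", text: systemPrompt };
      if (countTokens(systemPrompt) > MIN_CACHEABLE_TOKENS) {
        // Field shape per Anthropic's prompt caching docs; omitted for
        // shorter prompts, where sending it would otherwise error out.
        block.cache_control = { type: "ephemeral" };
      }
      return [block];
    }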



Check out that Makefile. It's scary af: it literally just downloads the latest release of a package not even controlled by the author, with zero documentation. What's stopping the owner of that repo from uploading a supply chain attack that will get distributed to every user of Avante?

Suggestion to the author: fork the repo and pin it to a hash.



Not to dismiss your criticism, but I think supply chain attacks are generally a weak point of the vim/neovim plugin ecosystem, especially with all the fancy autoupdate package managers.

No package signing, no audits, no curation. Just take over one popular vim package and you potentially gain access to a lot of dev departments.



Nui is a widespread plugin in the Neovim ecosystem; it's used to provide high-quality UI widgets.

It probably uses Plenary for I/O as well.

Not reinventing the wheel is a good thing; I don't see the problem with the dependencies.



Nice. Cursor just raised $60M, and yet this will eventually be more usable while never seeing even close to that amount of money. We need a better distribution of money in the system.



> eventually be more usable

It's one thing to have a preference and a sense of aesthetics; it's another thing to claim that said things are universally more usable. If not for components that were invented in VSCode (LSP) then no one would be using vim these days. There are plenty of hills to die on that are much more noble than "I like this editor".



>If not for components that were invented in VSCode (LSP) then no one would be using vim these days

I hate to tell you, but Vim has always had a pretty strong user base. There are folks like me who used it before LSP and never had any interest in leaving. Now, your statement might be more accurate if you said, "If it were not for LSP, no one would be leaving VSCode for NeoVim."

> There are plenty of hills to die on that's much more noble than "I like this editor"

I agree. Use whatever you want. But don't make misinformed statements about WHY folks choose something other than your choice.



I like Cursor's interface a lot; it's very focused and well thought out. I get noticeably better results with autocomplete and chat compared to alternatives. I really like that I can copy the full chat responses as markdown with a button (you can't do that in Cody, unfortunately). I like that it has all the capabilities of VS Code but is its own separate thing that I can use for special-purpose tasks without mucking with my install and settings. I just don't like that it's $40/mo to get the full product. If my employer were footing the bill, I would be all over it though.

Open-source tooling is always going to have a different focus: giving you a toolbox to assemble functionality yourself.



I made a Neovim plugin a while back with that explicit purpose: being a toolbox to assemble your own AI stuff with. I struggled a lot to make AI useful and was hoping that if the tools were there to make it easy to play around with, folks would figure out how to utilize LLMs effectively and share their results. Until now, though, I'm not sure if anyone's even using the plugin beyond the starter prompts (which were only meant as examples). Maybe the API sucks... idk.



Also shouting out Continue.dev for VSCode users. I set it up yesterday; it's an open-source version of Cursor. (Not affiliated. I tried to set up Avante, but I'm a Neovim noob and have skill issues.)



I have been a paid Cursor user for almost a year, but I still use GitHub Copilot (thanks to the open-source work) just because of nvim: when I need to open a single file rather than a project, and because I need to make sure I'm still handy with nvim.



The best part about this is that you can just change the extension, like you are actually allowed to, whereas the extension experience on VSCode would require a reload, and on Cursor it's not possible at all.



I want like a double-blind where I do and don't have the LLM bot: I try to use the damned thing, but I do stuff like Bazel primitives and NixOS CUDA stuff, and Claude doesn't know jack shit about any of that.

If Claude could do custom shit on rules_python, I’d marry it.

But it’s a fucking newb on hard stuff.



You could try using Claude's Projects. You can give it examples, documentation and overall guidelines, and maybe that will enable Claude to generate the code you want. Claude 3.5 Sonnet has a context window of 200k tokens, which is enough for quite a lot of context material.



That's why there are well-paying jobs for maintaining this build stuff: some software engineers really hate dealing with "arcane config" for no good reason.



Why aren't more people talking about Cody from Sourcegraph? For just $10/month it offers unlimited completions using top models like Sonnet 3.5 and GPT-4o. Not to mention, the plugins for VSCode and IntelliJ products work perfectly well.

联系我们 contact @ memedata.com