(comments)

Original link: https://news.ycombinator.com/item?id=41302782

Commenters find Zed's AI integration smooth and pleasant to use inside the editor. Some prefer configuring it to talk directly to Anthropic to minimize the number of intermediaries seeing their code, but overall they see value in where Zed is heading. They are glad Zed may soon have a profitable revenue stream, easing concerns about its sustainability and longevity. Rather than charging for the open-source editor, Zed could pursue a source-available model: license the code so that it converts to a permissive license such as BSD or MIT after two years, with companies buying a license key before running newer code rather than cracking it. Running an open-core model without cannibalizing the free version is hard, but paid sales would fund continued development and updates. Outside of AI, commenters mostly lean on reference material and deterministic code generation: looking up less-familiar functions and methods, recovering forgotten details, troubleshooting errors, and finding guidance for particular combinations of steps. Some are frustrated by waiting on AI responses that disrupt their flow. For AI-driven autocompletion, commenters suggest trying Copilot (GitHub) or Copilot++ (Cursor) for their speed and effectiveness, and expect significant gains in AI accuracy and productivity in the near future. Open-source maintainers, meanwhile, report low-quality AI output showing up in PRs, and skeptics remain critical of current AI capabilities, urging vendors to produce results good enough to win them over. Finally, commenters note that Zed's assistant differs from others in focusing on manipulating and editing text, and they praise its emphasis on flexible, customizable context.


Original text


As I’ve said in yesterday’s thread[0], contrary to many others I find the AI integration in Zed to be extremely smooth and pleasant to use, so I’m happy they’re doubling down on this.

However, personally, I prefer to have it configured to talk directly to Anthropic, to limit the number of intermediaries seeing my code, but in general I can see myself using this in the future.

More importantly, I’m happy that they might be closing in on a good revenue stream. I don’t yet see the viability of the collaboration feature as a business model, and I was worried they’re gonna have trouble finding a way to sensibly monetize Zed and quit it at some point. This looks like a very sensible way, one that doesn’t cannibalize the open-source offering, and one that I can imagine working.

Fingers crossed, and good luck to them!

[0]: https://news.ycombinator.com/item?id=41286612



>However, personally, I prefer to have it configured to talk directly to Anthropic, to limit the number of intermediaries seeing my code, but in general I can see myself using this in the future.

Same. I can kind of feel OK about my code going to Anthropic, but I can't have it going through another third party as well.

This is unfortunately IT/security's worst nightmare. Thousands of excitable developers are going to be pumping proprietary code through this without approval.

(I have been daily driving Zed for a few months now - I want to try this, I'm just sceptical for the reason above.)



Add the following to your settings.json:
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20240620"
    }
  }
  
Once this is done, you should be able to use Anthropic if you have an API key. (This was available before today's announcement and still works today as of Zed 0.149.3)


That was part of the reasoning behind open-sourcing my AI assistant/software dev project. Companies like Google have strict procedures around access to customer data. The same can't always be said about a startup racing to not run out of cash.



I really wanted to try this out with a difficult bug, as I've had Zed installed for a while and haven't actually used it. But I have no idea if I'd get in trouble for that... even though our whole team uses VSCode which I'm sure has scanned all our codebases anyway.



Also, I know some people say “just let me pay for the editor” but I don’t think that’s actually a viable path.

The editor is open-source, and it being open-source is great. Others can contribute, it generally helps adoption, it will probably help with getting people to author plugins, and it means if they go in an undesirable way, the community will be able to fork.

So without making it non-open-source, they’d need to do open-core, which is incredibly hard to pull off, as you usually end up cannibalizing features in the open-source version (even if you have contributors willing to contribute them, you block it to not sabotage your revenue stream).



A lot of Zed users are developers at medium-to-large businesses, so I think full source-available with licensing is very much something they could do.

They could put the code on GitHub, allowing contributions, with a license that turns into BSD or MIT after two years, and with the caveat that you can only run the (new) code if you purchase a license key first.

In companies of reasonable size, the deterrent against piracy is the existence of a license itself; the actual copy protection and its strength aren't as important. The reason those companies (mostly) don't crack software isn't that the software is hard to crack, it's that their lawyers wouldn't let them.

Sure, this would make Zed somewhat easier to crack, but I think the subset of users who wouldn't use a Zed crack if Zed was binary-only but would use one if there was source code available is very small.



But then it wouldn't be open-source anymore, would it? Making the willingness of people to contribute likely much smaller. You don't really have the safety of having an easy way to fork either, if all you can fork is a 2-year old version (which plugins are likely not compatible with anymore).

It would also (deservedly) likely anger a large part of the existing community, especially the ones most involved, like plugin authors, who put their time in assuming they're contributing to an open-source ecosystem.

Thus, I believe that ship has sailed the moment they made it open-source in the first place.



My understanding of their comment is that the source is made available immediately. It’s just that you need to pay for a license to use it for the first couple years.



Yes, and open-source and source-available are two different things. The comment I responded to suggested they switch to a source-available license which falls back to an open-source license after a time period passes.



I would like us as an industry to promote paying for things we use. Crazy idea, I know.

Open source is great and fun and an incredible force multiplier for the world, but when you want to do this stuff for a living, you have to charge money for it somehow, and if you're a software business, and not just software adjacent, it means charging for software.



I’m not sure if that’s clear or not from my comments, but I completely agree (and pay for JetBrains editors, amongst other tools).

Though in the case of “fundamental / ecosystem center” tools like code editors which e.g. others will build plugins for I believe there’s immense value in it being open-source.

Thus, I’m rooting for them to find a business model where they get our money without making it non-open-source.



Everyone goes to subscriptions, because trying to get new customers every month at a rate that pays everyone's salaries, at a typical engineering level, plus everyone else at the company and office rental, doesn't scale, especially in developer tools, where many prefer to suffer with lesser tooling than pay.



Subscriptions are only okay for me if they're done like JetBrains does it, where you can keep the old versions permanently. Partly because it implies what has to be true for monthly payments to make sense: that the software keeps getting better and keeps gaining support for new tech, tools, and databases as they appear. If I'm paying monthly for something that doesn't cost them anything, then that feels illogical on my side. This should become a thing with online services too: at least let me keep some kind of upgraded version of your service after I stop paying. Something, please, anything to make me feel less like a chump for having paid and then ending up with nothing in the end.



Pay-for closed-source is profitable for about a week. Then some asshole offers a shittier but free facsimile and community contributions render it good enough to kill the original.



I think we're all pretty familiar with this cycle, but there do exist durable products that don't succumb to this issue within reason, and the burden rests on the original authors to find ways to create a moat for themselves like any other business.



Leave it open source exactly as it is now, do not put convenient downloadable binaries up on Github. Allow me to either compile it myself or pay for the binaries?

There must be some other way to monetize open source.



I just want a fast programmable text editor with a native GUI and good defaults.

But that seems really tough to find, for some reason.

Zed is so close, but I’d much rather see a focus on the “programmable” part and let the AI and collaboration features emerge later out of rich extensibility (i.e. as plugins, perhaps even paid plugins) than have them built-in behind a sign-in and unknown future pricing model.



In the case of Zed it was always inevitable, a text editor doesn't raise >$10M in venture capital unless there's a plan to stuff it full of premium subscription features.

Warp Terminal is a similar story, >$50M in funding for a terminal emulator of all things...



The funny thing is Atom was the origin story of Zed, written in some C++ and a lot of CoffeeScript exactly so it could be very programmable.

Also, Spacemacs? It's technically a terminal but definitely has a lot of UI features. Very programmable.



> I just want a fast programmable text editor with a native GUI and good defaults.

It's called TextAdept. Much of it is itself built on its own Lua extensibility story, which runs on a fairly compact C core. Both native GUI and terminal versions, using the same user config (keybinds etc). Linux, Mac OS, Windows builds. LSP support built in. Plenty of community-produced extensions around (but of course not as vast a range as VSCode's VSX ecosystem furnishes).

https://orbitalquark.github.io/textadept/



It doesn't mean your code is NOT being uploaded somewhere. They could add an easy switch to use the editor 'offline', not that they have to. I'll go back to Helix.



> I just want a fast programmable text editor with a native GUI and good defaults.

What would that be for each OS?

Linux: Kate (at least if using KDE; which one would it be for GTK / Gnome?)

macOS: TextMate?

Windows: Notepad++?



> with a native GUI

This means 100x more effort in the long run for a cross-platform editor. Maybe if developers lived for 200 years, this could be possible. We'll need to solve the human ageing problem before the cross-platform "native GUI" problem.



Neovim and Emacs extensibility are great!

Native GUIs offer far better accessibility (TUIs are not screen-reader accessible, and neither is Emacs' GUI currently), hugely improved UI flexibility and consistent developer APIs (Emacs GUI is inconsistent across platforms and tricky to work with, every Neovim plugin reinvents ways to draw modals/text input because there's no consistent API), reduced redraw quirks, better performance, better debugging (as a Neovim plugin dev I don't want to spend time debugging user reports that relate to the user's choice of terminal emulator this week and not to Neovim or my plugin code).



I would also cast my vote for Sublime Text. The performance is amazing, the defaults are great, and the extensions cover a lot of the use cases.



If a motivated solo dev thought there might be at least 10,000 people who would pay 100 USD a year for a text editor with better extensibility and performance than VS Code and better defaults/richer GUI APIs than vim/Emacs, I can see why it might be tempting for them to try.



Using the same platform-specific graphics API the OS vendor builds their GUI apps with, ideally, but I'll also settle for "not a TUI, not a web application shipped as a desktop app, even if the OS vendor currently builds their GUI apps as web applications shipped as desktop apps".



A GUI that uses native controls and platform UI conventions with the native behavior expected on the given platform, or a near-indistinguishable equivalent of that.



There are two ways to make extensions to Zed, providing context for AI. In the post they show off making a Rust -> WASM based extension and also mention a server-based model. There's also a third option -- Zed is open source. You don't have to use their auth, it just makes collaboration easy.



> Extensions can add the following capabilities to Zed: Languages, Themes, Slash Commands

This is a great start but it's far from what most would accept as "programmable" or richly extensible.



I recently switched from neovim to zed, and I overall like Zed. I miss telescope, and think some vim navigation was better, but I suspect that it has to do with how much effort I put into configuring one over the other, so time will tell.

My biggest gripe was how bad the AI was. I really want a heavy and well-crafted AI in my editor, like Cursor, but I don't want a fork of the (hugely bloated and slow) vscode, and I trust the Zed engineering team much more to nail this.

I am very excited about this announcement. I hope they shift focus from the real-time features (make no sense to me) to AI.



Agreed about the AI, the last time I tried Zed I was also trying Cursor at the same time, and the Cursor AI integration vs what Zed offered was just night and day. So I got the Cursor subscription. But I haven't used it in 2 months (I don't get to code a lot in my job).

This was maybe 3-4 months ago, so I'm excited to try Zed again.



What were the Zed features that made you switch? I feel like with today's ecosystem it's easier to complete the neovim experience with plugins than wait for Zed devs to catch up.



Codeium works well but I really like the copilot chat plugin as well - it generally does a good job of explaining highlighted code, fixing errors, and other code interactions.



After using Cursor for some hobby stuff, it's really good. I was surprised at how well it managed the context, and the quick suggestions as you're refactoring really add up since they're generally exactly what I was about to do.



Interesting that this seems to be the announcement of Anthropic's Copilot alternative:

> A private beta of the Claude 3.5 Sonnet's new Fast Edit Mode, optimized for text editing. This upcoming mode achieves unprecedented speed in transforming existing text, enabling near-instantaneous code refactoring and document editing at scale.



Yeah, that caught my eye too. Looks to me like speculative editing (they mentioned that it's faster to echo its input) + prompt caching; it would literally build on all the tech they have.



AI assistants just slow me down. It's a very rare case that I find them actually useful. I am generally concerned by the number of devs who seem to claim that it is useful. What on earth are y'all accepting?



I find the only "AI" I need is really just "Intellisense". Just auto complete repetitive lines or symbol names intelligently, and that doesn't even require an AI model.



Lots of golang kubernetes work these days.

Stack Overflow is used when I'm stuck and searching around for an answer. It's not attempting to do the work for me. At a code level I almost never copy-paste from Stack Overflow.

I also utilize Claude and 4o at the same time while attempting to solve a problem, but they are rarely able to help.



Kubernetes, AWS, CloudFormation, Terraform and that sort of work is still not good with AI.

The current AI code rocket ship is VSCode + Perl/Python/Node+ReactJS + Copilot.

This is basically a killer combination, mostly because large amounts of open source code are available out there for training models.

I'm guessing there will be industry-wide standardisation, and Python use will see a further mad rise. In the longer run some AI-first programming language and tooling will be available which will have first-class integration with the whole workflow.

For now, forget about golang. Just use Python for turbo charged productivity.



> For now, forget about golang

I write kubernetes controllers. Golang is here to stay.

> Just use Python for turbo charged productivity

This is my problem with all the "AI" bros. They seem to consistently push the idea that quickly writing code is the end-all of "productivity". It's akin to "just shovel more shit faster, it's great".

Speed != productivity



Not GP, but the kinds of searches I mostly do are:

- Does this language have X (function, method,...)? Probably because I know X from another language and X is what I need. If it does not, I will code it.

- How do I write X again? Mostly when I'm coming back to a language I haven't touched for a while. Again I know what I want to do, I just forgot the minutiae of how to write it.

- Why is X happening? Where X is some cryptic error from the toolchain, especially with proprietary stuff. There's also how to do X, where X is a particular combination of steps and the documentation is lacking. I head to forums in that case to learn what's happening or get sample code.

I only need the manual/references for the first two, and the last one only needs to be done once. Accuracy is a key thing for these use cases, and I'd prefer snippets and scaffolding (deterministic) over LLMs for basic code generation.



I use LLMs exactly and exclusively for the first two cases - just write comments like:

// map this object array to extract data, and use reduce to update the hasher

And let LLMs do the rest. I rarely find myself going back to the browser - 80% of the time they spit out a completely acceptable solution, and for the other 20% at least the function/method is correct. It has saved me a lot of context switching.
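
For example, here's a made-up Rust sketch of the pattern (not real model output): I write the comment and the signature, and the completion fills in the body along these lines.

  use std::collections::hash_map::DefaultHasher;
  use std::hash::{Hash, Hasher};

  // map this record slice to extract the payloads, and fold them into the hasher
  fn hash_payloads(records: &[(u64, String)]) -> u64 {
      // everything below is the kind of body the completion writes for me
      let mut hasher = DefaultHasher::new();
      records
          .iter()
          .map(|(_id, payload)| payload)      // extract the data
          .for_each(|p| p.hash(&mut hasher)); // update the hasher
      hasher.finish()
  }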



For me the quick refresh is better, as I only need to do it once (until I stop using the language/library for a while), and it can be done without internet (local documentation) or high power consumption (if you were using local models). And with a good editor (or IDE) all of this can be automated (snippets, bindings to the doc browser,...), and for me it's a better flow state than waiting for an LLM to produce output.

P.S. I type fast. So as soon as I have a solution in my head, I can write it quickly, and if I have a good REPL or edit-compile-run setup, I can test it just as fast. Writing the spec, then waiting for the LLM's code, then reviewing it feels more like being a supervisor than a creator, and that's not my kind of enjoyable moment.



I agree with you, creating something just feels better than reviewing code from an LLM intern ;D

That's why I almost never use the 'chat' panel in those AI-powered extensions, for I have to wait for the output and that will slow me down/kick me out of the flow.

However, I still strongly recommend that you have a try at *LLM auto completion* from Copilot (GitHub) or Copilot++ (Cursor). From my experience it works just like context-aware, intelligent snippets and heck, it's super fast - the response time is 0.5 ~ 1s on average behind a corporate proxy, sometimes even fast enough to predict what I'm currently typing.

I personally think that's where the AI coding hype is going to bear fruit - faster, smarter, context- and documentation-aware small snippet completion to eliminate the need for doc lookups. Multi-file editing or fully autonomous agent coding is too hyped.



I'm just as baffled by the people who use Stack Overflow daily. It's increasingly rare that I use it these days, to the point where I deleted my account a few years back and haven't missed it. Don't people read docs anymore? In many ways I feel lucky that I learned at a time when I only had offline docs, which forced me to become good at understanding documentation since it's all I had.



Some of Cursor's features appeal to my laziness: say "convert to JavaScript" and hit apply... For now it's still a bit slow (streaming words), but when this is immediate? Even the fastest Vimmer won't stand a chance. Select code, dictate the change, review, apply - it will save my wrists.



> If it did not then I dont see how looking up with AI can be worse

Looking up with AI is worse because it's WRONG a lot more. Random rabbit holes, misdirection. Stuff that SOUNDS right but is not. It takes a lot of time and energy to discern the wheat from the chaff.

Sure you can find misleading or outdated blogposts or forum discussions with a google search but the information is far more grounded in correctness than anything from an LLM.



This is my experience with trying to use AI for coding tasks too. I've had back-and-forths with AI that involve me trying to get it to fix things to get a final working function, but since it doesn't actually understand the code, it fails to implement the fixes correctly.

Meanwhile, the stuff you find through a traditional web search tends to either be from a blog post where someone is posting actual working code snippets, or from StackOverflow where the code tends to be untested initially but then gets comments, votes, and updates over time that help boost confidence in the code. It's far more reliable to do a web search.



Why do people pretend that Google search is straightforward?

> the stuff you find through a traditional web search tends to either be from a blog post

Is that so? Most of my hits have been Stack Overflow and GitHub issues, where there are false positives - the same problem as AI hallucination.



Because it is? I tend to do my queries as keywords instead of questions and I tend to get good results. But most of the time, I'm just seeking the online manual to understand how things work and what is happening, not an exact solution. It's the equivalent of using a library to write a thesis. That only requires getting familiar with the terminology of the domain, knowing where the best works are, and how to use indexes and tables of contents.



>>Sure you can find misleading or outdated blogposts or forum discussions with a google search but the information is far more grounded in correctness than anything from an LLM.

This was the case only 2 - 3 months back. But the assistants all moved to GPT-4/Sonnet and the newer versions are just a whole lot better and more accurate.

That's the whole idea behind AI: when you do find something is wrong, the error function kicks in and the weights are tweaked to more correct values.

When GPT-5 comes along it will be another whole level of accuracy. In fact it's already close to 90% accurate for most tasks; with GPT-5 you could say that number goes to 95% or so, which is actually fairly good enough for nearly all the production work you could do.

Of course in the coming years, I'm guessing coding without AI assistance will be somewhat similar to writing code on paper or something like that. You can still do it for fun, but you won't be anywhere near as productive at a job.



I use GPT-4o and Sonnet regularly. They are so often wrong. Just yesterday GPT-4o spat out consistently incorrect tree-sitter queries and refused to accept it was wrong. It's all so pointless and it slowed me down compared to just reading the documentation.



What’s the point of this argument? If the user you’re replying to has been on this site, they’ve probably seen this counterpoint before.

“Aha!” They say, “I only realized my folly after the 25th time someone pointed out googling also takes time!”

Maybe there’s some interesting difference in experiences that shouldn’t just be dismissed.



Anthropic is an evil company. I wanted to try their subscription for a month; now they are storing my credit card info forever without an option to remove it, yet a single click of a button will instantly resubscribe me. I don't understand how one can seriously think they will not sell all your data at the first opportunity to make money, because with such shady subscription/payment practices it instantly gives money-before-all vibes for the whole product and the company.



I was not talking about the credit card data, obviously; they have much more data they can actually sell.

In my opinion, storing someone's credit card data online after purchase, without a clear option to delete it should be illegal.



The main issue with this set of tools is that I mostly read and understand code rather than write it myself.

Not enough attention is being given to this imbalance.

It is impressive having an AI that can write code for you, but an AI that helps me understand which code we (as a team) should write would be much more useful.



My immediate instinct was to agree with you whole-heartedly. Then I remembered that Copilot has an "Explain this" button that I never click.

Maybe this is because I'm just not used to it, maybe the workflow isn't good enough, or maybe it's because I don't trust the model enough to summarize things correctly.

I do agree that this is an area that could use improvement and I see a lot of utility there.



My issue is not explaining how the code works. My issue is understanding why the code exists at all.

The default workflow is to follow the commit history until I get to where and when the code in its current shape was introduced, then read the commit message, which generally links to a ticket, and then learn from the team's tribal knowledge why it was done like that, whether it is still necessary, what we could do today instead, etc...

And similarly when designing new code that needs to integrate with an existing piece of code... Why are such constraints in place? Why was it done like that? Who in the team knows best?



AI could help with this. It could pull the version history for a chunk of code you have highlighted, pull up the related tickets, sift through them, and then try to summarize from commit messages and comments on the ticket why this was added.

What I could have used the other day was "find the commit where this variable became unused". I wanted to know if the line that used it was intentionally deleted or if it got lost in a refactor. I eventually found it but I had to click through dozens of commits to find the right one.



I get the frustration with this workflow too. This is where having all of your issues in Git would be great, but alas no one wants to collaborate in ticket comments via Git commits...

Inevitably the issue tracker gets moved/replaced/deprecated, and so all of that context disappears into the void. The code, however, is eternal.



This is a great feature, but targeted mostly at junior developers who don't yet understand how a particular library or framework works. But when I read code, I spend most of my effort trying to understand what the original developer meant by it, and LLMs are not yet very helpful in that regard.



I partially agree with you: not knowing what particular functions etc. do is one use case, another for me would be to detangle complicated control flow.

Even if I can reason about each line individually, sometimes code is just complicated and maintains invariants that are never specified anywhere. Some of that can come down to "what were the constraints at the time and what did we agree on", but sometimes it's just complicated even if it doesn't have to be. The latter is something I would love for a LLM to simplify, but I just don't trust it to do that consistently correct.



There are a lot of tools that promise to help you understand existing code. However, it's a really hard problem and to me none of them is production ready.

Personally, I think the problem is that if the AI got it wrong, it would waste a lot of your time trying to figure out whether it's wrong or not. It's similar to outdated comments.



No one should be submitting code that's difficult to understand, regardless of how it was 'written'. This problem exists just the same as a developer who's going to copy large blocks of StackOverflow without much thought.



It isn't always the piece of code that is hard to read. It's about how it fits into the 200,000+ line application you're working with, what it was trying to solve, whether it solved the right problem, and whether there will be unexpected interactions.



This looks cool!

Feature request: have something like aider's repo-map, where the context always contains a high-level map of the whole project, and then the LLM can suggest specific things to add to the context.

Also, a big use case for me is building up an understanding of an unfamiliar code base, or part of a code base. "What's the purpose of X module?", "How does X get turned into Y?".

For those, it's helpful to give the LLM a high-level map of the repo, and let it request more files into the context until it can answer the question.

( Often I'm in learning mode, so I don't yet know what the right files to include are yet. )



I hope that in a few years we look back at this era of "prompt an LLM for a shell command and instantly run whatever it spits out by pressing enter" with collective embarrassment.



Why not both? What do you see wrong with having GPT spit out a complicated ffmpeg filter chain and explain every step of the output, which you can then look up in the docs and unit test before implementing? I find verifying the GPT output is still quicker than consulting giant wall-of-text man pages.



Just looking at the "inline transformations animation"...

How is typing "Add the WhileExpression struct here" better or easier than copy/pasting it with keyboard and/or mouse?

I want something that more quickly and directly follows my intent, not makes me play a word game. (I'm also worried it will turn into an iterative guessing game, where I have to find the right prompt to get it to do what I want, and check it for errors at every step.)



I've been using Zed's assistant panel heavily and have really enjoyed the experience. The UI can be a bit frustrating: sometimes, when you write, it's hard to get it to send your query. The new /workflow seems to really bridge the last gap, effectively editing the parts I'm asking for help changing.

I'm already paying for OpenAI API access, definitely gonna try this



It's the opposite for me ahaha. I'm very excited for Zed as a performant and powerful text editor, an "updated + open-source sublime text" if you will. But I have absolutely no interest in AI and copilot and github integrations and whatnot.

This is not a criticism of Zed though, I simply have no interest. Quite the contrary: I can only praise Zed for how simple it is to disable all these integrations!



>Claude 3.5 Sonnet's new Fast Edit Mode, optimized for text editing. This upcoming mode achieves unprecedented speed in transforming existing text, enabling near-instantaneous code refactoring and document editing at scale

I wonder what this is. Have they finetuned a version which is good at producing diffs rather than replacing an entire file at once? In benchmarks Sonnet 3.5 is better than most models when it comes to producing diffs but still does worse than when it replaces the whole file.



Not releasing a cross-platform code editor on the dominant OS seems quite weird in my opinion. (I know they plan to do it, but as someone who has built cross-platform apps, it is not rocket science to have Win32 support from the start.)



Anecdotally, I have never seen Windows widely used for development, outside of .NET shops (but that shouldn't be a surprise).

Moreover, there's plenty of quirks Windows has with respect to:

- Unicode (UTF-16 whereas the world is UTF-8; even Java uses UTF-8 nowadays, so it's only Windows where UTF-8 is awkward to use)

- filenames (lots of restrictions that don't exist on Mac or Linux)

- text encoding (the data in the same string type changes depending on the user's locale)

- UUIDs (stored in a mixed-endian format)

- limit of open files (much lower than Mac and Linux; breaks tools like Git)

If you write software in Java, Golang, or Node.js, you'll quickly encounter all of these issues and produce software with obscure bugs that only occur on Windows.

I'm not sure about Rust, but using languages that claim cross-platform support isn't enough to hide the details of an OS.

In every job I've had, the vast majority of devs were on Mac OS, with Linux coming in at a close second (usually senior devs). So I wasn't surprised Zed first supported Mac then Linux. Windows support is nice for students, game developers, and people maintaining legacy .NET software, but those people won't be paying for an editor.



- Why is Unicode an issue? The editor should use UTF-8. I don't think it's forced to follow the Windows standard if they have their own font rendering?

- Filenames are an issue, but it shouldn't be too hard to pull in a path library to do most of that for you. Which is why you should code for Windows from the start, because you will quickly realize: oh yeah, maybe I shouldn't hardcode forward slashes everywhere because it's going to be a pain to refactor later. And oh yeah, maybe case-insensitivity is a thing that exists.

- Text encoding..? Just don't do that. Encode it the same way. Every modern editor, including Notepad, can handle \n instead of \r\n now. You needn't be re-encoding code that was pulled down from Git or whatever.

- Also don't see why UUIDs are relevant.

- Limit on open files is probably a legit issue. But it can be worked around, of course.

Anyway, none of these sound like major hurdles. I think the bigger hurdles are going to be low-level APIs that Rust probably doesn't have nice wrappers for. File change notifications and... I don't know what. Managing windows. Drivers.
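
For the path/filename point above, a minimal sketch (std only, the paths are made up) of building paths from components instead of hardcoding separators:

  use std::path::PathBuf;

  // Build the path from components; PathBuf uses the platform's
  // separator when appending, so nothing here hardcodes '/' or '\'.
  fn settings_path(config_dir: &str) -> PathBuf {
      let mut path = PathBuf::from(config_dir);
      path.push("zed");
      path.push("settings.json");
      path
  }

  fn main() {
      // Prints ".config/zed/settings.json" on Unix, with backslashes
      // between the pushed components on Windows.
      println!("{}", settings_path(".config").display());
  }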



None of that matters in practice, because in this context it would be trivial to solve with a correctly built OS abstraction layer.

And Windows is by and large the development platform of choice for any serious gamedev work.



The macOS reference is irrelevant to Java here.

The charset defines the encoding which applies to first and foremost I/O behavior on how it treats otherwise untyped stream of bytes that are being converted to or from (UTF-16) text as stored by Java.

https://openjdk.org/jeps/400 is yesterday's news and something that .NET has been doing for a long time (UTF-8 encoding has been the unconditional default starting with .NET Core 1.0 (2017)).

Whether Win32 APIs take UTF-8 or something else (well, it's usually ANSI or UTF-16) is something for the binding libraries or similar abstraction packages for a language of choice to deal with, and has rather minor impact on the overall flamegraph if you profile a sample application.

I find it strange having to defend this, but the UTF-8 vs UTF-16 argument really has no place in 2024 as dealing with popular encodings is as solved problem as it gets in all languages with adequate standard library.



For those that want to try it, and have a Rust development environment installed, the following runs Zed on Windows. (The editor part at least; I haven't tried the collaborative functions.)
  git clone https://github.com/zed-industries/zed
  cargo run --release


* Both Linux and macOS are Unices, so there is less effort.

* The framework they use supports X11 and Wayland out of the box, it wasn't as much effort as you'd think.

* They accept contributions.



I like the idea of Zed, and I recently went editor hopping. I installed Zed but was immediately hit with a "your GPU is not supported / will run incredibly slow" message. Gah...



In my case this pointed out a problem with my NVIDIA drivers that I didn't know about. Once I fixed that issue my whole KDE system ran much faster and allowed Zed to run.



I don't mind them having AI features, but I wish they'd fix some of the most basic performance issues considering that their entire reason to exist was "performance".

You know, things like not rerendering the entire UI on the smallest change (including just moving your mouse) without damage reporting.
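
(For anyone unfamiliar: damage reporting just means tracking which regions actually changed and repainting only those. A made-up Rust sketch of the idea, not Zed's actual renderer:)

  // Track dirty regions so the renderer can repaint only what changed.
  #[derive(Clone, Copy, PartialEq, Debug)]
  struct Rect { x: u32, y: u32, w: u32, h: u32 }

  #[derive(Default)]
  struct DamageTracker {
      dirty: Vec<Rect>,
  }

  impl DamageTracker {
      fn mark(&mut self, region: Rect) {
          if !self.dirty.contains(&region) {
              self.dirty.push(region);
          }
      }

      // The renderer drains the damaged regions each frame; an empty
      // result means nothing needs to be redrawn.
      fn take(&mut self) -> Vec<Rect> {
          std::mem::take(&mut self.dirty)
      }
  }

  fn main() {
      let mut damage = DamageTracker::default();
      // Mouse moved over the status bar: only that strip is dirty.
      damage.mark(Rect { x: 0, y: 780, w: 1280, h: 20 });
      for region in damage.take() {
          println!("repaint {:?}", region);
      }
      assert!(damage.take().is_empty()); // next frame: nothing to redraw
  }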



Tried Zed AI for a bit as a heavy user of Cursor, a few thoughts:

- I like that they are trying something different with the assistant panel view by providing end users full control of the context, as opposed to Cursor's "magic" approach. There is a huge tradeoff between granularity of control and efficiency, however. The friction of manually filling in context for the assistant window might repel devs from using it constantly.

- Zed AI is still missing a lot of UX details in terms of its inline assistant capabilities. E.g., pressing ctrl+? for inline assist only selects the current line, and users have to manually select a block of code for inline assist, which is really annoying. In Cursor, cmd+k automatically selects the surrounding code block.

- Definitely a huge plus that we get to choose our own LLM providers with Zed AI.



I think Zed is starting with more transparent, elegant foundations and then they'll build in more optional magic from there. For example, they're working on automatic codebase RAG.



I just want a perplexity-style agentic integration that researches dozens of pages first, does internal brainstorming before printing output in my editor.

I just had a many-hour long hacking session with Perplexity to generate a complex code module.



I've been trying to mainline Zed for the past few months...and overall I really do like it - but there are enough quirks/bugs that make me frustrated.

A simple example: something as basic as the hotkeys for opening or closing the project panel with the file tree isn't consistent and doesn't work all the time.

To be clear: I am excited about this new addition. I understand there's a ton of value in these LLM "companions" for many developers and many use cases, and I know why Zed is adding it...but I really want to see the core editor become bullet proof before they build more features.



Meanwhile, HTML tags in JSX/TSX files still do not autocomplete/close. Speaking as someone who used Zed for nearly 7 months, it seems like they should be prioritizing features that will make the editor more usable. I'd be excited to go back to Zed, but the issues drove me to Neovim.



Anthropic working with Paul Gauthier and Zed being aider-aware would be phenomenal. He's been working on this for a while:

https://news.ycombinator.com/item?id=35947073

If you're familiar with Aider, it feels as if this Zed.ai post is chasing Paul's remarkably pragmatic ideas for making LLMs adept at codebases, without yet hitting the same depth of repo understanding or bringing automated smart loops to the process.

Watching Aider's "wait, you got that wrong" prompt chains kick in before handing the code back to you is a taste of "AI".

If your IDE is git savvy, then working with Aider in an Aider REPL terminal session with frequent /commits that update your IDE is like pair programming with a junior dev that happens to have read all the man pages, docs wikis, and stackoverflow answers for your project.



What has slowed me down is all the garbage pop ups and things that get in my way. Every time I open VSCode it tries reconnecting to some SSH I had open before it lets me do anything. And god forbid I have two different workspaces. The constant "what's new" and "please update me now"s don't help either.

I love IntelliJ but it does not start up quickly, which is a problem if I just want to look at a little code snippet.



Missing from this announcement is language around Privacy. Cursor for example has a Privacy Mode that promises not to store code, and this seems like a critical feature for any AI enhanced dev tools.



Let me be very direct - what's the strength over the competition, e.g. Cody? The fact that it's its own text editor? I'm seeing the assistant emphasized but that just looks like Cody to me.



Agreed and Cody has recently upped their free tier and Sonnet 3.5 can be used for free for completions and up to 200 chat messages per month. Plus you can use it in VS Code and IntelliJ - no need to learn a new text editor.



It’s great news that they provide it for free. It’s hard to subscribe to all the LLM providers. Even with a pro subscription, you need to buy credits to be able to use them with the editors, which gets very expensive if you use them a lot.

On another side, I really like the experience of coding with GitHub Copilot. It suggests code directly in your editor without needing to switch tabs or ask separately. It feels much more natural and faster than having to switch tabs and request changes from an AI, which can slow down the coding process.



Has any long-term Emacs user delved into Zed and ported the cool features yet?

Don't take it as sarcasm, I am genuinely interested. I think Emacs' malleability is what still keeps it alive.



In my understanding Zed is "Figma for code". Huge focus on collaboration (hence the slogan "multiplayer code editor") and AI.

It's hard for me to understand what text editor itself has to do with LLM completions.



Is all the overhead required to use the AI features easily disabled with a feature flag such that zero CPU cost and zero network transmission occurs?



I had hoped Zed would be a good editor for junior developers, but that ship apparently has sailed, and its destination isn't where we need to go.



Not sure if it's what GP is talking about - but haven't you noticed how many juniors seem to be shooting themselves in the foot with LLMs these days, becoming over-reliant on it and gaining expertise slower?



This is exactly what I'm talking about. Ever since LLMs took over, I've noticed an uptick in my fellow senior developers complaining about the quality of work, but I've also seen a huge increase in poor-quality PRs to open source projects.

Like, normally my primary complaint about LLMs is their copyright violating nature, and how it is hanging everyone who has ever published the output of an LLM out to dry, but unless LLM output improves, I think it may end up being a non-issue: their use will die out, and every AI startup will die, just like all the Alt-coin startups did.

Want to change my mind on LLM quality? Have it produce code so good that I can't tell the difference between an inexperienced developer (the kind that would be hired into a junior role, not the kind that would be hired for an internship) and the output of this thing.



I've noticed the opposite: people who had never even started learning to program who are now getting stuck in because they don't have to wade through six months of weird error messages about missing semicolons to get started.

But I guess I'm talking about complete newbie developers, which is a different category from junior developers.

I have 25+ years experience at this point so I'm pretty far removed from truly understanding how this stuff benefits newcomers!



Are you talking specifically about the general UX? On initial glance it does look a little like there is a bit more of a learning curve to navigate.



I wonder if there's already a solution that allows me to ask questions about local codebases. e.g. how does this subsystem work.



This feature does exactly that. You can open up the chat panel, run "/tab name-of-tab" or "/file path-to-file" and then start asking questions about the code.



Hey! I'm Nate from Zed. You can also use the /file command to drop entire directories, or even globs into the assistant.

For example, you can do /file *.rs to load all of the rust files in your project into context.

Here is a simple but real example I used a while back:

"/file zed/crates/gpui/src/text_system.rs

I have a font I want to check if it exists on the system. I currently have a &'static str.

Is there something in here that will help me do that?"

I haven't interfaced with the lower level TextSystem that much, so rather than dig through 800 lines of code, I was able to instantly find `is_font_available()` and do what I needed to do.



Until "AI" means "a system-thinking capability that can analyze the codebase and give real suggestions" I don't buy it. Everything I have seen so far is a waste of my time and resources and at best is useful for generating tests or docstrings.



Hey Zed team, just one little nitpick about the page. I love the keyboard shortcuts at the top for the download page and login. However, when I try to Ctrl-L and select the url, it triggers the login page shortcut.

Brave Browser Windows 10



The Zed configuration panel includes tools for adding an Anthropic API key, a Google Gemini API key, an OpenAI API key or connecting to a local instance of ollama.



Hrm. Still not quite what I crave.

Here's roughly what I want. I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

LLMs often get code slightly wrong. That's fine! Doesn't bother me at all. What I need is an interface that allows me to iterate on code AND helps me understand the changes.

As a concrete example I recently used Claude to help me write some Python matplotlib code. It took me roughly a dozen plus iterations. I had to use a separate diff tool so that I could understand what changes were being made. Blindly copy/pasting LLM code is insufficient.



> I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.

If you squint, that's the same as using an IDE with first class git support and co-editing with a (junior) pair programmer that commits each thing you ask them to do locally, or just saves the file and lets you see stageable diffs you can reject instead of push.

Try the /commit workflow using aider.chat as a REPL in your terminal, with the same git repo open in whatever IDE you like that supports real time git sync.

The REPL talks to you in diffs, and you can undo commits, and of course your IDE shows you any Aider changes the same as it would show you any other devs' changes.

That said, I use Zed and while it doesn't have all the smarts of Aider, its inline integration is fantastic.



Ah interesting. I missed that when browsing the page.

Can you make the diff side-by-side? I’ve always hated the “inline” terminal style diff view. My brain just can’t parse it. I need the side-by-side view that lets me see what the actual before/after code is.



First the unsolicited package installation controversy now they jumped onto the AI bandwagon. Is this a speedrun attempt at crashing a newly created company?

What's next? Web3 integration? Blockchain?



Cursor is a fork of VSCode with code AI that, in my opinion, is better than Zed and other competitors because they implement the modifying-existing-code workflow better. Most other code AI products are only good at code generation or being a better Stack Overflow. I don't use Copilot, so I can't tell - does it show you a diff like Cursor does when modifying code?



I'm a Cursor main, and I don't really have any burning pains that make me want to change tools, but I'm open to what I don't know.

Zed vs Cursor review anyone?



Two areas where I think Zed might fall behind: Cursor Tab is REALLY good and probably requires some finetuning/ML chops and some boutique training data.

For composer, there's going to be more use of "shadow workspace" https://www.cursor.com/blog/shadow-workspace to create an agentic feedback loop/ objective function for codegen along with an ability to navigate the language server and look up definitions and just generally have full context like an engineer. Are there plans for the same in zed?

Also, Cursor has a model-agnostic apply model, whereas you all are leaning on Claude.



Cursor is Electron/VSCode based. Zed uses a custom-built Rust UI and editor model that gives 120fps rendering. (Or was it 60fps?)

It is really smooth on a Mac with ProMotion.



Hey! I'm Nate from Zed. There are a lot of questions about this, here are some quick thoughts...

Cursor is great – We explored an alternate approach to our assistant similar to theirs as well, but in the end we found we wanted to lean into what we think our super power is: Transforming text.

So we leaned into it heavily. Zed's assistant is completely designed around retrieving, editing and managing text to create a "context"[0]. That context can be used to have conversations, similar to any assistant chatbot, but can also be used to power transformations right in your code[1], in your terminal, when writing prompts in the Prompt Library...

The goal is for context to be highly hackable. You can use the /prompt command to create nested prompts, use globs in the /file command to dynamically import files in a context or prompt... We even expose the underlying prompt templates that power things like the inline assistant so you can override them[2].

This approach doesn't give us the _simplest_ or most approachable assistant, but we think it gives us and everyone else the tools to create the assistant experience that is actually useful to them. We try to build the things we want, then share it with everyone else.

TL;DR: Everything is text because text is familiar and it puts you in control.

[0]: https://zed.dev/docs/assistant/contexts.html

[1]: https://zed.dev/docs/assistant/inline-assistant

[2]: https://zed.dev/docs/assistant/prompting#overriding-template...



Hey! I really see the power in Zed and the extensibility and simplicity. Great approach.

I posted this above, but want you to see it:

Two areas where I think Zed might fall behind: Cursor Tab is REALLY good and probably requires some finetuning/ML chops and some boutique training data.

For composer, there's going to be more use of "shadow workspace" https://www.cursor.com/blog/shadow-workspace to create an agentic feedback loop/ objective function for codegen, along with an ability to navigate the language server and look up definitions and just generally have full context like an engineer

Also, Cursor has a model-agnostic apply model, whereas you all are leaning on Claude.

Any plans to address this from the core team or more of a community thing? I think some of this might be a heavy lift

I really like the shared context idea, and the transparency and building primitives for an ecosystem



How do I enable Zed AI?

I'm logged in, using Zed Preview, and selecting the model does nothing. In the configuration it says I "must accept the terms of service to use this provider" but I don't see where and how I can do that.



Ditto.

EDIT: figured it out. My Zed settings was broken, because it tries to create a settings file in "~/.config/zed", and my entire "~/.config" directory is symlinked somewhere else, so macOS permissions broke Zed's ability to create that config file. So I gave Zed full-disk access in the macOS "Privacy & Security" settings, Zed is now able to make a config file (which is where the model is set when you choose one), and everything is hunky dory.



At some point an AI-first programming language will have to come along which integrates seamlessly with AI models, the editor, and programmer input.

I'm not sure what that is, but I'm guessing it will be something along the lines of Prolog.

You will basically give it some test cases, and it will write code that passes those test cases.
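
A rough Rust sketch of the idea (the example and names are invented): the programmer supplies only the tests, and the tooling would keep generating an implementation until they pass.

  // What the programmer writes: just the specification, as tests.
  #[cfg(test)]
  mod spec {
      use super::slugify;

      #[test]
      fn lowercases_and_hyphenates() {
          assert_eq!(slugify("Hello World"), "hello-world");
      }

      #[test]
      fn strips_punctuation() {
          assert_eq!(slugify("Zed, AI!"), "zed-ai");
      }
  }

  // What the AI toolchain would synthesize: an implementation that
  // satisfies the tests above.
  pub fn slugify(input: &str) -> String {
      input
          .to_lowercase()
          .chars()
          .map(|c| if c.is_ascii_alphanumeric() { c } else { ' ' })
          .collect::<String>()
          .split_whitespace()
          .collect::<Vec<_>>()
          .join("-")
  }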



I've been playing with it this morning, and fuck me this is awesome. I finally feel that an LLM is being actually useful for coding. I still have to check everything, but it's like having a talented junior helping me.



Hooking into standard APIs like Ollama is definitely preferred (at least to me) because it means it’s more composable and modular. Very Unix-y. Lets you use whatever works for you instead of the vendor.



You're right, it's not comparable. JetBrains' code completion actually integrates with the IDE to guarantee the suggested code isn't entirely a hallucination -- for example it checks for unresolved references so that its suggestions don't reference non-existent variables/methods.

I've disabled Copilot as it produces garbage way too often. I found myself "pausing" to wait for a suggestion that I'd ultimately ignore because it was completely invalid. I've left JetBrains' code completion on though, because it's basically just a mildly "smarter" autocomplete that I'll occasionally use and don't find myself relying on.
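
The unresolved-reference check is basically a post-filter on suggestions. A toy Rust sketch of the concept (not JetBrains' actual implementation; the symbols are invented):

  use std::collections::HashSet;

  // Reject a suggested expression if it mentions identifiers that
  // don't resolve in the current scope.
  fn is_plausible(suggestion: &str, known_symbols: &HashSet<&str>) -> bool {
      suggestion
          .split(|c: char| !c.is_alphanumeric() && c != '_')
          .filter(|tok| !tok.is_empty() && !tok.chars().next().unwrap().is_ascii_digit())
          .all(|tok| known_symbols.contains(tok))
  }

  fn main() {
      let known: HashSet<&str> = ["user", "load_profile"].into_iter().collect();
      assert!(is_plausible("load_profile(user)", &known));
      // `fetch_avatar` doesn't resolve, so the editor drops this suggestion.
      assert!(!is_plausible("fetch_avatar(user)", &known));
  }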



Local compute is also getting progressively harder to make faster. I have a CPU from 6 years ago and checked some benchmarks to see if upgrades would be worth it. It's a 90% increase in performance with a 27% improvement in single threaded. Pretty substantial but not exponential. GPUs are advancing more quickly than CPUs right now but I wouldn't be surprised if they hit a wall soon.



I agree that the completions might not be that great, but for context: this is a 100M-parameter model, while most models that people compare this to are at least 100x bigger.

They also focus on single line completions, and ship different models per programming language. All these make it possible to ship a decent completion engine with a very small download size.



Interesting, have you used it / found it to be usable? I use IntelliJ myself, but it's not known for being a lean beast. With the addition of stuff like test containers and a few plugins, I'd be surprised if my machine didn't melt adding a local LLM too.



I mainly use PyCharm and I found the auto-complete to be good. It doesn't always kick in when I expect but some of the suggestions have been surprisingly complex and correct.



Maybe it's because there's literally no point in using a local LLM for code completion. You'd be spending 90% of your time correcting it. It's barely worth it to use Copilot.



What it shows is that it can be done — in a limited way. Other people might not like those limits and chose to go a different way. I am not sure what's worth lamenting here.



> I am not sure what's worth lamenting here.

The normalisation of surrendering all our code to remote AI cloud gods, maybe? The other being that a super-responsive IDE now has major features delayed by network requests, although the HW requirements likely make that faster for most people.



That sounds a little too spooky for my taste, but you do you. What anything beyond that means, in effect, is that you (not necessarily you) want to choose my values for me (not necessarily me).

I don't see why I should care to have you do that.



Zed is trying to position itself as a next-gen hacker's editor: text editing with lessons learned from the multitude of experiments in the space. A flagship feature that is online-only and requires me to trust a third party is a deal-breaker for many. For instance, my employer would not be pleased if I suddenly started sharing code with another party.

Take the AI out of the conversation: if you told your employer you shared the codebase, that’s an insta-fire kind of move.



Is Zed a supplier? Sounds like a random developer can sign up at will without any corporate NDAs or other established legal agreements. Will my employer hash out the NDAs for every developer tool that wants to incorporate cloud AI for the lulz?



IntelliJ is an IDE, though, not a text editor. If you want a text editor with AI, you may need to wait for Microsoft to bring ChatGPT to notepad.

It seems to be limited to a single CPU core at a time, so depending on your CPU boosting settings, some but not too much.

It's quite snappy despite not using a lot of resources. I tried replicating the effect using a CUDA-based alternative but I could get neither the snappiness nor the quality out of that.



AI autocomplete is just a toy; it has been there since the beginning of AI coding and I find it pretty useless. I prefer writing a proper prompt to get better answers. Also, most of the time engineers (use AI to) modify existing codebases rather than write new code.



> AI autocomplete is just a toy

I disagree. When I'm writing code and it just autocompletes the line for me with the correct type and the correct type params set for the generics, it saves me the mental effort of having to scroll around the file to find the exact type signature I needed. For these small edits it's always right if the information is in the file.



I don't know what statically typed language you are using that never requires specifying types. As far as I know only OCaml works this way.

Different languages support different levels of type inference. When I'm writing Rust and TypeScript I'm often specifying types this way for structs and function signatures.

With LLMs, what often happens is I write a parameter or variable name and it infers the type from the name because I have used the same name somewhere else in my codebase. This would not work without LLM autocomplete.
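
E.g. a hypothetical Rust sketch (made-up names, not real completion output): I type the variable name and the completion proposes the annotation and initializer because the same name appears elsewhere in the codebase.

  use std::collections::HashMap;

  // Elsewhere in the codebase:
  fn load_feature_flags() -> HashMap<String, bool> {
      HashMap::new()
  }

  fn main() {
      // I type `let feature_flags` and the suggested completion is the
      // full annotation + initializer, inferred from the name alone:
      let feature_flags: HashMap<String, bool> = load_feature_flags();
      println!("{} flags loaded", feature_flags.len());
  }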



What programming languages do you mostly work in?

I've been wondering if the benefits of AI-autocomplete are more material to people who work in languages like Python and JavaScript that are harder to provide IDE-based autocomplete for.

If you're a Java or TypeScript developer maybe the impact is reduced because you already have great autocomplete by default.



I think TypeScript benefits even more than JavaScript because the type signatures add so much context. After defining the types and signatures, Copilot will often complete the body to within a few changes of perfect.



Hmm. I was excited about Zed, but it now seems painfully clear they’re headed in a completely different direction than I’m interested in. Back to neovim, I guess…
