(Comments)

Original link: https://news.ycombinator.com/item?id=40617082

A new project aims to build a language server using the Language Server Protocol (LSP), letting many editors share the same features without editor-specific implementations. The dream is that an author writes code or text once and, because the server follows LSP, it works seamlessly across the many compatible editors. Existing capabilities include showing definitions when hovering over a word, spell-checking with grammatically correct suggestions, semantic syntax highlighting, and AI integration. It remains unclear, however, how multiple language servers can be used at the same time; one possible solution is a proxy that merges results from different language servers. Overall, LSP represents an interesting attempt to enable cross-editor collaboration and AI-driven productivity gains.

Related articles

Original text


Hey everyone! Thank you so much for posting and upvoting this. Just wanted to say I'm the primary author here and happy to try and answer any questions anyone might have! Feel free to ask away here! This is a very new project and we have a lot of ground we are hoping to cover. Thank you for the support!



Isn't the entire point behind "language server" (guess it's referring to LSP - Language Server Protocol) that it gets rid of editor-dependent stuff?

The dream (as I understood it) was that people could write things like this, and it'll work out-of-the-box for a bunch of other editors, as long as they follow LSP.



It's not really about the editor. The editor (the LSP client) just provides support for telling the server what the user wants a definition for, and how to display that back to the user.

As a simple example, I'm learning Japanese, so I built a 50-line LSP server that just looks up definitions of the word under the cursor in a dictionary. This is an almost-trivial server. Its only capability is to offer definitions when hovering a position in a text document. It works perfectly well in neovim with lspconfig with 2 lines of configuration. I'm sure it would be similarly trivial to integrate in VSCode, Emacs, etc.
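For anyone curious what such a server actually involves: LSP is just JSON-RPC over stdio with `Content-Length` framing, so the core of a hover-only server is tiny. A minimal sketch in Python (stdlib only; the hard-coded dictionary is a hypothetical stand-in for a real lookup, and a full server would also answer `initialize` and track `didOpen`/`didChange`):

```python
import json

# Hypothetical stand-in for a real dictionary lookup.
DICTIONARY = {"neko": "cat", "inu": "dog"}

def frame(payload: dict) -> bytes:
    """Encode a JSON-RPC message with LSP's Content-Length framing."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

def hover_response(request: dict, document_text: str) -> dict:
    """Answer textDocument/hover by looking up the word at the cursor."""
    pos = request["params"]["position"]
    line = document_text.splitlines()[pos["line"]]
    # Expand left and right from the character offset to find the word.
    start = end = pos["character"]
    while start > 0 and line[start - 1].isalpha():
        start -= 1
    while end < len(line) and line[end].isalpha():
        end += 1
    word = line[start:end]
    definition = DICTIONARY.get(word, "(no entry)")
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {
            "contents": {"kind": "markdown", "value": f"**{word}**: {definition}"}
        },
    }
```

The server's main loop just reads framed requests from stdin and writes `frame(hover_response(...))` back to stdout; the editor handles everything else.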

Other non-coding uses of LSP that I'm aware of are spell-checking and grammar suggestions (LSP clients can display diagnostics), semantic syntax highlighting (e.g. for highlighting a markdown document), and projects like the one discussed here which just integrate more general-purpose AI.



It’s an interesting experiment, but an issue is that you might not want to give up using your regular language server. Do editors let you use more than one at a time? How do they display the combined results?

If you can’t have more than one at a time, one possibility would be to make the AI language server a proxy that passes through any query results from the other language server, maybe modifying or improving them somehow. Whatever the other language server returns might or might not be useful context for the AI.
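The merge step of such a proxy is the tricky bit, but for most request types there's an obvious combination rule. A toy sketch of that logic, assuming each backend's response has already been decoded into a plain result dict (method names are real LSP methods; everything else is illustrative):

```python
def merge_results(method: str, results: list):
    """Combine responses from several language servers for one request.

    Diagnostics are lists, so they concatenate; hover contents are joined
    into one markdown string; otherwise the first non-null answer wins.
    """
    results = [r for r in results if r is not None]
    if not results:
        return None
    if method == "textDocument/hover":
        parts = [r["contents"]["value"] for r in results]
        return {"contents": {"kind": "markdown",
                             "value": "\n\n---\n\n".join(parts)}}
    if method == "textDocument/publishDiagnostics":
        merged = []
        for diagnostics in results:
            merged.extend(diagnostics)
        return merged
    return results[0]  # fall back to the first server's answer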



Currently, whether multiple language servers can be active in one buffer is up to the language client. In Emacs for example, Eglot (built-in) doesn’t support this but LSP-mode (bolt-on) does.



Neovim allows multiple LSPs. In my setup, each source shows virtual-text comments in a unique color per LSP, one message after the other. I think this is the default behavior, but I'm not sure.



I’d love to get a peek at your config or setup. I’ve used Astro for a while because I just got too bogged down in my old vimrc, especially after migrating to Neovim a while ago. It’s fine, but fiddling is less than stellar, and I know my performance overhead is higher than it could be.



Very interesting approach. The folks at rift [0] worked on LSP as an integration point for AI coding. But I think they are focusing on other things now.

Do you think the LSP abstraction can support interactions beyond copilot style autocomplete? While that's a super helpful UX, it's also pretty narrow and limited.

My project aider [1] provides a pair-programming UX, which allows complex interactions like asking for a change that will modify multiple files. Could LSP servers support more general AI coding like this?

[0] https://github.com/morph-labs/rift

[1] https://github.com/paul-gauthier/aider



The problem with aider for me is that it works in the terminal, whereas coding happens in the editor, where I most likely already have open the file I want to transform with an LLM. I probably even have my cursor on the thing, or can at least easily select the function, etc., that needs to be changed.

CopilotChat.nvim solves this somewhat elegantly for neovim, providing a streamlined UI to interact with an LLM in a way that allows accepting suggested diffs to your currently open buffer. The problem however is that it only works with GitHub's Copilot chat, as the name suggests. Not sure how well Copilot stacks up against gpt-4o for example but I'd imagine not that well.

edit: Quick demo of CopilotChat.nvim: https://github.com/raine/ghtool/assets/11027/e8d5820b-eafb-4...

This is pretty much the kind of UI I'd want for interacting with LLMs, aside from the typical Copilot style ghost autocomplete.



These are all awesome projects! After skimming them, one big difference between these and LSP-AI is that LSP-AI is a language server. That means we work out of the box with almost all popular text editors and IDEs; you don't need any plugins to get completion.

For custom use cases like chat windows and some of the things we are working on next, you will still need plugins, but making it an LSP simplifies things like synchronizing document changes and communicating with the client.



This is actually what we are working on adding next! We are working on code crawling and a vector search for better context gathering. Stay tuned for some more info on this



I hope your LSP client module ends up being reasonably isolated and reusable! I found that client support for LSP is the weakest part of the ecosystem. The story generally seems to be "the editor has a bespoke client and there is never a reason to use LSP from any other context".



This is amazing, yet the docs are lacking. Following the install instruction "cargo install lsp-ai -F llama_cpp -F metal"

one gets: Package `lsp-ai v0.1.0` does not have feature `llama-cpp-2`. It has an optional dependency with that name, but that dependency uses the "dep:" syntax in the features table, so it does not have an implicit feature with that name.

It's not very clear what enabling a feature involves beyond specifying the flag (whether to git clone and build, etc.).



Hey, thanks for bringing this up. I'm not able to reproduce the error you are getting. I just ran `cargo install lsp-ai -F llama_cpp -F metal` on my Mac and received no errors and a clean install. Can you give me the full output of your command, either here or in a GitHub issue?



I’ve tried these tools a bit, but I’m having trouble finding long-term value in them. I’d love to hear from people who use them what their specific workflow is.

Personally I find it just easier to either write the code myself, or ask ChatGPT/whatever for snippets for specific problems, which I then heavily modify to suit my needs (and fix its bugs, which happen quite often). But maybe I’m just too ingrained in existing behavior.



The workflow of having AI suggestions pop up on the rest of the line is really nice. You can ignore when you know what you are trying to write, but the really low-friction interaction of “write a very rigorous comment and then let the LLM autocomplete the implementation” is often enough to solve one-liners in languages where I’m not fluent in the standard lib, which means I don’t need to break flow to go read docs or whatever.

Seems small but I think it’s actually a major productivity win for polyglot programming (which is a lot of my current $dayjob).

I also like the convenience of “start a session with my current file automatically in the context”, again, lowers the friction substantially.



You have to learn how to prompt tools like Copilot. A few tricks I use a lot:

1. Write comments saying what you want the next lines of code to do

2. Write function definitions with clear function names and type annotations on the arguments - this can result in the full function body being provided if your definitions are clear enough

3. For repetitive code (like parameterized unit tests) provide a couple of examples and then use comments to hint at what it should write for you next based on those examples

4. Sometimes it's good to temporarily copy and paste a chunk of code in from elsewhere. For example, copy a CREATE TABLE SQL statement into a Python file when you are writing code that will interact with that table - or even an HTML page when you are writing out the HTML for a form associated with that table
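Trick 4 in practice might look something like this: paste the schema as a comment above where you're working, then let the model complete the data-access code beneath it. The table and columns here are made up for illustration; the completed function is the kind of body the model tends to produce from that context:

```python
import sqlite3

# Pasted from the migration file so the model "sees" the schema:
# CREATE TABLE users (
#     id INTEGER PRIMARY KEY,
#     email TEXT NOT NULL UNIQUE,
#     created_at TEXT NOT NULL
# );

def insert_user(conn: sqlite3.Connection, email: str, created_at: str) -> int:
    """Insert a row matching the pasted schema and return its id."""
    cur = conn.execute(
        "INSERT INTO users (email, created_at) VALUES (?, ?)",
        (email, created_at),
    )
    conn.commit()
    return cur.lastrowid
```

Once the function is written, the pasted comment can be deleted again; it was only there as prompt context.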



For 4, this is where I think using the target language's LSP / IntelliSense would be useful. The AI tools should know that I'm referring to specific tables/types/libraries through references/locality/imports and supply that context automatically for me.



Copilot does that in a really clever way already: any time it performs a completion it looks at other nearby files in the project and tries to find pieces of code that look semantically similar to the area you are editing, then includes those snippets in its prompt.
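The details of Copilot's retrieval aren't public, but the general idea can be approximated with something as crude as Jaccard similarity over identifier sets: score each candidate snippet from nearby files against the text around the cursor and include the top matches in the prompt. A toy sketch (not Copilot's actual algorithm):

```python
import re

def tokens(code: str) -> set:
    """Extract the set of identifiers from a piece of code."""
    return set(re.findall(r"[A-Za-z_]\w*", code))

def top_snippets(context: str, snippets: list, k: int = 2) -> list:
    """Rank candidate snippets by Jaccard similarity to the edit context."""
    ctx = tokens(context)

    def score(snippet: str) -> float:
        t = tokens(snippet)
        union = ctx | t
        return len(ctx & t) / len(union) if union else 0.0

    return sorted(snippets, key=score, reverse=True)[:k]
```

Real systems use sliding windows and more careful tokenization, but even this captures why editing near `delete_user` surfaces other `user`-related code.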

This works well as a general rule, but sometimes it doesn't find the right snippets, which is why I occasionally help it out through copy and paste.



As described, that seems to miss out on files that are far away through imports, and on code that isn't similar to anything existing, such as when writing unique code.

My main thoughts behind this are that

1. The LLMs tend to hallucinate library functions

2. I don't want to have to copy and paste a schema



Try Cody with gpt-4o and explicit prompting (option+k). It is a nice experience to have the LLM at your fingertips instead of having to do round-trips to a different UI.

The models used for autocompletion in Github Copilot and other systems are usually not as strong but faster and cheaper.

You can still get decent results from the autocomplete models if you guide them with comments but I find explicit prompting less frustrating when I care about getting a good result.



I am having to make enhancements to a Ruby codebase where I barely know the language and the code base is completely foreign. So I tried using Cursor to provide AI autocompletion, suggestions, etc. It's definitely useful for this use case, and so I can see the appeal for junior developers.

For my other code bases I find that the amount of time it saves is definitely nice but only barely worth paying another SaaS subscription for.

And so I'm happy to just wait for Jetbrains and Microsoft to roll this into the existing products for free.



Auto complete directly inline inside the editor is pretty magical feeling. Sometimes you'll pause for a moment and it will suggest the exact next 3-4 lines you were about to type. You press tab and move on.



Do you find yourself having to go to 3 or 4 different files to get everything chat gpt needs for its context to solve the problem? Tools like this can help with that use case.



LLMs must be trained for fill-in-the-middle (FIM) completion to be useful in this scenario, but think “the next stage of autocomplete that uses the context around it” more than “writes entire functions”.

I’ve found it great when manipulating data between two formats, like a CSV export into a JSON config. Something that might be too short to write a script for but long enough to be tedious, you can now tab complete your way through it.
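For scale, here's what the CSV-to-JSON case looks like done by hand with the stdlib: only a few lines, which is roughly the shape of what you end up tab-completing row by row instead. The column names here are hypothetical, and the sketch assumes the CSV has a `name` column to key the config on:

```python
import csv
import io
import json

def csv_to_config(csv_text: str) -> str:
    """Turn a CSV export into a JSON config object keyed by the 'name' column."""
    rows = csv.DictReader(io.StringIO(csv_text))
    config = {
        row["name"]: {k: v for k, v in row.items() if k != "name"}
        for row in rows
    }
    return json.dumps(config, indent=2)
```

Whether a throwaway script or tab-completion is faster mostly depends on how irregular the data is; the LLM shines when the mapping has exceptions a script would need special cases for.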



You can specify FIM objective models for completion. I find that Codestral from Mistral works pretty well.

That next stage is currently what I am working on. I'm building out a code splitter using TreeSitter right now, and we already have experimental vector search in the language server.
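A Tree-sitter splitter walks the real syntax tree, but the core idea can be illustrated with a much cruder stand-in: chunk source at top-level definition boundaries so each function or class stays intact for embedding. A sketch (Python sources only; not the project's actual splitter):

```python
def split_code(source: str) -> list:
    """Split Python source into chunks at top-level def/class boundaries.

    Lines before the first definition (imports, constants) are grouped
    into the first chunk. Tree-sitter would handle this language-agnostically
    and more robustly (decorators, nested defs, etc.).
    """
    chunks, current = [], []
    for line in source.splitlines():
        at_top_level = line.startswith(("def ", "class "))
        if at_top_level and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk would then be embedded and indexed for the vector search mentioned above.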



I use RustRover/VS Code + Codeium or Zed + Supermaven and I have used Copilot before. To be honest it takes some time to get used to the flow. I have turned them off multiple times and finally the workflow has set well in my brain. Now I feel very productive with them.

I work full-time for my own product (very early stage) but I am happy to share my own journey of using AI code assistants. Please feel free to check the commits: https://github.com/brainless/dwata



Curious how this compares to the locally run AI code assistance that Jetbrains has added to their products. I’ve found it to be pretty good for what I want (which is very minimal assistance), it’s just a bit too eager to suggest code sometimes.



Nice. I saw this coming. Next up is a "generic" webserver which just serves HTTP response data based on some system prompt. :)



Yes! I would love to hear your thoughts on our current features and roadmap. If you have any ideas or want to contribute, feel free to open a GitHub issue.



There are no docs for Emacs configuration; that is something we need to add! I don't have experience using Emacs, but if anyone does and wants to create an issue with some examples, that would be awesome. Otherwise I can find some time later to add it.
