| I haven't used LangChain, but my sense is that much of what it's really helping people with is stream handling and async control flow. While there are libraries that make it easier, I think doing this stuff right in Python can feel like swimming against the current given its history as a primarily synchronous, single-threaded runtime.
I built an agent-based AI coding tool in Go (https://github.com/plandex-ai/plandex) and I've been very happy with that choice. While there's much less of an ecosystem of LLM-related libraries and frameworks, Go's concurrency primitives make it straightforward to implement whatever I need, and I never have to worry about leaky or awkward abstractions. |
| I've had the same experience. I thought I was the weird one, but, my god, LangChain isn't usable beyond demos. It feels like even proper logging pushes it beyond its capabilities. |
| Totally agree on not hiding prompts; we've tried to stop doing that as much as possible in LangChain and are not doing it at all in LangGraph.
Thanks for the thoughts, appreciate it. |
| I really like the idea of "good" and "bad" abstractions. I have absolutely built both.
This sentiment is echoed in this Reddit comment as well: https://www.reddit.com/r/LocalLLaMA/comments/1d4p1t6/comment.... Similar to this post, I think the "good" abstractions handle application logic (telemetry, state management, common complexity), while the "bad" abstractions abstract away tasks that you really need insight into. This has been a big part of our philosophy on Burr (https://github.com/dagworks-inc/burr), and basically everything we build -- we never want to dictate how people should interact with LLMs, rather solve the common problems around doing so. Still learning about what makes a good/bad abstraction in this space -- people reach for something like LangChain very quickly, then get sick of the abstractions right after that and build their own stuff. |
| OOP is Java, and Java is OOP, right?
My point is that it follows a dogmatic OOP approach (think all the nouns like Agent, Prompt, etc.) to model something that is rather sequential. |
| The OOP concept described by Smalltalk is message passing. The closest OOP-ness rivals are Ruby and Objective-C (and arguably Erlang). Java has no such facilities. Java is much more like C++. |
| LLMs are, at least at present, exactly the kind of thing where trying to use an abstraction without understanding what it actually does is exactly what's going to create a mess in the long run. |
| Some "agents" like the minecraft bot Voyager(https://github.com/MineDojo/Voyager) have a control loop, they are given a high level task and then they use LLM to decide what actions to take, then evaluate the result and iterate. In some LLM frameworks, a chain/pipeline just uses LLM to process input data(classification, named entitiy extraction, summary, etc).
|
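The control-loop pattern described above can be sketched in a few lines. This is a minimal illustration, not Voyager's actual code: `fake_llm` is a hypothetical stub standing in for a real model call, and the action names are made up.

```python
# Sketch of an agent control loop: ask the model for an action, "execute" it,
# feed the result back, and stop when the model says it is done.
def fake_llm(task, history):
    # Hypothetical stand-in for a real completion call: proposes one
    # action on the first turn, then declares the task finished.
    if not history:
        return "ACTION: mine_wood"
    return "DONE"

def run_agent(task, llm, max_steps=5):
    """Loop: decide an action, execute it, record the result, iterate."""
    history = []
    for _ in range(max_steps):
        decision = llm(task, history)
        if decision == "DONE":
            break
        result = f"executed {decision}"  # real code would run the action here
        history.append((decision, result))
    return history

steps = run_agent("gather resources", fake_llm)
```

The evaluate-and-iterate part lives in the feedback through `history`; swapping `fake_llm` for a real API call is the only change a real agent needs here.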
| OpenAI, when they gave press access to GPT, said that you must not publish the raw output, for AI safety reasons. So naturally people self-selected the best outputs to share. |
| I don't think this is a sensible use of "Markov chain", because that term has historic connotations in NLP for text-prediction models, and those would not include external resources.
RAG has been popular for years, including in models like BERT and T5, which can also make use of contextual content (either in the prompt, or through biasing output logits, which GPT also supports). You can see the earliest formal work that gained traction (mostly in 2021 and 2022 by citation count) here - http://proceedings.mlr.press/v119/guu20a/guu20a.pdf - though in my group, we already had something similar in 2019 too. It definitely blossomed from November 2022, though, when hundreds of companies started launching "Ask your PDF" products - check ProductHunt's products for each day from mid-December to late January and you can see on average about one such company every two to three days. |
| I don't know a thing about LangChain so this is a real digression, but I often wonder if people who are critiquing GraphQL do so from the position of only having written GraphQL resolvers by hand.
| If so, it would make sense. Because that's not a whole lot of fun. But a GraphQL server side that is based around the GraphQL Schema Language is another matter entirely.
I've written several applications that started out as proofs of concept and have evolved into production platforms based on this pairing: https://lighthouse-php.com https://lighthouse-php-auth.com It is staggeringly productive, replaces lots of code generation in model queries and authentication, interacts pretty cleanly with ORM objects, and because it's part of the Laravel request cycle is still amenable to various techniques to e.g. whitelist, rate-limit or complexity-limit queries on production machines.
I have written resolvers (for non-database types) and I don't personally use the automatic mutations; it's better to write those by hand (and no different, really, to writing a POST handler). The rest is an enormous amount of code-not-written, described in a set of files that look much like documentation and can be commented as such.
One might well not want to use it on heavily-used sites, but for intranet-type knowledgebase/admin interfaces that are an evolving proposition, it's super-valuable, particularly paired with something like Nuxt. Also pretty useful for wiring up federated websites, and it presents an extremely rapid way to develop an interface that can be used for pretty arbitrary static content generation. |
| Evaluating technology based on its "cool kid usage" and a vague sense of complexity is likely not the best strategy. Perhaps instead you could ask "what problems does this solve/create?" |
| Prompt engineering requires the ability to see what is happening at various steps, and LangChain makes that harder, if not impossible.
Honestly, I don't need that much abstraction.
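One low-abstraction way to keep that visibility is to log the exact prompt string at every step instead of letting a framework hide it. A minimal sketch, where the `llm` stub and prompt wording are illustrative, not any framework's API:

```python
# Two-step pipeline that records every prompt it sends, so each step
# can be inspected and tuned directly.
def pipeline(question, llm, log):
    summarize = f"Summarize: {question}"
    log.append(summarize)            # exact prompt is visible before sending
    summary = llm(summarize)

    answer_prompt = f"Answer briefly: {summary}"
    log.append(answer_prompt)
    return llm(answer_prompt)

log = []
# A stand-in "model" that just upper-cases its input, for demonstration.
out = pipeline("why is the sky blue?", lambda p: p.upper(), log)
```

Every string the model sees ends up in `log`, which is the property the comment above is asking for.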
| The LangChain approach struck me as interesting, but I never really saw much inherent utility in it. For our production code we went with direct use of LLM runtime libraries, and it was more than enough. |
| Not really. It's pretty much the same for RAG as it is for everything else - just a thin additional abstraction on top of apis that are easy to call on their own. |
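As a sketch of that point: the RAG-specific work is mostly retrieval plus prompt assembly, after which the model call is a single request. The keyword scorer below is a toy stand-in for a vector store, and all names are illustrative:

```python
# Minimal RAG shape: score documents against the query, keep the top k,
# and paste them into the prompt.
def retrieve(query, docs, k=2):
    # Toy relevance score: count query words appearing in each document.
    words = query.lower().split()
    scored = sorted(docs, key=lambda d: -sum(w in d.lower() for w in words))
    return scored[:k]

def build_prompt(query, context):
    return ("Answer using the context below.\n\n"
            + "\n".join(context)
            + f"\n\nQ: {query}")

docs = [
    "LangChain wraps LLM APIs.",
    "Go has goroutines.",
    "RAG adds retrieved context.",
]
prompt = build_prompt("what does RAG add?",
                      retrieve("what does RAG add", docs))
# `prompt` would then be sent in one ordinary chat-completion request.
```

Everything framework-specific here fits in two small functions, which is the "thin additional abstraction" the comment describes.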
| I think it was great at first, when LLMs were new and prompting required more strategy. Now the amount of abstraction/bloat they have for what are essentially string wrappers makes no sense. |
| Has anyone else found a good way to swap out models between companies? LangChain has made it very easy for us to swap between OpenAI/Anthropic, etc. |
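For comparison, a hand-rolled version of that swap is mostly a dispatch table over one shared call signature. The provider functions below are stubs for illustration, not real SDK calls:

```python
# Normalize each provider behind the same (prompt) -> text signature,
# so switching providers is a one-word config change.
def call_openai_stub(prompt):
    # A real version would call the OpenAI SDK here.
    return f"[openai] {prompt}"

def call_anthropic_stub(prompt):
    # A real version would call the Anthropic SDK here.
    return f"[anthropic] {prompt}"

PROVIDERS = {
    "openai": call_openai_stub,
    "anthropic": call_anthropic_stub,
}

def complete(prompt, provider="openai"):
    return PROVIDERS[provider](prompt)
```

The real work is smoothing over provider differences (message roles, tool formats, streaming) inside each adapter, which is the part LangChain handles for you.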
| No.
That'd be a good explanation, but it's theoretical. In practice:
A) There was no meaningful internal LLM API pre-ChatGPT. All this AI stuff was under lock and key until Nov 2022; then it was an emergency.
B) The bits we're discussing are OpenAI-specific concepts that could only have occurred after OpenAI's. The API includes chat messages organized with roles, an OpenAI concept, and "tools", an OpenAI concept, both of which came well after the GPT API.
Initial API announcement here: https://developers.googleblog.com/en/palm-api-makersuite-an-... |
| Many such cases. It is very hard to balance composition and abstraction in such frameworks and libraries, and with LLMs being so new, it has taken several iterations to find the right patterns and architecture for building LLM-based apps. With patchwork (https://github.com/patched-codes/patchwork), an open-source framework for automating development workflows, we try hard to avoid this by not abstracting unless we see some client usage. As a result, some workflows do appear longer, with many steps, but that makes them easier to compose.
|
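The explicit-steps idea reads roughly like this in code: each step is a plain function and a workflow is just a list of them. All names here are illustrative, not patchwork's actual API:

```python
# Each step is visible and testable on its own; a workflow is only
# function composition, so steps can be reordered or reused freely.
def strip_ws(text):
    return text.strip()

def lower(text):
    return text.lower()

def add_period(text):
    return text if text.endswith(".") else text + "."

def run_workflow(steps, data):
    for step in steps:
        data = step(data)
    return data

result = run_workflow([strip_ws, lower, add_period], "  Hello World  ")
```

Longer to write than one opaque helper, but nothing is hidden, which is the trade-off described above.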
| Frustration with LangChain is what led us (ex-CMU/UW-Madison researchers) to start building Langroid[1], a multi-agent LLM framework. We have been thoughtful about designing the right primitives and abstractions to enable a simple developer experience while supporting sophisticated workflows using single or multiple agents. There is an underlying loop-based orchestration mechanism that handles user interaction, tool handling and inter-agent handoff/communication.
We have companies using Langroid in production. [1] Langroid: https://github.com/langroid/langroid |
| Learning LangChain is effort, but not as much as truly understanding deep learning, so you learn LangChain and it feels like progress, when it may not be.
I kept telling them that it works well if you have a standard usage case, but the second you need to do something a little original, you have to go through five layers of abstraction just to change a minute detail. Furthermore, you won't really understand every step in the process, so if any issue arises or you need to improve the process, you'll be back at square one.
This is honestly such a boost of confidence.