Δ-Mem: Efficient Online Memory for Large Language Models

Original link: https://arxiv.org/abs/2605.12357


Hacker News discussion: Δ-Mem: Efficient Online Memory for Large Language Models (arxiv.org) — 18 points by 44za12, 59 minutes ago | 2 comments

ktallett, 5 minutes ago:
The obvious energy-saving step would be to utilise previous searches by others. Many of the tasks people do are rather similar; it is such an energy waste to start again each time. (Obviously ignoring the huge energy saver, which is to observe whether you even need to bother doing the task at all.)

DeathArrow, 6 minutes ago:
I see lots of techniques proposed to give LLMs the capacity to recall things. I have even seen a lot of memory plugins for AI coding agents and tried some myself. What I want to see is something that has been tested and proven in practice to be genuinely useful, especially for coding agents.
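ktallett's suggestion of reusing previous searches by others amounts to caching: before running an expensive LLM call, check whether a sufficiently similar prompt has already been answered. The sketch below is a hypothetical illustration of that idea only, not anything from the Δ-Mem paper; the class name, the similarity measure, and the 0.9 threshold are all assumptions chosen for the example.

```python
# Hypothetical sketch of the "reuse previous searches" idea from the comment above.
# Not from the paper; PromptCache, the similarity measure, and the threshold are
# illustrative assumptions. Uses only the Python standard library.
from difflib import SequenceMatcher


class PromptCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold                 # minimum similarity needed to reuse an answer
        self.entries: list[tuple[str, str]] = []   # (prompt, cached answer) pairs

    def lookup(self, prompt: str) -> str | None:
        """Return the answer of the most similar cached prompt, or None on a miss."""
        best_score, best_answer = 0.0, None
        for cached_prompt, answer in self.entries:
            score = SequenceMatcher(None, prompt.lower(), cached_prompt.lower()).ratio()
            if score > best_score:
                best_score, best_answer = score, answer
        return best_answer if best_score >= self.threshold else None

    def store(self, prompt: str, answer: str) -> None:
        """Record a prompt/answer pair for future reuse."""
        self.entries.append((prompt, answer))


# Usage: consult the cache first, and only fall back to the model on a miss.
cache = PromptCache()
prompt = "Summarise the Δ-Mem paper"
answer = cache.lookup(prompt)
if answer is None:
    answer = "...call the LLM here..."   # placeholder for a real model call
    cache.store(prompt, answer)
```

In a real system the text-similarity match would likely be replaced by embedding similarity and a shared (cross-user) store, but the control flow — lookup, miss, compute, store — stays the same.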