<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>每日HackerNews</title><link></link><description></description>
            <item>
                <title>SideX – A Tauri-based port of Visual Studio Code</title>
                <link>https://github.com/Sidenai/sidex</link>
                <guid>https://github.com/Sidenai/sidex</guid>
                <pubDate>Mon, 06 Apr 2026 07:30:07 +0000</pubDate>
                <description><![CDATA[<p>## SideX: A Lightweight VSCode Port

SideX is an early-release, open-source project aiming to recreate Visual Studio Code’s functionality with a significantly smaller footprint. It achieves this by replacing Electron with Tauri – a Rust-based backend and native webview – resulting in faster performance and reduced resource usage (targeting 200MB RAM).

The project is a heavily stripped, 1:1 architectural port of VSCode, utilizing over 5,600 TypeScript files adapted to run on a native shell. Core features like the Monaco editor, file explorer, integrated terminal, basic Git integration, and theme support are currently working, alongside extension loading from Open VSX.

However, SideX is still under active development. Many workbench features are incomplete, extension compatibility is limited, and debugging/settings UIs are unstable. 

The developers are actively seeking community contributions to address bugs, implement features (especially terminal, extensions, and debugging), and improve platform support.  If you're familiar with VSCode’s architecture, contributing is encouraged! You can find the project and contribution guidelines on GitHub: [https://github.com/Sidenai/sidex](https://github.com/Sidenai/sidex).



</p>]]></description>
            </item>
            
            <item>
                <title>展示HN：使用Gemma E2B在M3 Pro上实现的实时人工智能（音频/视频输入，语音输出）
 Show HN: Real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B</title>
                <link>https://github.com/fikrikarim/parlor</link>
                <guid>https://github.com/fikrikarim/parlor</guid>
                <pubDate>Mon, 06 Apr 2026 06:00:54 +0000</pubDate>
                <description><![CDATA[<p>## Parlor：实时、本地多模态人工智能

Parlor 是一个研究预览版，展示了使用语音和视觉进行实时人工智能对话，所有处理都在您的设备*本地*进行。它利用 Google 的 Gemma 4 E2B 来理解音频和视觉输入，并使用 Kokoro 进行文本到语音的输出。

Parlor 的驱动力是创造一个可持续、免费的英语学习工具，它通过完全在设备上运行来消除服务器成本——这得益于像 Gemma 这样更小、更强大的 AI 模型领域的最新进展。虽然它无法执行复杂的代理任务，但这代表着向人人可及的人工智能迈出的重要一步。

目前 Parlor 适用于 macOS (Apple Silicon) 和 Linux，并配备支持的 GPU，允许进行免提、可中断的对话。它可以通过 GitHub 克隆轻松运行，并需要大约 3GB 的内存。该项目旨在未来移植到手机，设想一个用户可以通过简单地展示和谈论他们周围的物体来与人工智能交互的世界，甚至利用多语言支持。
</p><p>## Parlor: Real-Time, On-Device Multimodal AI

Parlor is a research preview demonstrating real-time AI conversations using both voice and vision, all processed *locally* on your machine. It leverages Google’s Gemma 4 E2B for understanding audio and visual input, and Kokoro for text-to-speech output. 

Driven by the desire to create a sustainable, free English-learning tool, Parlor eliminates server costs by running entirely on-device – now achievable thanks to recent advancements in smaller, powerful AI models like Gemma. While not capable of complex agentic tasks, this represents a significant step towards accessible AI.

Currently available for macOS (Apple Silicon) and Linux with a supported GPU, Parlor allows for hands-free, interruptible conversations. It’s easily runnable via a GitHub clone and requires approximately 3GB of RAM. The project aims for future portability to phones, envisioning a world where users can interact with AI by simply showing and speaking about objects in their environment, even utilizing multi-lingual support.



</p>]]></description>
            </item>
            
            <item>
                <title>Usenet 档案
 Usenet Archives</title>
                <link>https://usenetarchives.com</link>
                <guid>https://usenetarchives.com</guid>
                <pubDate>Mon, 06 Apr 2026 05:35:16 +0000</pubDate>
                <description><![CDATA[<p>UsenetArchives.com 是一个历史悠久的 Usenet 帖子档案，包含数十年内容。**用户被警告，该内容未经审核，可能包含成人、冒犯或令人反感的内容。**

通过访问该网站，您**确认您已年满 18 岁**（或您所在地区的法定年龄），并理解该档案的性质。您对您的浏览选择承担全部责任，并同意不非法分发内容。

该网站包含来自公共论坛的用户生成内容，无法保证其适当性。用户可以提交内容删除请求，访问受网站的隐私政策管辖。总之，请谨慎行事，并理解可能存在的敏感材料。</p><p>UsenetArchives.com is a historical archive of decades-old Usenet posts. **Users are warned that the content is unmoderated and may include adult, offensive, or objectionable material.**

By accessing the site, you **confirm you are 18 years or older** (or the age of majority in your location) and understand the nature of the archive. You acknowledge full responsibility for your viewing choices and agree not to redistribute content illegally. 

The site contains user-generated content from public forums and cannot guarantee its appropriateness. Users can submit content removal requests, and access is governed by the site’s Privacy Policy. Essentially, proceed with caution and understanding of potentially sensitive material.</p>]]></description>
            </item>
            
            <item>
                <title>基于推拉的信号算法
 Signals, the push-pull based algorithm</title>
                <link>https://willybrauner.com/journal/signal-the-push-pull-based-algorithm</link>
                <guid>https://willybrauner.com/journal/signal-the-push-pull-based-algorithm</guid>
                <pubDate>Mon, 06 Apr 2026 05:34:05 +0000</pubDate>
                <description><![CDATA[<p>## 信号：深入了解响应式编程

信号是一种现代的响应式编程方法，在Solid、Vue等前端框架中越来越受欢迎，但往往缺乏清晰的内部理解。 它们的核心是管理状态，并在状态改变时自动更新依赖值——就像一个电子表格，单元格会根据公式自动更新。

这种响应性由一种“推拉”算法驱动。**基于推的信号**在值改变时通知订阅者（不共享新的状态本身）。**基于拉的“计算值”**是惰性的；它们只有在被读取*并且*依赖项发生变化时才会重新计算。 重要的是，计算值会自动跟踪这些依赖项，无需手动指定——这是优于传统方法的关键优势。

这种依赖跟踪利用一个全局堆栈将计算与它们访问的信号关联起来。 带有“脏”标志的缓存系统确保仅在必要时才重新评估，从而优化性能。 推用于失效和拉用于重新评估的结合，创造了一种细粒度、高效的响应式系统。

标准化工作正在进行中，旨在将信号原生集成到JavaScript中（TC39 proposal-signals），可能为未来的框架提供一个共同的基础。 这种方法侧重于*如何*传播变化，为现有的状态管理解决方案提供了一种强大的替代方案。
</p><p>## Signals: A Deep Dive into Reactive Programming

Signals are a modern approach to reactive programming, gaining traction in front-end frameworks like Solid, Vue, and others, but often lacking clear internal understanding. At their core, Signals manage state and automatically update dependent values when that state changes – like a spreadsheet where cells update based on formulas.

This reactivity is powered by a “push-pull” algorithm. **Push-based Signals** notify subscribers when their value changes (without sharing the new state itself). **Pull-based “computed” values** are lazy; they only re-calculate when read *and* invalidated by a change in their dependencies. Crucially, computed values automatically track these dependencies without manual specification – a key advantage over traditional methods.

This dependency tracking utilizes a global stack to link computations to the Signals they access. A cache system with a “dirty” flag ensures re-evaluation only happens when necessary, optimizing performance. The combination of push for invalidation and pull for re-evaluation creates a fine-grained, efficient reactivity system.

Standardization efforts are underway to natively integrate Signals into JavaScript (TC39 proposal-signals), potentially offering a common foundation for future frameworks. This approach focuses on *how* changes propagate, offering a powerful alternative to existing state management solutions.
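The push-pull mechanics above (push to invalidate, pull to recompute, a global stack for automatic dependency tracking, and a dirty flag for caching) can be sketched in a few lines of TypeScript. This is an illustrative toy, not code from any framework or from the TC39 proposal:

```typescript
// Toy push-pull signal system (illustrative only, not any framework's real code).
// signal(): push-based: on write, subscribers are marked dirty; no value is pushed.
// computed(): pull-based: recomputes lazily, only when read while dirty.
// A global stack links the currently running computation to the signals it reads.

type Computation = { run: () => void; dirty: boolean };
const stack: Computation[] = [];

function signal<T>(value: T) {
  const subscribers = new Set<Computation>();
  return {
    get(): T {
      const active = stack[stack.length - 1];
      if (active) subscribers.add(active); // automatic dependency tracking
      return value;
    },
    set(next: T) {
      value = next;
      for (const c of subscribers) c.dirty = true; // push: invalidate only
    },
  };
}

function computed<T>(fn: () => T) {
  let cached!: T;
  const comp: Computation = { run: () => { cached = fn(); }, dirty: true };
  return {
    get(): T {
      if (comp.dirty) { // pull: re-evaluate only when read while dirty
        stack.push(comp);
        comp.run();
        stack.pop();
        comp.dirty = false;
      }
      return cached;
    },
  };
}

const count = signal(1);
const double = computed(() => count.get() * 2);
console.log(double.get()); // 2
count.set(5);              // marks `double` dirty; nothing recomputes yet
console.log(double.get()); // 10, recomputed lazily on this read
```

Note that `count.set(5)` pushes only an invalidation; the multiplication runs again only when `double.get()` pulls the value.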



</p>]]></description>
            </item>
            
            <item>
                <title>2026年国誉设计奖获奖者
 Winners of the 2026 Kokuyo Design Awards</title>
                <link>https://spoon-tamago.com/winners-of-the-2026-kokuyo-design-awards/</link>
                <guid>https://spoon-tamago.com/winners-of-the-2026-kokuyo-design-awards/</guid>
                <pubDate>Mon, 06 Apr 2026 05:05:15 +0000</pubDate>
                <description><![CDATA[<p>## 国誉设计奖：源于体验的设计

近25年来，国誉设计奖一直致力于推动创新文具，将概念变为现实。今年的主题是“波纹（hamon）：引发共鸣的设计”，鼓励设计师从个人经验中汲取灵感，创造出有影响力的产品。

大奖得主是神成宏树的**“Before Note”**，它重新构想了笔记本，将其设计成可定制的页面组合，使用户能够超越批量生产，个性化他们的体验。

优胜奖作品突出了微妙而有影响力的设计：Takashi Higashide 的**“Gram”**探索了重量对书写的影响，塚本雄二的**“边缘识别笔记本”**以可持续的方式提供优雅的组织，而五十岚与泷泽的**“渐变日记”**则打破了刻板的计划表结构，采用了流畅的渐变布局。

其他值得关注的入围作品包括创新的包装、鼓励反思的笔以及增强阅读和捕捉灵感的工具，所有这些都体现了对用心互动和个人联系的关注。
</p><p>## Kokuyo Design Awards: Designs Rooted in Experience

For nearly 25 years, the Kokuyo Design Awards have championed innovative stationery, transforming concepts into reality. This year’s theme, “hamon: design that resonates,” prompted designers to draw from personal experience to create impactful products.

The Grand Prix winner, **“Before Note”** by Hiroki Kannari, reimagines the notebook as a customizable bundle of pages, empowering users to personalize their experience beyond mass production. 

Merit awards highlighted subtle yet impactful design: **“Gram”** by Takashi Higashide explored the effect of weight on writing, **“Notebooks Identified by Edges”** by Yuji Tsukamoto offered elegant organization with a sustainable touch, and **“Gradience Diary”** by Igarashi & Takizawa broke from rigid planner structures with a fluid, gradient-based layout.

Other notable finalists included innovative packaging, pens encouraging reflection, and tools to enhance reading and capture inspiration, all demonstrating a focus on mindful interaction and personal connection.



</p>]]></description>
            </item>
            
            <item>
                <title>使用 Slug 算法渲染任意比例的表情符号
 Rendering arbitrary-scale emojis using the Slug algorithm</title>
                <link>https://leduyquang753.name.vn/blog/2026/4/4/rendering-arbitrary-scale-emojis-using-the-slug-algorithm</link>
                <guid>https://leduyquang753.name.vn/blog/2026/4/4/rendering-arbitrary-scale-emojis-using-the-slug-algorithm</guid>
                <pubDate>Mon, 06 Apr 2026 05:04:24 +0000</pubDate>
                <description><![CDATA[<p>## HarfBuzz GPU 与 Slug 渲染彩色字体

Eric Lengyel 的 Slug 算法现已开源，并集成到 HarfBuzz 中作为 GPU 库，超越了文本塑形，进入了字形渲染领域。传统上，文本渲染依赖于在特定尺寸下栅格化位图，这对于缩放或 3D 环境来说是个问题。像符号距离场 (SDF) 这样的替代方案也有局限性，但 Slug 直接在片段着色器中计算字形覆盖率，从而实现完美的缩放和变换。

核心思想是将字形曲线预处理成数据缓冲区并上传到 GPU。虽然最初是用于单色字形，但可以通过 COLRv0 和 COLRv1 等格式扩展到矢量彩色字体（如表情符号）。COLRv0 将表情符号渲染为堆叠的彩色字形，可以通过调整现有的单色渲染来轻松支持。COLRv1 更加复杂，利用带有变换、裁剪和混合的渲染树 – 由 HarfBuzz 的 `hb-paint` 组件处理。

这涉及将绘图命令（裁剪蒙版、填充、变换、组）编码到纹理缓冲区中，并在片段着色器中执行它们，可能需要基于图层的混合方法。最终，这使得在任何应用程序中都能实现清晰、可缩放的表情符号渲染，并且即使对于单色文本也优于传统方法。作者希望该概述能够激发进一步的开发并集成到现有的渲染库中。
</p><p>## HarfBuzz GPU & Rendering Color Fonts with Slug

Eric Lengyel’s Slug algorithm is now open-source and integrated into HarfBuzz as a GPU library, moving beyond text shaping into glyph rendering. Traditionally, text rendering relied on rasterizing bitmaps at specific sizes, problematic for scaling or 3D environments. Alternatives like Signed Distance Fields (SDFs) have limitations, but Slug calculates glyph coverage directly in the fragment shader, enabling perfect scaling and transformations.

The core idea involves preprocessing glyph curves into a data buffer uploaded to the GPU. While initially for single-color glyphs, this can be extended to vector color fonts (like emojis) using formats like COLRv0 and COLRv1. COLRv0 renders emojis as stacked, colored glyphs, easily supported by adapting the existing single-color rendering. COLRv1 is more complex, utilizing a render tree with transforms, clipping, and blending – handled by HarfBuzz’s `hb-paint` component.

This involves encoding drawing commands (clip masks, fills, transforms, groups) into a texture buffer and executing them in the fragment shader, potentially requiring a layer-based blending approach.  Ultimately, this allows for crisp, scalable emoji rendering in any application, and offers advantages over traditional methods even for monochrome text. The author hopes this outline will inspire further development and integration into existing rendering libraries.



</p>]]></description>
            </item>
            
            <item>
                <title>1987年的游戏《最后的忍者》是40千字节。
 The 1987 game “The Last Ninja” was 40 kilobytes</title>
                <link>https://twitter.com/exQUIZitely/status/2040777977521398151</link>
                <guid>https://twitter.com/exQUIZitely/status/2040777977521398151</guid>
                <pubDate>Mon, 06 Apr 2026 05:02:51 +0000</pubDate>
                <description><![CDATA[]]></description>
            </item>
            
            <item>
                <title>案例研究：恢复一个损坏的12 TB多设备池
 Case study: recovery of a corrupted 12 TB multi-device pool</title>
                <link>https://github.com/kdave/btrfs-progs/issues/1107</link>
                <guid>https://github.com/kdave/btrfs-progs/issues/1107</guid>
                <pubDate>Mon, 06 Apr 2026 04:34:11 +0000</pubDate>
                <description><![CDATA[<p>本文详细描述了在一次意外断电后，成功恢复一个严重损坏的12TB Btrfs文件系统，该文件系统跨越3个设备池（数据单副本，元数据DUP，DM-SMR磁盘）的过程。标准的`btrfs check --repair`命令失败，由于extent树和空闲空间树的问题陷入无限循环。

恢复是通过14个基于btrfs-progs API构建的定制C工具实现的，数据损失极小——大约4.59TB中的7.2MB（0.00016%）。作者分享此案例作为研究，*而非*错误报告，并为btrfs-progs的潜在改进提供建设性反馈。

提出了九个具体的改进领域，重点是增强的修复工具功能（进度检测、extent树重建、孤立inode清理）、更清晰的文档以及对已识别边缘情况的修复。定制工具的参考实现以及一个补丁已在GitHub上公开提供，作为进一步调查和讨论的资源，而非直接提交补丁。
</p><p>This document details the successful recovery of a severely corrupted 12TB Btrfs filesystem across a 3-device pool (data single, metadata DUP, DM-SMR disks) following a hard power cycle. Standard `btrfs check --repair` failed, entering an infinite loop due to issues within the extent and free space trees.

Recovery was achieved using 14 custom C tools built against the btrfs-progs API, resulting in minimal data loss – approximately 7.2MB out of 4.59TB (0.00016%). The author shares this as a case study, *not* a bug report, offering constructive feedback for potential improvements to btrfs-progs.

Nine specific areas for improvement are proposed, focusing on enhanced repair tool functionality (progress detection, extent tree rebuilding, orphan inode cleaning), clearer documentation, and fixes for identified edge cases. A reference implementation of the custom tools, along with a single patch, is publicly available on GitHub, offered as a resource for further investigation and discussion, not as direct patch submissions.



</p>]]></description>
            </item>
            
            <item>
                <title>一个开放源代码的240天线阵列，用于将信号从月球反射回来。
 An open-source 240-antenna array to bounce signals off the Moon</title>
                <link>https://moonrf.com/</link>
                <guid>https://moonrf.com/</guid>
                <pubDate>Mon, 06 Apr 2026 04:32:13 +0000</pubDate>
                <description><![CDATA[<p>利用月球反射信号——被称为地月地（EME）通信——长期以来一直是无线电爱好者的终极挑战。它需要大型天线、昂贵设备以及精确的手动指向和跟踪。我们试图将这项技术带到地面，提供体验太空通信的乐趣所需的所有工具，通过开源软件定义的相控阵来实现。
</p><p>Bouncing signals off the Moon—known as Earth-Moon-Earth (EME) communication—has long been the ultimate challenge for radio amateurs. It required large antennas, expensive equipment, and accurate manual pointing and tracking. We try to bring this down to Earth, providing all the tools needed to experience the thrill of space communication, with an open source software-defined phased array.</p>]]></description>
            </item>
            
            <item>
                <title>我们用 Bun 替换了 Node.js，吞吐量提升了 5 倍。
 We replaced Node.js with Bun for 5x throughput</title>
                <link>https://trigger.dev/blog/firebun</link>
                <guid>https://trigger.dev/blog/firebun</guid>
                <pubDate>Mon, 06 Apr 2026 04:05:47 +0000</pubDate>
                <description><![CDATA[<p>## Bun vs. Node.js：Trigger.dev 的 5 倍性能提升

Trigger.dev 在其对延迟敏感的“Firestarter”服务（一个处理数千个长轮询 HTTP 连接的预热连接代理）中用 Bun 替换了 Node.js，从而实现了 **5 倍的吞吐量提升**。初步分析显示 Node.js 实现存在瓶颈：缓慢的 SQLite 查询、过多的 Zod 解析以及低效的头部转换。

第一阶段消除了 SQLite 数据库，用复合键 Map 替换它，实现 O(1) 查找，吞吐量翻倍，延迟减半。第二阶段切换到 Bun 的原生 `bun.serve()` API，进一步将性能翻倍。随后的分析（第三阶段）识别并修复了与 Zod 验证、头部处理和调试日志记录相关的热点，将 CPU 使用率提高了 40%。最后，编译成单个二进制文件（第四阶段）又将吞吐量提高了 14%，并将镜像大小从 180MB 减少到 68MB。

一个关键发现是 Bun 的 HTTP 模型中存在内存泄漏：来自断开连接的客户端的未解决的 Promise。修复此问题稳定了内存使用并进一步提高了性能。该团队强调分析的重要性，在每个步骤进行基准测试，并理解 Bun 独特的 HTTP 生命周期。他们还为常见的 Bun 问题创建了一个调试技能。
</p><p>## Bun vs. Node.js: A 5x Performance Boost for Trigger.dev

Trigger.dev replaced Node.js with Bun in their latency-sensitive “Firestarter” service – a warm-start connection broker handling thousands of long-poll HTTP connections – resulting in a **5x throughput increase**. Initial profiling revealed bottlenecks in the Node.js implementation: slow SQLite queries, excessive Zod parsing, and inefficient header conversion.

Phase 1 eliminated the SQLite database, replacing it with a composite-key Map for O(1) lookups, doubling throughput and halving latency. Phase 2 switched to Bun’s native `bun.serve()` API, further doubling performance. Subsequent profiling (Phase 3) identified and fixed hotspots related to Zod validation, header processing, and debug logging, improving CPU usage by 40%. Finally, compiling to a single binary (Phase 4) boosted throughput another 14% and reduced image size from 180MB to 68MB.

A key discovery was a memory leak in Bun’s HTTP model: unresolved Promises from disconnected clients. Fixing this stabilized memory usage and further improved performance.  The team emphasizes the importance of profiling, benchmarking at each step, and understanding Bun’s unique HTTP lifecycle. They also created a debugging skill for common Bun issues.
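The Phase 1 change, replacing the SQLite table with a composite-key Map, can be sketched as follows. The key parts (`projectId`, `machineId`) and the `Connection` shape are assumptions for illustration, not Trigger.dev's actual schema:

```typescript
// Sketch of the "composite-key Map" replacement for a SQLite lookup table.
// Key parts (projectId, machineId) and the Connection shape are invented for
// illustration; the point is the O(1), zero-I/O lookup on the hot path.

type Connection = { socketId: string; lastSeen: number };

const connections = new Map<string, Connection>();

// Composite key: one cheap string concatenation instead of a SQL query.
const keyOf = (projectId: string, machineId: string) => `${projectId}:${machineId}`;

function register(projectId: string, machineId: string, conn: Connection): void {
  connections.set(keyOf(projectId, machineId), conn);
}

function lookup(projectId: string, machineId: string): Connection | undefined {
  return connections.get(keyOf(projectId, machineId));
}

register("proj_1", "machine_a", { socketId: "s1", lastSeen: Date.now() });
console.log(lookup("proj_1", "machine_a")?.socketId); // "s1"
```

The trade-off is durability: a Map resets on restart, which is acceptable for ephemeral connection state but not for anything that must survive a process crash.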



</p>]]></description>
            </item>
            
            <item>
                <title>科学家首次绘制了阴蒂的所有神经。
 Scientists mapped all the nerves of the clitoris for the first time</title>
                <link>https://www.livescience.com/health/anatomy/scientists-mapped-all-the-nerves-of-the-clitoris-for-the-first-time</link>
                <guid>https://www.livescience.com/health/anatomy/scientists-mapped-all-the-nerves-of-the-clitoris-for-the-first-time</guid>
                <pubDate>Mon, 06 Apr 2026 04:03:25 +0000</pubDate>
                <description><![CDATA[<p>最近的研究正在揭示阴蒂的奥秘，这个器官由于社会禁忌和解剖学挑战，历史上一直研究不足。研究人员利用同步加速器的先进X射线技术，首次在微米级别上创建了阴蒂神经的详细三维图像。

发表在bioRxiv上的研究表明，阴蒂背神经（DNC）有力地延伸到阴蒂龟头，广泛分支并连接到周围组织，如耻骨区域。这与先前的神经逐渐变细的假设相矛盾。

这种详细的解剖学图谱具有重要意义。它可以改善针对女性生殖器切割术后患者的重建手术，并帮助外科医生在其他盆腔手术中避免神经损伤。专家强调，随着性别肯定治疗的进步，人们对生殖神经重要性的认识日益增强，并强调需要进一步研究以开发针对相关疾病的更好疗法。该团队计划扩大研究范围，纳入更多样化的捐赠者。
</p><p>Recent research is shedding light on the clitoris, a historically understudied organ due to societal taboos and anatomical challenges. Utilizing advanced X-ray technology at a synchrotron, researchers have created detailed 3D images of clitoral nerves at a micron level – a first for this area of study.

The research, published on bioRxiv, reveals the dorsal nerve of the clitoris (DNC) extends robustly into the clitoral glans, branching extensively and connecting to surrounding tissues like the mons pubis. This contradicts previous assumptions of the nerve tapering off. 

This detailed anatomical mapping has significant implications. It could improve reconstructive surgeries for those who have undergone female genital mutilation and help surgeons avoid nerve damage during other pelvic operations. Experts highlight a growing recognition of the importance of genital nerves, spurred by advancements in gender-affirming care, and emphasize the need for further research to develop better treatments for related medical conditions. The team plans to expand their study to include a more diverse range of donors.



</p>]]></description>
            </item>
            
            <item>
                <title>媒体抓取器 Gallery-dl 将因收到 DMCA 通知而迁移至 Codeberg。
 Media scraper Gallery-dl is moving to Codeberg after receiving a DMCA notice</title>
                <link>https://github.com/mikf/gallery-dl/discussions/9304</link>
                <guid>https://github.com/mikf/gallery-dl/discussions/9304</guid>
                <pubDate>Mon, 06 Apr 2026 04:00:28 +0000</pubDate>
                <description><![CDATA[]]></description>
            </item>
            
            <item>
                <title>LLM无法为其答案提供依据——这个CLI迫使它们这样做。
 LLMs can't justify their answers–this CLI forces them to</title>
                <link>https://wheat.grainulation.com/</link>
                <guid>https://wheat.grainulation.com/</guid>
                <pubDate>Mon, 06 Apr 2026 03:35:52 +0000</pubDate>
                <description><![CDATA[<p>运行 /research "GraphQL性能特征" 来收集证据：wheat 会读取你的代码库，搜索网络，并记录发现的内容。每个发现都有一个类型（事实、风险、预估）和一个证据等级，从“声明”（有人说）到“测试”（原型验证）。

```
r001 [事实|文档] GraphQL消除了移动客户端的过度获取，将负载大小减少40-60%。
r002 [事实|网络] GraphQL的N+1查询问题需要DataLoader或等效的批量处理层。
r003 [风险|文档] 现有的REST缓存（CDN、HTTP）无法直接应用于GraphQL，需要自定义解决方案。
```
</p><p>Run /research "GraphQL performance characteristics" to gather evidence: wheat reads your codebase, searches the web, and records what it finds. Each finding gets a type (factual, risk, estimate) and an evidence grade, from "stated" (someone said it) to "tested" (prototype-validated).

r001 [factual|documented] GraphQL eliminates over-fetching for mobile clients, reducing payload size by 40-60%.
r002 [factual|web] The GraphQL N+1 query problem requires DataLoader or an equivalent batching layer.
r003 [risk|documented] Existing REST caching (CDN, HTTP) does not transfer to GraphQL without custom solutions.</p>]]></description>
            </item>
            
            <item>
                <title>盖上版本戳！所有程序都必须报告其版本 – Michael Stapelberg
 Stamp It! All Programs Must Report Their Version – Michael Stapelberg</title>
                <link>https://michael.stapelberg.ch/posts/2026-04-05-stamp-it-all-programs-must-report-their-version/</link>
                <guid>https://michael.stapelberg.ch/posts/2026-04-05-stamp-it-all-programs-must-report-their-version/</guid>
                <pubDate>Mon, 06 Apr 2026 03:34:38 +0000</pubDate>
                <description><![CDATA[<p>## 软件版本控制：为什么重要以及如何正确实施

最近的一次生产事故暴露了软件版本控制中的一个关键缺陷：无法*精确*知道正在运行的代码是什么。尽管快速识别出潜在的修复方案，但由于缺乏对构建版本和发布的可见性，仍然浪费了数小时。这促使我们重新审视最佳实践，并回忆起十多年前在 i3 窗口管理器中实施的一种解决方案。

核心问题在于令人惊讶的低版本控制标准——即使是家用电器也具有详细的版本信息！有效的软件版本控制需要三个步骤：**标记它！**（包含源代码修订版），**连接它！**（确保此信息在构建和打包过程中不会丢失），以及**报告它！**（在日志、用户界面甚至 HTTP 标头中显示版本信息）。

最有价值的信息是 VCS（版本控制系统）修订版。虽然 Go 现在可以自动使用此数据标记构建，但将其集成到 Nix 等打包系统中可能很复杂。像 `stapelberg/nix/go-vcs-stamping` 这样的叠加层可以弥合这一差距。

最终，优先考虑“版本可观测性”——在整个系统中一致报告 VCS 修订版——是一个高影响、低成本的项目，可以在事件响应期间节省大量时间和压力。不要满足于基本的版本号；要知道*精确*正在运行的代码是什么。
</p><p>## Software Versioning: Why It Matters & How to Do It Right

A recent production incident highlighted a critical gap in software versioning: knowing *exactly* what code is running. Despite quickly identifying a potential fix, hours were wasted due to a lack of visibility into build versions and rollouts. This prompted a revisit to best practices, recalling a solution implemented over a decade ago in the i3 window manager.

The core issue is surprisingly low versioning standards – even household appliances have detailed version information! Effective software versioning requires three steps: **Stamp it!** (include the source code revision), **Plumb it!** (ensure this information isn’t lost during builds and packaging), and **Report it!** (make the version visible in logs, UIs, and even HTTP headers).

The most valuable piece of information is the VCS (Version Control System) revision. While Go now automatically stamps builds with this data, integrating it into packaging systems like Nix can be complex. Solutions like the `stapelberg/nix/go-vcs-stamping` overlay can bridge this gap. 

Ultimately, prioritizing “version observability” – consistently reporting the VCS revision throughout your system – is a high-impact, low-effort project that can save significant time and stress during incident response. Don't settle for basic version numbers; know *exactly* what code is running.
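As a sketch of the stamp/plumb/report pipeline in a Node.js/TypeScript service (an illustrative pattern of my own, not the article's code; the article's examples concern Go and Nix, and `BUILD_GIT_REV` is an invented variable name):

```typescript
// Illustrative "stamp it, plumb it, report it" pattern for a Node.js service.
// Stamp: the build or deploy step injects the VCS revision, e.g.
//   BUILD_GIT_REV=$(git rev-parse HEAD) node server.js
// Plumb: the value travels via the environment, so packaging cannot lose it.
// Report: it appears in the startup log and on every HTTP response.

const revision: string = process.env.BUILD_GIT_REV ?? "unknown";

function versionBanner(name: string): string {
  return `${name} starting, vcs-revision=${revision}`;
}

function versionHeaders(): Record<string, string> {
  // Surfacing the revision on responses makes "what exactly is running?"
  // answerable from the outside during an incident.
  return { "X-VCS-Revision": revision };
}

console.log(versionBanner("my-service"));
```

Falling back to `"unknown"` rather than crashing keeps local development friction-free while still making unstamped builds visible in logs.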



</p>]]></description>
            </item>
            
            <item>
                <title>Sheets：终端中的电子表格
 Sheets: Spreadsheets in Your Terminal</title>
                <link>https://github.com/maaslalani/sheets</link>
                <guid>https://github.com/maaslalani/sheets</guid>
                <pubDate>Mon, 06 Apr 2026 03:03:42 +0000</pubDate>
                <description><![CDATA[<p>## Sheets：终端电子表格

Sheets 将电子表格功能带到你的终端。你可以启动它来直接读取通过 stdin 管道传入的 CSV 数据，或者打开现有的 CSV 文件。导航侧重于键绑定，允许使用 `h, j, k, l` 在单元格之间移动，并使用 `gg`（顶部）或 `B9`（单元格 B9）等命令快速跳转到特定位置。

Sheets 支持通过直接赋值 (`B7=10`) 和插入模式 (`i`) 修改单元格。强大的功能包括搜索 (`/`)、标记 (`ma`)、跳转列表 (`ctrl+o`) 和视觉选择 (`v`)，用于复制、剪切和粘贴。公式可以在视觉模式下使用 `=` 插入。

保存、打开其他文件和退出都通过激活 `:` 的命令提示符来完成。安装很简单，可以使用 Go (`go install...`) 或下载预构建的二进制文件。该项目采用 MIT 许可，并欢迎用户反馈。
</p><p>## Sheets: A Terminal Spreadsheet

Sheets brings spreadsheet functionality to your terminal. You can launch it to directly read CSV data piped in via stdin, or open existing CSV files. Navigation is key-binding focused, allowing movement between cells with `h, j, k, l` and quick jumps to specific locations using commands like `gg` (top) or `B9` (cell B9).

Sheets supports cell modification via direct assignment (`B7=10`) and editing in insert mode (`i`). Powerful features include search (`/`), marks (`ma`), jump lists (`ctrl+o`), and visual selection (`v`) for copying, cutting, and pasting. Formulas can be inserted in visual mode using `=`.

Saving, opening other files, and quitting are done through a command prompt activated with `:`. Installation is simple with Go (`go install...`) or by downloading pre-built binaries. The project is MIT licensed and welcomes user feedback.



</p>]]></description>
            </item>
            
            <item>
                <title>展示 HN：Gemma Gem – 嵌入浏览器的 AI 模型 – 无需 API 密钥，无需云服务
 Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud</title>
                <link>https://github.com/kessler/gemma-gem</link>
                <guid>https://github.com/kessler/gemma-gem</guid>
                <pubDate>Mon, 06 Apr 2026 02:33:50 +0000</pubDate>
                <description><![CDATA[<p>## Gemma Gem：您的本地AI助手

Gemma Gem 是一款 Chrome 扩展程序，它将强大的 AI 助手直接带入您的浏览器，由 Google 的 Gemma 4 模型提供支持。它完全在您的设备上运行——无需互联网连接或 API 密钥——利用 WebGPU 进行快速、私密的处理。

安装后（需要下载约 500MB），Gemma Gem 可以阅读网页、与元素交互（点击按钮、填写表单），甚至执行 JavaScript。您可以通过浏览器中的图标访问它，并通过聊天界面进行交互。

该扩展程序通过内容脚本、服务工作线程和托管模型的离屏文档系统工作。它提供截图捕获、文本提取和页面滚动等工具，所有这些都可以通过自然语言控制。

用户可以自定义 AI 的“思考”过程，并管理上下文清除和特定站点禁用等设置。提供开发版本和生产版本，详细日志可通过 Chrome 的扩展程序检查工具访问。
</p><p>## Gemma Gem: Your On-Device AI Assistant

Gemma Gem is a Chrome extension bringing a powerful AI assistant directly into your browser, powered by Google’s Gemma 4 model. It operates entirely on your machine – no internet connection or API keys needed – utilizing WebGPU for fast, private processing.

Once installed (requiring a ~500MB download), Gemma Gem can read web pages, interact with elements (clicking buttons, filling forms), and even execute JavaScript. You access it via an icon in your browser and interact through a chat interface. 

The extension works through a system of content scripts, a service worker, and an offscreen document hosting the model. It offers tools like screenshot capture, text extraction, and page scrolling, all controllable through natural language. 

Users can customize the AI’s “thinking” process and manage settings like context clearing and site-specific disabling. Development and production builds are available, with detailed logging accessible through Chrome’s extension inspection tools.



</p>]]></description>
            </item>
            
            <item>
                <title>微软使用条款规定：Copilot“仅供娱乐”
 Copilot is 'for entertainment purposes only', per Microsoft's terms of use</title>
                <link>https://techcrunch.com/2026/04/05/copilot-is-for-entertainment-purposes-only-according-to-microsofts-terms-of-service/</link>
                <guid>https://techcrunch.com/2026/04/05/copilot-is-for-entertainment-purposes-only-according-to-microsofts-terms-of-service/</guid>
                <pubDate>Mon, 06 Apr 2026 02:32:46 +0000</pubDate>
                <description><![CDATA[<p>即使是开发人工智能工具的公司，如微软的Copilot、OpenAI的模型以及xAI的产品，也在告诫用户不要盲目信任其输出结果。 他们的服务条款明确指出，这些人工智能容易出错，不应被用于重要决策。

微软目前正在向企业推广Copilot，最初包含免责声明，将该工具标明“仅供娱乐”，并建议用户“自行承担风险”。 他们承认这种措辞已过时，并计划更新。 同样，OpenAI和xAI也警告不要将他们的人工智能的回复视为绝对真理或事实信息。

这些免责声明强调了一个关键点：虽然人工智能正在迅速发展，但它仍然存在缺陷，需要对生成的内容进行批判性评估——这种观点与怀疑论者*和*人工智能开发者本人不谋而合。</p><p>Even the companies developing AI tools like Microsoft’s Copilot, OpenAI’s models, and xAI’s offerings are cautioning users against blindly trusting their outputs. Their terms of service explicitly state these AI are prone to errors and shouldn’t be relied upon for important decisions. 

Microsoft, currently promoting Copilot to businesses, initially included a disclaimer labeling the tool “for entertainment purposes only” and advising users to “use at your own risk.” They acknowledge this language is outdated and plan to update it. Similarly, OpenAI and xAI warn against treating their AI’s responses as definitive truth or factual information. 

These disclaimers highlight a crucial point: while AI is rapidly advancing, it’s still fallible and requires critical evaluation of its generated content – a sentiment echoed by both skeptics *and* the AI developers themselves.</p>]]></description>
            </item>
            
            <item>
                <title>展示HN：Modo – 我构建了一个开源的Kiro、Cursor和Windsurf的替代品。
 Show HN: Modo – I built an open-source alternative to Kiro, Cursor, and Windsurf</title>
                <link>https://github.com/mohshomis/modo</link>
                <guid>https://github.com/mohshomis/modo</guid>
                <pubDate>Mon, 06 Apr 2026 02:03:51 +0000</pubDate>
                <description><![CDATA[<p>## Modo：一个开源、规划优先的AI IDE

Modo是一个基于VS Code的Void编辑器构建的桌面IDE，旨在通过优先考虑*规划*而非直接执行来增强AI辅助编码。与典型的“提示-生成代码”工具不同，Modo通过明确的工作流程来构建开发过程：**提示 → 需求 → 设计 → 任务 → 代码。**

用户在`.modo/specs`中定义功能或错误修复，为需求、设计和实现任务创建markdown文件。然后，AI代理填充这些内容，允许审查和迭代完善。任务是持久的，并且可以从IDE中一键执行。

主要功能包括通过markdown规则进行项目指导，通过JSON钩子实现自动化操作，以及为复杂任务并行生成代理。它支持多种LLM，并与OpenAPI规范等工具集成。一个切换开关允许在自主“自动驾驶”模式和监督模式之间切换。

Modo是完全开源的（MIT许可证），并且高度可定制，提供专用主题、自定义命令以及通过可安装的“能力”进行扩展。虽然目前由社区维护，但它为寻求更结构化的AI驱动编码方式的开发者提供了强大的基础。

[https://github.com/modoeditor/modo](https://github.com/modoeditor/modo)
</p><p>## Modo: An Open-Source, Planning-First AI IDE

Modo is a desktop IDE built on VS Code’s Void editor, designed to enhance AI-assisted coding by prioritizing *planning* before execution. Unlike typical “prompt-to-code” tools, Modo structures development through a defined workflow: **prompt → requirements → design → tasks → code.**

Users define features or bug fixes within `.modo/specs`, creating markdown files for requirements, design, and implementation tasks. An AI agent then populates these, allowing for review and iterative refinement. Tasks are persistent and executable directly from the IDE with a single click.

Key features include project steering via markdown rules, automated actions through JSON hooks, and parallel agent spawning for complex tasks. It supports multiple LLMs and integrates with tools like OpenAPI specs. A toggle allows switching between autonomous “Autopilot” and supervised modes. 

Modo is fully open-source (MIT license) and highly customizable, offering a dedicated theme, custom commands, and extensibility through installable “powers.” While currently community-maintained, it provides a robust foundation for developers seeking a more structured approach to AI-driven coding. 

[https://github.com/modoeditor/modo](https://github.com/modoeditor/modo)



</p>]]></description>
            </item>
            
            <item>
                <title>endian 战争和反可移植性：又是这样？
 Endian wars and anti-portability: this again?</title>
                <link>https://dalmatian.life/2026/04/03/endian-wars-and-anti-portability-this-again/</link>
                <guid>https://dalmatian.life/2026/04/03/endian-wars-and-anti-portability-this-again/</guid>
                <pubDate>Mon, 06 Apr 2026 02:02:26 +0000</pubDate>
                <description><![CDATA[<p>## 开源软件的可移植性价值

本文论述了开源社区中日益增长的对软件可移植性的敌对态度。作者反驳了常见的将移植到“旧”或不太流行的架构（如Alpha、MIPS、PowerPC或32位系统）视为无意义的观点，以及对字节序（大端与小端）的担忧。

作者认为，支持不同的架构并非沉溺于过去，而是承认活跃的社区兴趣并提高软件质量。移植可以发现隐藏的错误——一个安全漏洞甚至是通过在旧硬件上测试Linux内核发现的。大端和小端系统都有其价值，而字节序安全的代码就是*更好*的代码。

同样，32位支持仍然相关，尤其是在资源受限的环境中，甚至可以在64位系统上提高效率。最终，接受社区驱动的移植表明了对用户的尊重，并促进了更健壮、经过充分测试的代码库。作者提倡协作方法，敦促维护者拥抱贡献，即使是针对意想不到的平台，将其视为项目蓬勃发展和有价值的标志。
</p><p>## The Value of Portability in Open Source

This piece argues against the increasingly hostile attitude towards software portability within the open-source community. The author pushes back against common arguments dismissing ports to “old” or less popular architectures (like Alpha, MIPS, PowerPC, or 32-bit systems), and those concerning endianness (big vs. little endian byte order).

The author contends that supporting diverse architectures isn’t about clinging to the past, but recognizing active community interest and improving software quality. Ports can uncover hidden bugs – a security flaw was even found in the Linux kernel through testing on older hardware. Both big and little endian systems are valuable, and endian-safe code is simply *better* code. 

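To make the endianness point concrete, here is a minimal Python sketch (not from the article) showing how the same four bytes decode to different integers depending on byte order, and why explicit format strings keep code portable across hosts:

```python
import struct

def u32_from_wire(data: bytes) -> int:
    """Decode a 32-bit unsigned int from network (big-endian) byte order.

    An explicit '>' byte-order prefix, rather than native-order casts,
    keeps the result identical on big- and little-endian machines.
    """
    return struct.unpack(">I", data)[0]

# The same four bytes mean different numbers depending on byte order:
raw = b"\x00\x00\x01\x02"
big = struct.unpack(">I", raw)[0]     # 258 (0x00000102)
little = struct.unpack("<I", raw)[0]  # 33619968 (0x02010000)
```

Code that hides an implicit native-order assumption works by accident on little-endian hardware and breaks the moment it is ported, which is exactly the class of hidden bug the article says ports uncover.
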
Similarly, 32-bit support remains relevant, particularly in resource-constrained environments, and can even boost efficiency on 64-bit systems. Ultimately, accepting community-driven ports demonstrates respect for users and fosters a more robust, well-tested codebase. The author champions a collaborative approach, urging maintainers to embrace contributions, even to unexpected platforms, as a sign of a thriving and valuable project.



</p>]]></description>
            </item>
            
            <item>
                <title>Apex 协议 – 一种基于 MCP 的开放式 AI 智能体交易标准
 Apex Protocol – An open MCP-based standard for AI agent trading</title>
                <link>https://apexstandard.org/</link>
                <guid>https://apexstandard.org/</guid>
                <pubDate>Mon, 06 Apr 2026 02:01:36 +0000</pubDate>
                <description><![CDATA[<p>APEX v0.1.0-alpha 定义了人工智能代理与经纪商、交易所、交易商和其他执行场所的通信方式。 一种协议。实时状态。自主安全。多资产设计。连接FIX协议的电子交易。APEX连接代理交易。
</p><p>v0.1.0-alpha APEX defines how AI agents communicate with brokers, exchanges, dealers, and other execution venues. One protocol. Realtime state. Autonomous safety. Multi-asset by design. FIX Protocol connected electronic trading. APEX connects agentic trading.</p>]]></description>
            </item>
            
            <item>
                <title>展示HN：Mdarena – 对你的Claude.md进行基准测试，与你自己的PRs对比。
 Show HN: Mdarena – Benchmark your Claude.md against your own PRs</title>
                <link>https://github.com/HudsonGri/mdarena</link>
                <guid>https://github.com/HudsonGri/mdarena</guid>
                <pubDate>Mon, 06 Apr 2026 01:35:03 +0000</pubDate>
                <description><![CDATA[<p>## mdarena：评估您的 CLAUDE.md 文件效果

**mdarena** 是一款工具，用于评估您的 `CLAUDE.md` 文件（您提供给 Claude 等代理的指令）相对于您的代码库实际 PR 的有效性。研究表明，这些文件通常会*降低*代理的成功率并增加成本。

**工作原理：**

1. **`mdarena mine`**: 提取合并的 PR 以创建任务集，自动从您的 CI/CD 配置中检测测试命令。
2. **`mdarena run`**: 通过将不同的 `CLAUDE.md` 配置（或无上下文）应用于 PR 之前的提交并评估生成的代理补丁来测试它们。它可以运行您现有的测试（如 SWE-bench），或退回到差异重叠评分。
3. **`mdarena report`**: 将代理补丁与原始 PR 差异进行比较，衡量测试通过/失败率、代码重叠、成本和统计显著性。

**生产 monorepo 测试的关键发现：** 与没有它们的基线相比，提供针对性上下文的按目录 `CLAUDE.md` 文件显著提高了测试解决率（约 27%），并且优于合并的单文件方法。

**mdarena 优先考虑安全性**，通过隔离检出防止访问未来的提交。它支持 SWE-bench 任务，并需要 Python 3.11+、`gh` 和 `claude` CLI。
</p><p>## mdarena: Evaluate the Impact of Your CLAUDE.md Files

**mdarena** is a tool to benchmark the effectiveness of your `CLAUDE.md` files – the instructions you provide to agents like Claude – against your codebase’s actual PRs. Research suggests these files often *decrease* agent success and increase costs.

**How it works:**

1. **`mdarena mine`**: Extracts merged PRs to create a task set, automatically detecting test commands from your CI/CD configuration.
2. **`mdarena run`**:  Tests different `CLAUDE.md` configurations (or no context) by applying them to pre-PR commits and evaluating the resulting agent-generated patches. It can run your existing tests (like SWE-bench) or fall back to diff overlap scoring.
3. **`mdarena report`**: Compares agent patches to the original PR diffs, measuring test pass/fail rates, code overlap, cost, and statistical significance.

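The "diff overlap scoring" fallback in step 2 can be approximated as set overlap between the changed lines of two unified diffs. This is an illustrative sketch of the idea, not mdarena's actual metric:

```python
def diff_overlap(agent_patch: str, pr_diff: str) -> float:
    """Jaccard overlap between the added/removed lines of two unified diffs."""
    def changed(diff: str) -> set[str]:
        return {
            line for line in diff.splitlines()
            if (line.startswith("+") or line.startswith("-"))
            and not line.startswith(("+++", "---"))  # skip file headers
        }
    a, b = changed(agent_patch), changed(pr_diff)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

A score of 1.0 means the agent's patch touched exactly the same lines as the merged PR; lower scores indicate divergence even when no test suite is available to judge correctness.
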
**Key Findings from a production monorepo test:** Per-directory `CLAUDE.md` files offering targeted context significantly improved test resolution (~27%) compared to a baseline without them, and outperformed a consolidated, single-file approach. 

**mdarena prioritizes security** with isolated checkouts preventing access to future commits. It supports SWE-bench tasks and requires Python 3.11+, `gh`, and `claude` CLIs.



</p>]]></description>
            </item>
            
            <item>
                <title>召回 – 本地多模态语义搜索您的文件
 Recall – local multimodal semantic search for your files</title>
                <link>https://github.com/aayu22809/Recall</link>
                <guid>https://github.com/aayu22809/Recall</guid>
                <pubDate>Mon, 06 Apr 2026 01:33:56 +0000</pubDate>
                <description><![CDATA[<p>## 本地多模态记忆：用搜索查找任何内容

这个工具允许您使用自然语言创建本地、可搜索的文件数据库——包括图像、音频、视频、PDF 和文本。只需嵌入您的文件，然后使用描述性查询*查找*它们，即使没有任何现有的元数据或标签。例如，搜索“团队晚餐”将显示相关的照片！

它由 Gemini Embedding 2（免费版本）和 ChromaDB 提供支持，所有内容都存储在您本地机器上以保护隐私。设置过程经过简化，并配备了一个动画向导，Raycast 扩展程序提供带有缩略图的即时视觉搜索。

**主要特点：**

* **跨模态搜索：** 无缝处理所有文件类型。
* **本地和私密：** 数据保留在您的机器上。
* **易于设置：** 动画向导引导您完成整个过程。
* **Raycast 集成：** 从您的启动器进行快速视觉搜索。

**开始使用：** [https://github.com/hughminhphan/vector-embedded-finder](https://github.com/hughminhphan/vector-embedded-finder)
</p><p>## Local Multimodal Memory: Find Anything with a Search

This tool lets you create a local, searchable database of your files – images, audio, video, PDFs, and text – using natural language. Simply embed your files, and then *find* them using descriptive queries, even without any existing metadata or tagging. For example, searching "team dinner" will surface relevant photos!

It’s powered by Gemini Embedding 2 (free tier) and ChromaDB, storing everything locally on your machine for privacy. Setup is streamlined with an animated wizard, and a Raycast extension provides instant visual search with thumbnails. 

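The core loop is: embed every file once, then rank files by similarity to an embedded query. The real tool delegates this to Gemini Embedding and ChromaDB; the dependency-free sketch below (with made-up two-dimensional vectors) just shows the ranking step:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query_vec: list[float], index: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the k file paths whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda path: cosine(query_vec, index[path]), reverse=True)
    return ranked[:k]
```

Because ranking works on vectors rather than filenames or tags, a query like "team dinner" can surface a photo that contains no matching metadata at all.
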
**Key Features:**

* **Cross-modal search:** Works seamlessly across all file types.
* **Local & Private:** Data stays on your machine.
* **Easy Setup:** Animated wizard guides you through the process.
* **Raycast Integration:** Quick visual search from your launcher.

**Get Started:** [https://github.com/hughminhphan/vector-embedded-finder](https://github.com/hughminhphan/vector-embedded-finder)



</p>]]></description>
            </item>
            
            <item>
                <title>展示HN：我构建了一个小型的LLM来揭示语言模型的工作原理。
 Show HN: I built a tiny LLM to demystify how language models work</title>
                <link>https://github.com/arman-bd/guppylm</link>
                <guid>https://github.com/arman-bd/guppylm</guid>
                <pubDate>Mon, 06 Apr 2026 01:33:08 +0000</pubDate>
                <description><![CDATA[<p>## GuppyLM：一个小型、可训练的语言模型

GuppyLM是一个拥有900万参数的语言模型，旨在证明构建自己的LLM并不需要大量的资源。它使用单个Colab笔记本和最少的训练时间（约5分钟）创建，展示了语言模型的内部运作——从数据到输出，使其不再像一个“黑盒子”。

该模型体现了一条名为Guppy的小鱼，用简短的、小写的句子交流，专注于它的水生世界：水、食物和鱼缸生活。它故意缺乏对复杂人类概念的理解。

GuppyLM在6万条合成对话上进行训练，涵盖60个主题，利用简单的香草Transformer架构。该项目优先考虑简单性，放弃了像GQA或RoPE这样的高级技术，以保持小规模的清晰度和效率。

代码和预训练模型已在HuggingFace上公开提供，允许任何人进行实验并直接与Guppy聊天。它是一个实际的例子，证明即使没有博士学位或巨大的计算能力，LLM的创建也是可行的。
</p><p>## GuppyLM: A Tiny, Trainable Language Model

GuppyLM is a 9 million parameter language model designed to demonstrate that building your own LLM doesn’t require extensive resources. Created with a single Colab notebook and minimal training time (around 5 minutes), it showcases the inner workings of language models – from data to output – making them less of a “black box.”

The model embodies a small fish named Guppy, communicating in short, lowercase sentences focused on its aquatic world: water, food, and tank life. It intentionally lacks understanding of complex human concepts. 

GuppyLM is trained on 60,000 synthetic conversations covering 60 topics, utilizing a simple vanilla transformer architecture. The project prioritizes simplicity, foregoing advanced techniques like GQA or RoPE to maintain clarity and efficiency at a small scale. 

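For a vanilla decoder like this, the 9M-parameter budget is easy to sanity-check by hand. The config below is hypothetical (GuppyLM's actual vocabulary size, width, and depth are not given here); it is chosen only to show how a model lands in that ballpark:

```python
def decoder_param_count(vocab: int, d_model: int, n_layers: int) -> int:
    """Rough parameter count for a vanilla decoder-only transformer.

    Embeddings (weight-tied with the output head): vocab * d_model.
    Per layer: 4*d^2 for attention (Q, K, V, output projections)
    plus 8*d^2 for a 4x-expansion MLP; biases and layernorms are ignored.
    """
    embed = vocab * d_model
    per_layer = 12 * d_model * d_model
    return embed + n_layers * per_layer

# A hypothetical config in GuppyLM's ballpark:
n = decoder_param_count(vocab=8192, d_model=256, n_layers=9)  # 9,175,040 (~9M)
```

At this scale a single Colab GPU can overfit 60,000 short synthetic conversations in minutes, which is the whole point of the demonstration.
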
The code and pre-trained model are publicly available on HuggingFace, allowing anyone to experiment and chat with Guppy directly. It’s a practical example proving LLM creation is accessible, even without a PhD or massive computing power.



</p>]]></description>
            </item>
            
            <item>
                <title>雇主利用你的个人数据来确定你愿意接受的最低工资。
 Employers use your personal data to figure out the lowest salary you'll accept</title>
                <link>https://www.marketwatch.com/story/employers-are-using-your-personal-data-to-figure-out-the-lowest-salary-youll-accept-c2b968fb</link>
                <guid>https://www.marketwatch.com/story/employers-are-using-your-personal-data-to-figure-out-the-lowest-salary-youll-accept-c2b968fb</guid>
                <pubDate>Mon, 06 Apr 2026 01:07:10 +0000</pubDate>
                <description><![CDATA[<p></p><p></p>]]></description>
            </item>
            
            <item>
                <title>斯帕斯和斯普兰
 Spath and Splan</title>
                <link>https://sumato.ai/posts/2026-04-04-spath-and-splan.html</link>
                <guid>https://sumato.ai/posts/2026-04-04-spath-and-splan.html</guid>
                <pubDate>Mon, 06 Apr 2026 01:06:11 +0000</pubDate>
                <description><![CDATA[<p>## 超越文件系统：利用语义寻址改进AI编程

传统上，代码组织依赖于文件系统，模仿人类可理解的层级结构。然而，这迫使AI编程代理在较低的“操作”层面工作，需要大量的工具（如IDE）来管理基于代码*存储位置*而非*功能*的代码。

为了提高AI效率，Sumato AI 建议将重点转向**叙事卫生**——为AI代理编写清晰、直接的“故事”。 这通过两个关键创新实现：**Spath** 和 **Splan**。

**Spath** 是一种语义寻址格式，允许代理在*不*引用文件或行号的情况下识别代码符号。 **Splan** 定义了一种表达对这些符号的*操作*的语法——完整的意图，例如“用这个新版本替换这个函数”——同样不依赖于文件系统。

通过操作语义含义而非文件路径，Spath & Splan 减少了浪费的计算，提高了AI的“认知质量”，并实现了事务性代码更改（全部成功或全部失败）。 两者都设计为开放、可适应的语法，具有特定于语言的“方言”。

Sumato AI 正在开源这些工具，相信转向语义抽象将释放AI辅助编码的重大进步。
</p><p>## Beyond Filesystems: Improving AI Coding with Semantic Addressing

Historically, code organization has relied on filesystems, mirroring human-understandable hierarchies. However, this forces AI coding agents to operate at a low, “operational” level, requiring extensive tooling (like IDEs) to manage code based on *where* it’s stored, not *what* it does.  

To improve AI efficiency, Sumato AI proposes shifting focus to **narrative hygiene** – crafting clear, direct “stories” for AI agents. This is achieved through two key innovations: **Spath** and **Splan**.

**Spath** is a semantic addressing format allowing agents to identify code symbols *without* referencing files or line numbers. **Splan** defines a grammar for expressing *actions* on those symbols – complete intentions like “replace this function with this new version” – also without filesystem dependencies. 

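The idea of addressing a symbol without files or line numbers can be sketched in a few lines using Python's `ast` module. The dotted-name address format here is an assumption for illustration, not the actual Spath grammar:

```python
import ast

def resolve_symbol(source: str, dotted: str):
    """Find a def/class by dotted name (e.g. 'Greeter.hello') in source code.

    The address names *what* the symbol is, not *where* it lives:
    no file path or line number appears in the address itself.
    """
    parts = dotted.split(".")
    node: ast.AST = ast.parse(source)
    for name in parts:
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)) \
                    and child.name == name:
                node = child
                break
        else:
            return None  # no matching symbol at this level
    return node
```

An agent holding such an address stays correct even if the file is reformatted or the function moves, which is what makes transactional "replace this function" operations feasible.
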
By operating on semantic meaning, rather than file paths, Spath & Splan reduce wasted processing, improve AI “cognitive quality,” and enable transactional code changes (all succeed or none do).  Both are designed as open, adaptable grammars with language-specific “dialects.” 

Sumato AI is open-sourcing these tools, believing a shift towards semantic abstraction will unlock significant advancements in AI-assisted coding.



</p>]]></description>
            </item>
            
            <item>
                <title>OpenAI的失势，以及投资者涌向Anthropic。
 OpenAI's fall from grace as investors race to Anthropic</title>
                <link>https://www.latimes.com/business/story/2026-04-01/openais-shocking-fall-from-grace-as-investors-race-to-anthropic</link>
                <guid>https://www.latimes.com/business/story/2026-04-01/openais-shocking-fall-from-grace-as-investors-race-to-anthropic</guid>
                <pubDate>Mon, 06 Apr 2026 01:04:36 +0000</pubDate>
                <description><![CDATA[<p>投资者兴趣正在迅速从OpenAI转向其竞争对手Anthropic，导致OpenAI的股票在二级市场变得难以出售。投资者正寻求抛售约6亿美元的OpenAI股票，但需求疲软，与Anthropic准备投资的20亿美元形成鲜明对比。

这种转变源于投资者认为Anthropic的风险回报更好，其3800亿美元的估值落后于OpenAI的8520亿美元，表明具有更大的增长潜力。Anthropic还在利润丰厚的企业市场中获得进展，而人们对OpenAI的高运营成本和较慢的企业采用率越来越担忧。

尽管OpenAI完成了一轮规模庞大的1220亿美元融资，但二级市场活动却讲述了不同的故事。银行甚至为OpenAI股票免除费用，而对Anthropic股票则保持收费，这表明对后者的信心。尽管两家公司都面临挑战——Anthropic面临法律纠纷和安全漏洞——但投资者目前的偏好是Anthropic的发展轨迹。
</p><p>Investor interest is rapidly shifting from OpenAI to its competitor, Anthropic, causing OpenAI shares to become difficult to sell on the secondary market. While investors are seeking to offload around $600 million in OpenAI stock, demand is weak, contrasting sharply with the $2 billion ready to invest in Anthropic.

This shift is driven by perceived better risk-reward with Anthropic, whose $380 billion valuation lags OpenAI’s $852 billion, suggesting greater potential for growth. Anthropic is also gaining traction in the lucrative enterprise market, while concerns grow over OpenAI’s high operating costs and slower enterprise adoption. 

Despite OpenAI completing a massive $122 billion fundraising round, secondary market activity tells a different story. Banks are even waiving fees for OpenAI shares, while maintaining them for Anthropic, signaling confidence in the latter. Though both companies face challenges – Anthropic with legal disputes and security breaches – investor appetite currently favors Anthropic’s trajectory.



</p>]]></description>
            </item>
            
            <item>
                <title>彼得·蒂尔对太阳能供电的牛项圈的巨额投资
 Peter Thiel's big bet on solar-powered cow collars</title>
                <link>https://techcrunch.com/2026/04/04/unpacking-peter-thiels-big-bet-on-solar-powered-cow-collars/</link>
                <guid>https://techcrunch.com/2026/04/04/unpacking-peter-thiels-big-bet-on-solar-powered-cow-collars/</guid>
                <pubDate>Mon, 06 Apr 2026 01:02:45 +0000</pubDate>
                <description><![CDATA[<p>## Halter：针对十亿头牛的虚拟围栏

Founders Fund 最近领投了 Halter 2.2 亿美元的投资，Halter 是一家总部位于新西兰的初创公司，正在彻底改变牛群管理。 Halter 系统利用太阳能智能项圈、塔网络和智能手机应用程序来创建“虚拟围栏”，使农民能够远程管理牛群并优化放牧模式。

Halter 由 Craig Piggott 创立，旨在解决在广阔、偏远的土地上管理牛群的难题，而无需使用传统方法（如牧犬或车辆）。 这些项圈使用音频和振动提示来训练牛群，同时收集行为数据以监测动物健康和生育能力——从而创建了世界上最大的牛群行为数据集。

农民可以通过优化放牧来提高土地生产力 20%，从而带来可观的经济回报。 虽然 Merck 的 Vence 和基于无人机的系统等竞争对手存在，但 Halter 强调可靠性和经过验证的结果，这些结果是在新西兰九年时间里磨练出来的。 Halter 目前在新西兰、澳大利亚和美国 22 个州运营，旨在在全球范围内扩张，目标市场为十亿头牛，目前的渗透率仅为一百万头。 该公司专注于可证明的投资回报率，是其在传统上采用农业科技速度较慢的行业中取得成功的关键。
</p><p>## Halter: Virtual Fencing for a Billion Cattle

Founders Fund recently led a $220 million investment in Halter, a New Zealand startup revolutionizing cattle management. Halter’s system utilizes solar-powered smart collars, a network of towers, and a smartphone app to create “virtual fences,” allowing farmers to remotely manage herds and optimize grazing patterns. 

Founded by Craig Piggott, Halter addresses the challenge of managing cattle across vast, remote landscapes without traditional methods like dogs or vehicles. The collars use audio and vibration cues to train cattle, while simultaneously collecting behavioral data to monitor animal health and fertility – creating the world’s largest cattle behavior dataset. 

Farmers see a potential 20% increase in land productivity by optimizing grazing, translating to significant financial returns. While competitors like Merck’s Vence and drone-based systems exist, Halter emphasizes reliability and proven results, honed over nine years in New Zealand. Currently operating in New Zealand, Australia, and 22 US states, Halter aims to expand globally, targeting a market of one billion cattle with its current penetration at just one million. The company’s focus on demonstrable ROI is key to its success in a traditionally slow-to-adopt agricultural tech sector.



</p>]]></description>
            </item>
            
            <item>
                <title>展示HN：YouTube搜索几乎没用，我做了一个带有高级过滤器的搜索表单。
 Show HN: I made a YouTube search form with advanced filters</title>
                <link>https://playlists.at/youtube/search/</link>
                <guid>https://playlists.at/youtube/search/</guid>
                <pubDate>Mon, 06 Apr 2026 01:01:41 +0000</pubDate>
                <description><![CDATA[<p>使用高级搜索前缀改善您的搜索结果。
</p><p>Improve your search results with advanced search prefixes.</p>]]></description>
            </item>
            
            <item>
                <title>展示 HN：TermHub – 为 AI 代理构建的开源终端控制网关
 Show HN: TermHub – Open-source terminal control gateway built for AI Agents</title>
                <link>https://github.com/duo121/termhub</link>
                <guid>https://github.com/duo121/termhub</guid>
                <pubDate>Mon, 06 Apr 2026 00:34:57 +0000</pubDate>
                <description><![CDATA[<p>## Termhub：AI驱动的终端控制

Termhub 是一款原生AI工具，旨在简化与终端会话的交互。它使AI能够智能管理终端——打开窗口/标签页，启动会话，发送命令，并仅捕获*新的*输出。这种“发送-捕获增量循环”是高效AI工作流程的关键。

**主要特性：**

* **AI驱动自动化：** 基于AI检查和需求自动化终端任务。
* **跨平台：** 支持macOS (iTerm2, Terminal) 和 Windows (Windows Terminal, CMD)。
* **提供SDK：** 提供JavaScript SDK，用于对终端进行程序化控制。
* **核心命令：** 包括 `open`、`list`、`resolve/find`、`send`、`press`、`capture`、`focus`、`close`、`doctor` 和 `spec`。
* **会话管理：** 使用会话ID或句柄进行精确的目标定位。
* **增量捕获：** 仅捕获命令*发送后*的输出，提高效率。

**安装：** 可通过npm (`@duo121/termhub`)、Homebrew (macOS) 或从GitHub Releases直接下载。

Termhub简化了将终端交互集成到AI应用程序中，为自动化任务提供了一个强大而灵活的解决方案。
</p><p>## Termhub: AI-Powered Terminal Control

Termhub is an AI-native tool designed to streamline interaction with terminal sessions. It enables AI to intelligently manage terminals – opening windows/tabs, launching sessions, sending commands, and capturing only the *new* output. This "send-to-capture delta loop" is key for efficient AI workflows.

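The "send-to-capture delta loop" amounts to tracking a read offset into the session's output buffer. The sketch below illustrates the mechanism only; it is not Termhub's API:

```python
class DeltaCapture:
    """Track a read offset into a terminal session buffer so each capture
    returns only output produced since the previous capture."""

    def __init__(self) -> None:
        self.buffer = ""
        self.offset = 0

    def feed(self, output: str) -> None:
        """Append raw terminal output as it arrives."""
        self.buffer += output

    def capture(self) -> str:
        """Return only the output added since the last capture."""
        new = self.buffer[self.offset:]
        self.offset = len(self.buffer)
        return new
```

Returning only the delta keeps each round trip small, so an agent never has to re-read (or re-pay tokens for) scrollback it has already seen.
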
**Key Features:**

* **AI-Driven Automation:** Automates terminal tasks based on AI inspection and needs.
* **Cross-Platform:** Supports macOS (iTerm2, Terminal) and Windows (Windows Terminal, CMD).
* **SDK Available:** Offers a JavaScript SDK for programmatic control of terminals.
* **Core Commands:** Includes `open`, `list`, `resolve/find`, `send`, `press`, `capture`, `focus`, `close`, `doctor`, and `spec`.
* **Session Management:** Uses session IDs or handles for precise targeting.
* **Delta Capture:** Captures only output *after* a command is sent, improving efficiency.

**Installation:** Available via npm (`@duo121/termhub`), Homebrew (macOS), or direct download from GitHub Releases. 

Termhub simplifies integrating terminal interactions into AI applications, offering a robust and flexible solution for automated tasks.



</p>]]></description>
            </item>
            
            <item>
                <title>意大利电视台因英伟达自己的DLSS 5片段对英伟达提出版权索赔 (YouTube)
 Italian TV Copyright-Strikes Nvidia over Nvidia's Own DLSS 5 Footage</title>
                <link>https://twitter.com/NikTek/status/2040898312262324362</link>
                <guid>https://twitter.com/NikTek/status/2040898312262324362</guid>
                <pubDate>Mon, 06 Apr 2026 00:05:44 +0000</pubDate>
                <description><![CDATA[<p></p><p></p>]]></description>
            </item>
            </channel></rss>