Show HN: Spegel, a Terminal Browser That Uses LLMs to Rewrite Webpages

Original link: https://simedw.com/2025/06/23/introducing-spegel/

Spegel is a proof-of-concept terminal web browser that uses a large language model (LLM) to personalise your browsing experience. Built over a weekend, it fetches HTML, processes the content through an LLM (made faster by Google's Gemini 2.5 Pro Lite) according to user-defined prompts, and renders the output as Markdown in your terminal using Textual. Unlike traditional terminal browsers such as Lynx, Spegel is not meant to replace them; it offers a customised view of the web instead. Users can define "views" with specific prompts for tasks such as extracting only the recipe from a blog post or simplifying a complex article, cutting through clutter to focus on the information you need. The browser currently supports only GET requests, though the developer is exploring ways to handle forms. You can try Spegel by installing it via pip and configuring your own prompts. The source code is available on GitHub for anyone who wants to contribute.

Here's a summary of the Hacker News discussion on Spegel: Spegel is a terminal browser that uses LLMs to rewrite webpages for better viewing. Users find it interesting as an alternative way of browsing that combines search with prompts, enabling multi-tab summaries across sources (news outlets, Wikipedia); Ground.news was suggested as an existing example of this. Some users envision personal agents that summarise information based on what the user already knows, filtering out redundant content. Others suggest using Firefox's reader mode or extracting structured data (e.g., the Recipe schema) to reduce LLM processing. Concerns were raised about LLMs changing content and introducing incorrect information; one user noted a bug where the LLM hallucinated a recipe because the input had been truncated. Suggestions include handling POST requests, adding scripting, DOM manipulation, or a cache to reduce energy and token usage, and having the LLM report when a page is a SPA. Users also discussed future ad-blocking by LLMs and a P2P cache of the "raw" data. Overall, Spegel sparked discussion about personalised browsing, LLM limitations, and potential applications in accessibility and information filtering.

Original article

June 23, 2025

TL;DR Spegel is a proof-of-concept terminal web browser that feeds HTML through an LLM and renders the result as markdown directly in your terminal.

Two weekends ago, after my family had gone to sleep, I found myself unsupervised with a laptop and an itch to build something interesting. A couple of hours later, I had a minimal web browser running in my terminal (no JavaScript, GET requests only) that transformed web content based on my custom prompts.

Then, a few days later, Google released Gemini 2.5 Pro Lite with significantly faster inference speed, and suddenly my little weekend hack became a tad more practical.

Personalisation

Adapting content to suit individual needs isn’t a new idea; think of translating books or summarising lengthy articles. However, this used to be slow and expensive. LLMs have changed that dramatically, making these transformations quick and easy.

Spegel ("mirror" in Swedish) lets you explore web content through personalized views using your own prompts. A single page can have multiple views, maybe one simplifying everything down to ELI5 or another highlighting key actions. It's entirely up to you and your prompting skills.

Sometimes you don't want to read through someone's life story just to get to a recipe.

[Recipe example screenshot]

That said, this is a great recipe.

Example configuration

[[views]]
id = "recipe"
name = "Recipe"
hotkey = "7"
order = "7"
enabled = true
auto_load = false
description = "Get to the point in recipes"
icon = "🍳"
prompt = """Parse a recipe and extract only the essential parts.
Format the output like this:

# Ingredients
* 1 tbsp salt  
* 400 g beef  

# Steps
1. Preheat the oven to 200°C (Gas Mark 6).  
2. Wash and chop the carrots.  
-----
**Instructions:**

* Use **metric units** (not imperial).
* inches -> cm 
* pounds -> kg
* cups -> dl 
* Keep the output **clean and minimal**: no extra commentary, tips, or nutrition facts.
* Include the servings."""

How it works

The pipeline is straightforward.

Spegel flowchart

Spegel fetches HTML content, processes it through an LLM using prompts stored in a config file (~/.spegel.toml), and outputs markdown rendered via Textual. Prompts and views can be adjusted live during a browsing session.
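The fetch-then-rewrite flow can be sketched roughly as below. This is a simplified assumption of the shape of the pipeline, not Spegel's actual code: `call_llm` is a placeholder stub standing in for the real streaming Gemini client, and the Textual rendering step is omitted.

```python
import urllib.request

def call_llm(prompt: str, html: str) -> str:
    # Placeholder: a real implementation would send the view prompt
    # plus the page HTML to the model and stream markdown back.
    return "# Rewritten page\n\n(markdown from the LLM would go here)"

def render_view(url: str, view_prompt: str) -> str:
    """Fetch a page (GET only, like Spegel) and rewrite it via the LLM."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return call_llm(view_prompt, html)
```

The resulting markdown string is what a TUI layer such as Textual would then render in the terminal.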

This was my first experience using Textual for a TUI, and it's been delightful, possibly too delightful, as I found myself adding a few unnecessary interface elements just because it was easy.

One gotcha was ensuring only completed lines (ending in newline characters) were streamed; otherwise, the markdown renderer would parse incomplete markdown and fail to recover the formatting.

buffer: str = ""
async for chunk in llm_client.stream(full_prompt, ""):
    if not chunk:
        continue
    buffer += chunk
    # Emit only completed lines; keep any trailing partial line
    # buffered so the renderer never sees half-finished markup.
    while "\n" in buffer:
        line, buffer = buffer.split("\n", 1)
        yield line + "\n"

# Flush whatever is left once the stream ends.
if buffer:
    yield buffer
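The buffering behaviour can be exercised with a fake streaming client that splits lines mid-way, the way a real LLM API does. `FakeClient` below is a hypothetical test double, not part of Spegel:

```python
import asyncio

class FakeClient:
    """Hypothetical stand-in for the LLM client: emits chunks that
    break in the middle of lines, like a real streaming API."""
    def __init__(self, chunks):
        self.chunks = chunks

    async def stream(self, prompt, _):
        for c in self.chunks:
            yield c

async def collect_lines(client, prompt):
    # Same line-buffering logic as in the post, collected into a list.
    buffer = ""
    out = []
    async for chunk in client.stream(prompt, ""):
        if not chunk:
            continue
        buffer += chunk
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            out.append(line + "\n")
    if buffer:
        out.append(buffer)
    return out

lines = asyncio.run(collect_lines(FakeClient(["# Ti", "tle\n* ite", "m\n"]), ""))
print(lines)  # -> ['# Title\n', '* item\n']
```

Even though the chunks split "Title" and "item" across boundaries, only whole lines come out, which is exactly what keeps the markdown renderer happy.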

Other terminal browsers

There are a lot of great terminal browsers out there; Lynx and Links2 are close to my heart. There are also modern attempts like Browsh that can even render graphics using half-block Unicode characters (▄█).

Links2 Wikipedia Screenshot

Spegel isn’t meant to replace these; it’s more of an exploration or proof-of-concept. It currently doesn't support POST requests (though I have some ideas on handling <form> elements by creating on-the-fly UIs).
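One possible first step toward that form-handling idea (an assumption on my part, not Spegel's actual design) is collecting the named inputs of a `<form>` so a TUI could prompt the user for each field before issuing the request. The standard-library `html.parser` is enough for a sketch:

```python
from html.parser import HTMLParser

class FormFieldCollector(HTMLParser):
    """Collect the `name` attribute of <input> elements inside a page,
    as raw material for building an on-the-fly form UI."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            name = dict(attrs).get("name")
            if name:  # unnamed inputs (e.g. bare submit buttons) are skipped
                self.fields.append(name)

p = FormFieldCollector()
p.feed('<form action="/search"><input name="q"><input type="submit"></form>')
print(p.fields)  # -> ['q']
```

From there, the collected field names could drive text prompts in the terminal, with the answers assembled into the eventual GET or POST request.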

But most modern websites aren’t designed with terminal browsing in mind. They rely on CSS and JS, making them cumbersome in small terminal windows, full of clutter and noise. Spegel tries to clear away distractions, providing content tailored more closely to your needs.

Try it

Spegel is still in the early stages, so expect some rough edges, but it’s usable and kind of fun to play with.

Install it via pip:

pip install spegel

Then just run it with a URL:

spegel simedw.com # or your favourite website

Don't forget to configure your own ~/.spegel.toml (an example config is available in the repository).

Want to check out the source or contribute? It’s all on GitHub:

https://github.com/simedw/spegel
