Show HN: wxpath – Declarative web crawling in XPath

Original link: https://github.com/rodricios/wxpath

## wxpath: a declarative web crawler

wxpath is a novel web crawler built on XPath that lets users define crawling behavior *declaratively*: specify *what* to follow and extract rather than *how*. It executes these XPath expressions concurrently and streams results as they are found. Through the `url(...)` operator and the `///` syntax, it supports deep, paginated crawling.

Key features include asynchronous operation (via `asyncio`/`aiohttp`), XPath 3.1 support through `elementpath`, and options for blocking/concurrent requests. The crawler respects `robots.txt` by default and can be configured with custom headers (important for sites like Wikipedia).

wxpath offers flexibility through a pluggable hook system for data pre- and post-processing, and supports caching results to SQLite or Redis. A command-line interface (CLI) is provided for quick experimentation.

**Note:** The project is still under active development (below 1.0.0), so the API may change. Users are encouraged to report issues, especially deadlocks, and to suggest features. Careful use of `max_depth` and XPath predicates is essential to prevent unbounded crawls.

## wxpath: declarative web scraping with XPath

Rodricios introduces **wxpath**, a new Python library for web crawling and data extraction built entirely around XPath expressions. wxpath lets you define *what* to crawl and extract directly in a single XPath query, without the traditional imperative code.

Key features include asynchronous crawling, built-in politeness (rate limiting), and a customizable hook system. A new `url()` operator fetches HTML for XPath processing, enabling deep traversal and pagination.

The library supports XPath 3.1 features such as maps and functions, making complex data extraction more concise. The provided examples demonstrate crawling Wikipedia to build a knowledge graph and scraping Hacker News comments.

For now, wxpath is limited to HTTP crawling (no JavaScript rendering) and lacks crawl persistence; these features are planned based on user feedback.

**Project GitHub:** [https://github.com/rodricios/wxpath](https://github.com/rodricios/wxpath), installable via `pip install wxpath`.

Original article

Python 3.10+

wxpath is a declarative web crawler where traversal is expressed directly in XPath. Instead of writing imperative crawl loops, wxpath lets you describe what to follow and what to extract in a single expression. wxpath executes that expression concurrently, breadth-first-ish, and streams results as they are discovered.

By introducing the url(...) operator and the /// syntax, wxpath's engine is able to perform deep (or paginated) web crawling and extraction.

NOTE: This project is in early development. Core concepts are stable, but the API and features may change. Please report issues - in particular, deadlocked crawls or unexpected behavior - and any features you'd like to see (no guarantee they'll be implemented).

import wxpath
from wxpath.settings import CRAWLER_SETTINGS

# Custom headers for politeness; necessary for some sites (e.g., Wikipedia)
CRAWLER_SETTINGS.headers = {'User-Agent': 'my-app/0.4.0 (contact: [email protected])'}

# Crawl, extract fields, build a knowledge graph
path_expr = """
url('https://en.wikipedia.org/wiki/Expression_language')
  ///url(//main//a/@href[starts-with(., '/wiki/') and not(contains(., ':'))])
    /map{
        'title': (//span[contains(@class, "mw-page-title-main")]/text())[1] ! string(.),
        'url': string(base-uri(.)),
        'short_description': //div[contains(@class, 'shortdescription')]/text() ! string(.),
        'forward_links': //div[@id="mw-content-text"]//a/@href ! string(.)
    }
"""

for item in wxpath.wxpath_async_blocking_iter(path_expr, max_depth=1):
    print(item)

Output:

map{'title': 'Computer language', 'url': 'https://en.wikipedia.org/wiki/Computer_language', 'short_description': 'Formal language for communicating with a computer', 'forward_links': ['/wiki/Formal_language', '/wiki/Communication', ...]}
map{'title': 'Advanced Boolean Expression Language', 'url': 'https://en.wikipedia.org/wiki/Advanced_Boolean_Expression_Language', 'short_description': 'Hardware description language and software', 'forward_links': ['/wiki/File:ABEL_HDL_example_SN74162.png', '/wiki/Hardware_description_language', ...]}
map{'title': 'Machine-readable medium and data', 'url': 'https://en.wikipedia.org/wiki/Machine_readable', 'short_description': 'Medium capable of storing data in a format readable by a machine', 'forward_links': ['/wiki/File:EAN-13-ISBN-13.svg', '/wiki/ISBN', ...]}
...

Note: Some sites (including Wikipedia) may block requests without proper headers.
See Advanced: Engine & Crawler Configuration to set a custom User-Agent.

The above expression does the following:

  1. Starts at the specified URL, https://en.wikipedia.org/wiki/Expression_language.
  2. Filters for links in the <main> section that start with /wiki/ and do not contain a colon (:).
  3. For each link found,
    • it follows the link and extracts the title, URL, and short description of the page.
    • it repeats step 2 until the maximum depth is reached.
  4. Streams the extracted data as it is discovered.

url(...) and ///url(...) Explained

  • url(...) is a custom operator that fetches the content of the user-specified or internally generated URL and returns it as an lxml.html.HtmlElement for further XPath processing.
  • ///url(...) indicates a deep crawl. It tells the runtime engine to continue following links up to the specified max_depth. Unlike repeated url() hops, it allows a single expression to describe deeper graph exploration. WARNING: Use with caution and constraints (via max_depth or XPath predicates) to avoid traversal explosion.
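For illustration, here is how a single-hop chain of url(...) steps differs from a deep crawl with ///url(...). This is a sketch based on the operators described above; the single-hop form assumes url(...) steps can be chained directly, as the note on "repeated url() hops" suggests.

# Single hop: fetch the seed page, then fetch each matching link exactly once.
single_hop = (
    "url('https://en.wikipedia.org/wiki/Expression_language')"
    "/url(//main//a/@href[starts-with(., '/wiki/')])"
    "//title/text()"
)

# Deep crawl: keep following matching links on each newly fetched page,
# bounded by the max_depth passed at run time.
deep_crawl = (
    "url('https://en.wikipedia.org/wiki/Expression_language')"
    "///url(//main//a/@href[starts-with(., '/wiki/')])"
    "//title/text()"
)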

See DESIGN.md for details of the language design; it walks through the core concepts and designs the language from the ground up.

wxpath evaluates an expression as a list of traversal and extraction steps (internally referred to as Segments).

url(...) creates crawl tasks either statically (via a fixed URL) or dynamically (via a URL derived from the XPath expression). URLs are deduplicated globally, on a best-effort basis - not per-depth.

XPath segments operate on fetched documents (fetched via the immediately preceding url(...) operations).

///url(...) indicates deep crawling - it proceeds breadth-first-ish up to max_depth.

Results are yielded as soon as they are ready.
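As a rough illustration (Segments are an internal detail, so this is just one way to read an expression, not the engine's actual data structure), the Wikipedia expression from earlier splits into a static seed task, a deep-crawl step, and an extraction step:

# Illustrative segment-by-segment reading of a wxpath expression.
path_expr = (
    "url('https://en.wikipedia.org/wiki/Expression_language')"    # static crawl task: the seed URL
    "///url(//main//a/@href[starts-with(., '/wiki/')])"           # deep crawl: dynamic crawl tasks, up to max_depth
    "/map{'title': (//span[contains(@class, 'mw-page-title-main')]/text())[1]}"  # extraction run on each fetched document
)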

wxpath is asyncio/aiohttp-first, providing an asynchronous API for crawling and extracting data.

import asyncio
from wxpath import wxpath_async

items = []

async def main():
    path_expr = "url('https://en.wikipedia.org/wiki/Expression_language')///url(//@href[starts-with(., '/wiki/')])//a/@href"
    async for item in wxpath_async(path_expr, max_depth=1):
        items.append(item)

asyncio.run(main())

Blocking, Concurrent Requests

wxpath also provides an asyncio-in-sync API, allowing you to crawl multiple pages concurrently while maintaining the simplicity of synchronous code. This is particularly useful for crawls in strictly synchronous execution environments (i.e., not inside an asyncio event loop) where performance is a concern.

from wxpath import wxpath_async_blocking_iter

path_expr = "url('https://en.wikipedia.org/wiki/Expression_language')///url(//@href[starts-with(., '/wiki/')])//a/@href"
items = list(wxpath_async_blocking_iter(path_expr, max_depth=1))

wxpath respects robots.txt by default via the WXPathEngine(..., robotstxt=True) constructor.
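To opt out (for example, on sites you operate), the crawler exposes a respect_robots flag. A minimal sketch, reusing the Crawler and WXPathEngine names from the Advanced configuration section below:

from wxpath.core.runtime import WXPathEngine
from wxpath.http.client.crawler import Crawler

# Sketch: explicitly disable robots.txt handling for this engine, then pass it
# via engine=... as shown in the Advanced configuration section below.
engine = WXPathEngine(crawler=Crawler(respect_robots=False))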

The wxpath Python API yields structured objects.

Depending on the expression, results may include:

  • lxml.* and lxml.html.* objects
  • elementpath.datatypes.* objects (for XPath 3.1 features)
  • WxStr (string values with provenance)
  • dictionaries / maps
  • lists or other XPath-native values

The CLI flattens these objects into plain JSON for display. The Python API preserves structure by default.
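If you need plain, JSON-serializable output from the Python API (similar to what the CLI prints), a normalization step along these lines may help. This is only a sketch, not part of wxpath's API; it assumes map-style results behave like Python mappings and falls back to str() for everything else (lxml nodes, elementpath datatypes, and so on):

from collections.abc import Mapping
import json
import wxpath

def to_jsonable(value):
    """Best-effort conversion of a wxpath result into JSON-friendly types (sketch)."""
    if isinstance(value, Mapping):                      # plain dicts / mapping-like XPath maps
        return {str(k): to_jsonable(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):                # sequences of extracted values
        return [to_jsonable(v) for v in value]
    if isinstance(value, (str, int, float, bool)) or value is None:
        return value                                    # WxStr is expected to behave like str
    return str(value)                                   # lxml nodes, elementpath datatypes, etc.

# path_expr as defined in the earlier Wikipedia example
for item in wxpath.wxpath_async_blocking_iter(path_expr, max_depth=1):
    print(json.dumps(to_jsonable(item)))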

wxpath uses the elementpath library to provide XPath 3.1 support, enabling advanced XPath features like maps, arrays, and more. This allows you to write more powerful XPath queries.

path_expr = """
    url('https://en.wikipedia.org/wiki/Expression_language')
    ///url(//div[@id='mw-content-text']//a/@href)
    /map{ 
        'title':(//span[contains(@class, "mw-page-title-main")]/text())[1], 
        'short_description':(//div[contains(@class, "shortdescription")]/text())[1],
        'url'://link[@rel='canonical']/@href[1]
    }
"""
# [...
# {'title': 'Computer language',
# 'short_description': 'Formal language for communicating with a computer',
# 'url': 'https://en.wikipedia.org/wiki/Computer_language'},
# {'title': 'Machine-readable medium and data',
# 'short_description': 'Medium capable of storing data in a format readable by a machine',
# 'url': 'https://en.wikipedia.org/wiki/Machine-readable_medium_and_data'},
# {'title': 'Domain knowledge',
# 'short_description': 'Specialist knowledge within a specific field',
# 'url': 'https://en.wikipedia.org/wiki/Domain_knowledge'},
# ...]

wxpath provides a progress bar (via tqdm) to track crawl progress. This is especially useful for long-running crawls.

Enable by setting engine.run(..., progress=True), or pass progress=True to any of the wxpath_async*(...) functions.

items = wxpath.wxpath_async_blocking("...", progress=True)
> 100%|██████████████████████████████████████████████████████████▎| 469/471 [00:05<00:00, 72.00it/s, depth=2, yielded=457]

wxpath provides a command-line interface (CLI) to quickly experiment and execute wxpath expressions directly from the terminal.

The following example demonstrates how to crawl Wikipedia starting from the "Expression language" page, extract links to other wiki pages, and retrieve specific fields from each linked page.

NOTE: Due to the ever-changing nature of web content, the output may vary over time.

> wxpath --depth 1 \
    --header "User-Agent: my-app/0.1 (contact: [email protected])" \
    "url('https://en.wikipedia.org/wiki/Expression_language') \
    ///url(//div[@id='mw-content-text']//a/@href[starts-with(., '/wiki/') \
        and not(matches(@href, '^(?:/wiki/)?(?:Wikipedia|File|Template|Special|Template_talk|Help):'))]) \
    /map{ \
        'title':(//span[contains(@class, 'mw-page-title-main')]/text())[1], \
        'short_description':(//div[contains(@class, 'shortdescription')]/text())[1], \
        'url':string(base-uri(.)), \
        'backlink':wx:backlink(.), \
        'depth':wx:depth(.) \
        }"

{"title": "Computer language", "short_description": "Formal language for communicating with a computer", "url": "https://en.wikipedia.org/wiki/Computer_language", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}
{"title": "Machine-readable medium and data", "short_description": "Medium capable of storing data in a format readable by a machine", "url": "https://en.wikipedia.org/wiki/Machine_readable", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}
{"title": "Domain knowledge", "short_description": "Specialist knowledge within a specific field", "url": "https://en.wikipedia.org/wiki/Domain_knowledge", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}
{"title": "Advanced Boolean Expression Language", "short_description": "Hardware description language and software", "url": "https://en.wikipedia.org/wiki/Advanced_Boolean_Expression_Language", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}
{"title": "Data Analysis Expressions", "short_description": "Formula and data query language", "url": "https://en.wikipedia.org/wiki/Data_Analysis_Expressions", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}
{"title": "Jakarta Expression Language", "short_description": "Computer programming language", "url": "https://en.wikipedia.org/wiki/Jakarta_Expression_Language", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}
{"title": "Rights Expression Language", "short_description": [], "url": "https://en.wikipedia.org/wiki/Rights_Expression_Language", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}
{"title": "Computer science", "short_description": "Study of computation", "url": "https://en.wikipedia.org/wiki/Computer_science", "backlink": "https://en.wikipedia.org/wiki/Expression_language", "depth": 1.0}

Command line options:

--depth                <depth>       Max crawl depth
--verbose              [true|false]  Provides superficial CLI information
--debug                [true|false]  Provides verbose runtime output and information
--concurrency          <concurrency> Number of concurrent fetches
--concurrency-per-host <concurrency> Number of concurrent fetches per host
--header               "Key:Value"   Add a custom header (e.g., 'Key:Value'). Can be used multiple times.
--respect-robots       [true|false] (Default: True) Respects robots.txt
--cache                [true|false] (Default: False) Persist crawl results to a local database
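
For example, several of these flags can be combined in one invocation. This is a hypothetical command using the options listed above together with a simplified version of the earlier Wikipedia expression:

> wxpath --depth 2 \
    --concurrency 8 \
    --concurrency-per-host 2 \
    --respect-robots true \
    --cache true \
    --header "User-Agent: my-app/0.1 (contact: [email protected])" \
    "url('https://en.wikipedia.org/wiki/Expression_language')///url(//main//a/@href[starts-with(., '/wiki/')])//title/text()"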

wxpath optionally persists crawl results to a local database. This is especially useful when you're crawling a large number of URLs, and you decide to pause the crawl, change extraction expressions, or otherwise need to restart the crawl.

wxpath supports two backends: sqlite and redis. SQLite is great for small-scale crawls with a single worker (i.e., engine.crawler.concurrency == 1). Redis is great for large-scale crawls with multiple workers. You will encounter a warning if min(engine.crawler.concurrency, engine.crawler.per_host) > 1 when using the sqlite backend.

To use, you must install the appropriate optional dependency:

pip install wxpath[cache-sqlite]
pip install wxpath[cache-redis]

Once the dependency is installed, you must enable the cache:

from wxpath.settings import SETTINGS

# To enable caching; sqlite is the default
SETTINGS.http.client.cache.enabled = True

# For redis backend
SETTINGS.http.client.cache.enabled = True
SETTINGS.http.client.cache.backend = "redis"
SETTINGS.http.client.cache.redis.address = "redis://localhost:6379/0"

# Run wxpath as usual
items = list(wxpath_async_blocking_iter('...', max_depth=1))

See settings.py for details of the settings.

wxpath supports a pluggable hook system that allows you to modify the crawling and extraction behavior. You can register hooks to preprocess URLs, post-process HTML, filter extracted values, and more. Hooks will be executed in the order they are registered. Hooks may impact performance.

from wxpath import hooks

@hooks.register
class OnlyEnglish:
    def post_parse(self, ctx, elem):
        lang = elem.xpath('string(/html/@lang)').lower()[:2]
        return elem if lang in ("en", "") else None

NOTE: Hooks may be synchronous or asynchronous, but all hooks in a project should follow the same style. Mixing sync and async hooks is not supported and may lead to unexpected behavior.

from wxpath import hooks

@hooks.register
class OnlyEnglish:
    async def post_parse(self, ctx, elem):
        lang = elem.xpath('string(/html/@lang)').lower()[:2]
        return elem if lang in ("en", "") else None

JSONLWriter (aliased NDJSONWriter) is a built-in hook that writes extracted data to a newline-delimited JSON file. This is useful for storing results in a structured format that can be easily processed later.

from wxpath import hooks
hooks.register(hooks.JSONLWriter)

Requires Python 3.10+.

For persistence/caching, wxpath supports the following backends:

pip install wxpath[cache-sqlite]
pip install wxpath[cache-redis]

See EXAMPLES.md for more usage examples.

See COMPARISONS.md for comparisons with other web-scraping tools.

Advanced: Engine & Crawler Configuration

You can alter the engine and crawler's behavior like so:

from wxpath import wxpath_async_blocking_iter
from wxpath.core.runtime import WXPathEngine
from wxpath.http.client.crawler import Crawler

crawler = Crawler(
    concurrency=8,
    per_host=2,
    timeout=10,
    respect_robots=False,
    headers={
        "User-Agent": "my-app/0.1.0 (contact: [email protected])", # Sites like Wikipedia will appreciate this
    },
)

# If `crawler` is not specified, a default Crawler will be created with
# the provided concurrency, per_host, and respect_robots values, or with defaults.
engine = WXPathEngine(
    # concurrency: int = 16, 
    # per_host: int = 8,
    # respect_robots: bool = True,
    # allowed_response_codes: set[int] = {200},
    # allow_redirects: bool = True,
    crawler=crawler,
)

path_expr = "url('https://en.wikipedia.org/wiki/Expression_language')//url(//main//a/@href)"

items = list(wxpath_async_blocking_iter(path_expr, max_depth=1, engine=engine))

Design goals:

  • Enable declarative crawling and scraping without boilerplate
  • Stay lightweight and composable
  • Asynchronous support for high-performance crawls

Crawl behavior:

  • URLs are deduplicated on a best-effort, per-crawl basis.
  • Crawls are intended to terminate once the frontier is exhausted or max_depth is reached.
  • Requests are performed concurrently.
  • Results are streamed as soon as they are available.

The following features are not yet supported:

  • Automatic proxy rotation
  • Browser-based rendering (JavaScript execution)
  • Strict result ordering

Caveats:

  • Be respectful when crawling websites. A scrapy-inspired throttler is enabled by default.
  • Deep crawls (///) require user discipline to avoid unbounded expansion (traversal explosion).
  • Deadlocks and hangs are possible in certain situations (e.g., all tasks waiting on blocked requests). Please report issues if you encounter such behavior.
  • Consider using timeouts, max_depth, and XPath predicates and filters to limit crawl scope, as in the sketch below.
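
For instance, a bounded variant of the earlier Wikipedia crawl can combine a small max_depth with restrictive predicates. This sketch reuses the expression forms and API shown above:

from wxpath import wxpath_async_blocking_iter

# Only follow namespace-free /wiki/ links, and stop after one level of expansion.
path_expr = (
    "url('https://en.wikipedia.org/wiki/Expression_language')"
    "///url(//main//a/@href[starts-with(., '/wiki/') and not(contains(., ':'))])"
    "//title/text()"
)

items = list(wxpath_async_blocking_iter(path_expr, max_depth=1))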

Commercial support / consulting

If you want help building or operating crawlers/data feeds with wxpath (extraction, scheduling, monitoring, breakage fixes) or other web-scraping needs, please contact me at: [email protected].

If you like wxpath and want to support its development, please consider donating.

wxpath follows semver: <MAJOR>.<MINOR>.<PATCH>.

However, pre-1.0.0 follows 0.<MAJOR>.<MINOR|PATCH>.

MIT
