Show HN: Apate API mocking/prototyping server and Rust unit test library

Original link: https://github.com/rustrum/apate

## Apate: API prototyping and mocking server

Apate is a stable, standalone Rust application and web server designed for API prototyping, integration testing, and end-to-end testing. Named after Apate, the Greek goddess and personification of deceit, it lets developers mock API behavior without depending on live services.

**Key features:**

* **Flexible mocking:** supports string and binary responses, customizable via Jinja templates and Rhai scripts for advanced logic.
* **Persistence:** provides in-memory persistence to mimic database behavior.
* **Test-first:** includes a Rust library for unit tests and facilitates integration/load testing.
* **Configuration:** configurable via environment variables, CLI arguments, and a REST API (GET/POST endpoints for spec management).
* **Docker-ready:** easy to deploy via Docker, with example commands provided.
* **Customizable:** supports building a custom server with Rust extensions, with Rhai scripting for complex scenarios.

Apate simplifies local development by removing the need to run dependent services, and enables robust testing against predictable API endpoints. Detailed examples and documentation are provided, including the spec format and scripting hints. Licensed under MIT, with specific terms in `LICENSE-TERMS`.

## Apate: Rust API mocking and prototyping

A new project called Apate was shared on Hacker News: a Rust library and server for API mocking and prototyping that lets developers easily mock backend services for testing and development. Discussion quickly centered on Apate's **non-standard license**, a variant of the MIT license, raising concerns about compatibility with automated license checking and potential barriers to adoption. Several commenters suggested a simpler copyright notice would be clearer. Other discussion points included comparisons with existing tools such as `httpmock`, praise for Rust's fitness for web development (with frameworks like Axum and SQLx), and a brief argument about a past incident involving harassment of a Rust critic. Notably, one commenter appreciated the README's authentic, non-LLM-generated writing style. Overall, the project looks promising, but the license choice is the key point of contention.

Apate API mocking server


An API prototyping and mocking server whose main purpose is to help with integration and end-to-end testing. The project is named after Apate, the goddess and personification of deceit.

🚀 Project is stable. Almost everything works as planned. I will wait some time for user feedback. No breaking changes are expected in the near future.

  • 💻⚙️ Standalone server app with web UI
  • 🔃 Live specs reloading via UI or API
  • 🎭 Mocking any string & binary responses
  • ⛩️ Jinja templates to customize response body
  • 🌿 Rhai scripting for advanced scenarios
  • 💾 In memory persistence to mimic DB behavior in some cases
  • 🛠️ Unit tests friendly rust library
  • 🦀 Ability to build custom mocking server with your rust extensions
Use cases:

  • 👨🏻‍💻 local development - no need to run/build other services locally or call external APIs
  • 🦀 rust unit tests - test your client logic without shortcuts
  • 💻🛠️⚙️ integration tests - if a 3rd-party API provider is flaky or down, it is better to run test suites against predictable API endpoints
  • 💻🏋🏻‍♂️ load tests - when deployed alongside your application, Apate should respond fast, so there is no need to take external API delays into account
  • 📋 API server prototyping - it can be convenient to have a working API endpoint before implementing the whole server logic

Launching a clean disposable container is easy with docker.

docker run --rm -tp 8228:8228 ghcr.io/rustrum/apate:latest

It will run the Apate server without any URI deceit, so you should add specifications via the API endpoints or web UI (see below).

To start the server with some specs, mount your TOML spec files into the docker container and provide the proper ENV variables.

docker run --rm -tp 8228:8228 -v $(pwd)/examples:/specs -e APATHE_SPECS_FILE_1=/specs/apate-specs.toml ghcr.io/rustrum/apate:latest

The example above expects you to execute docker run from the Apate git repository root.

Install & run locally via cargo

If you have cargo, just install it with cargo install apate. After that you will have the apate binary in your $PATH.

Apate server configuration

The Apate web UI is located at http://HOST:PORT/apate (http://localhost:8228/apate in most cases). This works for docker too.

Note that the specification shown in the web UI does not look pretty, because it is automatically generated from the internal representation. See the examples folder to learn how to write TOML specs in a readable way.

ENV variables and CLI args

You can use the following ENV variables:

  • RUST_LOG and RUST_LOG_STYLE - to configure logging
  • APATHE_PORT - to provide port to run server on (default 8228)
  • APATHE_SPECS_FILE... - any ENV variable whose name starts with this prefix will be parsed as a path to a spec file
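The prefix matching described above can be sketched in plain Rust. The function name `collect_spec_files` is illustrative, not Apate's actual code:

```rust
use std::env;

// Collect every ENV variable whose name starts with the documented
// APATHE_SPECS_FILE prefix and treat its value as a spec file path.
// Illustrative sketch only, not Apate's implementation.
fn collect_spec_files() -> Vec<String> {
    env::vars()
        .filter(|(name, _)| name.starts_with("APATHE_SPECS_FILE"))
        .map(|(_, path)| path)
        .collect()
}

fn main() {
    // Mirrors the docker example: -e APATHE_SPECS_FILE_1=/specs/apate-specs.toml
    env::set_var("APATHE_SPECS_FILE_1", "/specs/apate-specs.toml");
    assert!(collect_spec_files().contains(&"/specs/apate-specs.toml".to_string()));
}
```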

Apate can also be configured with CLI arguments, which have higher priority than ENV variables.

apate -p 8080 -l warn ./path/to/spec.toml ./path/to/another_spec.toml
  • -p - port to run server on
  • -l - logging level
  • positional arguments - paths to spec files
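The CLI shape above (-p, -l, positional spec paths) can be modeled with a small hand-rolled parser. This sketch assumes a default log level of "info", which the docs do not state, and it is not Apate's actual parser:

```rust
// Minimal sketch of the documented CLI shape: -p <port>, -l <level>,
// remaining positional arguments are spec file paths. A missing flag value
// would panic here; a real parser should report a proper error instead.
fn parse_args(args: &[&str]) -> (u16, String, Vec<String>) {
    let mut port: u16 = 8228; // documented default port
    let mut level = String::from("info"); // assumed default, not documented
    let mut specs = Vec::new();
    let mut i = 0;
    while i < args.len() {
        match args[i] {
            "-p" => { i += 1; port = args[i].parse().expect("port must be a number"); }
            "-l" => { i += 1; level = args[i].to_string(); }
            other => specs.push(other.to_string()),
        }
        i += 1;
    }
    (port, level, specs)
}

fn main() {
    let (port, level, specs) = parse_args(&["-p", "8080", "-l", "warn", "./path/to/spec.toml"]);
    assert_eq!(port, 8080);
    assert_eq!(level, "warn");
    assert_eq!(specs, vec!["./path/to/spec.toml".to_string()]);
}
```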

If you like curl you can configure Apate while it is running.

  • GET /apate/info - returns JSON with basic info about the current server
  • GET /apate/specs - returns the current specs as TOML
  • POST /apate/specs/replace - replaces the current specs with new ones from the request body
  • POST /apate/specs/append - adds specs from the request after the existing ones
  • POST /apate/specs/prepend - adds specs from the request before the existing ones

All POST methods require a TOML specification in the request body. Something like this:

curl -X POST http://localhost:8228/apate/specs/replace -d @./new-specs.toml

Using Apate in rust tests

Some self-explanatory test examples can be found here.

In a nutshell, create an instance of the Apate server at the beginning of your test. Then you can call your API endpoints at http://localhost:8228 (or any other port you specify).

This is how it looks in code.

/// Yes, the test does not need to be async.
#[test]
fn my_api_test() {
    let config = DeceitBuilder::with_uris(&["/user/check"])
        .require_method("POST")
        .add_header("Content-Type", "application/json")
        .add_response(
            DeceitResponseBuilder::default()
                .code(200)
                .with_output(r#"{"message":"Success"}"#)
                .build(),
        )
        .to_app_config();

    // Assign the server to some variable and it will be dropped at the end of the test.
    let _apate = ApateTestServer::start(config, 0);

    // That's all you need to do.
    // Now you can call http://localhost:8228/user/check 
    // You will get JSON response: {"message":"Success"}
    // And response will have header: "Content-Type: application/json"
}

Making your custom Apate server

It is possible to run Apate embedded in your application. You may need this to add custom Rust logic to response processing, for example response signing. See the processors example.

To understand how it works, look at the specification example file; it has verbose comments. There are other specification files with more advanced examples as well.

The Rhai scripting language is used to extend configuration capabilities. See the Rhai website, Rhai docs, and the configuration examples.

I expect that for most cases you will not need any Rhai scripting. It is meant only for complex scenarios.

A matcher is a piece of DSL or a Rhai script that returns a boolean. In order to proceed further, all matchers must return true.
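The all-matchers-must-pass rule can be sketched as follows. The `Request` struct and the closures are illustrative, not Apate types:

```rust
// Illustrative request type; Apate's internal representation is different.
struct Request {
    method: String,
    path: String,
}

// The documented rule: the request proceeds only if every matcher returns true.
fn all_match(matchers: &[Box<dyn Fn(&Request) -> bool>], req: &Request) -> bool {
    matchers.iter().all(|m| m(req))
}

fn main() {
    // Matchers mirroring the unit test example: require POST to /user/check.
    let matchers: Vec<Box<dyn Fn(&Request) -> bool>> = vec![
        Box::new(|r| r.method == "POST"),
        Box::new(|r| r.path == "/user/check"),
    ];
    let req = Request { method: "POST".into(), path: "/user/check".into() };
    assert!(all_match(&matchers, &req));

    // A single failing matcher rejects the request.
    let wrong = Request { method: "GET".into(), path: "/user/check".into() };
    assert!(!all_match(&matchers, &wrong));
}
```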

A processor runs additional logic that can modify the already prepared response body.

Processors are defined using Rhai scripts. Rust processors are available only in custom applications.

Supported output types:

  • String (default) - returns the string from the specification as-is.
  • Binary content - treats the output string as binary content in HEX or Base64 format. See examples here.
  • Jinja (minijinja) templates - output with type="jinja" is processed as a Jinja template using the minijinja template engine. Template syntax documentation can be found here. See also minijinja filters.
  • Rhai script - similar to minijinja, you can use a Rhai script to generate content. See examples here.

Scripting specification hints

There are some additional functions and context available to Jinja templates and Rhai scripts.

Available for matchers and output rendering.

Has a set of global functions:

  • random_num() || random_num(max) || random_num(from, to) - to return random number
  • random_hex() || random_hex(bytes_len) - return random hex string for some bytes length or default
  • uuid_v4() - returns random UUID v4

Has a global variable ctx with the following API:

  • ctx.method - returns the request method
  • ctx.path - returns the request path
  • ctx.response_code - get/set a custom response code (default 0 if not set)
  • ctx.load_headers() -> builds a request headers map (lowercase keys)
  • ctx.load_query_args() -> builds a map of URL query arguments
  • ctx.load_path_args() -> builds an arguments map from spec URIs like /mypath/{user_id}/{item_id}
  • ctx.load_body_string() -> loads the request body as a string
  • ctx.load_body_json() -> loads the request body as JSON
  • ctx.inc_counter("key") -> increments the counter by key and returns the previous value
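As an illustration of what ctx.load_path_args() produces for a spec URI like /mypath/{user_id}/{item_id}, here is a rough std-only sketch. It is not Apate's implementation; it does not validate static segments or path length:

```rust
use std::collections::HashMap;

// Pair each segment of the spec URI with the corresponding segment of the
// actual request path, keeping only the {placeholder} segments.
fn load_path_args(spec_uri: &str, request_path: &str) -> HashMap<String, String> {
    spec_uri
        .split('/')
        .zip(request_path.split('/'))
        .filter_map(|(spec, actual)| {
            // "{user_id}" -> Some("user_id"); static segments -> None.
            let name = spec.strip_prefix('{')?.strip_suffix('}')?;
            Some((name.to_string(), actual.to_string()))
        })
        .collect()
}

fn main() {
    let args = load_path_args("/mypath/{user_id}/{item_id}", "/mypath/42/7");
    assert_eq!(args.get("user_id"), Some(&"42".to_string()));
    assert_eq!(args.get("item_id"), Some(&"7".to_string()));
}
```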

Has a set of global functions:

  • random_num() || random_num(max) || random_num(from, to) - returns a random number
  • random_hex() || random_hex(bytes_len) - returns a random hex string for the given byte length, or the default
  • uuid_v4() - returns a random UUID v4
  • to_json_blob(value) - serializes any value to a JSON blob
  • from_json_blob(blob_input) - deserializes a value (array, object) from a JSON blob
  • storage_read(key) - reads any value from storage by key
  • storage_write(key, value) - writes any value to storage by key
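The storage_read/storage_write and inc_counter semantics described in this document (in-memory persistence, a counter that returns its previous value) can be sketched with plain HashMaps. The `Storage` type is illustrative, not Apate's internals:

```rust
use std::collections::HashMap;

// Illustrative in-memory store mirroring the documented semantics.
struct Storage {
    values: HashMap<String, String>,
    counters: HashMap<String, i64>,
}

impl Storage {
    fn new() -> Self {
        Storage { values: HashMap::new(), counters: HashMap::new() }
    }

    // storage_write(key, value): store a value under a key.
    fn write(&mut self, key: &str, value: &str) {
        self.values.insert(key.to_string(), value.to_string());
    }

    // storage_read(key): read a previously stored value, if any.
    fn read(&self, key: &str) -> Option<&String> {
        self.values.get(key)
    }

    // inc_counter(key): increment the counter and return its PREVIOUS value.
    fn inc_counter(&mut self, key: &str) -> i64 {
        let entry = self.counters.entry(key.to_string()).or_insert(0);
        let previous = *entry;
        *entry += 1;
        previous
    }
}

fn main() {
    let mut s = Storage::new();
    s.write("user", "alice");
    assert_eq!(s.read("user"), Some(&"alice".to_string()));
    // The first call returns the previous value, 0.
    assert_eq!(s.inc_counter("hits"), 0);
    assert_eq!(s.inc_counter("hits"), 1);
}
```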

Has a global variable args that contains custom user arguments from the TOML specs, if any.

Has a global variable ctx with the following API:

  • ctx.method -> returns the request method
  • ctx.path -> returns the request path
  • ctx.load_headers() -> builds a request headers map (lowercase keys)
  • ctx.load_query_args() -> builds a map of URL query arguments
  • ctx.load_path_args() -> builds an arguments map from spec URIs like /mypath/{user_id}/{item_id}
  • ctx.load_body() -> reads the request body as a Blob

Available for Rhai post processors.

Contains the same global functions as the request context, plus the args variable.

Has a global variable body that contains the response output.

Has a global variable ctx with some additional functionality:

  • ctx.inc_counter(key) - increments the counter by key and returns the previous value
  • ctx.response_code - get/set a custom response code (default 0 if not set)

This product is distributed under the MIT license, BUT only under certain conditions listed in the LICENSE-TERMS file.
