Show HN: A working reference implementation of context engineering

Original link: https://github.com/outcomeops/context-engineering

This repo provides a working example of **context engineering**, a discipline focused on supplying AI systems with accurate, organization-specific information to produce better outputs. Unlike simple prompting or retrieval-augmented generation (RAG), context engineering treats context as a core engineering artifact: versioned, retrievable, and enforceable.

The implementation demonstrates a five-component system: **corpus** (organizational knowledge), **retrieval** (identifying relevant information), **injection** (supplying context to the model), **output** (producing reviewable artifacts such as code), and **enforcement** (verifying that outputs reflect the context). The code is built with Amazon Bedrock and Anthropic Claude, using the Spring PetClinic codebase and architecture decision records (ADRs) as the running example. The key difference from RAG lies in the **output** and **enforcement** layers, which make AI-generated content reviewable and governable.

The repo includes runnable examples and prerequisites (Python 3.11+, an AWS account), and it emphasizes the organizational change required to implement context engineering successfully — changes that go beyond code to roles and decision processes. Resources and further reading are provided in each folder's README.

Hacker News: Show HN: A working reference implementation of context engineering (github.com/outcomeops) — 5 points by linsys, 1 hour ago

Submitter's comment: I've been presenting on context engineering, RAG, skills, etc. at local meetups. I even have a vbrownbag coming up on LinkedIn about this topic, so I built a basic example using Bedrock that I can use in my talks or the vbrownbag. Hope it's helpful.

Original article

A working reference implementation of context engineering — the discipline of designing, retrieving, and injecting the information an AI system needs to produce accurate, organization-specific outputs.

This repo is the code companion to What Is Context Engineering? on outcomeops.ai. The glossary defines the concepts; this repo shows them running end-to-end against a real corpus on Amazon Bedrock.

Context engineering treats context as a first-class engineering artifact — version-controlled, retrievable, and enforceable — rather than as prompts typed into a chat window.


A context engineering system has five components. Each folder implements one against the same running example (a Spring PetClinic codebase with ADRs):

| # | Component | What it does | Folder |
|---|-----------|--------------|--------|
| 1 | Corpus | The organizational material that defines how you think, build, and decide | `01-corpus/` |
| 2 | Retrieval | Identifies which portions of the corpus are relevant to a given request | `02-retrieval/` |
| 3 | Injection | Gets retrieved context into the model's working memory at decision time | `03-injection/` |
| 4 | Output | Produces reviewable artifacts (code, PRs, docs) shaped by that context | `04-output/` |
| 5 | Enforcement | Ensures the generated output actually reflects the retrieved context | `05-enforcement/` |

Plus comparisons/ — the same task run with and without context engineering, plus how CE differs from RAG, Copilot, and agent frameworks.

A system with only components 1–3 is a RAG system. The output and enforcement layers are what make CE different — they make the generated content reviewable and governable.
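The five-component loop can be sketched in a few lines. Everything below is an illustrative stand-in — the ADR contents, function names, and the naive keyword matcher are hypothetical, whereas the real repo backs retrieval with embeddings and generation with Bedrock:

```python
# Hypothetical in-memory sketch of the five components.
CORPUS = {  # 1. Corpus: organizational decisions, keyed by ADR id
    "ADR-007": "Use constructor injection; field injection is forbidden.",
    "ADR-012": "All REST endpoints return RFC 7807 problem details on error.",
}

def retrieve(request: str) -> dict[str, str]:
    # 2. Retrieval: naive keyword match standing in for vector search
    return {k: v for k, v in CORPUS.items()
            if any(w in v.lower() for w in request.lower().split())}

def inject(request: str, context: dict[str, str]) -> str:
    # 3. Injection: place retrieved ADRs into the prompt at decision time
    ctx = "\n".join(f"[{k}] {v}" for k, v in context.items())
    return f"Context:\n{ctx}\n\nTask: {request}"

def generate(prompt: str) -> str:
    # 4. Output: stand-in for a model call; echoes the ADRs it was given
    cited = [line.split("]")[0].strip("[") for line in prompt.splitlines()
             if line.startswith("[ADR-")]
    return "// generated code\n" + "".join(f"// per {c}\n" for c in cited)

def enforce(output: str, context: dict[str, str]) -> bool:
    # 5. Enforcement: verify the output cites every retrieved ADR
    return all(k in output for k in context)

ctx = retrieve("constructor injection for the service layer")
out = generate(inject("add a service class", ctx))
assert enforce(out, ctx)
```

Stopping after `inject` gives you a RAG system; the last two functions are what close the loop between retrieved context and reviewable output.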


All examples use Amazon Bedrock with Claude. Each folder has its own requirements.txt and README.md with a runnable command.

Prerequisites:

  • Python 3.11+
  • AWS account with credentials configured (aws configure or env vars)
  • AWS region that supports Claude and Titan (e.g. us-east-1, us-west-2)

This repo uses Anthropic Claude for generation and Amazon Titan for embeddings. Titan and most Bedrock foundation models are auto-enabled on first invocation — no action needed.
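For orientation, the two model families expect differently shaped request bodies when invoked through Bedrock. The sketch below shows those shapes; the Titan model ID and the helper names are assumptions for illustration, not taken from the repo:

```python
import json

CLAUDE_ID = "us.anthropic.claude-sonnet-4-5-20250929-v1:0"  # default from the quickstart
TITAN_ID = "amazon.titan-embed-text-v2:0"                   # assumed embedding model ID

def claude_body(prompt: str, max_tokens: int = 512) -> str:
    # Anthropic messages format required by Bedrock's InvokeModel
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def titan_body(text: str) -> str:
    # Titan text-embedding request shape
    return json.dumps({"inputText": text})

# With credentials configured, these bodies go to the Bedrock runtime:
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   client.invoke_model(modelId=CLAUDE_ID, body=claude_body("Hello"))
```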

Anthropic Claude requires a one-time First Time Use (FTU) form per AWS account. If your account has never used Anthropic models on Bedrock, the first script run will fail with AccessDeniedException. To fix:

  1. Open any Anthropic Claude model in the Bedrock model catalog
  2. Fill the First Time Use form (company, use case — about a minute)
  3. Submit — access is granted immediately, no review queue

If you're in an AWS Organization child account, the form must be submitted from the management account to inherit access.

Quickstart:

git clone https://github.com/outcomeops/context-engineering.git
cd context-engineering/01-corpus
pip install -r requirements.txt
python ingest_adrs.py ./sample-adrs

Set the model via environment variable if you want to override the default:

export BEDROCK_MODEL_ID="us.anthropic.claude-sonnet-4-5-20250929-v1:0"
export AWS_REGION="us-east-1"

Most AI coding assistants produce generic output. An engineer using a generic assistant still has to adapt the output to local patterns — the assistant doesn't know what your team decided last quarter, what your compliance framework requires, or why you picked one library over another.

A context-engineered system produces output that already conforms to local patterns, because the retrieval layer has fed the model the relevant ADRs, code, and standards at decision time. The enforcement layer ensures the output actually cites what it relied on.
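One minimal form such an enforcement check could take is a citation gate on the generated PR body. The function, the PR text, and the `ADR-\d+` pattern below are hypothetical, not the repo's actual enforcement logic:

```python
import re

def missing_citations(pr_body: str, retrieved_adrs: list[str]) -> list[str]:
    # Gate: the PR must cite every ADR the retrieval layer supplied
    cited = set(re.findall(r"ADR-\d+", pr_body))
    return [adr for adr in retrieved_adrs if adr not in cited]

pr = "Refactors OwnerService per ADR-007; error handling follows ADR-012."
assert missing_citations(pr, ["ADR-007", "ADR-012"]) == []
assert missing_citations(pr, ["ADR-031"]) == ["ADR-031"]
```

A real enforcement layer would go further (for example, LLM-as-judge checks that the code actually follows each cited decision), but even a simple gate like this makes the context-to-output link reviewable.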

This repo exists to show the pattern in code, end-to-end, so teams can build it themselves or evaluate commercial tools that claim to do it.


Context engineering changes organizations, not just code

The five-component model is the technical frame. Teams that actually deploy it consistently discover the harder shift is organizational. Roles, KPIs, and decision rights in a traditional software org were shaped by a world where AI could not read the corpus. Once it can, the middle layers of that structure start to look different — and the repo above is only useful in the first place because of those changes.


Foundational articles, reference guides, and practitioner writeups on context engineering as a discipline:

Each folder's README has its own curated reading list; the quick index:

  • Corpus — see 01-corpus/ — ADR formats, corpus bootstrapping, self-documenting architecture
  • Retrieval — see 02-retrieval/ — FAISS, "Lost in the Middle," retrieval economics
  • Injection — see 03-injection/ — prompt structure, token budgets, inference cost
  • Output — see 04-output/ — JSON Schema, Bedrock tool-use, the outcome engineer
  • Enforcement — see 05-enforcement/ — LLM-as-judge research, PR-as-guardrail
  • Comparisons — see comparisons/ — CE vs RAG vs agents vs enterprise search

Built by Brian Carpio at OutcomeOps. Questions, corrections, or contributions welcome via issues and PRs.

MIT — see LICENSE.
