HackEurope 2026: A short rant on AI and hackathons

Original link: https://duti.dev/blog/2026/spr/

## HackEurope Recap: Lessons Learned

HackEurope was a chaotic experience, but it yielded some key insights. **Front-end polish beats functionality:** projects that prioritized looks over practical application fared best with investors and non-technical judges. Several practical takeaways emerged: pick your hackathon track carefully, *confirm* that the sponsor actually operates in your location, and focus on problems that are easy to explain (audience attention is limited). Following trends, especially bolting on "AI", paid off for the winners, though the author warns against sacrificing originality to chase them. The author's team built a secure package registry designed to protect developers from supply-chain attacks, going beyond simple vulnerability scanning. Their MVP uses behavioral analysis (leveraging eBPF and, reluctantly, AI) to identify malicious code before installation. Future plans include broader ecosystem support and detailed tracing of code behavior. Finally, the author criticizes the AI-driven conformity at hackathons, arguing that it stifles genuinely novel ideas that fall outside typical AI training data. They are pursuing the project as a startup, aiming to launch in May.

The Hacker News discussion centers on the impact of AI, particularly large language models (LLMs), on hackathons. Users observe a growing "convergence problem": AI tends to produce mediocre, averaged output, so when it is used widely at these events the projects end up looking remarkably alike. One commenter notes that next-token prediction and reinforcement-learning biases push results toward the mean, while human-AI collaboration fares better because human input can deviate from that mean. Another commenter watched a hackathon they attend shift dramatically, from barely using LLMs to relying on them almost entirely, even changing submission requirements to prioritize concepts over working prototypes. In essence, the conversation suggests AI is changing the very *nature* of hackathons, potentially reducing originality and practical application in favor of idea generation and presentation.

Original article

HackEurope is over. In many ways, it was a complete shitshow. But now that the caffeine overdose and sleep deprivation are over, I can say that there were actually some important lessons.

TL;DR:

  1. Front-end is almost everything. There is 0 burden of proof that your project is actually functional or that it has any practical application. As long as it looks cool, investors and non-technical people will eat that up.

  2. Choose your track wisely. Make sure that the track sponsor IS ACTUALLY AT YOUR FUCKING LOCATION. Most people were under the impression that tracks were per-country, when in fact there was a single €1000 prize shared across the 3 countries, and the sponsor wasn't actually operating in some of them.

  3. Choose a problem that is easy to explain. We had 2 minutes to pitch. Explaining context is a losing game either way: non-technical people will tune out confused regardless. We were extremely lucky that 2/3 of our evaluators actually knew about open-source supply chain attacks and were excited about our solution.

  4. Follow the trends. All winners had "AI" as a significant part of their solution.

That being said, I personally wouldn't follow my own advice. I went in with the goal of building something that I would want to maintain long term. Not just AI slop (I fucking hate Lovable).

So what did we actually build?

Context

Over the past year, we've had all sorts of supply chain attacks, from the Shai-Hulud worm to Notepad++ being hacked. Developers are the most vulnerable: most people install packages with no verification whatsoever. Meanwhile, $BIGCORPs hire expensive security teams whose manual reviews take forever, with lots of time wasted and duplicate work done between companies.

(A common misunderstanding: no, we're not looking at CVEs or vulnerabilities. Plenty of companies like Snyk or Wiz already do that, and there are valid reasons to use insecure but non-malicious software, such as for internal tooling.)

The MVP

Our MVP is basically a secure package registry that you point npm at with a single `npm config set` and use in place of NPM. We take packages from NPM and generate a series of tests that would usually trigger malicious behaviors (if any). We then collect a bunch of behavioral data using eBPF (file accesses, DNS lookups, network connections, executed commands, etc.). This is a lot of data, so we increase the signal-to-noise ratio by deduplicating against a known set of safe behavior collected from another real package. From that, we can either use "AI" (of course we had to plug that in somewhere, it was the theme of the hackathon lol) or historical data to determine whether the remaining behavior is malicious, or at least anomalous. If everything is clear, the package gets uploaded to our "secure" registry.
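The deduplication step described above can be sketched roughly as follows. This is an illustrative toy, not the real pipeline: the event tuples, the baseline, and the candidate trace are all made up, and the actual system captures these events via eBPF rather than as Python literals.

```python
# Hypothetical sketch of the behavioral-dedup step: subtract a known-safe
# baseline from the observed events, leaving only the anomalies worth
# classifying. Event format and data are illustrative.

def anomalous_events(observed, baseline):
    """Return observed events not explained by the known-safe baseline."""
    return [e for e in observed if e not in baseline]

# Baseline: behavior collected while installing a known-safe package.
baseline = {
    ("file_read", "package.json"),
    ("file_write", "node_modules/.package-lock.json"),
    ("dns", "registry.npmjs.org"),
}

# Events captured (via eBPF in the real system) while installing a candidate.
observed = [
    ("file_read", "package.json"),
    ("dns", "registry.npmjs.org"),
    ("file_read", "/home/user/.ssh/id_rsa"),   # not in baseline: suspicious
    ("dns", "evil-exfil.example"),             # not in baseline: suspicious
]

suspicious = anomalous_events(observed, baseline)
for event in suspicious:
    print(event)
```

Only the two out-of-baseline events survive the filter, which is the point: the classifier (whether "AI" or historical data) sees a short list of anomalies instead of the full install trace.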

There is still a lot left to do. A few features we're also working on:

  • Reproducible builds
  • Derivatives of behavioral changes over time, to determine the "normal" amount of deviation
  • Support for PyPI, Maven, Cargo, and other ecosystems
  • Automatic tracing of behavior back to its source (line of code, commit that introduced it, reverse engineering if necessary)
  • Matching registry releases to exact source code commits
  • Using eCapture to decrypt HTTPS traffic
  • Honeypot data to catch exfiltration attempts
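The "derivative of behavioral changes" item above could look something like this: compare each release's behavior set to the previous one and flag an unusually large jump. Everything here is a made-up illustration (behavior strings, version data, and the 0.5 threshold are assumptions, not the project's actual metric).

```python
# Illustrative sketch: measure what fraction of each release's behavior
# is new relative to the previous release, so a sudden spike stands out.
# Data and threshold are invented for the example.

def deviation(prev, curr):
    """Fraction of the current behavior set that is new vs. the previous release."""
    if not curr:
        return 0.0
    return len(set(curr) - set(prev)) / len(set(curr))

releases = {
    "1.0.0": {"read:package.json", "dns:registry.npmjs.org"},
    "1.0.1": {"read:package.json", "dns:registry.npmjs.org"},
    "1.0.2": {"read:package.json", "dns:registry.npmjs.org",
              "read:~/.ssh/id_rsa", "dns:evil.example"},  # sudden jump
}

versions = list(releases)
for prev, curr in zip(versions, versions[1:]):
    d = deviation(releases[prev], releases[curr])
    flag = "  <-- anomalous jump" if d >= 0.5 else ""
    print(f"{prev} -> {curr}: {d:.2f}{flag}")
```

A benign patch release introduces little or no new behavior (deviation near 0), while a compromised release that suddenly reads SSH keys and phones home produces a large jump worth a closer look.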

I've been working on this since last September at a slow burn (no code reused for HackEurope though) and the goal is to have a running startup by May.

If you have any comments, or are just interested in general, pop me an email.


AI encourages conformity and kills creativity

A solid 90% of the projects there were just vibe-coded slop. Even the ideas were AI-generated. You can tell when multiple people implement the exact same idea with the exact same title, description, and implementation.

While people call me a luddite, I do not particularly hate AI as a tool. My problem is that it has significantly lowered the bar for certain project types, and therefore incentivizes people who would have otherwise built something cool to instead fit into a mold constrained by the capabilities of AI.

A lot of cool ideas are out of distribution relative to the training data, and those rarely show up at hackathons anymore. The AI says they're "too hard", and people simply avoid them.

There's a lot more I want to write here but I'm getting on a flight soon. Will get back when I have the time.


I'm sure that this post will cost me some future jobs or whatever but I don't really give a fuck. Those places probably ain't worth it.
