New York could prohibit chatbot medical, legal, engineering advice

Original link: https://folding-sky.com/blog/ny-senate-bill-s7263-chatbot-liability

## New York Bill Aims to Hold Chatbot Operators Liable

A New York bill (S7263) heading to the Senate floor would make chatbot operators legally liable for inaccurate or harmful "substantive" advice in 14 licensed professions, including medicine, law, and engineering, as well as for unauthorized practice of law. If passed, deployers of consumer-facing chatbots would face civil lawsuits and potential damages, even with disclaimers in place.

The bill's definition of "proprietor" is broad: it covers not only large AI companies like OpenAI but also hospitals, nonprofits, and even small teams building on AI APIs. Concerns center on the undefined term "substantive," which could chill helpful chatbot responses, such as summarizing a legal notice or explaining medical terminology, as operators move to avoid legal risk.

Critics argue that the bill, drafted in April 2025, fails to reflect recent progress in AI safety and could disproportionately harm people who depend on affordable, accessible information. It also raises First Amendment questions about free speech and potential prior restraint. The 90-day window between enactment and effect would force rapid changes to chatbot functionality, and legal challenges are expected. The bill could significantly reshape access to AI-delivered information in New York.

## New York Weighs Restricting Chatbot Advice

New York is considering a bill that could bar chatbots from offering medical, legal, or engineering advice. The debate centers on potentially inaccurate or harmful information and the lack of accountability when AI gives bad guidance.

Supporters argue it would protect consumers; opponents counter that it would cut off access to information, especially for people who cannot afford professional services. Some note that existing law already addresses unauthorized practice and deception, and worry the bill could stifle innovation and useful consumer guidance.

A key point of contention is whether chatbots should be held to the same standard as licensed professionals or treated as information tools akin to search engines. Some argue the problem is not the information provided but the impression of expertise a chatbot can create. Others stress AI's potential to democratize access to knowledge, offering a "better than nothing" alternative for people without resources. The bill has also drawn criticism for potentially opening the door to frivolous litigation.

## Original article

March 4, 2026

New York regulates 38 licensed professions. Senate Bill S7263 would make chatbot operators liable for AI responses covering at least 14 of them, plus law.

A bill heading to the New York State Senate floor would create civil liability when a consumer-facing chatbot gives "substantive" advice in licensed domains like medicine, law, licensed professional engineering, and mental health counseling (plus a long tail of other professions, including podiatry).

This hits consumers first and builders next, including government and nonprofit chatbots that explain tenant rights or basic healthcare next steps. Most lawsuits would likely cluster around ordinary requests: translate medical jargon, summarize a legal notice, or suggest next questions to ask a professional.

Senate Bill S7263, introduced by Senator Kristen Gonzalez in April 2025, reached the Senate floor calendar on February 26, 2026. If it passes the Senate, crosses to the Assembly (where companion bill A6545 already exists), and gets the Governor's signature, chatbot deployers get 90 days before liability starts.

What the bill says

The full bill is two pages (PDF). Here they are:

Page 1: Definitions and scope.

Page 2: Prohibited conduct, liability, and disclosure requirements.

S7263 adds a new section (§ 390-f) to New York's General Business Law. The core prohibition:

A proprietor of a chatbot shall not permit such chatbot to provide any substantive response, information, or advice, or take any action which, if taken by a natural person, would constitute a crime under section sixty-five hundred twelve or sixty-five hundred thirteen of the education law

In plain English, the rule sounds simple: no license, no advice. But people have always been allowed to read a textbook, government guide, or public reference and apply it to their own situation. The problem is that chatbot output often blends explanation with suggested next steps, and this bill leaves it unclear when that crosses the line into "substantive" professional guidance.

Here are the professions covered:

| NY Education Law Article | Profession |
| --- | --- |
| 131 | Medicine |
| 133 | Dentistry |
| 135 | Veterinary Medicine |
| 136 | Physical Therapy |
| 137 | Pharmacy |
| 139 | Nursing |
| 141 | Podiatry |
| 143 | Optometry |
| 145 | Engineering, Land Surveying, Geology |
| 147 | Architecture |
| 153 | Psychology |
| 154 | Social Work |
| 163 | Mental Health Practitioners |

The bill also reaches "unauthorized legal practice" through Judiciary Law Article 15.

Important nuance: "engineering" here means New York Education Law Article 145 professions (professional engineering, land surveying, and geology), not software engineering.

That is a broad set of categories covering everyday questions people already ask AI: "What does this rash look like?" "Can my landlord do this?" "Is this wall load-bearing?" "What are side effects of this medication?" "How do I deal with my anxiety?"

"Proprietor" means whoever deploys the chatbot

The bill defines "proprietor" as any person, business, or entity that "owns, operates or deploys a chatbot system used to interact with users." It explicitly excludes "third-party developers that license their chatbot technology to a proprietor."

That definition is broad. It includes startups and enterprise software teams, but also hospitals, legal aid groups, nonprofits, schools, and government agencies that deploy chatbots for public guidance (and that breadth is exactly what opens the bill to stronger constitutional attack, as discussed at the end of this piece).

OpenAI, Anthropic, and Google are "proprietors" for ChatGPT, Claude, and Gemini. But when their models are licensed via API and deployed by someone else, the deployer is the proprietor. This hits everyone from the largest AI platforms to small teams shipping lightweight wrappers over OpenAI or Anthropic APIs. Whoever runs the interface likely carries the risk, and users pay the price when useful guidance gets blocked.

Disclaimers explicitly do not work

From the bill text:

A proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system.

This breaks the industry's standard playbook: put a warning label on the chatbot and move on. Under S7263, liability turns on what the bot says, not what the disclaimer says (which pushes operators to block answers in advance and fuels the prior-restraint argument discussed below).

The bill still requires disclosure (section 4): users must be told they are interacting with AI, in the same language, at prominent font size. It tells people what they are using, but it does not reduce legal exposure.

The private right of action creates a serial plaintiff goldmine

From the bill:

A person may bring a civil action to recover actual damages and, if it is found that such proprietor has willfully violated this section, the violator shall be liable for actual damages together with costs and reasonable attorneys' fees and disbursements incurred by the person bringing such action.

Fee shifting changes the economics. It makes lower-value cases worth filing because the plaintiff's lawyer can be paid by the defendant.
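A back-of-envelope calculation makes the incentive shift concrete. The numbers below are hypothetical, chosen only to illustrate how fee shifting can flip a filing from money-losing to money-making; nothing here comes from the bill or from actual case data.

```python
def expected_value(damages: float, win_prob: float,
                   attorney_fees: float, fee_shifting: bool) -> float:
    """Expected net recovery for the plaintiff's side of one filing.

    With fee shifting, a win means the defendant also covers the
    plaintiff's attorney fees; fees are only absorbed on a loss.
    Without it, the plaintiff's side pays its own fees win or lose.
    """
    if fee_shifting:
        return win_prob * damages - (1 - win_prob) * attorney_fees
    return win_prob * damages - attorney_fees

# A modest hypothetical claim: $15,000 in damages, a 60% chance of
# winning, and $10,000 in attorney fees to litigate it.
without_shift = expected_value(15_000, 0.6, 10_000, fee_shifting=False)
with_shift = expected_value(15_000, 0.6, 10_000, fee_shifting=True)
# without_shift is -$1,000 (not worth filing); with_shift is +$5,000.
```

The same case goes from a $1,000 expected loss to a $5,000 expected gain, which is why fee-shifting statutes tend to attract high-volume filers.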

New York has already seen the serial-plaintiff pattern in web accessibility litigation: high-volume filings, template complaints, and settlement pressure. As summarized in Accessible Minds' 2026 ADA web lawsuit analysis, 2025 produced more than 5,000 digital accessibility lawsuits, including 1,427 repeat-defendant cases (about 45-46 percent of federal filings). The same analysis highlights concentrated targeting in a few sectors (about 70 percent e-commerce and 21 percent food service) and repeated activity by a small set of plaintiff firms. S7263 has a similar legal setup: private lawsuits, damages, fee shifting, and an ambiguous "substantive" standard.

It also notes that smaller businesses still make up most defendants. That matters here. If S7263 passes unchanged, smaller AI startups, indie wrappers, and community organizations may face the earliest settlement pressure because they have fewer legal resources to fight broad claims.

What "substantive response" actually means

Who knows! And that's part of the problem. The bill prohibits "any substantive response, information, or advice" in covered domains, but never defines "substantive." That ambiguity is a key weakness for vagueness and overbreadth challenges.

Medical example: "What is ibuprofen?" vs "Given my conditions and meds, what dosage should I take?" Legal example: "What is an eviction notice?" vs "What should I file tomorrow and by when?" Engineering example: "What is a load-bearing wall?" vs "Is this wall safe to remove in my house?"

Those boundaries are exactly where lawsuits will cluster.
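A 90-day compliance deadline invites exactly the kind of crude filter that over-blocks the benign side of these pairs. The sketch below is hypothetical (the term list and matching logic are invented for illustration, not drawn from any real product) but it shows why keyword-level blocking cannot distinguish a definitional question from case-specific advice.

```python
# Hypothetical covered-domain terms an operator might block on.
COVERED_TERMS = {"dosage", "medication", "eviction", "file",
                 "load-bearing", "wall"}

def blocks(question: str) -> bool:
    """Naive filter: refuse any question touching a covered term."""
    words = set(question.lower().replace("?", "").replace(",", "").split())
    return bool(words & COVERED_TERMS)

blocks("What is ibuprofen?")                         # slips through
blocks("What is a load-bearing wall?")               # benign, blocked anyway
blocks("Given my meds, what dosage should I take?")  # blocked as intended
```

The filter catches the genuinely risky question, but only by also refusing a textbook definition, which is the over-blocking dynamic the vague "substantive" standard encourages.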

This bill hurts the people it claims to protect

Access gets worse for people with the least slack

The first people hit are the ones with the fewest alternatives. A tenant who cannot afford a lawyer uses a chatbot to understand eviction timelines while waiting for legal aid. A parent at 11pm uses a chatbot to decide whether symptoms are urgent before they can reach a pediatrician. Government and nonprofit chatbots that provide this first-line guidance face the same liability pressure as private companies.

A 2025 panic bill with 2026 consequences

AI hallucinations are real. But this bill was drafted in April 2025, when model failures were louder and tools were rougher. The landscape has moved fast since then: better models, stronger guardrails, better retrieval and grounding, and clearer uncertainty signals. AI is still imperfect. This law still reads like a first-reaction policy.

I have a developer friend who still avoids AI coding tools because he tried Cursor in early 2025 and got bad results. Fair reaction then. Bad policy now. The right question is: have we tried this lately?

Protectionist effect, intended or not

Even if that was not the intent, the effect is protectionist. Restrict low-cost guidance channels, and paid professional channels become the default again. That is rent-seeking pressure, and it lands hardest on people with the least money.

What this means for consumers and AI products

If S7263 passes as written:

  • Consumers with fewer resources lose first-line legal and medical guidance.
  • Government and nonprofit chatbots face the same liability pressure as for-profit AI companies.
  • General chatbots will either over-block useful answers or accept open legal risk.
  • Vertical tools in health, law, and engineering become direct targets for litigation.
  • Customer support bots in insurance, pharmacy, and healthcare will need major restrictions.

The 90-day clock

The bill takes effect 90 days after becoming law. That is a short runway to audit responses across 14 professional domains plus law, build filters, and avoid gutting product usefulness.

It may face First Amendment challenges. Waiting for litigation is not a plan.

For teams deploying AI chatbots in New York, and for people relying on low-cost medical or legal guidance, the time to pay attention is now. The full text of S7263 is short enough to read in five minutes.

Constitutional challenges

If S7263 becomes law, constitutional challenges are probable. The strongest one is the First Amendment.

Even before any court ruling, the law would likely chill product decisions. Startups, nonprofits, and public-sector teams facing vague liability rules will ship less, block more, or try to avoid New York users (and geofencing is a weak fix when users travel, use VPNs, or appear through shifting mobile IP ranges). That is how overbroad regulation causes harm: fewer useful tools reach the people who need them, even if parts of the law are later struck down.
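To see why geofencing is a weak fix, consider the minimal version a deployer might bolt on: map the client IP to a region and refuse New York. The range below is a documentation address block standing in for a hypothetical "New York" range; real carrier and VPN ranges shift constantly, which is exactly why this approach leaks.

```python
import ipaddress

# Hypothetical static "New York" ranges (203.0.113.0/24 is a
# reserved documentation block, used here as a stand-in).
NY_BLOCKS = [ipaddress.ip_network("203.0.113.0/24")]

def allowed(client_ip: str) -> bool:
    """Refuse requests whose IP falls in a listed NY range."""
    addr = ipaddress.ip_address(client_ip)
    return not any(addr in block for block in NY_BLOCKS)

allowed("203.0.113.7")   # looks like NY: refused
allowed("198.51.100.9")  # allowed, but could be a NY user on a VPN
```

The static list goes stale the moment carriers reassign ranges, and any VPN exit outside the listed blocks sails through, so the legal exposure persists regardless.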

| Likely challenge | Core argument | Merits |
| --- | --- | --- |
| First Amendment (content-based speech restriction) | The law restricts speech by subject matter (medical, legal, engineering, etc.) and by who is speaking (unlicensed chatbot deployers). | Strong. The statute targets the content of speech directly, and "substantive response" reaches plain informational speech. Courts may apply strict scrutiny. |
| First Amendment (overbreadth) | The law sweeps in a large amount of protected speech, including explanations, summaries, and educational guidance. | Strong to moderate. The more broadly and less precisely "substantive" is interpreted, the stronger the overbreadth argument gets. |
| Due Process (vagueness) | "Substantive response" is undefined, so operators cannot tell what is legal without guessing. | Strong. Vagueness plus private lawsuits plus attorney fees creates aggressive over-filtering pressure. |
| Prior restraint theory | The law effectively pushes pre-publication suppression: operators must block answers in advance to avoid liability. | Mixed. This is not a classic licensing board or injunction regime, so "prior restraint" is not the cleanest doctrinal fit. But the chilling-effect argument is real. |
| Dormant Commerce Clause | New York is regulating speech products used nationwide, including out-of-state operators with no physical NY presence. | Moderate. Depends on how courts view extraterritorial burden versus NY's consumer-protection interest. |
| Compelled speech (disclosure mandate) | Mandatory chatbot disclosure language and font-size requirements force speech. | Weaker challenge. Courts often uphold factual disclosure requirements, especially in consumer contexts. |

And yes, I used AI to help pressure-test these constitutional arguments. If this passes, does this become the last time I can get thoughtful legal analysis from a consumer AI on this exact topic?

This is one builder's policy analysis. It goes without saying (or does it?) that this is not legal advice (and under S7263, even that disclaimer starts to feel like a decorative sticker).
