Code, Claims & Clever Drafting: The Prompt Patent Formula

As generative AI systems like GPT, DALL·E, and Stable Diffusion move from experimental tools to engines of innovation, a new challenge is emerging in intellectual property (IP): What exactly should be disclosed in AI-based patent applications?

Two disclosure dilemmas dominate the conversation:

  1. Prompts – increasingly central to how foundation models behave.
  2. Source code – the technical heart of implementation.

Patent law demands disclosure sufficient for a skilled person to practice the invention (enablement), yet innovators want to avoid over-disclosing trade secrets. Striking the right balance is now one of the hardest questions in AI patent strategy.

  1. Why Prompts Matter in Patent Applications

Prompts are not just instructions—they can be part of the invention itself. In generative AI, the same foundation model can produce radically different outputs depending on how it is guided.

  • Prompts as inventive elements: A cleverly engineered prompt may unlock domain-specific accuracy, reliability, or interpretability.
  • Patent role: If an invention centers on using a model for a novel application—say, drug target identification, HR compliance summaries, or image generation—then the prompt structure can be as crucial as an algorithm.

How to handle prompts in patents:

  • Describe how the prompt is constructed (formatting, tokens, contextual cues).
  • Explain why it works (better accuracy, reduced hallucinations, improved relevance).
  • Provide representative examples in the specification, without necessarily hard-coding them into the claims.

📌 Example:
“A compliance summary generator uses a prompt of the form: ‘Summarize this policy for HR use in 3 bullet points.’ This prompt consistently yields human-readable and legally sound outputs.”
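
For illustration only, the sketch below shows one way a specification might document how such a prompt is assembled: the instruction, the contextual cues, and the tunable parameters. The function and parameter names are hypothetical rather than drawn from any actual filing.

```python
# Hypothetical sketch: assembling the compliance-summary prompt described above.
# Names and defaults are illustrative, not taken from any real filing.

def build_compliance_prompt(policy_text: str, audience: str = "HR",
                            bullet_count: int = 3) -> str:
    """Combine an instruction, an output constraint, and the source policy text."""
    instruction = (
        f"Summarize this policy for {audience} use in {bullet_count} bullet points."
    )
    # Contextual cue that keeps outputs consistent and audit-friendly.
    constraints = "Use plain language and cite the relevant policy section for each bullet."
    return f"{instruction}\n{constraints}\n\nPOLICY:\n{policy_text}"


print(build_compliance_prompt("Employees must disclose conflicts of interest..."))
```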

In this way, prompts act like algorithms in software patents: functional levers that define how a general-purpose engine produces a novel and repeatable result.

  2. Should Source Code Be Disclosed?

Here the trade-off is sharper.

Why you might include source code:

  • To meet the enablement requirement (notably 35 U.S.C. §112 in the U.S.).
  • To prove novelty when your innovation lies in training, fine-tuning, or inference loops.
  • To show the examiner that your method is more than abstract theory.

Why you might not:

  • Patent applications are published 18 months after filing, so your code could end up in competitors’ hands.
  • You lose the option to protect it as a trade secret.
  • Most jurisdictions accept pseudo-code, flowcharts, or structured descriptions as sufficient disclosure.

📌 Best practice: Describe algorithmic flow, data structures, and parameter logic, not raw code. A skilled programmer should be able to re-implement your system without you handing over the exact repository.
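
As a rough sketch of that disclosure level (assuming Python as the implementation language), the following describes algorithmic flow, data structures, and parameter logic while leaving the model backend abstract; every name here is a placeholder.

```python
# Placeholder sketch: flow and parameter logic described without raw production code.
# The generation backend is passed in as an abstract callable, not disclosed.

from dataclasses import dataclass
from typing import Callable

@dataclass
class InferenceConfig:
    temperature: float = 0.2   # low temperature favours repeatable summaries
    max_tokens: int = 256      # upper bound on generated output length

def summarize_policy(policy_text: str,
                     generate: Callable[[str, float, int], str],
                     config: InferenceConfig = InferenceConfig()) -> str:
    """Flow: build prompt -> constrained generation -> light post-processing."""
    prompt = f"Summarize this policy for HR use in 3 bullet points.\n\n{policy_text}"
    raw_output = generate(prompt, config.temperature, config.max_tokens)
    # Post-processing disclosed at the level of behaviour: drop empty lines.
    return "\n".join(line for line in raw_output.splitlines() if line.strip())
```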

  3. The Middle Ground

Leading practitioners are converging on a hybrid disclosure strategy:

  1. Use flowcharts and pseudo-code, not full code:
    Show tokenization, model layers, feedback loops, post-processing—without exposing proprietary implementations.
  2. Provide representative prompt examples, not libraries:
    Demonstrate utility with a handful of templates, while protecting your broader prompt ecosystem as trade secret.
  3. Acknowledge source code exists, without reproducing it:
    E.g., “A Python implementation using TensorFlow is available; such an implementation would be apparent to a person skilled in the art.”
  4. Claim the system and method, not the verbatim prompt or code:
    Anchor claims in the inventive use of AI, not transient textual instructions.

This hybrid approach respects enablement while preserving competitive advantage.

  4. Legal Doctrines at Play

Functional Claiming & §112(f) (U.S.)

If prompts define how a model achieves a function, they risk being treated under “means-plus-function” claiming. That triggers stricter disclosure requirements—the specification must contain a concrete structure (e.g., representative prompt patterns).

Enablement & Written Description

Following the U.S. Supreme Court’s Amgen v. Sanofi (2023) ruling, enablement standards have tightened across technical fields. Overly broad claims such as “any prompt that yields useful output” risk invalidation.

  • Solution: Provide representative examples and demonstrate predictable results across use cases, as sketched below.
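
One informal way to evidence that predictability is to run the same prompt structure over several representative use cases and record that the claimed output format holds. The sketch below is hypothetical and assumes a generic LLM backend.

```python
# Hypothetical sketch: exercising one prompt structure across several use cases
# and checking that the claimed output format (three bullets) is reproduced.

USE_CASES = {
    "HR policy": "Employees must complete annual compliance training...",
    "Data privacy": "Personal data may be processed only with a lawful basis...",
    "Procurement": "Purchases above the approval threshold require two sign-offs...",
}

def is_three_bullets(output: str) -> bool:
    """Representative check: output consists of exactly three bullet lines."""
    lines = [l for l in output.splitlines() if l.strip()]
    return len(lines) == 3 and all(l.lstrip().startswith(("-", "•")) for l in lines)

for domain, text in USE_CASES.items():
    prompt = f"Summarize this {domain} policy for HR use in 3 bullet points:\n{text}"
    # output = generate(prompt)               # any LLM backend (not shown here)
    # print(domain, is_three_bullets(output))  # record repeatability per use case
```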

  5. Source Code vs. Trade Secrets: Dual IP Strategy

A practical tension: once source code appears in a published patent application, it can no longer be protected as a trade secret.

Many AI companies therefore:

  • Patent the system architecture and prompt logic,
  • Keep model weights, training data, and fine-tuning scripts as trade secrets.

This creates layered IP protection—patent for the “what,” trade secret for the “how.”

  6. Jurisdictional Divergence

AI-related disclosure expectations differ:

  • USPTO (U.S.):
    Focus on enablement and avoiding “abstract idea” rejection (Alice framework). Prompts may be novel if tied to system behavior.
  • EPO (Europe):
    Prompts alone aren’t patentable unless they produce a technical effect. Stronger protection if embedded in structured algorithms.
  • CNIPA (China):
    Prefers detailed, technical disclosure. More formalistic, but fast-moving on AI patents.

📌 Tactical tip: File region-specific claim sets—emphasize technical results in Europe, systemic interaction in the U.S., and hardware/software mapping in China.

  7. Emerging Drafting Techniques

Practitioners are experimenting with new claim framings, such as:

  • Prompt-driven inference loops
  • Generative parameter configurators
  • Template-based prompt scaffolding engines

They are also layering system, method, and use-case claims around the core invention to build robust protection; one such framing is sketched below.
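
Purely as an illustration, a “template-based prompt scaffolding engine” of the kind named above could be described as a system that binds runtime parameters into reusable templates; the template registry and function names below are invented for this sketch.

```python
# Hypothetical sketch of a template-based prompt scaffolding engine:
# reusable templates plus runtime parameters, with names chosen for illustration.

from string import Template

TEMPLATES = {
    "compliance_summary": Template(
        "Summarize this $domain policy for $audience in $n bullet points:\n$text"
    ),
    "risk_flagging": Template(
        "List up to $n potential $domain risks in the following text:\n$text"
    ),
}

def scaffold_prompt(task: str, **params: str) -> str:
    """Select the template registered for a task and bind its runtime parameters."""
    return TEMPLATES[task].substitute(**params)

prompt = scaffold_prompt("compliance_summary", domain="HR", audience="managers",
                         n="3", text="Employees must complete annual training...")
print(prompt)
```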

  8. Looking Ahead: Precedents & Policy

  • USPTO examiner training (2023–24): Prompts are potentially inventive when they produce repeatable, controllable results.
  • WIPO (2025): Confirmed again that only humans can be inventors, though guidance on AI disclosure is under development.
  • Japan (METI & JPO): New guidelines (2024–25) stress disclosure clarity, IP ownership in AI contracts, and copyright risk in training data.

Expect growing disputes around:

  • Prompt libraries for synthetic data generation,
  • AI agents using prompts for robotics and autonomous control.

Conclusion

The rise of generative AI forces patent law into new territory. Prompts can be as central as algorithms, and source code is both essential for enablement and dangerous to disclose.

The emerging best practice is a middle ground:

  • Patent system architectures, prompt structures, and algorithmic flows,
  • Protect code, model weights, and data as trade secrets,
  • Tailor disclosures to jurisdictional expectations.

AI is rewriting the language of invention—and patent practitioners must learn to draft at the intersection of law, logic, and language.
