The Confidentiality Imperative of the Institute of European Patent Attorneys (epi)

In the rapidly evolving intersection of artificial intelligence and law, confidentiality is no longer just a matter of professional ethics—it is a strategic (and legal) imperative. As AI tools proliferate across the IP landscape, patent attorneys and IP professionals must make informed, legally secure choices about their deployment.

This article examines the Guidelines for the Use of AI of the Institute of Professional Representatives before the European Patent Office (epi), the professional body of European Patent Attorneys. Particular focus is placed on the confidentiality obligations addressed in Guidelines 2a and 2b.


1. Opportunity and Responsibility

The adoption of generative AI in legal practice is accelerating. For patent professionals, the possible applications are compelling: AI-powered tools now assist with prior art analysis, patent drafting and prosecution, invalidation, and portfolio management. Innovations from AI startups and traditional legaltech companies are converging into a new ecosystem of support tools that promise not only speed and cost-efficiency but also improved access to IP protection, more innovation, and greater prosperity.

But this technological shift comes with a serious caveat. Most commonly available AI models, such as OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude, are cloud-based large language models (LLMs) that process queries on distributed infrastructure that is not under the control of the law firm or its client and may be accessible to third parties. This poses a fundamental challenge to one of the cornerstones of legal and patent practice: confidentiality.


2. Confidentiality Obligations under the epi Guidelines

The epi Guidelines for the Use of AI make explicit that maintaining confidentiality is not optional when using AI tools.

Guideline 2a: Ensure Adequate Confidentiality

“Members when using generative AI must, to the extent called for by the circumstances, ensure adequate confidentiality of training datasets, instruction prompts and other content transmitted to AI models. If there is doubt that confidentiality will be maintained to a level that is appropriate to the prevailing context the AI model in question should not be used.”

This is an unequivocal call for diligence: patent attorneys must actively ensure that any AI tool they use preserves confidentiality to a standard suitable for their professional context. Notably, the explanatory note warns that:

  • Many generative AI tools do not assure confidentiality of input data.
  • Some are unclear or ambiguous in their privacy practices.
  • Even public domain information can become confidential when tied to a specific legal or client context.

The consequence is clear: If in doubt, do not use it.

Guideline 2b: Avoid Wilful Blindness

“In ensuring adequate confidentiality, Members must inform themselves about the likelihoods and modes of non-confidential disclosures deriving from use of specific AI models.”

The explanatory note to 2b reinforces this with a clear directive: patent professionals cannot claim ignorance. They must actively research whether a particular AI model protects confidentiality. Lack of documentation or clarity is not an excuse—it is a red flag.

Together, Guidelines 2a and 2b define a proactive duty:

  • Know what happens to your data when using an AI tool.
  • Avoid tools that do not disclose or guarantee confidentiality.
  • Treat any unclear confidentiality policies as grounds for non-use.
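
To make this duty concrete, here is a minimal Python sketch of how a firm could encode the vetting rule internally. The names (ConfidentialityPosture, may_use_for_client_work) are illustrative assumptions, not part of the epi Guidelines:

```python
from enum import Enum, auto

class ConfidentialityPosture(Enum):
    """What a firm actually knows about an AI tool's handling of input data."""
    ASSURED = auto()      # documented, binding guarantees have been reviewed
    NOT_ASSURED = auto()  # provider may log, reuse, or train on inputs
    UNKNOWN = auto()      # documentation is missing or ambiguous

def may_use_for_client_work(posture: ConfidentialityPosture) -> bool:
    """Guidelines 2a and 2b in miniature: only a verified posture permits use.

    UNKNOWN is treated exactly like NOT_ASSURED, because relying on a tool
    without informing yourself amounts to wilful blindness.
    """
    return posture is ConfidentialityPosture.ASSURED

# An ambiguous privacy policy is grounds for non-use, not for benefit of the doubt.
assert not may_use_for_client_work(ConfidentialityPosture.UNKNOWN)
```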

3. Do Cloud-Based AI Services (SaaS) Satisfy the epi’s Confidentiality Demands?

The answer is: not by default.

Many SaaS platforms operate on external infrastructure, often shared and global, where input data may be logged, analyzed, or reused for training unless explicitly excluded. Even if a service offers some form of data protection, this may not be sufficient for legal professionals bound by the epi Code of Conduct, the GDPR, and national attorney regulations.

For a SaaS-based solution to be compliant with Guidelines 2a and 2b, it must provide:

  • Transparent, binding guarantees on how data is processed and stored
  • Data residency within the EU or in jurisdictions with an EU adequacy decision (e.g., Switzerland, Japan, South Korea, Canada, the United Kingdom; not per se the USA, where most proprietary LLMs are hosted)
  • Technical and organizational safeguards, such as encryption, role-based access, and audit trails
  • Strict exclusion from any training data workflows

If these conditions are not clearly met and auditable, the platform must be considered unsuitable for any confidential work in the legal domain. As the epi Guidelines underscore: where confidentiality is uncertain, the tool must not be used.
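
Purely as an illustration of how such a vetting step could be operationalized, the following Python sketch records the four conditions above for a given vendor and reports which remain unmet. All names and fields are hypothetical assumptions, not a reference to any specific product or official checklist:

```python
from dataclasses import dataclass

@dataclass
class SaaSAssessment:
    """Due-diligence record for one cloud AI vendor, per Guidelines 2a/2b."""
    vendor: str
    binding_processing_guarantees: bool  # transparent, contractual commitments on processing and storage
    adequate_data_residency: bool        # EU or adequacy-decision jurisdiction
    technical_safeguards: bool           # encryption, role-based access, audit trails
    excluded_from_training: bool         # inputs never enter training or fine-tuning pipelines

    def unmet_conditions(self) -> list[str]:
        """Return the conditions that are not demonstrably satisfied."""
        checks = {
            "binding processing guarantees": self.binding_processing_guarantees,
            "adequate data residency": self.adequate_data_residency,
            "technical and organizational safeguards": self.technical_safeguards,
            "exclusion from training workflows": self.excluded_from_training,
        }
        return [name for name, ok in checks.items() if not ok]

    def suitable_for_confidential_work(self) -> bool:
        """Every condition must be met and auditable; any gap means non-use."""
        return not self.unmet_conditions()

# Example: a vendor that trains on customer prompts fails, whatever else it offers.
vendor = SaaSAssessment("ExampleCloudLLM", True, True, True, excluded_from_training=False)
assert not vendor.suitable_for_confidential_work()
```

The value of such a record lies less in automation than in auditability: it documents that the firm actually informed itself, as Guideline 2b requires.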

4. Legal and Ethical Anchors for Confidentiality

Confidentiality is embedded in the DNA of IP legal practice:

  • Attorney-client privilege: Enshrined in the epi Code of Conduct
  • Professional secrecy obligations: National regulations such as § 43a BRAO and its counterpart provisions in the PAO (Germany)
  • Data protection requirements: GDPR mandates specific protections for personal and sensitive data, often applicable in IP contexts

Using third-party cloud-based LLMs without verified confidentiality assurances risks breaching obligations in all three categories. Each prompt you send, from invention disclosures to procedural strategies, may be:

  • Logged by the provider
  • Used for model fine-tuning or diagnostics
  • Transferred or stored outside the EU without adequate safeguards

This is not a theoretical risk; it is a known, systemic issue with many major LLM providers, as outlined in ALPHALECT.ai’s recent articles on the topic.


5. Real-World Pitfalls: Common Scenarios of Confidentiality Breach

Let us ground these concerns in practical examples:

a. Drafting Patent Claims with a General AI Tool
A practitioner uses a commercial LLM to draft claim sets. Technical details, including inventive features and embodiments, are disclosed via prompt.

Risk: Pre-filing disclosure of the invention to a third-party service can jeopardize novelty under Article 54 EPC.

b. Generating Legal Strategy with an Online AI Assistant
An IP firm uses AI to refine its EPO opposition argumentation. Uploaded materials include internal legal analysis and client correspondence.

Risk: Exposure of strategy to third-party infrastructure undermines client trust and legal protection.

c. In-House Integration of API-Driven LLM
An IP department integrates a public LLM to analyze incoming examination reports. Case identifiers and proprietary product names are included.

Risk: Disclosure of corporate IP strategy and potential GDPR violations.

These examples underscore the epi’s caution: without proven confidentiality measures, such uses are professionally and legally untenable.


6. The Solution: Confidentiality-by-Design AI for the IP Profession

The correct path forward is not to avoid AI, but to use it responsibly. That means:

  • Deploying AI tools on secure, controlled infrastructure, i.e., environments where data flow, storage, and access are fully governed by the legal service provider or a trusted partner under contractually and technically robust controls
  • Using models fine-tuned to the language and logic of IP law
  • Verifying privacy, access control, and data lifecycle policies
  • Retaining human oversight over sensitive outputs

This approach reflects the spirit of the epi Guidelines while enabling professionals to benefit from the efficiencies AI can offer.
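
As a minimal sketch of what this can look like in practice (the endpoint URL, request fields, and log format below are assumptions for illustration, not a description of any particular product), prompts stay on infrastructure the firm governs, usage is written to a local audit trail, and a professional reviews every output before it is relied upon:

```python
import json
import logging
import urllib.request

# Hypothetical endpoint of a model hosted on the firm's own, access-controlled infrastructure.
INTERNAL_MODEL_URL = "https://llm.internal.example-firm.eu/v1/generate"

# Local, append-only audit trail: who used the tool and when, without storing prompt content.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def query_internal_model(prompt: str, user: str) -> str:
    """Send a prompt to the internally hosted model and record an audit entry."""
    logging.info("user=%s prompt_chars=%d", user, len(prompt))
    request = urllib.request.Request(
        INTERNAL_MODEL_URL,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["text"]

def draft_with_oversight(prompt: str, user: str) -> str:
    """Human-in-the-loop gate: the model's output is a proposal until an attorney approves it."""
    draft = query_internal_model(prompt, user)
    print("--- AI draft (requires attorney review) ---")
    print(draft)
    approved = input("Accept this draft? [y/N] ").strip().lower() == "y"
    return draft if approved else ""
```

The structural point is that data flow, storage, and review all remain under the firm’s control, which is what distinguishes this setup from sending the same prompt to a public SaaS endpoint.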

At ALPHALECT.ai, we are building tools with these very principles in mind:

  • Confidential by design
  • Tailored to patent professionals
  • Compliant with EU and professional regulatory frameworks

Conclusion: In the Age of AI, Diligence Is Duty

The integration of AI into IP practice is inevitable. But it must be done with vigilance. The epi Guidelines are not bureaucratic formalities—they are timely guardrails against serious risk.

Confidentiality is not a checkbox; it is the foundation of trust in patent law. Tools that cannot guarantee it have no place in professional workflows.

Patent attorneys and in-house IP teams must embrace AI tools that respect and reinforce their ethical obligations. Anything less is unacceptable in the era of responsible innovation.


Visit ALPHALECT.ai to learn how we’re helping IP professionals take this step with confidence.

At ALPHALECT.ai, we explore the power of AI to revolutionize the European IP industry, building on decades of collective experience in the field and following a clear vision for its future. For answers to common questions, explore our detailed FAQ. If you require personalized assistance or wish to learn more about how legal AI can benefit innovators, SMEs, legal practitioners, and society as a whole, don’t hesitate to contact us at your convenience.
