
Generative AI and data protection: how to reconcile innovation and confidentiality? Cahier de l'Académie n° 43

Brief summary

The meteoric rise of generative artificial intelligence is revolutionizing professional practices, particularly in the financial and accounting professions.

The meteoric rise of generative artificial intelligence is shaking up professional practices, particularly in the legal and accounting professions. Since the end of 2022, tools such as ChatGPT, Copilot, Claude and Llama have become part of everyday practice. They make it possible to draft, analyze, synthesize and automate a wide range of tasks.

But one question is becoming increasingly acute: what happens to sensitive data when it is used in these systems? Where is it stored? Who has access to it? Is it used to train models?

To address these concerns, in April 2025 the Académie des Sciences et Techniques Comptables et Financières published its Cahier n°43: "IA générative et protection des données: confidentialité, RGPD, secret professionnel". This document is the fruit of collective work by experts, in which Mirabile Avocat took part.

The aim is clear: to provide legal and practical safeguards for the use of generative AI, particularly in professions subject to strict confidentiality obligations.

If you would like to hire an AI lawyer, contact me!

Why generative AI is shaking up data protection

Generative AI is based on a simple principle: ingest vast volumes of data to produce new content (text, code, images, etc.). Its efficiency is undeniable, but it raises a paradox: the more efficient it is, the more it relies on the exploitation of often sensitive data.

In the day-to-day work of accountants and lawyers, use cases are multiplying:
- A chartered accountant tests ChatGPT to analyze accounting entries.
- A finance department submits a cash flow table to an AI tool to obtain an instant forecast.
- A lawyer submits a contract to obtain a summary or clause verification.

These practices raise fundamental questions:

  • Are the data sent to these tools stored?
  • Can they be reused to train models?
  • What guarantees are there concerning confidentiality?

In reality, generative AI does not invent new data risks. But it does act as an accelerator, encouraging the use of powerful tools that often escape the control of organizations.

Identified risks: confidentiality, GDPR and professional secrecy

The use of generative AI brings with it a series of legal and organizational risks that directly concern regulated professions and companies handling sensitive data.

Risks related to the GDPR

The General Data Protection Regulation (GDPR) strictly regulates the processing of personal data. Using an AI tool often involves:

  • transfers outside the European Union (notably to the United States),
  • the absence of a clear legal basis to justify the processing,
  • a lack of transparency regarding the purpose of the processing.

For example, submitting an accounting file containing employee or customer information may constitute a breach of the GDPR if the tool does not comply with security and transfer obligations.

Confidentiality and business secrecy risks

Beyond the GDPR, the protection of confidential data and business secrecy is at stake. Entering strategic information (cost prices, commercial negotiations, restructuring plans) into a chatbot could, in the absence of guarantees, expose this data to third parties or even to uncontrolled reuse.

Risks for regulated professions

Professions subject to professional secrecy (lawyers, chartered accountants, statutory auditors) face an additional challenge:

  • Secrecy covers all information entrusted to them by the client.
  • Using a tool that does not guarantee against data leakage or reuse may constitute a breach of confidentiality, punishable by disciplinary and criminal penalties.

A concrete example: a lawyer uses ChatGPT to analyze an employment dispute. If the data entered is stored and reused, this may constitute a breach of professional secrecy under article 226-13 of the French Criminal Code.

Cahier n°43 of the Académie: concrete benchmarks for the accounting and finance professions

Faced with these challenges, the Académie des Sciences et Techniques Comptables et Financières published a reference document in April 2025: Cahier n°43.

A collective effort by experts

The Cahier was written by a multidisciplinary group bringing together jurists, lawyers, chartered accountants and academics. Among the contributors: Mirabile Avocat, long involved in digital-law issues.

Practical questions at the heart of the document

The Cahier answers some very practical questions:

  • Can I upload an accounting file to ChatGPT or Copilot?
  • What are the risks for an auditing firm using AI to analyze a client portfolio?
  • What are the minimum precautions to be taken by a company wishing to experiment with generative AI?

Three essential contributions of the Cahier

  1. Assess the risks of generative AI: an analysis grid taking into account the nature of the data, its sensitivity, and the status of the profession concerned.
  2. Implement best practices: data anonymization, contracts with AI vendors, team awareness-raising.
  3. Draw inspiration from international experience: the Cahier cites recommendations from the UK, Canada and the USA, where regulators have already published guidelines.

A central message emerges: generative AI does not create confidentiality problems, but it does amplify them. This calls for heightened vigilance, particularly in professions subject to professional secrecy.

What are the best practices to adopt right now?

Cahier n°43 is more than just an observation. It provides practical recommendations that firms can implement immediately.

Limiting sensitive uses

Avoid uploading documents containing:

  • non-anonymized personal data (GDPR),
  • strategic information (business secrets),
  • information covered by professional secrecy.
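To make the first point concrete, here is a minimal sketch in Python of a redaction step applied before any text is sent to an external generative AI tool. The patterns and placeholders are illustrative assumptions, not a complete anonymization solution: robust anonymization of accounting or legal files requires a reviewed, profession-specific process.

    import re

    # Illustrative, assumed patterns: emails and French phone numbers only.
    # A real anonymization step needs a much broader, validated rule set.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    FR_PHONE = re.compile(r"\b0\d(?:[ .-]?\d{2}){4}\b")

    def redact(text: str) -> str:
        """Replace direct identifiers with neutral placeholders before
        the text is submitted to an external generative AI tool."""
        text = EMAIL.sub("[EMAIL]", text)
        text = FR_PHONE.sub("[PHONE]", text)
        return text

    # Example: the redacted version no longer exposes the employee's details.
    sample = "Payslip query from jean.dupont@exemple.fr, tel. 06 12 34 56 78."
    print(redact(sample))  # -> "Payslip query from [EMAIL], tel. [PHONE]."

Even with such a filter, indirect identifiers (job title, amounts, dates) can still make a person identifiable, which is why the Cahier treats anonymization as one precaution among several.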

Set up internal charters for the use of AI

More and more companies are drawing up usage charters to govern their employees' use of AI. These charters define:

  • authorized tools,
  • the types of data that can be integrated,
  • pre-use validation procedures.
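Part of such a charter can even be made machine-checkable. The sketch below, using assumed tool names and data categories, shows the kind of pre-use validation gate a charter might require; it is an illustration, not a reference implementation.

    # Hypothetical allowlist and data categories, taken from an assumed charter.
    AUTHORIZED_TOOLS = {"internal-llm", "copilot-enterprise"}
    FORBIDDEN_CATEGORIES = {"personal_data", "trade_secret", "professional_secret"}

    def may_submit(tool: str, data_categories: set[str]) -> bool:
        """Pre-use validation: allow a submission only if the tool is
        authorized and no forbidden data category is involved."""
        return tool in AUTHORIZED_TOOLS and not (data_categories & FORBIDDEN_CATEGORIES)

    # Examples of the gate at work.
    print(may_submit("internal-llm", {"public"}))          # True
    print(may_submit("consumer-chatbot", {"public"}))      # False: tool not authorized
    print(may_submit("internal-llm", {"personal_data"}))   # False: forbidden category

In practice the allowlist and categories would come from the charter itself, and the check would sit in front of whatever interface employees use to reach the tools.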

Securing contractual relations

Companies must insert specific clauses in their agreements with AI vendors and their subcontractors:

  • data localization,
  • a ban on training models on input data,
  • confidentiality guarantees.

Regularly audit GDPR compliance

A periodic audit verifies:

  • that the tools in use comply with security principles,
  • that data is stored in compliance with European regulations,
  • that individuals' rights (access, rectification, deletion) can be exercised.
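The audit itself can be supported by a simple register of the tools in use. The following sketch, with hypothetical fields and tool names, flags any tool that fails one of the three checkpoints listed above; an actual audit would of course rest on documented evidence, not self-declared booleans.

    from dataclasses import dataclass

    @dataclass
    class ToolRecord:
        """One entry of an assumed internal register of AI tools."""
        name: str
        secure: bool                # complies with security principles
        eu_compliant_storage: bool  # data stored under European regulations
        rights_exercisable: bool    # access/rectification/deletion possible

    def audit(register: list[ToolRecord]) -> list[str]:
        """Return the names of tools failing any of the three checkpoints."""
        return [t.name for t in register
                if not (t.secure and t.eu_compliant_storage and t.rights_exercisable)]

    register = [
        ToolRecord("internal-llm", True, True, True),
        ToolRecord("consumer-chatbot", False, False, False),
    ]
    print(audit(register))  # -> ['consumer-chatbot']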

Training teams

Raising employee awareness is essential. The Cahier stresses the importance of internal training so that everyone understands:

  • what they can and cannot do with generative AI,
  • the risks involved in poor practice,
  • the available alternatives (secure in-house tools, sovereign solutions).
