What We Learned While Drafting Our AI Code of Conduct
Drafting an AI Code of Conduct seems straightforward. You formulate principles, translate them into rules, and publish the document on the intranet. The real challenge is putting it into practice.
At HVG Law, we recently developed our AI Code of Conduct. Months of preparation went into it, with input from various disciplines. The result is a document we’re proud of. At the same time, we recognise that the real work is only just beginning. A Code of Conduct that exists only on paper is nothing more than a ticked box on a checklist.
When we started drafting our Code of Conduct, there were no guidelines from the Dutch Bar Association (NOvA). We looked at what colleagues in Belgium, Germany, and France had already developed, but ultimately we had to chart our own course. The NOvA has since published its long-awaited recommendations. It’s reassuring to see that the principles we formulated largely align with what the NOvA now recommends. This confirmation gives us confidence that we’re on the right track.
Why this matters now
AI tools have become commonplace in both the legal profession and the business world. This development has been accompanied by incidents that highlight the risks: fabricated case law cited in court documents and confidential information ending up in uncontrolled tools. We see these incidents happening around us, including in the Netherlands.
These incidents underscore the need for a clear framework. Not to discourage the use of AI, but to enable its responsible use. A framework that provides direction while keeping pace with a technology that’s constantly evolving.
The reflex: drafting rules
The most obvious approach is to formulate rules. You specify what is and isn’t allowed, document it, and communicate it within the organisation. This is often where it ends: document published, e-learning created, box ticked.
Rules are indispensable. Our professional rules oblige us to guarantee confidentiality and independence. These obligations translate into operational requirements: which tools are permitted, where data is stored, how client data is handled. There must be no ambiguity here. The NOvA also emphasises these themes in its recommendations.
However, rules alone aren’t enough. Without understanding the underlying rationale, they lose their effectiveness. When employees perceive rules as imposed restrictions without context, resistance arises. Those who focus exclusively on compliance create a culture of formal adherence without intrinsic motivation, or worse, a culture where people get creative with the rules.
The difference between compliance and internalisation
Compliance asks: “Are you following the rules?” Culture asks: “Do you understand why those rules exist?” The distinction may sound academic, but in practice the two approaches play out very differently.
From a pure compliance perspective, AI is something to be managed and controlled. From a cultural perspective, it’s a tool you shape together. In the first case, discussions revolve around boundaries and control; in the second, around possibilities and improvement.
Both approaches are necessary. What works for us is a combination of clear boundaries where professional rules require them, and room for development where the context allows.
How we built it
In developing our AI Code of Conduct, we deliberately opted for a multidisciplinary approach. In addition to legal tech and AI law, we included compliance (for the professional legal aspects), employment law (for HR implications), and privacy (for GDPR issues). This broad composition forced us to integrate different perspectives and resulted in a document that has both a solid legal foundation and practical applicability.
The structure is deliberately layered. The foundation consists of principles: guiding precepts on responsibility, independence, confidentiality, and reliability. These principles are formulated in sufficiently general terms to remain relevant regardless of future technological developments.
In addition, the Code of Conduct contains a separate chapter devoted to generative AI, which poses specific challenges that deserve dedicated attention.
The concrete rules form the normative layer. They specify which systems are permitted, which verification obligations apply, and how AI use must be documented. These rules are integrated into the staff handbook.
Principles versus rules
The distinction between principles and rules is a conscious choice. Principles don’t change when a new AI tool comes onto the market. Rules are context-dependent: which specific tools are permitted may change as new insights or technological developments arise.
By separating these layers, you avoid having to revise the entire document with every technological development. The principles remain the compass; the rules can move with practice.
Addressing generative AI separately
We deliberately devoted a separate chapter to generative AI. Not because it’s fundamentally different from other forms of AI in technological terms, but because it raises different professional issues.
In classic AI applications (such as document classification), user interaction is relatively limited. With generative AI, there’s an ongoing dialogue: you formulate prompts, interpret output, and ask follow-up questions. This interaction requires skills that not everyone has mastered.
By treating generative AI separately, you create space to address these skills explicitly. How do you formulate an effective prompt? How do you recognise output that’s incorrect? When is this technology suitable, and when isn’t it? These are practical questions that require concrete guidance.
Enforcement with nuance
The question of how to deal with violations is always difficult. A completely non-binding approach doesn’t work, but treating every deviation as a violation can create a culture of risk aversion.
Our approach distinguishes between different situations. Certain things simply aren’t allowed, such as working with unauthorised tools. These are non-negotiable boundaries that stem directly from our professional rules.
However, the majority of AI use doesn’t fall into this category. If someone formulates a prompt suboptimally, this doesn’t have immediate consequences. It is, however, a reason for coaching and guidance. Making this distinction consistently is essential. Treating every deviation as a compliance issue makes AI something to be avoided, and avoidance is the last thing you want.
From document to behaviour
Our Code of Conduct is published on our intranet. The rules are integrated into the staff handbook. In a sense, that completes the formal implementation.
But whether it works isn’t apparent from the document. It’s apparent from what happens in daily practice. Are the principles internalised in daily decisions? Do people feel comfortable asking questions when they’re in doubt? Is there room to experiment and learn from mistakes?
These are the real indicators for us. You can’t enforce them through documentation alone. You have to cultivate them, facilitate them, and above all, demonstrate them yourself.
In our previous article on mentoring junior lawyers, we described how this works in daily practice: seniors learning to ask the right questions and a culture where openness about AI use is self-evident. This dynamic is intrinsically linked to the Code of Conduct. The document provides the framework; the conversations provide the substance.
Alignment with the NOvA recommendations
We had already implemented many of the NOvA's recommendations, such as drafting a firm-wide AI policy and using paid, specialised legal AI tools instead of free consumer versions. In an earlier article on anonymisation and professional secrecy, I wrote that even the paid consumer version of ChatGPT (Plus) doesn’t offer sufficient safeguards for sensitive information by default. The NOvA now makes this point as well, albeit less explicitly.
We disagree with the NOvA recommendations on one point. The NOvA advises asking clients for permission in advance before using AI. We question this. Rule of Conduct 13 and its explanatory notes refer to consultation with the client when engaging auxiliary persons and automated systems, but don’t prescribe permission.
The recommendation also raises practical questions. What if a client refuses? Are you then not allowed to use AI at all? Not even for general legal research or literature reviews? And how does this relate to other tools for which we don’t ask explicit consent either? We opt for transparency about our use of AI in general, but not for a consent requirement per matter. In our view, this aligns better with the spirit of the professional rules and with workable practice.
With our Code of Conduct as a practical translation and the NOvA’s recommendations as a frame of reference, we have a solid foundation. The confirmation that our early choices align with what the NOvA now recommends strengthens our conviction that we’re on the right track.
Practical recommendations
Many firms are currently in the process of shaping their AI policy. Based on our experience so far, we have the following practical recommendations:
Develop your AI Code of Conduct as a multidisciplinary effort. Involve compliance, privacy, employment law, and IT. This prevents blind spots and increases buy-in.
Distinguish between principles and operational rules. Principles are timeless; rules are context-dependent. This separation ensures agility without losing your compass.
Treat generative AI separately. Interacting with AI tools requires specific skills. A separate chapter accommodates this nuance.
Integrate into existing policy structures. Make the rules part of the staff handbook. This gives them weight (and enforceability) without creating a parallel structure.
Appoint clear points of contact. Make sure people know where to turn with questions.
Enforce with nuance. Strict enforcement where professional rules require it, room to experiment and learn where possible. Not every deviation is a violation.
View publication as the starting point. Publishing the document is where implementation begins, not where it ends. Plan now how you’ll monitor and adjust as you go.
About the authors: Emelie Wesselink is a lawyer at HVG Law, specialising in AI law, responsible for the (legal) development of the AI Code of Conduct. Elgar Weijtmans is Head of Technology & AI at HVG Law, provided the technical input for the Code of Conduct, and writes weekly about legal tech in this newsletter.