To AI or Not to AI: Resolving Legal Tech’s Ethical Dilemma

Authors: Juan Vasquez and Rick Sanchez

Date: February 15, 2025

In the modern legal landscape, artificial intelligence is both a promise and a peril for lawyers. The phrase “to be or not to be” (or its edgier cousin, “damned if you do and damned if you don’t”) aptly captures the dilemma attorneys face regarding AI. On one hand, embracing AI tools without caution can lead to ethical landmines; on the other hand, ignoring AI outright may leave a lawyer lagging in competence and efficiency.1 As one senior judge in England put it, lawyers considering AI find themselves “damned if they do and damned if they don’t.” In a recent speech, that judge (Sir Geoffrey Vos) described two camps: the “luddites” who resist AI because it can be dangerously inaccurate, and the eager adopters who warn that lawyers will soon be obsolete if they don’t use AI to work faster and cheaper.1 Both perspectives carry truth, and both carry risk.

The legal profession is witnessing a rapid rise of generative AI like ChatGPT, along with mounting pressure to use these tools to enhance productivity. At the same time, cautionary tales and ethical guidelines are emerging just as fast. Many attorneys are understandably torn. A 2023 survey of over 1,000 U.S. lawyers found that while interest in AI is high, 80% had not yet used generative AI in their work, and 87% harbored ethical concerns about it.2 Lawyers worry that using AI might violate confidentiality or produce faulty legal work. Yet there is an opposing worry: that failing to leverage technology could mean failing the duty of competence to clients. In this article, we explore this double bind. We’ll examine why a lawyer might feel at risk by using AI (the ethical risks of doing so) and equally at risk by not using AI (the risks of falling behind). We then discuss how to strike a balance through responsible AI adoption. We will walk through real-world examples (like the now-infamous case of AI-generated fake case law) and draw on pertinent rules and opinions from the ABA and state bars. By the end, it should be clear that while the dilemma is real, it can be managed with knowledge, prudence, and a forward-looking approach. After all, AI in the law is here to stay, and the challenge is using it in a way that enhances your practice and upholds your professional duties.

To AI: Risks of Using AI

Embracing AI in a legal practice can feel like stepping into a minefield. Yes, these tools offer “cutting-edge advantages and benefits,” as an ABA report noted, but “they also raise complicated questions implicating professional ethics.”3 In other words, jumping headlong into AI without care can put a lawyer at risk of violating core ethical duties. ABA Resolution 112 (2019) underscores these concerns, urging courts and lawyers to address ethical AI usage by focusing on bias, explainability, transparency, and oversight of AI vendors.4 Here we focus on three key duties: confidentiality, supervision, and communication.

Confidentiality: Guarding Client Secrets in an AI World

Every lawyer knows that protecting client confidentiality is paramount. Model Rule 1.6 of the ABA Model Rules of Professional Conduct (and its state equivalents) mandates that a lawyer “shall not reveal information relating to the representation of a client” without informed consent or other specific exceptions. When using AI tools, this rule looms large. Why? Because many popular AI services operate on third-party servers or cloud platforms.5 Submitting client information to an AI system might be effectively the same as disclosing it to a third party, potentially breaching confidentiality and even waiving attorney–client privilege.

A stark example comes from the terms of OpenAI’s ChatGPT service. According to an analysis by legal ethics attorneys, OpenAI explicitly warns users, “Please don’t share any sensitive information in your conversations,” noting that user inputs are reviewed by AI trainers and are not protected as confidential. In fact, OpenAI’s own FAQ openly states that human staff may review conversations to improve the system. Moreover, OpenAI’s Terms of Use specify that any information users enter isn’t considered “confidential information” protected by the service.6 For lawyers, this is a flashing red light. If you were to paste a client’s secret business contract or a sensitive case memo into such a tool, you have no guarantee who might eventually see it or how it might be used. A candid scenario posed by one attorney imagines using ChatGPT to analyze a confidential bid document, only to have it read by an OpenAI quality-control reviewer who owes no duty of confidentiality and whose spouse, say, works for the client’s competitor.6 This may sound far-fetched, but it illustrates the potential risk when client data isn’t kept strictly between lawyer and client.

Beyond the immediate ethical violation, disclosing client information to AI could waive attorney–client privilege. Privileged communications must be kept confidential; sharing them with a non-privileged third party (like an AI platform) can destroy that privilege.6 As one commentator put it, if you include a privileged attorney–client email or work product in a prompt to ChatGPT, you haven’t kept it in confidence, and “this would constitute a waiver of attorney–client privilege.”6 For example, if a lawyer used an AI tool to help draft a client’s litigation strategy memo and fed in details of their private communications, those details might not be protected if later revealed. Preserving privilege is yet another reason lawyers must be extremely cautious about using AI with any client-related information.6

Of course, many AI tools and legal-tech-specific AI vendors are aware of these concerns and are starting to offer solutions like on-premises AI, end-to-end encryption, or “zero data retention” policies to assure lawyers that inputs won’t be stored or seen by others.7 Some law firms are even exploring custom AI models that can be run internally, behind the firm’s firewall, so data never leaves their control. However, even with favorable terms of service or secure setups, the question remains: can lawyers truly trust that confidentiality will be ironclad? After all, even an in-house system could be hacked if not properly secured.8 The North Carolina State Bar, in a recent formal ethics opinion on AI, underscored that lawyers must “make reasonable efforts” to ensure any AI service used is compatible with their duty of confidentiality.8 This means vetting the technology: understanding how it stores data, who has access, and what could go wrong. The NC opinion advises that attorneys should educate themselves on “the nature of any publicly available AI program” they intend to use, especially since many public programs “retain and train [themselves] based on the information provided by any user.”8 In short, a lawyer has to do their homework before entrusting an AI with client secrets.

The safest course, and in many jurisdictions the ethically required one, is to obtain the client’s informed consent before sharing any confidential information with a third-party AI tool.6 Rule 1.6(a) permits disclosure of client information with the client’s informed consent, and feeding that information into an AI service is a disclosure that triggers this requirement. For example, if a lawyer thinks using a contract-review AI will save time on a deal, the lawyer should first explain the risks to the client (e.g., “We’d need to upload your contract to this AI platform; their employees might see it, and while it could speed up review, there’s a small risk of exposure”) and get the client’s okay. Some ethics bodies treat AI services similarly to outsourcing work to a vendor: client consent is needed if confidential data will be shared outside the firm.8 The bottom line on confidentiality is clear: using AI responsibly means protecting client data as jealously as ever. If there is any doubt about an AI tool’s privacy safeguards, a prudent lawyer either limits their input to non-sensitive information, obtains client consent, or simply doesn’t use that tool for the task. Keeping client secrets is a non-negotiable duty, AI or no AI.

Supervision: AI as the Associate that Needs Oversight

Another ethical risk of “doing” AI in legal practice arises from the duties of supervision and diligence.9 Under the rules, senior lawyers must supervise junior attorneys and nonlawyer staff to ensure their work is competent and ethical (see ABA Model Rules 5.1 and 5.3). When you use an AI system, you should think of it as a very unpredictable junior assistant: one that works lightning-fast and writes with confidence, but has no actual understanding of truth or law. In other words, AI can generate answers that look fantastic but may be completely wrong or even fictitious. This phenomenon is often referred to as AI “hallucination,” where the system fabricates facts or case citations that sound plausible but are fake.

Lawyers who rely blindly on AI outputs without supervision can stumble into serious trouble. The now-famous Mata v. Avianca incident is a cautionary tale that has reverberated throughout the legal community. In that New York case, two attorneys decided to use ChatGPT to help them draft a brief opposing a motion to dismiss. ChatGPT confidently supplied them with six case citations to support their arguments. The problem? When the opposing counsel and judge tried to find those cases, none of them existed. ChatGPT had completely made them up, complete with bogus quotes and analyses. The lawyers had not checked the AI’s work product carefully. By filing those fake cases in court, they violated their duty of candor and their duty to ensure the accuracy of their filings. The judge was not amused. In June 2023, U.S. District Judge P. Kevin Castel sanctioned the lawyers, ordering them and their firm to pay $5,000 in fines for acting in bad faith.10 He noted that there is nothing “inherently improper” about using AI for assistance, but the ethics rules “impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”11 In Judge Castel’s view, the attorneys had abdicated that gatekeeping duty. They treated the AI’s output as if it were a trusted senior partner’s draft, when in reality it was more like an intern’s work that needed heavy review. The result was misleading the court with fictitious precedents, a grave breach of professional conduct.

This case vividly illustrates why unmonitored use of AI is so risky. A lawyer cannot defend themselves by saying “the computer told me so.” As one ethics commentator put it, “non-human legal assistance is within the scope of the ABA’s rules, and you must supervise an AI legal assistant just as you would any other legal assistant.”12 That means if you let a generative AI draft a memo, you must review it with a critical eye, verifying every citation, every quote, and every factual assertion, just as you would carefully edit and fact-check a junior lawyer’s work. If the AI’s output cites a Smith v. Jones (185 F.4th 123) that you’ve never heard of, you must double-check that citation (and you’ll likely discover it’s imaginary before you ever embarrass yourself in court). Failing to do so isn’t just an innocent oversight; it can violate the rules of competence (Rule 1.1) and candor to the tribunal (Rule 3.3), and, as we saw, can lead to sanctions or malpractice claims.

The duty of supervision also extends to understanding the tool’s limitations. AI is powerful, but it lacks legal judgment. It doesn’t know when it has reached beyond its training or when a nuanced exception applies. It will cheerfully output an answer to any question, even if the best legal answer is “there is no clear answer.” Lawyers cannot delegate their professional judgment to a machine. ABA Formal Opinion 2024-5 emphasizes that the duty of competence now includes “understanding the benefits, limitations, and risks” of AI tools.13 In practical terms, that means a competent lawyer using AI should know, for example, that ChatGPT is not reliable for checking whether a case is still good law, or that it might fabricate statutory text if asked. You wouldn’t ask your paralegal to decide the legal strategy for a case without oversight; likewise, you shouldn’t ask AI to do so unsupervised.

Real-world examples of AI errors abound. Apart from phantom cases, AI might misstate the law (e.g., summarizing a holding incorrectly), overlook important distinctions, or use outdated data. If a lawyer were to copy-paste such output into a brief or client memo without catching the mistakes, the consequences could range from client misinformation to courtroom disaster. In one instance, a law firm associate reportedly used an AI tool to draft a brief and ended up including arguments based on obsolete law, something a seasoned attorney or traditional research would have caught. Again, the blame ultimately falls on the human lawyer for failing to supervise and exercise independent judgment.

To avoid these pitfalls, responsible use of AI must involve rigorous verification of AI-generated content. Think of AI as a first-draft generator or brainstorming aid, not the final authority. As a global law firm advised in a recent note, “the lawyer must always consult and confirm with other sources, verify the authenticity of the information provided and that it is up to date.”14 The AI can suggest potential cases or language, but the lawyer must “always apply the lawyer’s own judgment and experience before delivering a final product.”14 In essence, the lawyer remains the ultimate editor and fact-checker. This level of oversight is not just good practice; it’s ethically required. Model Rule 5.3 (in many states) explicitly extends a lawyer’s responsibility to the conduct of any nonlawyer assistance they employ, and recent interpretations make clear this covers technology and AI tools, not just human assistants.12 The rule was even retitled from “nonlawyer assistants” to “nonlawyer assistance” to drive home that point.12 So if an AI’s work product would violate the ethics rules if a lawyer did it, the lawyer can’t hide behind the AI. The lawyer must prevent and correct any such issues through diligent supervision.

Communication: Informed Client Consent and Transparency

The third ethical dimension of the “damned if you do” side is the duty of communication, which essentially boils down to being honest and upfront with clients about how their matters are being handled.15 Resolution 112 explains that Model Rule 1.4 requires lawyers to “explain a matter to the extent reasonably necessary to permit the client to make informed decisions” about the representation. If a lawyer is using AI in a way that could significantly affect the client’s case or the confidentiality of their information, or if the lawyer is not using AI where it could affect the client’s case (e.g., make the work more efficient or cheaper), the client arguably should be kept in the loop. In some situations, informed consent from the client is not just advisable but required.

We’ve already touched on one angle: if using AI involves disclosing confidential information, Rule 1.6 effectively forces the lawyer to get the client’s informed consent to that disclosure.6 But even beyond confidentiality, consider the client-relations aspect. Say a law firm decides to use a generative AI tool to draft a contract for a client. The tool will speed up the first draft, potentially saving the client money in billable hours, but it also carries a risk of error. Should the client be told that part of their legal work is being performed by an AI assistant? Reasonable minds might differ. Some ethical guidance (like the New Jersey Supreme Court’s preliminary AI guidelines) has suggested that lawyers are not strictly required to disclose the use of AI to clients or courts, as long as the lawyer adequately supervises the output.7 The rationale is that AI can be seen as just another tool; we don’t normally tell clients “I used Westlaw for research” or “my paralegal helped draft this brief,” so if AI is used under the lawyer’s oversight, explicit disclosure might not be mandatory.

However, other voices urge more transparency. For example, some in our field have argued that if a firm allows lawyers to use AI like ChatGPT in developing work product, “the firm should consider disclosing to clients that artificial intelligence is being used as part [of] the final work product.”14 Why would disclosure be wise? One reason is managing expectations and maintaining trust. Clients might be perfectly fine with AI being used (especially if it saves them time and money), but they wouldn’t want to be surprised later to learn their lawyer heavily relied on a machine without their knowledge. In some cases, clients might actually need to consent because using AI could affect how fees are billed or how work is delegated. North Carolina’s 2024 Formal Ethics Opinion 1 noted that if the decision to use or not use AI in a matter impacts the fees or the outcome, the lawyer should inform and seek input from the client.8 For example, imagine a scenario where a particular legal task could be done manually in 10 hours (costing the client $3,000) or done with an AI tool plus lawyer oversight in 3 hours (costing $900 plus perhaps a small tech fee). Some clients might prefer the cheaper, AI-assisted route, while others might be uncomfortable with AI’s involvement despite the cost savings. The only way to know is to have a conversation and obtain the client’s informed direction. The NC guidance suggests that client input is important in such strategic decisions, and certainly if a lawyer plans to charge the client for use of a paid AI service, that expense should be disclosed and agreed to.8

Informed consent in this context means explaining the material risks and reasonable alternatives of using the AI. A lawyer might say: “We have an AI tool that can draft discovery requests quickly. The benefit is efficiency, but there’s a risk it might produce something inaccurate that we’ll need to carefully check. We will still review and edit everything, but I want you to know this tool is part of our process. Are you comfortable with that?” This kind of dialogue keeps the client informed as the rules intend. It’s part of treating the client with respect and not usurping their decision-making authority on significant matters. Most clients will likely appreciate the lawyer’s candor and technological savvy, and many will consent when they understand the safeguards in place.

Another aspect of communication is that lawyers should be prepared to educate clients (to a point) about what AI can and cannot do. If a client reads about AI in the news and asks, “Couldn’t we just use ChatGPT to handle my simple lease contract and save money?”, the lawyer has a duty of honesty in communication (Rule 8.4(c) and generally) not to overpromise or mislead. The correct response might be to explain the tool’s capabilities and limitations rather than a flat “no.” In fact, being conversant in AI’s strengths and weaknesses is becoming part of lawyer competence (as discussed in the next section), and it aids in communicating effectively with clients about why you might use a tool or why you must double-check its output.

In summary, using AI ethically isn’t just a matter of what the lawyer does behind the scenes; it also involves what the lawyer tells the client. To avoid the “damned if you do” trap, lawyers should be upfront when AI usage could impact client interests. Gaining informed consent not only protects the lawyer under the rules, but it also fosters trust. It turns the use of AI into a collaborative, transparent strategy between lawyer and client, rather than a hidden risk. Communication is key to ensuring the client is never inadvertently misled about how their legal work is being handled.

Not to AI: Risks of Avoiding AI

Given all the minefields we just discussed, one might think the safest route is to avoid AI entirely in legal practice. If using AI can cause so many headaches, why not just stick to traditional methods and steer clear of trouble? Unfortunately, that approach has its own perils. The legal profession is evolving, and a lawyer who ignores technological advances could fall behind in competence, efficiency, and even ethical compliance. In today’s world, refusing to use available and appropriate technology can be as problematic as using it improperly. We now explore the key risks of the “damned if you don’t” scenario.

Duty of Competence and Staying Current with Technology

Lawyers have a fundamental duty to provide competent representation (ABA Model Rule 1.1). Competence isn’t static – it evolves as the practice of law changes. In 2012, the ABA recognized this by updating Comment 8 to Rule 1.1 to explicitly include technological know-how. The comment says a lawyer should “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”16 Over the past decade, nearly every U.S. state has adopted this concept of technology competence into its own ethics rules. As of the latest count, 40 states have formally added the duty of technology competence to their professional rules.16 In practical terms, this means that a lawyer is expected to understand, at least at a basic level, the technologies that lawyers commonly use (or could use) in practice.

Artificial intelligence tools increasingly fall under the umbrella of “relevant technology” that lawyers need to be aware of. In 2019, the ABA House of Delegates approved Resolution 112, which urged courts and lawyers to proactively address the emerging ethical and legal issues of AI in law.3,4 Resolution 112 highlights the duty of technology competence and essentially sounds a warning: “Courts and lawyers must be aware of the issues involved in using (and not using) AI.”4 That parenthetical – and not using – is telling. It acknowledges that there may be a professional cost to not integrating AI where it becomes standard. If most law firms are using AI-driven tools for research, document review, or analysis, and one lawyer sticks stubbornly to paper books and highlighters, could that lawyer eventually be seen as less competent? It’s quite possible.

Indeed, the ABA (through its Standing Committee on Ethics) recently went a step further. In Formal Opinion 2024-5, issued in late 2024, the ABA opined that a lawyer’s duty of competence “includes understanding and appropriately using AI tools” in applicable situations.13 In other words, being ignorant of AI or refusing to consider its use might itself be an ethical issue. No one is saying every lawyer must become a programmer or an AI expert. But as the Reuters summary of that opinion notes, “all lawyers should have a reasonable understanding of what the technology can do and its limitations.”13 If an AI tool could materially improve the quality or efficiency of legal services, competence suggests a lawyer should at least investigate it. At a minimum, lawyers should stay literate about AI developments so they can make informed decisions about whether to use them. As Resolution 112 puts it, “there does not appear to be any instance ‘in which AI represents the standard of care in an area of legal practice, such that its use is necessary.’ Nonetheless, lawyers generally must understand the technology available to improve the legal services they provide to clients. Lawyers have a duty to identify the technology that is needed to effectively represent the client, as well as determine if the use of such technology will improve service to the client.”4

Ignoring AI might also mean failing to meet the standard of care in certain legal tasks. Consider e-discovery in litigation. For large volumes of electronically stored information, courts and parties have for years now relied on “predictive coding” or technology-assisted review (a form of AI) to efficiently find relevant documents. If a lawyer today insisted on manually reviewing millions of emails without using approved AI-assisted tools, that might be seen as unreasonable (it would certainly be massively time-consuming and costly to the client). The risk is twofold: the lawyer might miss important information that a well-trained algorithm would catch, and the lawyer will surely rack up far higher fees doing it the old way. Either outcome can harm the client. In fact, a judge could question whether not using available technology verges on incompetence in a big case. We haven’t yet seen a malpractice case for “failure to use AI,” but one can imagine scenarios in the near future. For example, if a client discovers their lawyer spent 50 hours doing legal research manually on an issue that an AI research tool could have handled in 5 hours, the client might allege the lawyer was inefficient to the point of incompetence, effectively overbilling due to lack of tech adoption.

That brings us to an important point: efficiency and cost-effectiveness are increasingly intertwined with competence. The ethical duty to be diligent and prompt (Model Rule 1.3) and to charge reasonable fees (Rule 1.5) means lawyers shouldn’t waste time (or a client’s money) due to avoidable inefficiency. One commentator argued that if using technology can make legal work significantly more efficient, then “failing to become competent in technology” could lead to unreasonable fees and even “constitute an ethical violation.”17 In other words, a lawyer can’t just bill 5 hours for something that could be done in 1 hour with readily available tech, without at least informing the client or adjusting the fee. As clients become more tech-savvy themselves, they will expect their lawyers to use tools that reduce costs. In competitive terms, lawyers who embrace AI might be able to offer faster turnaround and lower bills, attracting clients, while those who refuse may price themselves out of the market or face client dissatisfaction.

There is also a quality dimension. AI can, in some contexts, produce more accurate or data-driven results than a human working alone (for example, AI can quickly sift through thousands of cases to spot a trend or find that one precedent that a manual search might overlook). If most law offices are using such tools to not miss anything, a lawyer who doesn’t avail themselves of any AI assistance might start to miss things (e.g., an obscure case, a hidden contract clause, a pattern in past rulings, etc.) that opposing counsel with AI might catch. In a litigation setting, that could disadvantage the “technology abstainer” lawyer and their client. Over time, this could raise questions of whether the lawyer is truly providing competent, up-to-date representation.

None of this is to say that a competent lawyer must always use AI. But it does mean you can’t ignore its existence. The duty of competence includes an ongoing education component: knowing what tools are out there and the basics of how they work. Lawyers should follow developments in legal tech enough to make informed choices. For example, a lawyer who does a lot of contract drafting should know that AI contract-review tools exist that can automatically spot missing provisions or unusual terms, and that many law firms use them to avoid errors. The lawyer might decide to use one, or might have other methods to double-check contracts, but they shouldn’t be ignorant that such tools exist. As the Master of the Rolls, Sir Geoffrey Vos, noted, the lawyers who rush to embrace AI argue that soon “clients will be refusing to pay for legal tasks performed by human lawyers when they could be done better, quicker and more cheaply with AI.”1 That may be somewhat hyperbolic today, but it captures a real pressure: clients will demand efficiency. Being competent means being able to deliver quality results efficiently, and that increasingly implies using advanced tools like AI when appropriate.

In summary, the risk of avoiding AI is that you may slowly drift into obsolescence or ethical non-compliance. A lawyer who is proudly “old school” and ignores technology may one day wake up to find that the standard practices of law have left them behind. Competence is not a one-time box to check; it’s a continuous duty to adapt. And right now, adaptation strongly points toward at least evaluating and understanding AI in your practice. The ABA and state bars are effectively telling lawyers: don’t be a luddite, or you might be breaching your duty of competence in the long run.

Finding the Balance: Responsible AI Adoption

If both uncritical embrace and total avoidance of AI carry perils, the solution lies in a balanced approach: responsible AI adoption, with “responsible” being the operative word. It means using AI consciously, with due regard for ethical duties and best practices. The goal is to reap the efficiency and capability benefits of AI while mitigating the risks we’ve outlined. This section discusses some ways in which lawyers and firms can find that equilibrium. Key components include improving AI literacy, establishing clear policies and guidelines, and following best practices for oversight and risk management. Education and vigilance are the themes that tie these together. As the saying goes, “trust, but verify,” except perhaps modified for AI to “use, but verify.”

AI Literacy and Training

The foundation of responsible AI use is knowledge. Lawyers do not need to become data scientists, but they should invest time in understanding what AI tools do, how they work at a high level, and where their weaknesses lie. AI literacy for lawyers might include learning key concepts: What is a large language model (LLM), and how does it generate text? What does it mean that an AI can “hallucinate”? How does an AI learn (training data), and what biases might that introduce? What types of tasks is AI currently good at in law (e.g., summarizing documents, converting legalese to plain language, suggesting contract clauses), and what tasks is it not reliable for (e.g., predicting exact case outcomes, providing legal conclusions without human review)?

Building this literacy can happen through Continuing Legal Education (CLE) courses, workshops, or internal training sessions at a law firm. Many bar associations and professional groups now offer CLEs on AI in legal practice, often covering ethics implications as well. Firms might bring in experts or consultants (or utilize organizations like 3ITAL) to educate their attorneys about AI. The International Institute for Intelligent Technology Adoption in the Law (3ITAL), for example, is an organization specifically dedicated to helping lawyers learn to use AI responsibly and ethically, with a mission to “bridge the gap between innovative AI technology and the unique demands of legal practice.”18 Through resources and training, such organizations help lawyers gain the understanding needed to avoid missteps.

Why is training so important? Because a tool is only as effective as the person wielding it. If a lawyer treats an AI like a black-box oracle, they’re likely to misuse it. Training demystifies AI, which helps reduce both over-reliance and under-utilization. Over-reliance happens when a lawyer mistakenly thinks the AI is infallible, while under-utilization happens when a lawyer is so wary of AI that they fail to use it even where it would help. With better knowledge, a lawyer can thread the needle: use the AI for what it’s good at, and not for what it isn’t.

For example, an AI-literate attorney knows that a tool like ChatGPT does not actually know the latest case law after its training cutoff, so they wouldn’t ask it open-ended legal research questions expecting a reliable answer without checking current sources. But they might know that the AI is great at generating a first draft of a simple motion based on form books, which they can then refine. AI literacy also involves knowing the ethics around AI, which this article has delved into: that client confidences shouldn’t be pasted into a public AI chat, that outputs must be verified, and that the lawyer may need to talk to the client about AI use.

Lawyers should also train on specific AI tools that their firm adopts. If a firm brings in a new AI-driven research platform or contract review software, don’t just assume you’ll figure it out on the fly in a high-stakes moment. Proper onboarding and practice with the tool in low-risk scenarios is wise. Some firms have created “AI sandbox” environments where attorneys can experiment with generative AI on sample data to see how it works. This kind of hands-on learning can build comfort and reveal quirks of the system before it’s used on actual client work.

Furthermore, keeping current is part of competence. AI tech is evolving rapidly. What was true about a model’s capabilities or terms of use six months ago might have changed. Lawyers (or their IT advisors) should stay updated on major developments. For example, if OpenAI or another provider updates their privacy policy to better protect user data, a firm’s stance on using that tool might change. Or if a new version of an AI model significantly reduces hallucinations, tasks that were too risky last year might become feasible. Essentially, continuing tech education is now a part of a lawyer’s professional development, much like continuing legal education in substantive law.

Policy Frameworks and Guidelines

With a foundation of literacy, the next layer is having concrete policies and frameworks in place governing AI use. At the organizational level, law firms (or legal departments) should develop internal guidelines that define how attorneys may or may not use AI in their work. This provides consistency and a shared understanding, which is critical for risk management.

What might an AI usage policy include? Several key elements:

  • Permissible Use Cases: Define for which tasks AI tools are approved. For example, a firm might allow generative AI for initial drafting of memos, research outlines, or marketing content, but forbid its use for final client advice without review. Or allow AI for document review assistance, but not for tasks that require legal judgment calls. Having a list of acceptable use cases and prohibited use cases (or requiring case-by-case approval for new uses) can prevent dangerous experimentation.
  • Approved Tools: It’s wise to specify which AI platforms or software are allowed, particularly if they have been vetted for security. A firm might approve a particular contract analysis AI that has strong data protection, while disallowing generic public chatbots. The policy can require that any AI tool processing client data must be approved by the firm’s IT/security team. This way, lawyers aren’t accidentally using a risky app they found online.
  • Confidentiality Safeguards: The policy should reiterate the duty of confidentiality and perhaps outright ban inputting client-identifying or sensitive details into any AI tool that does not guarantee confidentiality. Some firms, for example, have policies like: “Do not paste full client documents or any personal data into ChatGPT or similar public tools.” If redaction or anonymization is possible, that might be encouraged (e.g., “replace names or specifics with placeholders before using the AI, and never include classified or highly sensitive info”); a minimal illustrative sketch of that kind of scrubbing step appears after this list.
  • Verification Requirement: A core component should be that any output from AI must be reviewed and validated by the attorney. This basically codifies the supervision duty into the policy. For example: “Any legal research or draft text generated by AI must be independently checked for accuracy and completeness by the attorney or a qualified team member before it is relied upon or delivered to a client/court.” Some policies call this a “second pair of eyes” rule: the AI is never the sole drafter without human verification.19
  • Informed Consent/Client Communication: The policy might address when lawyers need to inform clients about AI use. Perhaps it says, “If an AI tool will be used in a way that significantly affects the representation or involves client confidential data, the attorney should obtain client consent.” This aligns with the ethical considerations we discussed. It ensures lawyers think about the communication aspect as a formal step, not an afterthought.
  • Compliance with Ethics Rules: A general statement that all use of AI must comply with applicable ethics rules and the firm’s ethical obligations. This could list relevant rules like confidentiality, competence, etc., just to underscore that these are not waived when using AI.
  • Procedure for New Tools or Issues: It can be useful to include a mechanism for evaluating new AI tools or resolving questions. For example, the policy might create an “AI committee” or designate the IT department or ethics partner to review any novel AI application a lawyer wants to try. That way, there’s a checkpoint before someone uses a tool the firm hasn’t vetted. The policy could also encourage lawyers to consult the committee if unsure about a particular use scenario.
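
To make the redaction idea in the “Confidentiality Safeguards” item concrete, here is a minimal, illustrative sketch in Python of the kind of pre-submission scrubbing step a firm might adopt. The function name, patterns, and placeholder labels are assumptions chosen for illustration, not a vetted redaction standard, and an automated pass like this supplements rather than replaces attorney review.

```python
import re

def redact_for_ai(text: str, client_terms: list[str]) -> str:
    """Replace client-identifying details with neutral placeholders
    before any text is sent to an external AI tool (illustrative only)."""
    redacted = text
    # Mask client-specific terms supplied by the attorney (names, matter names, etc.).
    for i, term in enumerate(client_terms, start=1):
        redacted = re.sub(re.escape(term), f"[PARTY_{i}]", redacted, flags=re.IGNORECASE)
    # Mask a few common sensitive patterns; a real policy would define its own list.
    redacted = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", redacted)          # Social Security numbers
    redacted = re.sub(r"\$\s?\d[\d,]*(\.\d{2})?", "[AMOUNT]", redacted)     # dollar figures
    redacted = re.sub(r"[\w.+-]+@[\w-]+(\.[\w-]+)+", "[EMAIL]", redacted)   # email addresses
    return redacted

if __name__ == "__main__":
    sample = "Acme Corp. will pay $2,500,000 to settle; contact j.doe@acme.com."
    print(redact_for_ai(sample, client_terms=["Acme Corp."]))
    # Prints roughly: "[PARTY_1] will pay [AMOUNT] to settle; contact [EMAIL]."
```

Even with a step like this in place, the attorney still decides whether the remaining text is safe to share at all; scrubbing reduces exposure, it does not eliminate the confidentiality analysis.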

In addition to firm policies, the broader ethical guidelines from bar associations form an important framework. Many state bars and courts have now issued opinions or guidelines on AI. We’ve mentioned a few (e.g., North Carolina’s 2024 Formal Ethics Opinion 1 and New Jersey’s preliminary guidelines), and others, such as the New York State Bar Association’s task force report, follow similar lines. These often mirror each other and stress themes like what one article dubbed the “Seven C’s” of ethical AI use: Competence, Confidentiality, Consent, Confirmation, Conflicts, Candor, and Compliance.13

  • Competence – understand the tech (we’ve covered that).
  • Confidentiality – protect client info.
  • Consent – get client consent when needed (we just discussed).
  • Confirmation – verify the output (don’t trust without checking).
  • Conflicts – ensure the AI use doesn’t create a conflict of interest (for example, whether an AI vendor serving both sides of a case with the same data creates a conflict is an emerging concern).
  • Candor – don’t let AI cause you to mislead the court or others (e.g., the fake-cases issue; more generally, don’t present unverified AI output as established authority).
  • Compliance – obey all laws, including privacy laws or regulations that might apply to data (for instance, GDPR for European client data going to an AI tool could be an issue).

Lawyers should be aware of their jurisdiction’s stance on these. ABA and state guidelines are useful resources to consult when drafting internal policies or when deciding on a course of action with AI. They provide a kind of checklist of concerns to address.

Another aspect of “finding the balance” at a policy level is considering insurance and liability. Law firms might check with their malpractice insurance carriers about coverage related to AI usage. Some insurers are starting to ask whether firms use generative AI and what controls are in place. Having a good policy and training program will not only reduce actual risk but might be important for insurance purposes or in defending a claim if something goes wrong.

The overarching idea is that lawyers should not approach AI ad hoc. A wild-west mentality (“let’s try this tool and see what happens”) is what leads to the “damned if you do” disasters. By instituting thoughtful policies and following established guidelines, a firm can enable its lawyers to use AI in a controlled, ethical way.

Responsible Use Best Practices

With knowledge and policy groundwork laid, what does day-to-day responsible AI use look like for a lawyer? Here are some best practices and practical tips that can serve as a compass:

  • Start Small and Low-Risk: When first using an AI tool, try it on a task that is low stakes. For example, use AI to draft a research summary for your own use, then verify it, before ever using AI to create something that goes out the door to a client or court. This builds familiarity and trust in a controlled way.
  • Anonymize Inputs: Wherever possible, strip out or obfuscate client-identifying details or sensitive specifics from the prompts you give an AI. Instead of pasting a whole confidential email, you might ask the AI a more generic question or change names to placeholders. Some AI tools being developed for lawyers allow you to designate certain info as private or use local processing to avoid data leaving your system.
  • Verify, Verify, Verify: This cannot be said enough. Treat AI output as a draft or suggestion. If the AI provides a case citation, pull up the case yourself from Westlaw/Lexis or a trusted source to confirm it exists and says what the AI claims. If the AI writes a section of a brief, review every line, edit for legal accuracy, and ensure the reasoning holds up. If it summarizes facts, cross-check with the source material. Think of AI as an intern who works super fast but has zero accountability; you must provide the accountability.
  • Document Your Process: It may be helpful to keep notes on how you used AI in a matter, just as part of the file. For example, “Used [Tool] to generate initial draft of section X, then verified citations and corrected factual inaccuracies.” This way, if later there’s any question (from a client or court) about your use of AI, you have a record of responsible conduct. Firms could even require attorneys to log AI usage for oversight purposes; a simple illustrative example of such a log entry appears after this list.
  • Stay in Control: Always ensure that you (the lawyer) remain in the driver’s seat. AI can sometimes produce output that is lengthy and nuanced. It’s easy to be lulled into thinking, “Wow, that sounds pretty good, I’ll just use it.” But you need to critically assess: does this align with my client’s position? Is it addressing the right question? Are we conceding anything inadvertently? You might use AI to brainstorm arguments on both sides of an issue, which can be great, but then you decide which arguments to actually pursue. You are the strategist; the AI is a tool. Keep that hierarchy clear.
  • Avoid Over-reliance/Bias: Be cautious not to let the AI’s suggestions narrow your own thinking improperly. For example, if ChatGPT gives you five potential arguments, don’t assume those are the only five; there might be a sixth one it missed. Use AI to supplement your creativity and knowledge, not replace it. Similarly, be aware of potential biases in AI outputs. If an AI tool has trained on a dataset that, say, has very few cases favoring a certain type of plaintiff, it might under-suggest those arguments. Your own legal reasoning and duty to research thoroughly still apply.
  • Continuous Learning: Treat each interaction with AI as an opportunity to learn more about its behavior. If you catch it making a certain kind of mistake, make a mental note (or share with colleagues) so that everyone can watch for that in the future. These tools often have patterns (e.g., maybe you discover it always confuses two similarly named statutes). Knowing these quirks helps in managing them.
  • Use Secure Versions or Settings: If the AI tool or service offers any kind of “privacy mode” or enterprise version that doesn’t retain data, use that for client matters. For example, OpenAI now offers business accounts where they promise not to use your data for training. If your firm subscribes to such a service, use that account. Always prefer the most secure option available, even if it costs money, over a free but more porous version.
  • Human in the Loop: Consider AI as part of a process that always has a human in the loop at critical points. For example, in contract review, you might have AI flag clauses of concern, but then you (or a human team member) examine those flagged clauses to determine if they are indeed an issue and decide what to do about them. The AI can triage or prioritize, but the human makes the judgment call.
  • Limitations Acknowledgment: Know when not to use AI. There are times when traditional legal skills or manual effort is still superior or required. For example, if an issue is extremely novel and no data exists on it, an AI won’t magically produce an answer, but a creative legal analysis by a human might. Or if a task requires personal client counseling, AI cannot replicate emotional intelligence (at least not yet). Responsible use includes recognizing those boundaries. As the adage goes, just because you have a hammer doesn’t mean every problem is a nail. Use AI where it makes sense, but not in every single task.
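
As a companion to the “Document Your Process” practice above, the sketch below shows one way a usage log entry might be captured. It is illustrative only; the field names, the JSON-lines file, and the sample values are assumptions rather than a prescribed format, and a firm’s actual record-keeping should follow its own policy.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One matter-file entry noting how an AI tool was used and verified (illustrative)."""
    matter_id: str              # internal matter or file number
    tool: str                   # which approved AI tool was used
    task: str                   # what the tool was asked to do
    reviewed_by: str            # attorney who verified the output
    citations_verified: bool    # every cited authority independently checked
    client_informed: bool       # whether the client was told, where required
    notes: str = ""
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append one record to a simple JSON-lines log kept with the matter file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_usage(AIUsageRecord(
        matter_id="2025-0142",
        tool="[approved drafting assistant]",
        task="First draft of discovery request outline",
        reviewed_by="A. Attorney",
        citations_verified=True,
        client_informed=True,
        notes="Two suggested authorities could not be located; removed before use.",
    ))
```

A plain-text memo to the file accomplishes the same thing; the point is simply that the verification steps get recorded somewhere contemporaneously.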

By following such practices, lawyers can significantly reduce the ethical risks associated with AI. This is how you avoid being “damned” for doing: you do it, but you do it carefully, ethically, and transparently. Over time, these practices can become second nature, much as we have adapted to other technologies (email, cloud storage) with a set of standard precautions (like double-checking recipients before sending an email, or encrypting sensitive files).

It’s also advisable for firms to periodically review and audit how AI is being used. Maybe after a few months of implementation, bring the team together to discuss what’s working and what issues have arisen. Adjust policies if needed. Responsible adoption is an ongoing process, not a one-time switch.

Ultimately, the mindset to cultivate is one of informed openness. Be open to the new technology and its benefits, but informed about its pitfalls and how to counter them. Lawyers who strike this balance will find that AI truly can enhance their practice, enabling them to serve clients more efficiently and perhaps even more effectively, without triggering ethical problems. It transforms the scenario from “damned if you do, damned if you don’t” into a more optimistic one: if you do it right, you’re not damned at all – you’re ahead of the curve.

Conclusion

The rise of AI in the legal field does present a dilemma, but it’s not an unsolvable one. Lawyers indeed face a double-edged sword: use AI irresponsibly and risk breaching core duties, or shun AI entirely and risk falling behind in competence. However, as we’ve explored, the solution lies in thoughtful, educated adoption. When wielded with care, AI is not a curse on legal ethics but rather a powerful tool that can enhance legal practice.

In many ways, this is reminiscent of past transitions in law. Think of when email became prevalent, or electronic research databases, or even the telephone in an earlier era; each time, lawyers had to adapt their practices and ethical safeguards. AI is a bigger leap, no doubt, but the principle is the same: our professional responsibilities remain the north star guiding how we integrate new technology. As the legal community and regulators have emphasized, lawyers can and should embrace innovation while upholding confidentiality, competence, and integrity. It’s not an either/or proposition. Yes, if you blindly do one or the other, you’re “damned,” but if you integrate AI responsibly, you can avoid the damning consequences altogether.

A key takeaway is that education is paramount. Lawyers must proactively educate themselves about AI’s capabilities and pitfalls, and seek out resources to do so. This is precisely the mission of organizations like 3ITAL, which exists to “ensure that lawyers and firms can work more efficiently while safeguarding client confidentiality and upholding the highest ethical standards.” In other words, 3ITAL helps legal professionals bridge the gap between cutting-edge AI technology and the traditional demands of legal ethics and client service. By providing training, guidelines, AI tool reviews, and a forum for discussion, 3ITAL and similar initiatives are empowering lawyers to step into the AI era with confidence instead of fear. As we navigate this new terrain, such support will be invaluable.

Ultimately, embracing AI responsibly can be a win-win: clients get more efficient and arguably even more insightful representation, and lawyers can alleviate drudgery and focus on higher-level aspects of their work. Imagine spending less time spinning wheels on tedious tasks and more time crafting strategy, negotiating, or counseling clients; AI can help make that shift possible. But it will only be a win-win if lawyers implement AI with their eyes open: eyes open to data security, to output verification, to bias mitigation, and to ongoing ethical compliance. In closing, the legal profession stands at a crossroads with AI. Those who ignore the technology risk irrelevance, and those who embrace it carelessly risk catastrophe. The sweet spot is responsible adoption. By being neither luddites nor lemmings, lawyers can harness AI’s benefits while staying true to the profession’s ethical foundation. The path forward is one of balance, maintaining our role as diligent, trusted counselors and advocates even as we leverage algorithms and machines to assist us. If we achieve that balance, we won’t be at risk at all. Rather, we’ll enhance our practice, better serve our clients, and uphold the finest traditions of the law in the digital age.

  1. Rachel Rothwell, “Will AI spark a new type of negligence claim?”, Law Society Gazette (July 17, 2024), at https://www.lawgazette.co.uk/commentary-and-opinion/will-ai-spark-a-new-type-of-negligence-claim/5120363.article
  2. Association of Corporate Counsel, “Practical Lessons from the Attorney AI Missteps in Mata v. Avianca” (Aug. 8, 2023), at https://www.acc.com/resource-library/practical-lessons-attorney-ai-missteps-mata-v-avianca
  3. Bob Ambrogi, “ABA Votes to Urge Legal Profession to Address Emerging Legal and Ethical Issues of AI”, ABA SciTech Section, Report to ABA House of Delegates (Aug. 2019), at https://www.lawnext.com/2019/08/aba-votes-to-urge-legal-profession-to-address-emerging-legal-and-ethical-issues-of-ai.html
  4. ABA House of Delegates Adoption of Resolution 112 (Aug. 2019), at https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf
  5. ABA House of Delegates Adoption of Resolution 112 (Aug. 2019), explaining that AI tools may require client confidences to be “shared” with third-party vendors, at https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf
  6. Foster Sayers, “Ethical Concerns with Using ChatGPT to Provide Legal Services”, Paragon Legal (Mar. 6, 2023), at https://paragonlegal.com/insights/ethical-concerns-with-using-chatgpt-to-provide-legal-services/
  7. “Is It Legal for Lawyers to Use ChatGPT? Understand the Boundaries”, Spellbook (Oct. 23, 2024), at https://www.spellbook.legal/learn/is-it-legal-for-lawyers-use-chatgpt
  8. North Carolina State Bar, 2024 Formal Ethics Opinion 1 (Jan. 2024), at https://www.ncbar.gov/for-lawyers/ethics/adopted-opinions/2024-formal-ethics-opinion-1/
  9. ABA House of Delegates Adoption of Resolution 112 (Aug. 2019), explaining that under Rules 5.1 and 5.3, lawyers are obligated to supervise the work of AI utilized in the provision of legal services, and to understand the technology well enough to ensure compliance with their ethical duties and to ensure that the work product produced by AI is accurate and complete and does not create a risk of disclosing client confidential information, at https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf
  10. Sara Merken, “New York lawyers sanctioned for using fake ChatGPT cases in legal brief”, Reuters (June 22, 2023), at https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
  11. Sara Merken, “New York lawyers sanctioned for using fake ChatGPT cases in legal brief”, Reuters (June 22, 2023), at https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
  12. Thomson Reuters (Legal) Blog, “Generative AI and ABA ethics rules” (2023), at https://legal.thomsonreuters.com/blog/generative-ai-and-aba-ethics-rules/
  13. Marcin M. Krieger & David R. Cohen, “Navigating the seven C’s of ethical use of AI by lawyers”, Reuters (Westlaw Today) (Dec. 20, 2024), at https://www.reuters.com/legal/legalindustry/navigating-seven-cs-ethical-use-ai-by-lawyers-2024-12-20/
  14. Dentons, “Responsible use of ChatGPT by Lawyers” (June 8, 2023), at https://www.dentons.com/en/insights/articles/2023/june/8/responsible-use-of-chat-gpt-by-lawyers
  15. ABA House of Delegates Adoption of Resolution 112 (Aug. 2019), explaining that a lawyer’s duty of communication under Rule 1.4 includes discussing with his or her client the decision to use AI in providing legal services, that the lawyer should obtain approval from the client before using AI, and that this consent must be informed, at https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf
  16. Robert Ambrogi, “Tech Competence”, LawSites, at https://www.lawnext.com/tech-competence
  17. Ivy B. Grey, “Not Competent in Basic Tech? You Could Be Overbilling Your Clients — and Be on Shaky Ethical Ground” (2017), at https://legal.intelligentediting.com/blog/not-competent-in-basic-tech-you-could-be-overbilling-your-clients-and-be-on-shaky-ethical-ground/
  18. 3ITAL (International Institute for Intelligent Technology Adoption in the Law), mission statement on bridging the gap between AI innovation and legal practice needs, ensuring efficiency while upholding confidentiality and ethics, at 3ital.org
  19. Pamela Langham, “Generative AI Policies in Law Firms, A Critical Analysis. Part One: The Case for Implementing an AI Policy”, Maryland State Bar Association (Jan. 8, 2024), at https://www.msba.org/site/site/content/News-and-Publications/News/General-News/Generative_AI_Policies_in_Law_Firms_A_Critical_Analysis.aspx