
Artificial intelligence (“AI”) is a rapidly evolving field focused on developing machines capable of performing tasks that traditionally require human intelligence. These tasks include learning, problem-solving, reasoning, perception, and language understanding. Not all AI systems are equal: they vary in complexity and functionality, and some have existed for years yet are only now receiving the attention they deserve.
AI has become an integral part of everyday life, utilized in appliances, vehicles, mobile phones, and various software applications, and it is directly accessible to consumers via the web. However, despite AI’s accessibility, its capabilities are poorly understood by the general public. In the legal field, due to AI’s widespread integration into various systems—whether for personal use, employee applications, or client interactions—it is crucial to understand how AI operates, appreciate its benefits, recognize its inherent legal and corporate compliance risks, and master risk-mitigation strategies for AI use in this brave new world.
How AI Works
The primary categories of AI systems include rule-based AI, machine learning, deep learning models (“DLM”), natural language processing (“NLP”), generative AI, and reinforcement learning models. To better understand considerations for AI use and discuss them with more nuance, it is helpful to distinguish between these foundational models.
- Rule-based AI follows a strict “if-then” decision-making system. Commonly seen in customer service chatbots, it operates based on predefined instructions, much like a flowchart or recipe, where a specific input triggers a set response (see the short sketch following this list).
- Machine learning identifies patterns in data and improves its decision-making over time. Netflix’s recommendation system, for example, uses machine learning to refine content suggestions based on user preferences and viewing history.
- Deep learning models (DLM) are an advanced form of machine learning that mimics human brain functions to process information. Tesla’s Full Self-Driving and Autopilot features utilize DLM to analyze real-time road conditions and improve driving performance with experience, feeding data back to a central system for continued training.
- Natural language processing (NLP) enables computers to understand and respond to human language, both spoken and written. Virtual assistants like Siri and Alexa rely on NLP to interpret questions and provide relevant answers.
- Generative AI creates new content, such as text, images, or music, based on patterns learned from existing data. For example, ChatGPT generates human-like responses, while AI-powered art programs, like Midjourney, produce original visual content.
- Reinforcement learning operates on a reward-and-penalty system, much like training a dog. These AI models learn through trial and error, improving their decision-making based on feedback.
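To make the contrast concrete, the short sketch below shows the rule-based “if-then” pattern in plain Python. The keywords and canned responses are invented for illustration; a production chatbot would use a far larger rule set (or one of the learned models described above), but the core mechanic is the same: a specific input triggers a set response.

```python
# Minimal rule-based responder: each keyword (the "if") maps to a fixed reply (the "then").
# The rules below are illustrative placeholders, not any real company's script.
RULES = {
    "hours": "We are open 9 a.m. to 5 p.m., Monday through Friday.",
    "return": "Items may be returned within 30 days with a receipt.",
    "agent": "Transferring you to a live agent.",
}

def respond(message: str) -> str:
    """Return the canned reply for the first keyword found in the message."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't understand. Type 'agent' to reach a person."

print(respond("What are your hours?"))      # -> store-hours reply
print(respond("I want to return an item"))  # -> returns reply
```

Unlike the learned models discussed next, this system never changes its behavior unless a person edits the rules.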
For this article’s purposes, we will focus on next-generation models that meet the minimum thresholds of NLP and generative AI. Specifically, we will examine hybrid AI models in corporate settings, such as recurrent neural networks, convolutional neural networks, and transformer-based large language models like ChatGPT.
Benefits
AI’s applications extend far beyond grammar correction and content summarization. Hybrid AI models are increasingly embedded in corporate environments for applications such as quality control, e-discovery, document review, risk mitigation, recruitment and onboarding, and fraud detection. Businesses leverage AI to ensure compliance with international laws, local regulations, and internal policies. AI can also surface pain points and gaps in preexisting policies or regulations and suggest supplemental standards, all while enhancing efficiency. In addition, AI is used to identify trends and calculate statistical probabilities within complex datasets.
The rapid integration of AI into corporate settings is accelerating, with no signs of slowing down. Those in the legal profession have numerous opportunities to capitalize on AI’s capabilities, making it essential to understand where and how AI operates within the profession.
Risks and Legal Considerations
Despite AI’s benefits, significant risks accompany its use. Bias and legal concerns can arise from the data sources used to train these complex models. Limited or siloed training datasets can create inherent biases, leading to skewed outputs. For example, if historical data used to develop an AI model lacks diversity, the AI will reflect those limitations in its responses. Even open data reservoirs connected to the web can introduce inaccuracies or misinformation. Additionally, models trained on copyrighted or proprietary materials pose risks of intellectual property infringement and accidental plagiarism.
Authentication has grown far more difficult as AI-based systems have advanced, allowing malicious actors to exploit the authentication gap and opening the door to other evasive tactics. For example, AI can be used to create or mimic contracts, agreements, and other legal or company documentation. Misinformation and deepfake technology present additional high-risk threats. AI can generate fraudulent press releases, create bots to plague corporate social media accounts, produce fake customer reviews, and create voice-cloned content—leading to financial and reputational damages.
Legal cases such as Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc.[1] have set groundbreaking precedents regarding the application of copyright laws in AI training. In a February 2025 decision in that case, a court for the first time rejected the application of the fair use doctrine to the use of copyrighted original content in training an AI system. As the judiciary struggles to apply case law and regulatory standards to AI-related cases that may not fit the historical mold, attorneys need to pay attention to new judicial interpretations of historically tested case law and statutes.
AI’s presence has not only rocked the judiciary but has also changed the way that lawyers conduct their own work, raising practical and ethical concerns about the usage of AI in the legal industry that have been discussed in ethics opinions or guidance from the American Bar Association and many state bars. Two key examples:
- Lawyers are using generative AI to submit briefs, some of which have contained AI hallucinations—that is, citations of cases that never existed. This has resulted in numerous sanctions and efforts by the judiciary to address lawyers’ misuse of AI.
- Lawyers are leveraging AI to prepare privilege logs, creating a potential scenario for inadvertently disclosing privileged information when they feed sensitive data into AI systems. Courts have raised confidentiality and ethical concerns over this practice.
Ethical and confidentiality concerns exist not only in the courts but also at varying corporate levels. With generative AI providing content, we have started to see major challenges to authorship and ownership rights. Furthermore, the information companies keep and store, and how AI and the corporations themselves use that information, can begin to blur our traditional understandings of corporate liability and responsibility.
Legal Hypotheticals
Two legal scenarios can help illustrate AI’s complexities:
- The Black Mirror Conundrum. A company’s terms and conditions grant it extensive rights over user data, including the ability to create AI-generated content based on customer likenesses and behaviors. This raises questions about disclosure sufficiency and the legality of profiting from personal data.
- The Double Cross. A fraud protection company uses client data to enhance its AI algorithms across multiple banking customers. This scenario raises questions as to whether such data usage violates contractual agreements, whether AI-generated insights constitute proprietary content, and whether there is a clear divide between the original source data used to train AI models and the output data that the models create.
These hypotheticals highlight the risks of AI by questioning the legality of using customer/consumer data to produce content that the business uses for its own purposes, whether for pecuniary gain or not. They also bring attention to ethical and contractual implications of leveraging client/employee data to enhance AI models across multiple entities. Most importantly, these hypotheticals bring to light the numerous legal gray areas that we may soon have to navigate—and that some already are navigating.
AI Risk-Mitigation and Forensic Considerations
It is critical for businesses to know how AI works before they begin to leverage it. However, the controls in place to govern its use are just as important. Leveraging AI can enhance productivity and innovation; however, it also introduces new risks that must be proactively addressed. To that end, robust governance frameworks are crucial to mitigating unauthorized or unethical AI usage.
Businesses can significantly reduce exposure to AI-related risks by implementing comprehensive compliance measures, such as Employer Device Management (“EDM”) and Mobile Device Management (“MDM”) systems. These technologies enable organizations to regulate access to AI tools and third-party applications across various devices—including computers, tablets, and mobile phones. With customizable access controls, companies can restrict usage to approved platforms and simultaneously monitor user activity beyond corporate domains, ensuring traceability and accountability.
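As a rough illustration of what such access controls can look like in practice, the sketch below expresses a hypothetical AI-tool policy as a simple Python structure with a single enforcement check. The field names, approved gateway, and blocked domains are assumptions made for illustration; actual EDM and MDM platforms each use their own configuration schemas and enforcement mechanisms.

```python
# Hypothetical AI-tool access policy of the kind an EDM/MDM platform might enforce.
# Field names, the approved gateway, and the blocked domains are illustrative assumptions.
AI_TOOL_POLICY = {
    "approved_platforms": ["ai-gateway.internal.example.com"],  # company-sanctioned AI portal (hypothetical)
    "blocked_domains": ["chat.openai.com", "claude.ai", "gemini.google.com"],
    "require_managed_device": True,   # only enrolled corporate devices may reach AI tools
    "log_user_activity": True,        # retain access logs for traceability and accountability
}

def is_request_allowed(domain: str, device_is_managed: bool) -> bool:
    """Apply the policy to a single outbound request to an AI service."""
    if AI_TOOL_POLICY["require_managed_device"] and not device_is_managed:
        return False
    if domain in AI_TOOL_POLICY["blocked_domains"]:
        return False
    return domain in AI_TOOL_POLICY["approved_platforms"]

print(is_request_allowed("chat.openai.com", device_is_managed=True))                  # False: blocked platform
print(is_request_allowed("ai-gateway.internal.example.com", device_is_managed=True))  # True: approved gateway
```

The design choice here is an allowlist rather than a blocklist alone: anything not expressly approved is denied, which is the posture most compliance teams prefer for emerging tools.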
In the event of litigation or internal investigations, these systems facilitate efficient application of legal holds and data recovery processes. By maintaining control over corporate and employee-owned devices, EDM and MDM technologies allow for secure data preservation while minimizing the need for costly and time-consuming physical device collections. Importantly, they also support privacy-preserving mechanisms, balancing investigative needs with employee data protection.
For organizations lacking centralized device management infrastructure, it is imperative to understand where AI-related data may reside. Beyond conventional storage mediums—such as hard drives, flash drives, and solid-state drives—many AI platforms store user queries and interactions in the cloud. While access to this data often requires user logins, some platforms allow anonymous interaction, complicating attribution. In such cases, forensic examiners rely on alternative artifacts including browsing histories, system activity logs (e.g., file creation, copy and paste events), and audit trails to trace AI usage. These indicators, recoverable through forensic imaging, can include both active and deleted data depending on the scope of forensic acquisition.
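To illustrate one of these alternative artifacts, the sketch below performs a simplified triage of a browsing-history export for visits to well-known AI platforms. It assumes the examiner has already exported the history to a CSV file with “url” and “visit_time” columns (both assumed names); real investigations rely on dedicated forensic tooling and preserve a full chain of custody.

```python
# Simplified triage of an exported browsing-history CSV for visits to AI platforms.
# The CSV column names ("url", "visit_time") and the domain list are assumptions for illustration.
import csv

AI_DOMAINS = ("chat.openai.com", "claude.ai", "gemini.google.com", "midjourney.com")

def find_ai_visits(history_csv: str) -> list[dict]:
    """Return history rows whose URL points at a known AI platform."""
    hits = []
    with open(history_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            url = row.get("url", "")
            if any(domain in url for domain in AI_DOMAINS):
                hits.append({"time": row.get("visit_time", ""), "url": url})
    return hits

# Example usage (file name is hypothetical):
# print(find_ai_visits("exported_history.csv"))
```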
It is also critical to recognize that AI itself is not inherently harmful. Risk arises from its misuse, lack of oversight, or uninformed application. Therefore, alongside technical controls, companies must foster a culture of ethical AI use through clear policies, continuous education, and employee accountability. Developing and disseminating AI-specific training programs empowers employees to understand not only the functional aspects of AI but also the ethical, legal, and business implications of its use. Such training should cover topics including data privacy, intellectual property considerations, acceptable use, and bias mitigation. Ethical use agreements and internal awareness campaigns can further reinforce responsible behavior, placing shared responsibility on both the organization and its workforce.
On the reactive side, companies must be prepared to respond swiftly to potential incidents. Utilizing the Electronic Discovery Reference Model (“EDRM”), organizations can deploy litigation holds and document preservation strategies with minimal disruption. Digital forensic techniques complement these processes by enabling thorough investigations through metadata analysis, device event tracking, and the use of AI-detection tools such as GPTZero. Because AI-related content can be distributed across cloud storage, application logs, system metadata, and multimedia artifacts, a multidirectional forensic approach is essential for comprehensive risk assessment and incident resolution.
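As a small illustration of the metadata-analysis piece, the sketch below captures basic filesystem timestamps and a cryptographic hash for a single file, the kind of minimal preservation record an examiner might generate before deeper review. Real e-discovery and forensic platforms capture far richer metadata from verified images; this is only a sketch of the concept.

```python
# Minimal preservation record for one file: filesystem timestamps plus a SHA-256 hash.
# Illustrative only; real forensic workflows use write-blocked imaging and dedicated tools.
import hashlib
import os
from datetime import datetime, timezone

def preservation_record(path: str) -> dict:
    """Return modification/access timestamps (UTC) and a SHA-256 digest for a file."""
    stat = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    def to_iso(ts: float) -> str:
        return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

    return {
        "path": path,
        "modified_utc": to_iso(stat.st_mtime),
        "accessed_utc": to_iso(stat.st_atime),
        "sha256": digest,
    }

# Example usage (path is hypothetical):
# print(preservation_record("contract_draft.docx"))
```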
Effective AI governance demands a combination of proactive policies, technical enforcement, employee education, and forensic readiness. By integrating these components, businesses can harness the benefits of AI while safeguarding against its potential misuse.
Conclusion
AI presents both opportunities and challenges across various industries and legal spaces. While it enhances automation, decision-making, and efficiency, it also introduces legal, ethical, and security risks that organizations and the judiciary must address. If businesses and the legal profession are constantly playing catch-up with technological advances such as AI, they inherently lose sight of laying a sound foundation to govern AI’s use. By implementing strict compliance policies, monitoring AI-generated content, and staying informed about evolving legal frameworks, businesses can work to harness AI’s potential while mitigating its inherent risks. As we look to the future, our considerations should focus on responsible usage, not exclusion. Understanding where and how to engage AI will pave the road for businesses to use it ethically, safely, and responsibly.
This article is related to a CLE program titled “Forensic, E-Discovery, and Legal Compliance in the Brave New World of AI” that took place during the ABA Business Law Section’s 2025 Spring Meeting. To learn more about this topic, listen to a recording of the program, free for members.
No. 1:20-cv-613-SB (D. Del. Feb. 11, 2025); see also Thomson Reuters Enter. Ctr. GmbH v. Ross Intel. Inc., No. 1:20-cv-613-SB (D. Del. May 23, 2025) (highlighting the importance and difficulty of legal questions about AI’s copyright implications when certifying the case for interlocutory appeal).