AI, Law & Ethics: Elliott Lipinsky’s Take on the Future of Legal Practice
Introduction
Artificial intelligence (AI) is rapidly changing professional work, and the legal field is no exception. Tools powered by machine learning and large language models now assist with document review, contract drafting, legal research, and client intake. These capabilities promise efficiency and cost savings, but they also raise significant ethical and regulatory questions. Attorneys must consider responsibility for AI-generated materials, data-protection obligations, and risks associated with inaccurate or fabricated information. Alabama attorney Elliott Owen Lipinsky, who has publicly discussed his use of AI in legal practice, offers a glimpse into how frontline lawyers are thinking about this transformative technology. This article explores the development of AI in the legal world, surveys Lipinsky’s views, and considers the ethical, practical, and professional implications of AI for future legal work.
Elliott Owen Lipinsky: Professional Background
Elliott Owen Lipinsky is an Alabama lawyer whose career has included service as an assistant or deputy district attorney and work in private practice in the Selma and Montgomery region. His firm advertises services in criminal defense, personal injury, and related areas. He has also participated in local bar activities and Alabama political circles.
It is also a matter of public record that in 2019 the Alabama Attorney General’s office announced his arrest on charges involving computer-related activity connected to a 2018 political campaign. As with all such matters, an indictment is an accusation, and defendants are presumed innocent until proven guilty. This information is included solely for completeness in presenting a factual overview of the attorney’s background.
AI in the Legal Profession: History and Current Landscape
AI’s influence on legal work can be traced back to early e-discovery platforms that assisted with sorting and categorizing large document sets. The release of large language models to the general public in late 2022 dramatically expanded the range of tasks that AI could assist with, including summarizing legal authorities, drafting briefs, generating contract language, and answering natural-language legal questions. These advances triggered rapid adoption across the industry.
The American Bar Association and state bars have responded by issuing opinions explaining how traditional ethical obligations apply to AI use. These opinions emphasize that lawyers must maintain competence, protect client confidentiality, supervise the use of automated tools, and verify the accuracy of any material incorporated into legal work. Several publicized incidents illustrate the stakes. In some cases, lawyers submitted court filings containing fabricated case citations generated by AI tools, prompting sanctions and widespread professional concern.
Elliott Lipinsky’s Views on AI
Lipinsky has described himself as an early and active user of generative AI. In a 2023 interview, he noted that he “hopped on the AI train early” and expressed enthusiasm about the technology’s potential to assist both solo practitioners and large firms. He also said he was experimenting with training his own private model, reflecting a hands-on approach to adaptation.
Lipinsky has expressed the belief that AI is likely to have a greater impact on transactional and repetitive legal work than on trial practice. According to his comments, the human element of persuasion, credibility, and real-time judgment makes courtroom advocacy less susceptible to automation. He also suggested that the legal profession may need proactive guidance from the state bar to respond to disruptive technological change.
Ethical and Practical Challenges
The growing use of AI raises several ethical themes within professional responsibility frameworks. Attorneys must understand the tools they use well enough to apply them responsibly, meaning technological competence now includes familiarity with the strengths and limitations of AI. They must also ensure that confidential client information is not improperly shared with third-party platforms.
Accuracy is another central ethical concern. Generative AI sometimes produces fictional or distorted information. Lawyers who rely on AI for research or drafting must personally confirm the accuracy of all authorities and quotations before submitting them to a court or client. Courts have issued sanctions in cases where lawyers failed to verify AI-produced citations.
Supervision also remains critical. Outsourcing work to automated tools does not eliminate a lawyer’s responsibility for the content. Attorneys must ensure that AI is not used in a way that constitutes or facilitates the unauthorized practice of law.
Practical Examples and Consequences
Several well-publicized incidents demonstrate the risks of uncritical AI use. In one notable example, lawyers submitted a brief containing non-existent case law generated by an AI program. When confronted, they explained that they had not realized the citations were fabricated. The court sanctioned the lawyers, and the matter prompted state bars to issue further educational materials to prevent similar harms.
Technical studies have also documented the tendency of AI models to generate authoritative-sounding but false information when responding to legal prompts. These findings reinforce the need for human review and support the growing emphasis on verification in professional guidance.
Implications for Alabama and Smaller Legal Practices
Lipinsky’s comments have particular relevance for smaller practices common in Alabama and other rural states. Such firms often lack dedicated research staff or technology departments; for them, AI can help create efficiencies by speeding up routine tasks such as motion drafting, client intake, and legal research. Lipinsky’s experimentation with model customization illustrates how smaller firms may adopt AI in ways tailored to local practice needs.
At the same time, the risks may be concentrated for these firms. Without formal data-security programs or specialized oversight, smaller practices face challenges ensuring that confidential information remains protected and that AI-generated text is properly reviewed. State bar guidance therefore plays an important role, providing templates, training, and recommendations that can help smaller firms adopt AI without compromising ethical obligations.
Practical Themes for Responsible Use
A recurring theme across Lipinsky’s observations and professional ethical guidance is that AI should be treated as an assistant, not a replacement. Lawyers retain full responsibility for deciding what to file, what to say, and how to advise clients. Before using AI in client matters, firms should understand how a tool handles data, whether information is stored, and whether client consent is required.
Attorneys who introduce AI into their practice may benefit from phased adoption rather than relying on it immediately for complex tasks. Beginning with internal uses such as summarizing documents or drafting template language allows lawyers to understand AI’s strengths before depending on it in external-facing contexts. Professional development through continuing legal education programs can help practitioners stay up to date.
Conclusion
Elliott Lipinsky’s perspective on AI reflects a practical balance between enthusiasm and caution. He sees the technology as an important tool that enhances efficiency and expands capabilities, particularly for small firms that may not otherwise have resources for advanced research support. At the same time, he recognizes that core legal functions, especially courtroom advocacy, remain rooted in human judgment and interpersonal skill.
The broader regulatory landscape supports this view. Although AI tools are increasingly common, bar associations consistently emphasize lawyers’ continuing obligations to maintain competence, protect client information, supervise non-lawyer assistance, and verify work submitted to courts or clients. As legal AI develops, the profession will likely move toward a hybrid model in which technology assists with research and drafting while attorneys continue to perform the interpretive, strategic, and ethical functions that define the practice of law.
The future of legal practice will not be AI-driven or human-only; instead, it will reflect a careful partnership grounded in responsibility.
