Today's radio show discussion focused on how artificial intelligence can be a helpful starting resource in legal work but cannot replace attorney analysis, judgment, or the human conversations needed to uncover important facts and legal rights. The speakers emphasized that AI often misses jurisdiction-specific issues and the nuances of estate planning and family matters, and can even produce inaccurate or fabricated legal citations, creating ethical and professional risks if relied on too heavily. Their main message was that AI should be used only as a supplement to competent legal counsel, not as a substitute, and that clients should bring any AI-generated materials to a lawyer for review.
AI as a legal resource, not a replacement for counsel
- The discussion opened with the idea that artificial intelligence is another layer of technology added to the internet era, useful as a resource but not something lawyers can rely on without analysis. The speakers compared AI to internet-found legal forms: helpful for familiarity, but not sufficient on its own because legal nuances still need professional review. [00:56]
- Cecil Harvell emphasized that AI belongs in the category of tools that can inform and assist, but the output must be reviewed, checked, and refined by a lawyer before it can be used responsibly. [01:32]
Limits of online forms and generic documents
- The speakers noted that people have long been able to download wills, powers of attorney, deeds, and debt-related pleadings online, but those one-size-fits-all documents often fail to address unique facts such as second marriages, spousal rights, contingent beneficiaries, or what happens if a beneficiary dies first. [01:32]
- The key concern was that using generic documents without legal advice can leave major gaps that only become visible later, when a dispute or death occurs. [01:32]
How AI can assist legal research at a basic level
- Wes Collins described AI as a “surface resource” that can help generate a preliminary list of possible claims or legal issues from a fact pattern, especially in more complex cases where the facts may implicate less obvious causes of action. [05:??]
- He said he would use AI only as a broad-brush starting point, then move to actual case law research to verify whether the AI output is correct and relevant in North Carolina. [07:28]
- He also stressed the importance of limiting searches to North Carolina, because otherwise AI may return authorities from other states that do not apply locally. [07:28]
Ethical and professional risks for attorneys
- The conversation turned to the ethical restrictions on lawyers who use AI to draft legal work product and present it as their own. The speakers noted that some attorneys have been disciplined after submitting AI-generated material without verifying it. [08:22]
- A Georgia Supreme Court example was discussed in which fabricated citations and quotes were reportedly found in a brief generated with AI, demonstrating how confidently AI can produce false information. [08:22]
- The point was that lawyers are bound by professional conduct rules and can face serious consequences, including reprimand or loss of license, if they rely on AI without proper review. [11:01]
AI and contract drafting: useful start, risky finish
- Wes Collins said AI may be useful as a time-saving starter for standard contract drafting, but not as a final substitute for attorney review. [11:01]
- He explained that AI-generated contracts can miss essential protections such as indemnification language, asset-vs.-stock sale terms, attorney-fee provisions, or other key clauses that matter if a dispute later arises. [11:01]
- The speakers warned that people may assume AI can produce a complete legal document simply because it is fast and inexpensive, but unresolved issues often create greater problems and expense later. [11:01]
Problems caused by “document-in-a-box” reliance
- The speakers said they have repeatedly seen internet-generated or fill-in-the-blank legal documents lead to larger and more expensive disputes than properly drafted, attorney-prepared documents. [15:13]
- They framed this as a classic “ounce of prevention, pound of cure” issue: spending a little more time and money at the front end often prevents much larger costs on the back end. [15:33]
The human element in legal practice
- A recurring theme was that legal practice depends heavily on human conversation, judgment, and follow-up questions, not just facts on paper. [17:32]
- The hosts discussed how attorneys often discover that clients have made assumptions about inheritance, family rights, or document effects that are not legally correct, and those issues only surface through conversation. [18:??]
- They highlighted a probate example involving a surviving spouse asking for furniture, where the real issues involved elective share rights and year’s allowance—things AI would not have uncovered because the client did not know to ask about them. [19:45]
Estate planning and probate examples
- Cecil Harvell explained that the human interview process is essential in estate planning because it reveals hidden issues such as spousal rights, beneficiary contingencies, and family circumstances that a generic form or AI prompt would likely miss. [20:22]
- The discussion also referenced more complicated probate disputes involving missing information, motions to compel, clerk hearings, promissory notes, and appeals, underscoring that legal cases often evolve in ways AI cannot anticipate from a simple prompt. [20:22]
- Other examples included prenuptial agreements, assumptions about children inheriting automatically, and situations where clients must be asked uncomfortable but necessary questions to avoid future legal problems. [22:20]
Real-world law practice cannot be automated away
- The speakers described the day-to-day practice of law as deeply human, involving conversations with therapists, family members, and other parties in emotionally complicated matters such as late-life marriages and disputed estate administration. [23:13]
- They noted that law clerks witnessing these interactions quickly see how much of legal work depends on reading people, understanding family dynamics, and using judgment that AI cannot replicate. [23:13]
AI may reduce some front-end effort, but can increase downstream cost if misused
- The host observed that AI is often marketed as a way to reduce time and cost, but the speakers agreed that misuse can lead to more delay and more expense when problems appear later. [26:39]
- They reiterated that the real issue is not merely drafting a document, but asking the right question at the right time. A single attorney question can uncover a much larger issue, or a much better solution, than the client initially realized. [27:09]
Final guidance to listeners
- Wes Collins advised listeners to seek competent legal counsel if they are at a legal crossroads, and then let that counsel use AI appropriately as a support tool rather than as the entire answer. [28:27]
- Cecil Harvell added that clients should feel comfortable bringing AI-generated information, printouts, or research to their lawyer; if a lawyer is intimidated by that, the problem lies with the lawyer, not the client. [29:27]
- He closed by saying AI is welcome as a positive development, but like the internet more broadly, it brings both benefits and drawbacks that must be understood and managed carefully. [30:01]