2/1/2025 - Artificial intelligence has been described as the new “industrial revolution”—and for good reason. Despite public sentiment ricocheting daily, practically every industry is now experimenting with this technology to solve its most pressing challenges and streamline operations. But in the legal world, finding problems, identifying the root cause, and strategizing a path forward are far more important than the grunt work of actually solving them.
2/1/2025 - Despite the cautionary tales of law firm AI usage, there is still no comprehensive insurance plan that would cover law firms and attorneys against errors that generative AI may introduce into their work. Nevertheless, there are some steps lawyers and firms can take to make sure they’re covered.
1/30/2025 - Several law enforcement agencies have used or are currently using artificial intelligence to process data on how crime varies geographically, looking at trends to inform where they police. Beyond influencing where officers patrol, predictive policing algorithms can influence who officers ultimately search when the output of these algorithms is used as part of a probable cause analysis or other justification for conducting a search. AI output is difficult to analogize to existing justifications for conducting searches, creating opacity around the constitutionality of its use, an issue that is exacerbated when AI is used with minimal oversight or to inform emergency decisions. Increased access to the datasets that predictive policing algorithms use, coupled with specific policies ensuring appropriate use of AI output, can help to ensure that predictive policing algorithms do not take on an outsized importance and that human reason maintains its place in deciding which searches will be conducted.
1/30/2025 - Facial recognition technology (FRT) is increasingly used by law enforcement, but concerns remain about privacy violations, racial bias, and potential abuses. The history of FRT shows significant advancements, leading to widespread use in law enforcement, entertainment, healthcare, and schools. State laws governing FRT vary, with some states implementing strict regulations while others have banned its use by government agencies. Recent court rulings emphasize the need for transparency and discovery related to FRT in criminal cases.
12/31/2023 - Chief Justice John G. Roberts Jr.'s year-end report focused on the new technology.
5/23/2024 - A new study reveals the need for benchmarking and public evaluations of AI tools in law.
7/29/2024 - The American Bar Association Standing Committee on Ethics and Professional Responsibility released its first formal opinion covering the use of generative artificial intelligence (GAI) in the practice of law, pointing out that model rules related to competency, informed consent, confidentiality, and fees principally apply.
7/10/2024 - Explore the complex and multifaceted nature of artificial intelligence. AI is a broad term encompassing technologies that enable computers to perform tasks typically requiring human intelligence, such as recognizing faces, understanding speech, and driving cars. However, the definition of AI is contentious and varies widely, leading to disagreements about its capabilities and implications. The rapid evolution of AI from experimental stages to consumer products raises questions about trust, application, and societal impact. This article suggests that while AI is a transformative technology, there is still no clear consensus on what it constitutes, even among its creators, and emphasizes the importance of public engagement in shaping the future of AI.
4/29/2024 - In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The court, having considered the proposed rule, the accompanying comments, and the use of artificial intelligence in legal practice, has decided not to adopt a special rule regarding the use of artificial intelligence in drafting briefs at this time. Parties and counsel are reminded of their duties regarding their filings before the court under Federal Rule of Appellate Procedure 46(b)(1)(B). Parties and counsel are responsible for ensuring that their filings with the court, including briefs, are carefully checked for truthfulness and accuracy, as the rules already require. “I used AI” will not be an excuse for an otherwise sanctionable offense.
Jan 2019 - “Intelligent machines” have long been the subject of science fiction. However, we now live in an era in which artificial intelligence (AI) is a reality, and it is having very real and deep impacts on our daily lives. From phones to cars to finances and medical care, AI is shifting the way we live. AI applications can be found in many aspects of our lives, from agriculture to industry, communications, education, finance, government, service, manufacturing, medicine, and transportation. Even public safety and criminal justice are benefiting from AI. For example, traffic safety systems identify violations and enforce the rules of the road, and crime forecasts allow for more efficient allocation of policing resources. AI is also helping to identify the potential for an individual under criminal justice supervision to reoffend. Research supported by NIJ is helping to lead the way in applying AI to address criminal justice needs, such as identifying individuals and their actions in videos relating to criminal activity or public safety, DNA analysis, gunshot detection, and crime forecasting.
5/23/2024 - Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law.
4/6/2024 - This Report by the New York State Bar Association Task Force on Artificial Intelligence examines the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession.
2/17/2023 - Artificial intelligence (AI) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more and more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create models too complex for people to understand, or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been in criminal cases, in which a person’s life, liberty, and public safety can be at stake. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed. Both champions and critics of AI, however, mistakenly assume that we inevitably face a central trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error prone, and it may also reflect pre-existing racial and socio-economic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to understand it.
3/29/2024 - Findings Of Fact and Conclusions of Law Re: Frye Hearing on Admissibility of Videos Enhanced By Artificial Intelligence