A renewed War on Drugs, harsher charging policies, stepped-up criminalization of immigrants — in the current climate, joining NACDL is more important than ever. Members help support the only national organization working at all levels of government to ensure that the voice of the defense bar is heard.
Take a stand for a fair, rational, and humane criminal legal system
Contact members of Congress, sign petitions, and more
Help us continue our fight by donating to NFCJ
Help shape the future of the association
Join the dedicated and passionate team at NACDL
Increase brand exposure while building trust and credibility
NACDL is committed to enhancing the capacity of the criminal defense bar to safeguard fundamental constitutional rights.
NACDL harnesses the unique perspectives of NACDL members to advocate for policy and practice improvements in the criminal legal system.
NACDL envisions a society where all individuals receive fair, rational, and humane treatment within the criminal legal system.
NACDL’s mission is to serve as a leader, alongside diverse coalitions, in identifying and reforming flaws and inequities in the criminal legal system, redressing systemic racism, and ensuring that its members and others in the criminal defense bar are fully equipped to serve all accused persons at the highest level.
12/31/2023 - Chief Justice John G. Roberts Jr.'s year-end report focused on artificial intelligence and its implications for the federal judiciary.
5/23/2024 - A new study reveals the need for benchmarking and public evaluations of AI tools in law.
7/29/2024 - The American Bar Association Standing Committee on Ethics and Professional Responsibility released its first formal opinion covering the use of generative artificial intelligence (GAI) in the practice of law, pointing out that model rules related to competency, informed consent, confidentiality, and fees principally apply.
7/10/2024 - Explore the complex and multifaceted nature of artificial intelligence. AI is a broad term encompassing technologies that enable computers to perform tasks typically requiring human intelligence, such as recognizing faces, understanding speech, and driving cars. However, the definition of AI is contentious and varies widely, leading to disagreements about its capabilities and implications. The rapid evolution of AI from experimental stages to consumer products raises questions about trust, application, and societal impact. This article suggests that while AI is a transformative technology, there is still no clear consensus on what it constitutes, even among its creators, and emphasizes the importance of public engagement in shaping the future of AI.
4/29/2024 - In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
The court, having considered the proposed rule, the accompanying comments, and the use of artificial intelligence in legal practice, has decided not to adopt a special rule regarding the use of artificial intelligence in drafting briefs at this time. Parties and counsel are reminded of their duties regarding their filings before the court under Federal Rule of Appellate Procedure 6(b)(1)(B). Parties and counsel are responsible for ensuring that their filings with the court, including briefs, are carefully checked for truthfulness and accuracy, as the rules already require. “I used AI” will not be an excuse for an otherwise sanctionable offense.
Jan 2019 - “Intelligent machines” have long been the subject of science fiction. However, we now live in an era in which artificial intelligence (AI) is a reality, and it is having very real and deep impacts on our daily lives. From phones to cars to finances and medical care, AI is shifting the way we live. AI applications can be found in many aspects of our lives, from agriculture to industry, communications, education, finance, government, service, manufacturing, medicine, and transportation. Even public safety and criminal justice are benefiting from AI. For example, traffic safety systems identify violations and enforce the rules of the road, and crime forecasts allow for more efficient allocation of policing resources. AI is also helping to identify the potential for an individual under criminal justice supervision to reoffend. Research supported by NIJ is helping to lead the way in applying AI to address criminal justice needs, such as identifying individuals and their actions in videos relating to criminal activity or public safety, DNA analysis, gunshot detection, and crime forecasting.
5/23/2024 - Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. First, it is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law.
4/6/2024 - This Report by the New York State Bar Association Task Force on Artificial Intelligence examines the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession.
2/17/2023 - Artificial intelligence (AI) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more and more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create models too complex for people to understand or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been in criminal cases, in which a person’s life, liberty, and public safety can be at stake. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed. Both champions and critics of AI, however, mistakenly assume that we inevitably face a central trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error prone, and it may also reflect pre-existing racial and socio-economic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to understand it.
3/29/2024 - Findings Of Fact and Conclusions of Law Re: Frye Hearing on Admissibility of Videos Enhanced By Artificial Intelligence
2/1/2024 - As OpenAI’s ChatGPT ushers in an explosion of interest in generative AI, e-discovery experts are considering how user-entered prompts can be obtained and if they would hold up in court.
5/3/2022 - We know there are problems in the use of artificial intelligence in policing, but we don’t quite know what to do about them. One can also find many reports and white papers today offering principles for the responsible use of AI systems by the government, civil society organizations, and the private sector. Yet, largely missing from the current debate in the United States is a shared framework for thinking about the ethical and responsible use of AI that is specific to policing. There are many AI policy guidance documents now, but their value to the police is limited. Simply repeating broad principles about the responsible use of AI systems is less helpful than offering principles that (1) take into account the specific context of policing, and (2) consider the American experience of policing in particular. There is an emerging consensus about what ethical and responsible values should be part of AI systems. This essay considers what kind of ethical considerations can guide the use of AI systems by American police.
4/2/2024 - The evolution of AI programs presents an interesting dichotomy: if they prove successful in increasing efficiency and enhancing effectiveness, should there be a threshold mandate for their use in the legal profession, and if so, what ethical requirements should sit alongside such a mandate?
3/28/2024 - While AI is improving operations and service delivery across the Federal Government, agencies must effectively manage its use. As such, this memorandum establishes new agency requirements and guidance for AI governance, innovation, and risk management, including through specific minimum risk management practices for uses of AI that impact the rights and safety of the public.