NACDL is committed to enhancing the capacity of the criminal defense bar to safeguard fundamental constitutional rights.
NACDL harnesses the unique perspectives of NACDL members to advocate for policy and practice improvements in the criminal legal system.
NACDL envisions a society where all individuals receive fair, rational, and humane treatment within the criminal legal system.
NACDL’s mission is to serve as a leader, alongside diverse coalitions, in identifying and reforming flaws and inequities in the criminal legal system, redressing systemic racism, and ensuring that its members and others in the criminal defense bar are fully equipped to serve all accused persons at the highest level.
The court, having considered the proposed rule, the accompanying comments, and the use of artificial intelligence in legal practice, has decided not to adopt a special rule regarding the use of artificial intelligence in drafting briefs at this time. Parties and counsel are reminded of their duties regarding their filings before the court under Federal Rule of Appellate Procedure 46(b)(1)(B). Parties and counsel are responsible for ensuring that their filings with the court, including briefs, are carefully checked for truthfulness and accuracy, as the rules already require. “I used AI” will not be an excuse for an otherwise sanctionable offense.
Jan 2019 - “Intelligent machines” have long been the subject of science fiction. However, we now live in an era in which artificial intelligence (AI) is a reality, and it is having very real and deep impacts on our daily lives. From phones to cars to finances and medical care, AI is shifting the way we live. AI applications can be found in many aspects of our lives, from agriculture to industry, communications, education, finance, government, service, manufacturing, medicine, and transportation. Even public safety and criminal justice are benefiting from AI. For example, traffic safety systems identify violations and enforce the rules of the road, and crime forecasts allow for more efficient allocation of policing resources. AI is also helping to identify the potential for an individual under criminal justice supervision to reoffend. Research supported by NIJ is helping to lead the way in applying AI to address criminal justice needs, such as identifying individuals and their actions in videos relating to criminal activity or public safety, DNA analysis, gunshot detection, and crime forecasting.
5/23/2024 - Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law.
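To make the RAG technique mentioned above concrete, here is a minimal, self-contained sketch of the two steps that distinguish RAG from plain chatbot querying: retrieving relevant passages and grounding the prompt in them. Everything here is a hypothetical illustration; the retrieval function, corpus, and prompt format are assumptions, not the internals of any commercial product such as Lexis+ AI or Westlaw AI-Assisted Research, which are closed systems.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Retrieval here is naive token overlap; real systems use dense or
# keyword search engines, but the pipeline shape is the same.

def tokenize(text):
    # Lowercase and strip trailing punctuation so tokens compare cleanly.
    return set(text.lower().replace("?", " ").replace(".", " ").split())

def retrieve(query, corpus, k=2):
    """Rank documents by token overlap with the query; return the top k."""
    scored = sorted(
        corpus,
        key=lambda doc: len(tokenize(query) & tokenize(doc)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, passages):
    """Assemble a grounded prompt: the model is asked to answer only from
    the retrieved passages, which is how RAG aims to curb hallucination."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

corpus = [
    "The statute of limitations for the claim is three years.",
    "Venue is proper where the defendant resides.",
    "Unrelated passage about maritime salvage rights.",
]
query = "What is the statute of limitations?"
passages = retrieve(query, corpus)
prompt = build_prompt(query, passages)
```

Note that grounding only narrows what the model is asked to rely on; as the study above shows, it reduces but does not eliminate hallucination, since the model can still misread or misattribute the retrieved sources.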
4/6/2024 - This Report by the New York State Bar Association Task Force on Artificial Intelligence examines the legal, social and ethical impact of artificial intelligence (AI) and generative AI on the legal profession.
2/17/2023 - Artificial intelligence (AI) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more and more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create models too complex for people to understand or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been in criminal cases, in which a person’s life, liberty, and public safety can be at stake. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed. Both champions and critics of AI, however, mistakenly assume that we inevitably face a central trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error prone, and it may also reflect pre-existing racial and socio-economic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to understand it.
3/29/2024 - Findings Of Fact and Conclusions of Law Re: Frye Hearing on Admissibility of Videos Enhanced By Artificial Intelligence
2/1/2024 - As OpenAI’s ChatGPT ushers in an explosion of interest in generative AI, e-discovery experts are considering how user-entered prompts can be obtained and if they would hold up in court.
5/3/2022 - We know there are problems in the use of artificial intelligence in policing, but we don’t quite know what to do about them. One can also find many reports and white papers today offering principles for the responsible use of AI systems by the government, civil society organizations, and the private sector. Yet, largely missing from the current debate in the United States is a shared framework for thinking about the ethical and responsible use of AI that is specific to policing. There are many AI policy guidance documents now, but their value to the police is limited. Broad principles about the responsible use of AI systems are less helpful than principles that 1) take into account the specific context of policing, and 2) consider the American experience of policing in particular. There is an emerging consensus about what ethical and responsible values should be part of AI systems. This essay considers what kind of ethical considerations can guide the use of AI systems by American police.
4/2/2024 - The evolution of AI programs presents an interesting dichotomy: If they are proven successful by increasing efficiency and enhancing effectiveness, should there also be a threshold mandate for their use in the legal profession, and if so, what ethical mandates should sit alongside such requirements?
3/28/2024 - While AI is improving operations and service delivery across the Federal Government, agencies must effectively manage its use. As such, this memorandum establishes new agency requirements and guidance for AI governance, innovation, and risk management, including through specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
3/28/2024 - Vice President Kamala Harris announced that the White House Office of Management and Budget (OMB) is issuing OMB’s first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits – delivering on a core component of President Biden’s AI Executive Order. The Order directed action to strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more. Federal agencies have reported that they have completed all of the 150-day actions tasked by the E.O., building on their previous success of completing all 90-day actions. In recent weeks, OMB announced that the President’s Budget invests in agencies’ ability to responsibly develop, test, procure, and integrate transformative AI applications across the Federal Government.
3/14/2024 - How do machines learn? Learning is fundamental to artificial intelligence. It’s how computers can recognise speech or identify objects in images. But how can networks of artificial neurons be deployed to find patterns in data, and what is the mathematics that makes it all possible? This is the second episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and ended up with the astonishingly powerful large language models that power apps such as ChatGPT?
3/8/2024 - Clare Garvie, NACDL Fourth Amendment Center Training and Resource Counsel, testified before the U.S. Commission on Civil Rights during their briefing on the civil rights implications of Facial Recognition Technology (FRT). The Commission's investigation analyzed how FRT is developed, how it is being utilized by federal agencies, emerging civil rights concerns, and safeguards the federal government is implementing to mitigate potential civil rights issues.
11/22/2022 - Artificial intelligence (AI) increasingly is used to make important decisions that affect individuals and society. As governments and corporations use AI more and more pervasively, one of the most troubling trends is that developers so often design it to be a “black box.” Designers create models too complex for people to understand or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been in criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as: DNA mixture interpretation; facial recognition; recidivism risk assessments; and predictive policing. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed.
In this guide, you will learn about the features and uses of ChatGPT’s Advanced Data Analysis (formerly Code Interpreter) function. Advanced Data Analysis is a GPT-4 feature within ChatGPT that allows users to upload data directly so the model can write and test code against it. It is only available to premium (paid) accounts. This feature lets you run code directly on ChatGPT, significantly increasing both the use cases and accuracy of the output produced by the model. This feature is well suited to users looking to explore data, create code, and solve empirical problems with the assistance of AI tools.
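To illustrate the workflow the guide describes, here is a hypothetical example of the kind of Python that Advanced Data Analysis might generate and execute against an uploaded CSV file. The column names and data are invented for illustration; only standard-library modules are used.

```python
# Hypothetical sketch: load a small CSV (standing in for an uploaded file),
# extract one numeric column, and compute summary statistics -- the sort of
# exploratory step ChatGPT's Advanced Data Analysis runs behind the scenes.
import csv
import io
import statistics

# In the real feature, this content would come from a file the user uploads.
csv_text = """case_id,days_to_disposition
1,120
2,95
3,210
4,75
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
days = [int(row["days_to_disposition"]) for row in rows]

summary = {
    "n": len(days),
    "mean": statistics.mean(days),      # (120 + 95 + 210 + 75) / 4 = 125
    "median": statistics.median(days),  # midpoint of 95 and 120 = 107.5
    "max": max(days),
}
```

Because the code actually executes rather than being predicted token by token, numeric results like these are computed, not guessed, which is the main accuracy gain the guide refers to.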