RFK Jr Report Faces AI Citation Errors and Fictitious Sources

Robert F. Kennedy Jr.'s 'Make America Healthy Again' report contains numerous citation errors, including broken links, repeated references, and fictitious sources. Investigations found signs of AI involvement, specifically ChatGPT, whose output likely introduced these inaccuracies. Despite corrections, concerns persist about the report's reliability and the role of AI in its creation.

Published May 30, 2025 at 08:10 AM EDT in Artificial Intelligence (AI)

Robert F. Kennedy Jr.'s "Make America Healthy Again" (MAHA) report, intended to address the decline in U.S. life expectancy, has come under scrutiny for numerous citation errors and questionable sources. Investigations by NOTUS and The Washington Post uncovered dozens of broken links, repeated citations, and even entirely fictitious references within the report.

A striking discovery was the presence of "oaicite" markers in several URLs, a signature of OpenAI's ChatGPT-generated content. This strongly suggests that AI tools were used to compile the report, which aligns with the known tendency of generative AI models to produce "hallucinations"—false or fabricated information presented as fact.
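Because the "oaicite" token appears verbatim inside affected URLs, this kind of artifact is straightforward to scan for. The sketch below is a minimal, hypothetical illustration (the helper name and sample text are invented for this example, not taken from the investigations):

```python
import re

def find_oaicite_urls(text):
    """Return every URL in `text` that carries an 'oaicite' marker,
    an artifact associated with ChatGPT-generated citations."""
    # Grab whitespace-delimited URL candidates, then keep only
    # those containing the telltale 'oaicite' token.
    urls = re.findall(r"https?://\S+", text)
    return [url for url in urls if "oaicite" in url]

# Illustrative input: one flagged URL, one clean reference.
sample = (
    "See https://example.com/study#oaicite:3 "
    "and https://example.com/clean-reference for details."
)
print(find_oaicite_urls(sample))
```

A real audit would run a check like this over the report's full reference list and manually review every hit.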

Among the errors, at least seven cited studies were entirely fabricated, and 37 citations appeared multiple times throughout the report. Such inaccuracies raise serious concerns about the report's credibility and about the reliability of AI-assisted research in high-stakes policy documents.
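Repeated citations, unlike fabricated ones, can be caught mechanically by counting occurrences. A minimal sketch, using invented citation strings purely for illustration:

```python
from collections import Counter

def duplicated_citations(citations):
    """Return each citation that appears more than once, with its count."""
    counts = Counter(citations)
    return {ref: n for ref, n in counts.items() if n > 1}

# Hypothetical reference list with one entry cited three times.
refs = [
    "Smith 2021, J. Pediatrics",
    "Lee 2019, Lancet",
    "Smith 2021, J. Pediatrics",
    "Smith 2021, J. Pediatrics",
]
print(duplicated_citations(refs))
```

Detecting fabricated sources is harder: it requires resolving each citation against an external database (DOI lookup, PubMed, a library catalog) rather than inspecting the document alone.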

Despite these issues, RFK Jr. has been a proponent of the "AI Revolution," highlighting the potential of AI to improve healthcare data management. However, the White House press secretary described the citation problems as mere "formatting issues," sidestepping direct acknowledgment of AI involvement.

Following public scrutiny, the MAHA report was updated to remove some AI-related citation markers and replace non-existent sources with alternative references. The Department of Health and Human Services maintains that these corrections do not alter the report’s substantive findings, which it calls a transformative assessment of chronic disease affecting American children.

This episode underscores the challenges and risks of integrating AI-generated content into critical public health documents. While AI can accelerate research and data synthesis, unchecked reliance on generative models without rigorous verification can lead to misinformation and erode trust in official reports.

As AI continues to reshape how information is produced and disseminated, stakeholders must implement robust validation processes. This ensures that innovations enhance rather than compromise the integrity of vital health policy research.
