Summary of Washington Post 6/4/2025 Article
Information summarized from The Washington Post article
The Washington Post "challenged AI helpers to decode legal contracts, simplify medical research, speed-read a novel, and make sense of Trump speeches." It tested ChatGPT, Claude, Copilot, Meta AI, and Gemini. Responses ranged from good to bad. Scores are out of 10.
Literature
[7.8] ChatGPT. Best summary. Failed to address slavery and the Civil War (as did most AI bots).
[7.3] Claude. Got the facts right.
[4.3] Meta AI.
[3.5] Copilot.
[2.3] Gemini. Inaccurate, misleading, and sloppy. Like George Costanza watching the "Breakfast at Tiffany's" movie instead of reading the book.
Law
Understanding two common legal contracts.
[6.9] Claude. Gave the most consistently decent answers and did well suggesting changes to the Post's test rental agreement.
[6.1] Gemini
[5.4] Copilot
[5.3] ChatGPT. Tried to reduce complex parts of the contracts to one-line summaries and missed important points (key clauses).
[2.6] Meta AI. Tried to reduce complex parts of the contracts to one-line summaries. Skipped several sections and important points.
Health Science
Analyzing scientific research.
[7.7] Claude. Good summary of a paper on Long Covid; scored low on another paper when it came to accounting for racial differences.
[7.2] ChatGPT
[7.0] Copilot
[6.5] Gemini. Left out key descriptions of the research on Parkinson's disease and why it mattered.
[6.0] Meta AI
Politics
Analyzing Trump's speeches.
[7.2] ChatGPT. Impressive responses to half of the questions posed to it; accurately fact-checked Trump's claims about winning the 2020 election.
[6.2] Claude
[5.2] Meta AI. Said Trump never cited a number of jobs returning to Michigan, and highlighted what Trump said about auto jobs.
[5.0] Gemini
[3.7] Copilot. Incorrect on the number of jobs returning to Michigan. Didn't capture the charged nature of Trump's speech.
Overall Winner
Claude, which posted the highest combined score across the four categories, according to The Washington Post.