The Well-Intended Perversion of Science Following the Flint Water Crisis
When moral concerns enter the picture, seeking the truth gets complicated.
In one memorable session at the 2025 Heterodox Academy Conference last week, I listened as Drs. Siddhartha Roy, Marc Edwards, and Hernan Gomez presented evidence against the widely held notion that a spike of lead in the water supply permanently disabled a large cohort of young children in Flint, Michigan. Special education enrollment spiked in the wake of the crisis, which began in 2014, but the three researchers argued that lead exposure wasn’t the reason why. In fact, their analysis found that at no time did the average child in Flint have higher blood lead levels than the average child in the rest of Michigan.
Instead, Roy and his colleagues argued that the spike could be explained by a self-fulfilling prophecy in which Flint children and educators alike had internalized the prevailing narrative that the children were “poisoned” and would necessarily struggle to learn (a perception exacerbated by the confusing nature of the CDC’s threshold for concern over lead).
Although hopeful in one sense, this conclusion was clearly “bad” for Flint in that it undermined the community’s chances of receiving sorely needed government assistance. With or without a lead poisoning epidemic, resources in Flint were already stretched thin.
And the researchers presenting at the 2025 HxA Conference were clear that the initial water contamination crisis was real, involving a criminal betrayal of the public’s trust by the government. (You can watch a video of the session here.) Since 2014, thirteen people have died from an outbreak of Legionnaires’ disease associated with the contaminated water. Many Flint residents, outraged at this injustice, sought damages from the government for what they saw as its role in their children’s developmental delays.
At the same time, Roy and his colleagues felt that allowing children in Flint to believe claims about being “ruined for life” would represent its own injustice.
This was the dilemma that our presenters faced, and which Roy posed to the audience during the session: Should scientists remain silent for the sake of what they perceive to be social justice, allowing residents to secure more damages from the government than were strictly warranted by the evidence, or speak out to correct the record in an attempt to keep children (and the educators helping them) from losing hope in their learning abilities?

For their part, Roy and his colleagues saw the science as so twisted, and the potential for psychological harm to Flint’s children as so great, that they felt compelled to speak out, each paying steep professional costs as a result. Their paper, in which they proposed the self-fulfilling prophecy explanation, was rejected by 11 journals before being picked up by a European outlet.
What happened around the Flint water crisis is not unique. A robust body of work on morally motivated reasoning suggests that our political values bias our beliefs about what is true in the direction of what we want to be true. For instance, we apply stricter standards of evidence to scientific findings that challenge our political beliefs, and we believe misinformation more readily when it supports our political cause. This won’t be news to most people.
More troubling is that our moral concerns have a way of eroding our reliance on evidence as the primary basis for action or belief. For instance, researchers have found that people are more willing to share news they find “interesting-if-true,” regardless of how true they actually think it is, leading them to occasionally share news they believe to be inaccurate. One emerging theoretical perspective suggests that people adopt false or irrational beliefs (for example, that Barack Obama was born in Kenya) as a way of signaling their commitment to their ingroup. People who hold these beliefs don’t care about their rationality, according to this view, because their function is social rather than cognitive.
One particularly interesting line of work in this vein suggests that people endorse morally motivated reasoning to some extent, contrary to the dominant assumption among psychologists that people prefer to see themselves as neutral and objective. In one series of experiments, participants were presented with scenarios in which objective evidence favored one conclusion, but moral considerations favored another (for example: John and Adam are friends; drugs were found in John’s dorm room, but he denies that they are his, leaving Adam unsure of what to believe). When asked what the main character “ought” to believe in each scenario, participants prescribed a more morally desirable belief than when they were asked what the “most accurate” belief would be (e.g., they reported that Adam should believe his friend beyond what was actually supported by available evidence).
In another series of studies, participants were presented with scientific findings that either supported or challenged their political attitudes (e.g., findings that either supported or challenged the efficacy of puberty blockers as gender-affirming treatment for transgender teens). In addition to reporting greater belief in results that were consistent with their existing attitudes, participants openly reported being influenced by moral concerns (such as the potential harm to future teens if they drew the wrong conclusion about puberty blockers) and they believed that these concerns had influenced them to an appropriate extent. In other words, participants exhibited morally motivated reasoning, acknowledged it, and endorsed it.
I’ll give one last example from social science that makes the same point. In one fascinating investigation, researchers presented American participants with various “fact-flouting” (that is, inaccurate) statements made by political leaders (e.g., that COVID-19 booster vaccines increased the likelihood of COVID-19 infection, or that people vaccinated against COVID-19 could not infect others with the virus). When asked whether it was more important that each statement be based on objective evidence or that it send “the right message about American priorities,” participants from the same party as the speaker valued objective evidence less, and favored “sending the right message” more, than participants from the opposing party.
This measure of “moral flexibility,” as the authors called it, also predicted participants’ happiness with the statements, even after controlling for how truthful they thought the statements were. In other words, people were licensing themselves to approve of politically convenient untruths for their rhetorical value, over and above their grounding in objective evidence.
During their conference presentation in New York, Roy, Edwards, and Gomez shared an illustrative quote from a journal reviewer who recommended their self-fulfilling prophecy paper for rejection:
“For a chronically underserved community like that of Flint, Michigan, each opportunity to get more money, for whatever reason, is a big deal. And we all know that underserved communities most likely do not obtain extra funding by simply asking for it.”
Although I assume that the overall rationale for the rejection was grounded in methodological concerns, the inclusion of this justice-rooted rationale feels of a piece with the findings I just described: our moral concerns seem to generate feelings of deep truth that guide the search for evidence, rather than the other way around.