Why do researchers commit misconduct?

Shilpi Mehra takes a look at the pressures beneath the surface in the world of scholarly communications
“When a postdoc in my lab committed fraud, I had to face my own culpability,” says Rosalind Coleman, a principal investigator and a professor emeritus at the University of North Carolina at Chapel Hill.
Science is built on the pursuit of truth. This cornerstone has made researchers among the most trusted sources of truth in society. Yet the very people we trust to find answers to pressing problems may compromise the integrity of their work. This paradox is hard to ignore.
So it is worth asking: why does this happen? Does the answer lie in the morality of individuals, or in the system itself?
When curiosity meets the system
Science doesn’t exist in a vacuum; it is part of a system shaped by ambition and competition, one that often rewards speed and productivity. The number of funding wins, the volume of published papers, and citation metrics are treated as the parameters of success. In an environment where years of work are judged by such metrics, scientists may feel pressure to bend the rules.
To understand why misconduct happens, we need to look at how systemic pressures and subsequent rewards shape how researchers work.
Liam Kofi Bright, philosopher and associate professor at the London School of Economics and Political Science, describes a “credit economy of science”. He argues that researchers are rewarded not for being right but for being first. In this system, “scientists win credit by establishing priority, not accuracy.” To stay ahead, they must publish quickly, regularly, and before anyone else.
Under such circumstances, even researchers who care about the integrity of their work may be tempted to prioritise their own survival.
The triple helix and its pressures
Sociologists Henry Etzkowitz and Loet Leydesdorff describe modern research as a “triple helix” of universities, industry, and the state. Each depends on the others for funding, legitimacy, and innovation, and each introduces its own pressures. Funding agencies demand measurable impact, universities chase rankings, and publishers seek profit. Together, this creates a loop in which researchers feel pressed to increase their speed and scale and to ensure visibility.
Negative results often struggle to find a place in journals, while the rise of predatory journals has added volume to the system without ensuring quality. Even open access can backfire when the promise of wider access is leveraged for profit. In such a system, researchers may end up overlooking the core values of research.
The growing number of publications has consequences that often go unseen. Heavy reviewer workloads, combined with increasingly subtle and sophisticated forms of manipulation, make misconduct harder to detect. These conditions can overwhelm the system and create an environment more conducive to misconduct. They also explain how honest intentions can slowly bend under the weight of expectations.
Knowledge, training, and the ethics gap
Can we really blame researchers for misconduct if they’ve never been taught what integrity means in practice?
Young scientists entering research do not always receive structured guidance on ethical best practices and research integrity. They may know the technicalities of data collection and analysis, yet lack a deeper understanding of the ethical considerations behind them.
Neuroscientist Jordi Camí explains that “ethics” deals with societal consequences, that is, what is right or wrong for humanity, while “integrity” concerns the act itself: being honest and not cheating. Most institutions have codes of ethics, but few embed them in everyday practice.
In the absence of formal training in research ethics, mentors can play a huge role in helping researchers understand the importance of upholding integrity. Russell Taichman, a dean at the University of Alabama at Birmingham School of Dentistry, publicly corrected the scientific record after a postdoc in his lab duplicated data. “Clearly I am embarrassed. But I am committed to doing the right thing,” he said. When young researchers see mentors and senior colleagues own up to lapses in research integrity rather than treating such incidents as shameful failures, they are likely to absorb the same lesson.
Impact of misconduct beyond morality
Scientific misconduct doesn’t damage the reputation of the fraudulent researcher alone; their institutions, collaborators, mentors, and co-authors are also drawn into the spotlight. Each fraudulent paper misuses public funds, misleads other researchers, and can derail the progress of entire research fields.
The ripple effect of every case of misconduct goes beyond academia. It impacts public trust in science. This loss of credibility can be devastating in an age where misinformation is rampant. Science depends on trust between researchers, funders, publishers, and the public. When that trust breaks, the entire ecosystem feels its impact.
The way forward
If the roots of fraud lie in the incentive system, wouldn’t fixing it mean rethinking what we reward? Rather than stricter policing or harsher punishments, the focus should be on reshaping a culture built to reward speed and volume. We need an ecosystem that values integrity as deeply as it values innovation.
Moreover, integrity can’t thrive in an environment where failure is looked down upon. A culture that treats admitting mistakes as a strength would be a huge step toward nurturing integrity. The true measure of scientific progress isn’t how quickly we move, but how firmly we hold on to integrity and purpose along the way.
While fraud in science can set progress back, it is also a signal that we need a deeper look at how our systems work. Misconduct is not the failing of just one person; it reflects the systems that ultimately shape researchers’ behavior. When the currency of success is reduced to publication counts and citations, we sideline qualities like mentorship and meaningful collaboration that sustain good science.
This circles back to the triple helix of universities, industry, and the state. Their collective expectations and incentives quietly steer the decisions and day-to-day behavior of researchers.
To tackle this problem, institutions must rethink the behaviors they reward. Universities, on their part, can focus on reinforcing the quality of publications over the volume. Industry can support this by emphasizing the long-term value of research, while governments can encourage healthier research cultures.
Ultimately, research integrity cannot be ensured by policing individuals. Until we change what we consider success, misconduct will remain the hidden cost of doing business in modern science.
Shilpi Mehra is Director, Product, at Cactus Communications
