Devils Roll The Dice, Angels Roll Their Eyes

WMMM #050 - This week, I share a new edition in the Mastering Useless Information series.

Jeff Keplar Newsletter January 20, 2024 10 min read


Mastering "Useless Information" is an antiphrasis for "being interesting" as a sales skill, in addition to "being interested."

A recent two-part episode on the Freakonomics Radio podcast grabbed my attention.

Fraud in academia was the topic, which could have been enough on its own.

But these three areas made it especially interesting to me:

  • Behavioral Science was the area of focus, and I often share data on why people make the decisions they make

  • The use case highlighted, "People are more truthful filling out an application if you put the signature block at the top," made me want to know more

  • A team of academic researchers that studies academic research had a catchy name for their blog: "Data Colada."

The source for this week's edition of Win More, Make More is Stephen Dubner's Freakonomics Radio podcast: "Why Is There So Much Fraud in Academia?" Episodes 572 & 573, Jan 10 & 17, 2024.


Superstar

Francesca Gino was a superstar, an academic superstar.

A prestigious faculty member at Harvard, Gino authored books.

She was a highly sought-after public speaker.

Her reputation was flawless.

She was synonymous with the highest levels of research on organizational behavior.

She was a giant in the field.

Here is where I became more interested.

Organizational behavior is also variously known as behavioral science, decision science, and organizational psychology.

Many concepts I share in my weekly newsletter originate in behavioral science.

I had not heard of Francesca Gino until this podcast, but if I had, I may have read some of her research and cited it in one of my editions.

According to her website at the Harvard Business School, where she is Professor of Business Administration, Gino's research focuses on why people make the decisions they do at work.

Can you see why this is right up our alley?

Gino became a superstar by publishing many research papers in academic journals and a couple of books, the latest called "Rebel Talent - Why It Pays to Break the Rules at Work and in Life."

"She produced the kind of camera-ready research that plays perfectly into the virtuous circle of academic superstars." - Dubner

The publisher or university amplifies a journal article into the mainstream media, which feeds a headline to all the firms and institutions eager to exploit the next behavioral science insight.

This, in turn, generates an even greater appetite for more useful research.

An academic who is capable of producing such work is treated like an "oracle":

  • There are TED Talks to be given

  • Books to be written

  • Consulting jobs to be had.

Francesca Gino gave talks and consulted for Google, Disney, Walmart, the US Air Force, the US Army, and the US Navy.

But that's all over for now.


Breaking the Rules at Work - Alleged Research Misconduct

In July of 2023, Harvard Business School, responding to an analysis by academic whistleblowers, investigated Gino's work and found that she had "intentionally, knowingly, or recklessly committed research misconduct."

Gino was suspended without pay.

She then sued Harvard and the whistleblowers.

Those same whistleblowers have also produced evidence of what they call data fraud by an even more prominent behavioral scientist, Dan Ariely of Duke University.

Ariely has enjoyed the spotlight for many years, going back to his 2008 book "Predictably Irrational - The Hidden Forces That Shape Our Decisions."

Duke is said to be finalizing its investigation into Ariely, although that's been going on for a while.


Why This Matters

This is a much bigger story than two high-profile cases in behavioral science.

Research fraud in academia has consequences for all of us.

From Freakonomics, published in 2005:

"Cheating may or may not be human nature, but it is certainly a prominent feature in every human endeavor."

Cheating is getting more for less.

Why shouldn't we expect cheating, even among scientific researchers?

A new study in the journal "Nature" found that more than 10,000 research articles were retracted last year.

Fraud has existed for as long as science has existed, primarily because humans are doing the science.

People come with ideas, beliefs, motivations, biases, and reasons for doing the research that they do.

"In some cases, people are so motivated to advance an idea, or themselves, that they are willing to change the evidence, fraudulently, to advance that idea, or themselves." - Brian Nosek, Professor of Psychology at the University of Virginia and Executive Director of the Center for Open Science.

We can't have compromised research.

A compromised finding could be translated into medicine or public policy, damaging lives, treatments, or solutions.


Why Not Roll the Dice?

The benefits that come with being a superstar professor outweigh the punishment if caught.

The academic reward system is the culprit.

Publication is the currency of advancement.

Academics need publications to have a career, advance a career, and get promoted.

Sharing their data and methods with peers does nothing to enhance their brand.

If a system has a built-in bias against transparency, there will be less transparency and more opportunity to cheat.

Admission from the Inside:

"If you were just a rational agent acting in the most self-interested way possible as a researcher in academia, I think you would cheat."

From the Industry of Academic Research:

"There is misconduct everywhere. The most likely career path for anyone who has committed misconduct is a long and fruitful career because most people skate if they are caught."

In the past, academics did not look for ways to get rich.

They looked for ways to have time to think about the problems they wanted to think about.

"These research papers aren't written by some political official, management consultant, or equity analyst. They are written by someone so devoted to their field of research that they went through the hell of getting a Ph.D. to spend their days and nights doing that research."

Now, they have pathways to get rich.


Their Employers Merely Roll Their Eyes

When it comes to academic fraud, universities have a habit of downplaying charges against their superstar professors because it reflects poorly on them.

They don't want to bring negative attention to the school because of the possibility of guilt by association.

Generally, universities have been very slow to act and investigate.

In 2019, Duke University paid the US Government $112.5M to settle allegations that it had repeatedly covered up significant misconduct in medical research.


Data Colada - The Data Detectives

Leif Nelson, Uri Simonsohn, and Joe Simmons are academic researchers who study academic research.

They maintain a blog called Data Colada.

Nelson (UC Berkeley), Simonsohn (Esade B-School in Barcelona), and Simmons (Wharton B-School, Penn) are employed by big-time universities.

All three are widely published.

They provide knowledge from the inside.

They examine behavioral science research, go to conferences, and read papers.

And sometimes, they do not believe what they read.

They found that whenever a finding did not align with a reader's intuition, the reader would trust their intuition over the finding, defeating the whole purpose of performing the study.

"If you are only believing things you already believe, then why bother?"

They had an idea.

"How do we show people that we can very easily produce evidence of anything?"

So they started with something obviously false.


When I'm Sixty-Four

Something quite hard to do (impossible, actually) is to make people younger.

Humanity has been trying forever, and we have not succeeded.

"So, let's show that we can do that in a silly way."

So they decided to show they could make people younger by having them listen to a song by The Beatles.

That song was "When I'm Sixty-Four."

"So if we can make anything look significant, one way to prove it is to say:

'I'm going to show you statistically significant evidence that people got younger after listening to "When I'm Sixty-Four."'"

They ran actual lab experiments with real research subjects who had accurate birth dates and played them actual songs.

The songs were "When I'm Sixty-Four," "Kalimba" by Mr. Scruff, and "Hot Potato" by The Wiggles.

They manipulated and cherry-picked their data to produce the absurd finding they wanted - listening to "When I'm Sixty-Four" lowers your age.

Their data indicated that your age was lowered by about a year and a half.


They Got It Published

They published their article in Psychological Science, a top journal in their field.

The piece was called "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant."

The immediate aftermath was shocking.

There were many people with whom this resonated.

And there were lots of people who were not very happy.

"Why are you giving our field a bad name?"

But they felt their field already had a bad name.

They began the Data Colada blog in late 2013.

They were concerned that the pressure to publish interesting results might produce unreasonable findings even when the researcher had mostly followed the rules.


P-Hacking

"P" stands for probability - as in the p-value of a statistical test.

Not quite errors, p-hacks are analytic decisions that can be accidentally self-serving.

An example is if you measure multiple things but only report the one you like the most.

Or you run a study with three treatments but drop one and don't even talk about it.

This scenario unfolded in Data Colada's "When I'm Sixty-Four" prank experiment.

They dropped the results from "Hot Potato" entirely.
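To see why selective reporting works, recall that under the null hypothesis (no real effect) a p-value is uniformly distributed between 0 and 1, so reporting only the best of several measures inflates false positives well past the nominal 5%. Here is a minimal Python sketch of that arithmetic (my illustration, not from the podcast; the function name and numbers are made up):

```python
import random

random.seed(1)  # reproducible illustration

def fraction_significant(n_experiments, n_outcomes, alpha=0.05):
    """Simulate experiments with NO real effect. Under the null
    hypothesis each p-value is uniform on [0, 1]; a p-hacker measures
    n_outcomes things but reports only the smallest p-value."""
    hits = 0
    for _ in range(n_experiments):
        best_p = min(random.random() for _ in range(n_outcomes))
        if best_p < alpha:
            hits += 1
    return hits / n_experiments

honest = fraction_significant(100_000, n_outcomes=1)  # about 0.05
hacked = fraction_significant(100_000, n_outcomes=5)  # about 1 - 0.95**5, roughly 0.23
```

Reporting only the favorite of five measures turns a 1-in-20 false-positive rate into nearly 1-in-4, with no fabrication at all.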

There are other red flags.

For instance, the reported statistics may describe something that is statistically impossible.

Another is seeing associations in the data, or the lack of them, that are not mathematically impossible but that anyone familiar with the subject would say, "This isn't right."

Imagine that you have data on weight and height.

You correlate it.

You find zero correlation.

This cannot be right, because taller people are generally heavier.
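A quick way to sanity-check such a claim is to compute the correlation yourself. This toy Python sketch (my illustration; the slope and noise figures are made up) generates plausible height and weight data and confirms the correlation is nowhere near zero:

```python
import random
import statistics

random.seed(3)  # reproducible illustration

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

# Plausible data: taller people tend to be heavier (illustrative slope + noise)
heights = [random.gauss(170, 10) for _ in range(1_000)]           # cm
weights = [0.9 * h - 80 + random.gauss(0, 8) for h in heights]    # kg

r = pearson(heights, weights)  # clearly positive; near zero would be a red flag
```

With any realistic height-weight data, r lands solidly positive; a dataset reporting zero deserves a second look.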

You might see rounding that is suspicious.

You see rounding when there shouldn't be any.

Or, you see the absence of rounding where there should be.

Consider a study where you ask respondents: "How much would you pay for this t-shirt?"

In the data, there was no rounding, which was curious.

The findings indicated the people were equally likely to say $7, $8, or $10.

But if you have ever run such a study and collected data like that, you'd know that people round.

They say "$10" or "$20," but they don't say "$17."

We see something as simple as a typo.

Someone is writing up their report, and the means are 5.1 and 5.12.

But instead, someone writes it down as 51.2.

That's a huge effect.

No one corrects it because it's a massive effect in the direction that they were expecting.

So a typo ends up in print, and that's before we get to anything like fraud - the active fabrication or manipulation of data.


Uncovering Fraud

It all started when Uri was mining data from multiple published studies for a chart in a paper, and Leif thought a figure from another research group looked unusual.

Leif then went and read the source paper and looked at their dataset.

The study collected data on a nine-point scale - respondents could answer 1, 2, 3, and so on, up to 9.

He found numbers in the dataset, like -1.7.

This was easy.

The data set is obviously broken.

After the Data Colada investigation, the paper was retracted.

It had been published in 2012 in the journal "Judgment and Decision Making" and authored by four researchers from Taiwan.

Those researchers were neither sanctioned nor punished.

Although Data Colada became well-known within psychology and data science, they had little reach beyond those circles.

That changed when they published a post called "Evidence of Fraud in an Influential Field Experiment About Dishonesty."


Exposing Professors at Harvard and Duke

They claimed to identify fraud in a paper published years earlier in a top journal, PNAS, Proceedings of the National Academy of Sciences.

The paper's title is "Signing at the Beginning Makes Ethics Salient and Decreases Dishonest Self-Reports in Comparison to Signing at the End."

There are four things you may want to know about this paper:

  1. The central finding was extraordinarily popular - a lot of firms and institutions began putting the signature box at the top of the tax statement, insurance form, etc.

  2. The article had been edited by Danny Kahneman, arguably the best-known living psychologist and one of the most highly regarded

  3. Two of the five authors were among the most famous people in this field, Dan Ariely and Francesca Gino

  4. There was already evidence that something was wrong with the original paper because the authors had published a second paper saying their findings didn't replicate.

Failure to replicate doesn't always indicate fraud, but Data Colada had found fraud.

The paper reported that the average driver in the database had driven 24,000 - 27,000 miles per year.

The average American only drives around 13,000 miles per year.

Ariely's data came from an insurance study where applicants were asked how much they drove.

One sample had the signature at the top and the other at the bottom.

Tens of thousands of drivers were sampled.

When asked, Ariely said the drivers were senior citizens in Florida, which seemed even more odd.

It's a simple idea that makes sense.

Organizations can implement it quickly - Lemonade Insurance did so on Ariely's advice as a paid consultant.

Yet it failed when an insurtech startup wanted to explore using it online.

It failed to replicate six consecutive times.

This led to an attempt to replicate the original study.

A large-scale sample dataset was used.

The effect could not be reproduced.

However, the original paper was not retracted.

Ariely and Gino continued to profess belief in the hypothesis but were now adding "some of the time" to their talk track.


Enter the Data Colada team

They found fraud in the insurance study, one of three studies used.

They looked at a histogram of the miles driven each year by the people in the study.

A typical representation is a bell curve, with many people in the center and outliers plotted at each end.

This histogram showed a nearly uniform distribution of drivers from zero to 50,000 miles.

This was not normal; it was very suspicious.

There was no plausible benign explanation for it.
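That shape check is easy to automate. As a toy Python sketch (mine, not the investigators' actual method; the simulated numbers are illustrative), bin the mileage data and compare the tallest histogram bin to the shortest: a bell curve produces a large peak-to-trough ratio, while fabricated uniform data stays near 1:

```python
import random

random.seed(7)  # reproducible illustration

def bin_counts(data, n_bins=10, lo=0, hi=50_000):
    """Crude histogram: count observations per equal-width bin."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for x in data:
        i = min(int((x - lo) / width), n_bins - 1)
        counts[i] += 1
    return counts

def peak_ratio(counts):
    """Tallest bin divided by shortest bin (guarding against zero)."""
    return max(counts) / max(min(counts), 1)

# Plausible mileage: bell-shaped around the ~13,000-mile US average
real = [min(max(random.gauss(13_000, 5_000), 0), 49_999) for _ in range(10_000)]
# Suspicious mileage: uniform from 0 to 50,000, like the published data
fake = [random.uniform(0, 50_000) for _ in range(10_000)]

# peak_ratio(bin_counts(real)) is large; peak_ratio(bin_counts(fake)) stays near 1
```

Realistic data piles up near the average and thins out at the extremes; a flat histogram over such a wide range is exactly the kind of fingerprint the detectives spotted.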

The insurance company that provided the data responded that the data Ariely published differed significantly from what they had given him.

In the data they had originally given Ariely, there was no difference between those who signed at the top and those who signed at the bottom.

Data Colada also found data fabrication in lab studies by Francesca Gino, as well as in three other projects of hers.

PNAS has retracted the paper.

Duke is still investigating.

Harvard placed Gino on administrative leave.


Looking Forward

Nearly 4 million articles are published annually in approximately 50,000 journals globally.

Pre-registration is one possible solution: researchers are rewarded for the questions they ask and the methodology they use, not for the results.

Initiatives like Data Colada and the Center for Open Science are bringing transparency to the broken rewards system for academic research.

There is hope that fraud and misconduct can be reduced.


Thank you for reading.

Jeff

When you think “sales leader,” I hope you think of me.

If you like what you read, please share this with a friend.

I offer my help to sales leaders and their teams.


I possess the skills identified in this article and share them as part of my service.

In my weekly newsletter, Win More, Make More, I provide tips, techniques, best practices, and real-life stories to help you improve your craft.

