
I remembered the account of developers committing bugs so that reviewers were forced to find something instead of just giving the all clear.

Maybe there should be a group of researchers who submit believable-looking papers with known issues, to keep scientific journals from stagnating. Maybe the names and institutions of the group submitting the papers should be hidden, so that more reputable/famous people and institutions get the same level of scrutiny as everyone else.
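
As an aside, software has an established version of this idea: defect seeding (a.k.a. "bebugging", an ancestor of mutation testing), where you deliberately plant a known bug and check whether your reviewers or tests catch it. A minimal sketch in Python, assuming a pytest test suite; the file name and the single operator-flip mutation are illustrative, not any real tool's API:

  import shutil
  import subprocess

  TARGET = "mymodule.py"  # hypothetical file to seed a bug into

  def tests_pass():
      # Run the project's test suite; True means everything passed.
      return subprocess.run(["pytest", "-q"]).returncode == 0

  def seed_defect():
      # Back up the original, then flip the first "<=" to "<":
      # a classic seeded off-by-one bug.
      shutil.copy(TARGET, TARGET + ".bak")
      src = open(TARGET).read()
      mutated = src.replace("<=", "<", 1)
      if mutated == src:
          return False  # nothing to mutate in this file
      open(TARGET, "w").write(mutated)
      return True

  if seed_defect():
      print("seeded bug caught" if not tests_pass() else "seeded bug slipped through")
      shutil.move(TARGET + ".bak", TARGET)  # always restore the original

If the seeded bug "slips through", your safety net (reviewer or test suite) has a blind spot.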



This reminded me of the apocryphal story: "Just one thing - get rid of the duck."

https://bwiggs.com/notebook/queens-duck/


Funny, I always heard this anecdote in the context of adding hair to Mickey Mouse's arms. Apocryphal indeed, I guess: https://www.npr.org/2014/11/17/364760847/whats-with-all-of-t...

In terms of function, though, the "hairy arm" is the complete opposite: it's not to make sure reviewers are paying attention, it's to distract meddling reviewers from the changes you don't want them to veto.


From the linked article on the duck:

> A feature added for no other reason than to draw management attention and be removed, thus avoiding unnecessary changes in other aspects of the product.

Sounds like it was for the same reason.


Yes, sorry, what I meant was the opposite of the reason that started this thread (ensuring review participation).


I worked at Interplay at the time and can confirm the duck story.


“…submit papers that seem believable with known issues to avoid scientific journals stagnating…”

It’s called a scholarly hoax, and it’s been done more than once: https://en.m.wikipedia.org/wiki/List_of_scholarly_publishing...


Great way to test peer review: submit a bunch of BS articles, particularly ones with big words and notable authors, and see which ones get published where.

In fact, this has been done at least once before (at least the big-words part): https://en.m.wikipedia.org/wiki/SCIgen

But it should be done in every field, regularly.


A better way would be to change the system so that publications in big-name journals are not important for career advancement.


What is your alternative solution?


Seems clear to me: "change the system so that publications in big-name journals are not important for career advancement." There are many fields that don't have this requirement, or anything like it, and the suggestion is that academia join their ranks.

The "alternative solution" to having a thing that wacks people on the head every Tuesday is...not having it.


Businesses have money. Governments have votes. What measurable quantity do you propose for scientific work? Votes of colleagues? That's even more corruptible.


Are you familiar with the concept of Chesterton’s fence?


Well, simply, for senior researchers to stop putting big-name publications so high on the list when hiring people. I know quite a few senior researchers, and they do have some say in it...


It's like PRs in software. Nobody really wants to dig through all that shit. Quick scan. Approve... and move on.


A more benign example was the college professor who started each lecture by saying he was going to say one wrong thing.

(it would keep the students alert)

Or maybe Van Halen's "CRC check", where their venue contract asked for a bunch of things to be set up in advance, including a bowl of M&Ms with the brown ones removed.

(it was a test of diligence: whether the venue had actually read the contract and knew the requirements)
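
For anyone who doesn't know the metaphor: a CRC is a small checksum sent along with data so the receiver can cheaply verify the whole payload arrived intact; the brown M&Ms served the same role for the contract. A minimal sketch using Python's standard-library zlib.crc32 (the rider text is invented):

  import zlib

  rider = b"Rider: one bowl of M&Ms backstage. ABSOLUTELY NO BROWN ONES."
  checksum = zlib.crc32(rider)

  # The receiver recomputes the CRC over what actually arrived; any
  # corruption, even a single changed word, produces a mismatch.
  tampered = rider.replace(b"NO BROWN", b"GO BROWN")
  assert zlib.crc32(tampered) != checksum
  print(f"crc32 of the intact rider: {checksum:#010x}")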


Years before the current trend of LLM-generated papers, there was a tool, IIRC by some MIT students, which would generate plausible-looking papers that, when actually read, were obviously garbage. The title and abstract would have the sort of words you would expect of a research paper, but the resulting sentences would be nonsense.

The idea being that if the submission got published, the review process was obviously a total joke. I think they even had some examples that had actually been published.

Edit, Link: https://pdos.csail.mit.edu/archive/scigen/
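
SCIgen works by expanding a hand-written context-free grammar full of CS buzzwords, so every sentence parses fine but means nothing. A toy sketch of the same idea (this tiny grammar is my own, far smaller than SCIgen's):

  import random

  # Toy context-free grammar: keys are non-terminals, values are lists
  # of possible expansions; any symbol not in the dict is a terminal.
  GRAMMAR = {
      "SENTENCE": [["We present", "SYSTEM", ", a", "ADJ", "framework for", "SYSTEM", "."]],
      "SYSTEM": [["decentralized", "TECH"], ["scalable", "TECH"], ["adaptive", "TECH"]],
      "TECH": [["hash tables"], ["neural epistemologies"], ["write-back caches"]],
      "ADJ": [["robust"], ["game-theoretic"], ["pervasive"]],
  }

  def expand(symbol):
      if symbol not in GRAMMAR:
          return symbol  # terminal: emit as-is
      return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

  print(expand("SENTENCE").replace(" ,", ",").replace(" .", "."))
  # e.g. "We present adaptive write-back caches, a robust
  # framework for scalable neural epistemologies."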


After a cursory glance, I'm having a hard time believing those papers weren't written by humans.


You want to solve reproducibility? Apply forensic scrutiny to every publication?

That's a great idea, but who is going to pay for it?

Every scientist knows that it's more than possible to publish deliberately fraudulent work. Every non-scientist thinks that detecting this fraud is easy.


> Every non-scientist thinks that detecting this fraud is easy.

Eh, this working engineer doesn't think so. Freeform fraud detection is a hard problem in general, and when you're talking about (apparent) cutting-edge research, the pool of people who might spot specific details is tiny.

A low probability of detection does suggest that the penalty should be relatively high to have a deterrent effect.
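
That's the standard expected-value argument: fraud is deterred only when (probability of detection) x (penalty) exceeds the gain. A back-of-the-envelope sketch, every number invented for illustration:

  # Deterrence condition: p_detect * penalty > gain
  gain = 1_000_000   # hypothetical career value of one fraudulent paper
  p_detect = 0.01    # assumed 1% lifetime chance the fraud is exposed

  min_penalty = gain / p_detect
  print(f"penalty must exceed {min_penalty:,.0f} to deter")  # 100,000,000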


> developers committing bugs so that reviewers were forced to find something

This assumes code review's key purpose is to find bugs.

I always wonder about this when I review something, but at the end of the day there are many things I look at more closely than "is this a bug".


Agreed. I probably won't catch a bug that the tests wouldn't also catch unless it's obvious. I'm looking for things like:

  - Is this duplicating something else?
  - Should this be part of this object?
  - If I come back in a year, will it take me 10 minutes or an hour to understand it?
  - Is this necessary?
  - Should I approve this but add a task to the tracking board to figure out if this should be done in the service instead of the client?
  - etc.


If my understanding of the timeline is right, Dias already had a retraction when this was sent in. I don't think reputation is to blame here; the reviewers just weren't up to the task.


It might be worth pointing out that reviewers are only on board to offer their opinion on the material as given, taken at face value. After all, they are probably not in a position to experimentally verify whether the results are reproducible.

The editor always has the final word on whether a manuscript gets accepted.



Maybe the institution reviewing a paper should always insert one known flaw that peer reviewers must find, as quality control for the process?


> Maybe there should be a group of researchers that submit papers that seem believable with known issues to avoid scientific journals stagnating.

There was. People were not very pleased.

https://en.wikipedia.org/wiki/Sokal_affair


Weird, I was peer reviewing a manuscript earlier today and actually thought of something similar. The manuscript needs some work to improve its English, and I wondered if some of the broken-English passages were a test to see what nonsense a referee like me would put up with. I don't think it is a test, but the thought did occur to me.


> I remembered the account of developers committing bugs so that reviewers were forced to find something instead of just giving the all clear.

You'd be better off pair programming everything and forgoing code reviews.

Sometimes I think the people that come up with this shit are sociopaths.



