Some paper reviewers feel the pressure to criticize something in order to appear competent. Sometimes they feel this pressure due to huge blank form fields for criticism in the reviewing system. As a consequence, they sometimes criticize wrongfully. Fully recovering from wrongful criticism during review is sometimes possible, but not always. This hurts the research community.
A while ago I saw advice in a video to deliberately include harmless and rather obvious mistakes (typos, inconsistent notation) in the manuscript when submitting for peer review, in order to avoid the aforementioned problems. I don't recall the details, nor who gave that talk.
Are you aware of such videos/articles, or can give examples of specific "diversionary tactics"?
Note that I'm only asking about specific example tactics. If you want to discuss (dis)advantages of choosing to use them at all, please open another question, and I'll be happy to link to it.
Edit: This question isn't about the pros and cons (see above). Many answers so far are as if I asked about the pros and cons (which I didn't).
Also, I'm not saying "I plan to do this, try to stop me". I just want to find the information that exists about it.
Note that I mean harmless mistakes. Also, they are fixed before publishing even if not asked to.
A related technique from programming is called "a duck".
The psychological phenomenon is called "Parkinson's law of triviality".
You're not the first one to come up with this idea.
In case it's not obvious, I recommend against doing this:
This is a terrible idea. Just a couple of days ago, I reviewed a paper with a lot of confusing descriptions and elaborate mathematics. It was not obvious that the explanatory sections would be clear enough for me to evaluate the mathematical material in a useful fashion, but ordinarily I would have given it a try.
However, the very first equation of the paper (which, dealing with introductory matters, was not particularly complicated) contained an obvious error. I concluded that if the authors were that careless, it was not worth my time to try to pick through each and every poorly documented equation, trying to see if they were all valid. The editor agreed and rejected the paper.
So including stupid mistakes like this will only call into question whether you have been careful enough in preparing the manuscript. If it looks like the authors have been lazy or careless, there is little motivation for reviewers and editors to try to fix things for the authors.
This started as a piece of Interplay corporate lore. It was well known that producers (a game industry position, roughly equivalent to project managers) had to make a change to everything that was done. The assumption was that subconsciously they felt that if they didn't, they weren't adding value.
The artist working on the queen animations for Battle Chess was aware of this tendency, and came up with an innovative solution. He did the animations for the queen the way that he felt would be best, with one addition: he gave the queen a pet duck. He animated this duck through all of the queen's animations, had it flapping around the corners. He also took great care to make sure that it never overlapped the "actual" animation.
Eventually, it came time for the producer to review the animation set for the queen. The producer sat down and watched all of the animations. When they were done, he turned to the artist and said, "that looks great. Just one thing - get rid of the duck."
I have not heard of doing this in regard to the peer review process for journal publication, and in that context I would advise against it. Perhaps others have had different experiences, but I have not observed any tendency for journal referees to ask for changes merely for the sake of appearing to add value. Since most of these processes are blind review, the referee is not usually identified to the author, and there is little reason for referees to grandstand like this.
This tactic (which is, by the way, more of a joke than something people actually do) is designed to deal with an archetypal incompetent manager who is incapable of understanding the work they are asked to review, yet unable to admit their incompetence, and so resorts to bike-shedding to compensate.
Trying it on people who are actually competent will result in one of the outcomes:
Neither improves the chances for your paper to be received well.
If you feel the reviewer is biased, or psychologically inclined to validate his/her competence by unjustly criticizing your work so as to imply it was properly reviewed by an expert, it is indeed possible to introduce something meant to be edited out, but I advise extreme caution. It should be subtle and yet conspicuous enough, and it must not cast any doubt on your own competence: perhaps something superfluous or murky but well known to the reviewer, so that he/she will enjoy criticizing it. However, if you don't know the reviewer and his/her level of competence, it's better to be extra cautious about such things.
A few words about typos and sloppy formatting: they make the reviewer feel justified in piling up criticism, to the point of dismissing the paper as low-quality or a mess. I have seen excellent papers nearly scrapped for lack of clarity and accidental errata. So typos, bad formatting and inconsistencies are a no-go; such things will only detract from your paper. Only a much more subtle strategy is viable, and only when you know the reviewers are less competent or unjustly biased. As for diversionary tactics, they should be specified and discussed with experts in the area of your work. Using primitive generic tactics (typos, notation inconsistency, etc.) will only draw criticism.
If "criticizing" blanks, forms or other bureaucracy are pulled on you, seek the advice of colleagues in your area. It's better not to leave anything to chance; discuss everything with the experts in your field. Finding out exactly which forms or bureaucratic procedures you are dealing with is crucial.
Your paper might be reviewed on general (non-expert in the area) principles just as if it were only reviewed by proofreaders or lay people. So making it more consistent, coherent, logical, clear, succinct, and free of spelling and formatting errors will be a big plus.
You should probably rely more on the guidelines that are used when reviewing papers in your area, rather than focusing on diversionary tactics. A lot of bad reviews are a result of carelessness (both on the part of the reviewer and on the part of the scientist), overzealousness and bureaucracy (or should I say strict "proofreader-like" guidelines for scientists), rather than malicious intent, assuming the paper in question displays outstanding ideas and great substance. Please check guidelines such as "How to Write a Good Scientific Paper: A Reviewer's Checklist / A Checklist for Editors, Reviewers, and Authors" by Chris Mack, or similar articles and guidelines. Their number is astonishing. Nowadays a lot of science is about citation indexes, and a lot of reviews are about formal structure, clarity of reasoning, proper references, nice presentation of data, etc.
Please note that this is very generic advice. You may want to tailor it to your area of expertise, with all the corresponding changes you deem necessary. Bottom line: I strongly advise against generic diversionary tactics. Tailor everything to your area of expertise. Unfortunately, if they don't want to publish it, they won't, even if it is a breakthrough. You might need extra recommendations and credentials. Please also note how much pseudoscience we have today, some of which sneaks into respected publications! So a huge number of papers need to be weeded out.
This is not an exact match to your request, but is similar enough in spirit and relevant enough to your question to mention.
There's a very interesting and entertaining article by Karl Friston published in NeuroImage, called "Ten ironic rules for non-statistical reviewers". The point of the article is to give a generic 'slap on the wrist' to "common" review points by people who may or may not understand the statistical implications of their suggestions, and who may simply be making statistically-correct-sounding generic statements out of a need to seem useful / not-unknowledgeable, and err on the side of rejection.
It does so in the highly unusual format of starting off with a highly sarcastic and humorous introduction of why a reviewer is under "so much pressure to reject these days", given articles have increased in quality, and proceeds to offer ten tongue-in-cheek "rules" for them to try in order to ensure a malicious rejection even in the presence of good papers. It only enters non-sarcastic serious discussion as to why those rules are poor interpretations of statistics in the much-longer "appendix" of the paper, which is in fact the 'real' article.
So in a sense, this is the same thing as you're talking about, except seen from the reverse direction: instead of instructing authors on how to keep reviewers "busy" with trivialities, it is a tongue-in-cheek article instructing reviewers on how to focus on trivialities in the presence of an actually well-presented paper, in order to sound like they have critical influence over the outcome and / or to ensure a rejection / wasting of the author's time (i.e. heavily implying that this is a common enough occurrence to warrant such a sarcastic article).
(the original paper is paywalled, but a version of the pdf should be available to read for free online via a simple search engine search; it's a rather popular paper!)
Let's set up a simple static game: assume that there are two kinds of reviewers, "Good R" and "Bad R". "Good R" are those who know the subject well, or who, even if they don't, will honestly try to review the paper on merit. "Bad R" are those who will go for "wrongful criticism" in the logic laid out by the OP, and for "Superficial Criticism" if we submit a superficially sloppy paper. We consider two strategies, "Superficial Sloppiness" and "Tidy manuscript". In all we have four possible states.
I argue that the most preferred state is "Good R - Tidy manuscript". In such a case the paper will be reviewed on merit by an appropriate reviewer. Let's assign the numerical value/utility 4 to this outcome (the scale is ordinal).
Consider now the state "Good R - Superficial Sloppiness". As other answers indicated, we will most likely get a quick reject (and acquire a bad reputation in the eyes of a person we shouldn't). This is the worst that could happen to our paper, in light of the fact that it was assigned to a Good Reviewer. We assign the value 1 to this state.
Let's move to the state "Bad R - Wrongful Criticism". Supposedly this is the state that we want to avoid by the proposed tactic. I argue that this state is not worse than "Good R - Superficial Sloppiness", because in the latter we shoot ourselves in the foot, while "Bad R - Wrongful Criticism" is an unfortunate but expected situation. So we assign the value 2 to this state.
Finally, the state "Bad R - Superficial Criticism" is what we try to guarantee with this tactic. We certainly consider it better than the previous one, but not as good as having a Good Reviewer assess our tidy paper on merit. So we assign the value 3 to this state. The normal form of the game is therefore

                             Good R    Bad R
    Tidy manuscript            4         2
    Superficial Sloppiness     1         3
There is no strictly or weakly dominant strategy here. But let's not go into mixed strategy equilibria. From our point of view, reviewers are chosen by nature (mother nature, not Nature the journal) with some probability, say p for the probability that we get a Bad Reviewer. Then the expected utility for each strategy is
V(Sprf Slop) = 3 x p + 1 x (1 - p) = 2p + 1
V(Tidy Mnscr) = 2 x p + 4 x (1 - p) = 4 - 2p
It appears rational to choose the Superficial Sloppiness tactic iff
V(Sprf Slop) > V(Tidy Mnscr) => 2p + 1 > 4 - 2p => p > 3/4
In words, if you think that the chance that you will get a Bad Reviewer is higher than 3/4, then your expected utility will indeed be higher by applying such an embarrassing tactic.
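The expected-utility comparison above can be checked with a short script (a minimal sketch; the payoff values 4, 2, 1, 3 are the ones assigned in the text, and p is the probability of drawing a Bad Reviewer):

```python
# Expected utility of each submission strategy, using the payoffs
# assigned in the text: Tidy manuscript earns 4 vs Good R and 2 vs
# Bad R; Superficial Sloppiness earns 1 vs Good R and 3 vs Bad R.

def v_sloppy(p):
    # V(Sprf Slop) = 3p + 1(1 - p) = 2p + 1
    return 3 * p + 1 * (1 - p)

def v_tidy(p):
    # V(Tidy Mnscr) = 2p + 4(1 - p) = 4 - 2p
    return 2 * p + 4 * (1 - p)

# Sloppiness pays off only when 2p + 1 > 4 - 2p, i.e. p > 3/4.
print(v_sloppy(0.5), v_tidy(0.5))  # 2.0 3.0 -> stay tidy
print(v_sloppy(0.9), v_tidy(0.9))  # 2.8 2.2 -> sloppiness "wins"
```

At p = 3/4 the two strategies tie at an expected utility of 2.5, which is exactly the threshold derived above.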
Do 3 out of 4 reviewers belong to the Bad Reviewer category in your field?