Science rests upon the reliability of peer review. This paper suggests a way to test for bias. It is able to avoid the fallacy – one seen in the popular press and the research literature – that to measure discrimination it is sufficient to study averages within two populations. The paper’s contribution is primarily methodological, but I apply it, as an illustration, to data from the field of economics. No scientific bias or favoritism is found (although the Journal of Political Economy discriminates against its own Chicago authors). The test’s methodology is applicable in most scholarly disciplines.
Unfortunately, the paper does not study peer review. It studies the order of articles in published issues, under the assumption that biased editors would place home articles first. This is not the bias these journals are accused of. Critics take issue with how these journals fail to publish articles from outsiders, and in particular with how "desk rejects" (rejections made without refereeing) are handled. And why would editors place better articles first? As discussed recently on this blog, lead articles are not better.
This study is completely misleading because its title and abstract discuss a research question that the paper does not address. Anyone who does not read the paper will be misled. Bad research.