That's the thing, though: with "peer reviewed" research, now it's time to check the results against everything we know about COVID-19. I've been following the Seattle epidemiologist's Twitter, and he had some things to say about this study. So did several of his colleagues. One admitted that normally the peer review happens out of the spotlight, but now everyone is watching. Another said the worrisome part is that people will latch onto something and use it for their own agenda, or that something that hasn't been peer reviewed will be taken as fact. The example they used was how people were already using this study to prove that COVID-19 is no worse than the flu. We've seen multiple statements of that in this thread; this one was just the latest.
These people understand all the inputs and assumptions. I don't. From what little I can tell, the concerns break down like this.

First, the sampling and the weighting. Within the group of people they tested, there is some actual observed prevalence, some actual percentage who tested positive. The study says it was 1.5%. They recruited people who responded to an online ad, which means the sample wasn't random, it was self-selecting. To extrapolate to the county population at large you have to apply a lot of weighting, and that is where the 2.5%-4% comes from. Apparently, in studies like this, that is a large jump.

Second, the accuracy of the antibody test Stanford used. These tests are still all new, and the concern is the number of potential false positives. One person had a bunch of charts I didn't quite follow, about confidence intervals and test specificity, and basically those things alone could produce a situation where all of the "positive" results are false positives. Seattle guy made the same point: if the test's specificity is even slightly lower than assumed, the number of expected true positives drops to 0 (while the observed positive rate is still 1.5%). I've put some rough numbers on this just below.

Finally, others were looking at the mortality rate the study implies, comparing it against everywhere else, and basically asking, "does this match?" The answer is a resounding no. If the mortality rate were really as low as this study would indicate, then far too many people have already died. Observed data has to match up.
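To put rough numbers on that false-positive point: this is my own back-of-envelope arithmetic, not something from the threads. The ~1.5% raw positive rate is the study's; the sensitivity and specificity values are round numbers I picked for illustration.

```python
# Back-of-envelope for the false-positive concern. The ~1.5% raw positive
# rate comes from the study; the sensitivity/specificity values are round
# numbers chosen for illustration, not the study's exact figures.

def implied_true_prevalence(observed_rate, sensitivity, specificity):
    """Solve observed = p*sens + (1 - p)*(1 - spec) for the true prevalence p."""
    false_positive_rate = 1.0 - specificity
    p = (observed_rate - false_positive_rate) / (sensitivity - false_positive_rate)
    return max(p, 0.0)  # clamp at 0: "all observed positives could be false positives"

observed = 0.015  # ~1.5% of the tested sample came back positive

for spec in (0.998, 0.995, 0.990, 0.985):
    p = implied_true_prevalence(observed, sensitivity=0.80, specificity=spec)
    print(f"specificity {spec:.1%} -> implied true prevalence {p:.2%}")

# specificity 99.8% -> implied true prevalence 1.63%
# specificity 99.5% -> implied true prevalence 1.26%
# specificity 99.0% -> implied true prevalence 0.63%
# specificity 98.5% -> implied true prevalence 0.00%
```

That's the whole worry in one number: when only 1.5% of the sample tests positive, a specificity of 98.5% instead of something near 99.5% is enough to explain every single positive as a false positive.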
Their conclusion is that this study can really only say the prevalence is "low." But the claim that there are 50-80 times the number of confirmed infections... there is no way. Seattle guy thinks it's more like 10-20 times, fwiw.
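And to spell out the "does this match?" argument: if the infection fatality rate really were as low as the study implies, the deaths already recorded in hard-hit places would require an impossible share of the population to have been infected. A quick sketch of that logic, using placeholder numbers (a hypothetical city of 8 million with 10,000 deaths, not any real location's counts):

```python
# Generic version of the "does this match?" sanity check. The city numbers
# below are placeholders chosen to show the logic, not real counts.

def required_infected_fraction(deaths, infection_fatality_rate, population):
    """Fraction of the population that must already have been infected
    for the observed deaths to be consistent with the given IFR."""
    implied_infections = deaths / infection_fatality_rate
    return implied_infections / population

deaths = 10_000         # hypothetical hard-hit city
population = 8_000_000  # hypothetical population

for ifr in (0.001, 0.002, 0.005, 0.010):  # 0.1% ... 1.0%
    frac = required_infected_fraction(deaths, ifr, population)
    print(f"IFR {ifr:.1%} -> {frac:.1%} of the population must have been infected")

# IFR 0.1% -> 125.0% of the population must have been infected  (impossible)
# IFR 0.2% -> 62.5%
# IFR 0.5% -> 25.0%
# IFR 1.0% -> 12.5%
```

Deaths also lag infections by weeks, so if anything a check like this understates the problem. That's the sense in which "far too many people have already died" for the implied rate to hold.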
Here's the link to the actual study, so people can read what is being said in the comments
Background Addressing COVID-19 is a pressing health and social concern. To date, many epidemic projections and policies addressing COVID-19 have been designed without seroprevalence data to inform epidemic parameters. We measured the seroprevalence of antibodies to SARS-CoV-2 in Santa Clara...
www.medrxiv.org
And here is the link to the Seattle epidemiologist's Twitter in case anyone is interested in going down that rabbit hole
The latest Tweets from Trevor Bedford (@trvrb). Scientist @fredhutch, studying viruses, evolution and immunity. Collection of #COVID19 threads here: https://t.co/Yc4fun5rcp. Seattle, WA
twitter.com
I do not plan on posting or responding further, but with the number of "see, it's no worse than flu" comments, I thought it was important to point out that the peers reviewing it are concerned with the accuracy of its conclusion. So *WE* should not get too excited and use it as proof that things aren't so bad after all.