Egg-shaped and weighted at the bottom, Weebles were a popular children's toy. "Weebles wobble but they don't fall down." Push them and they rock wildly, then end up as upright as they were before.
They're also a model for normalcy bias, a cognitive leaning that is usually not a problem: we expect things to go on in the same way as before. Usually they do, but when they don't -- Even when things go badly wrong and the floodwaters are rising or the fires are approaching, many of us, possibly as many as seven out of every ten, want so badly for events to be unremarkable that we will come up with plausible-sounding reasons not to worry that are entirely unjustified. Many a small Indiana town that has never been struck by a tornado has a local legend about "the bend in the river" or "the lay of the land" that causes any tornado to skip over it. Alas, that's not how it works, and every tornado season is another pull at the slot-machine lever, with the possibility of a free flight to Oz -- or to Palm Sunday.
Normalcy bias can be especially appealing when we don't have a lot of data. Under uncertainty, the entire spectrum of cognitive biases* comes into play, and we often end up deeply entrenched, believing what we want to believe and defending it against all comers -- even when they have new information.
This morning, I went hunting for information on antibody tests for COVID-19. Unlike the nasal or throat swabs, these are blood tests that show if you have ever had the virus. They're still getting started in most countries; China has had them longer, but they don't seem to be sharing results, and even if they did, they don't have a good history of honesty about this illness. What I can find is highly preliminary. Small-scale testing of the general population in various countries is reported as yielding numbers that vary from a high of 50% of those tested having antibodies for the virus to a low of 18%. The numbers don't correlate well with known cases, active cases, or death rates: there isn't enough data.
Worse yet, nearly every report I could find was using the numbers to support one proposed course of action or another, and hadn't linked back to the source. There was no way to tell just how cherry-picked the data might be.
I could call up my own cognitive biases and spin you a tale of how things will play out -- but I won't. I don't know. We're in for a long haul through uncharted territory. Keep your wits about you -- and watch out for cognitive bias on the part of others and even more so, your own.
___________________________
* The linked chart and the article it comes from are worthwhile and sobering reading. How many things do we believe that aren't necessarily so? -- And how sure are we?
3 comments:
I don't know.
That statement puts you way up the credibility scale compared to most folks commenting on this situation.
Steve Hsu had some good analysis on the Iceland data, and in his updated post a better source than the Daily Mail.... https://infoproc.blogspot.com/2020/04/covid-19-iceland-random-tests-of-10-of.html
Not only is the data garbage and being cherry-picked to support pre-existing positions, as you say, but it is growing crazier by the day.