Jovidah
I'll make you a sponsor offer you can't refuse...
Kinda gave up on this thread a while ago since it's a bit of a waste of limited energy... but I figured I'd add a few points:
-Bringing up the idea of limiting population in a discussion on COVID is rather weak. COVID mostly takes older and weaker people out of the population, usually already beyond breeding age, so it won't put a dent in population growth. If you want a disease that really cuts down on population growth, look at AIDS, which hits exactly the people who are doing the breeding, and gets transferred across generations, thereby further reducing their breeding potential.
For what it's worth, diseases have generally been rather weak at limiting population, with a handful of exceptions like the plague and other heavy hitters introduced into populations with zero immunity. Traditionally the most effective means of reducing population is plain old mass starvation.
-Over the last year and a half I often saw it reassuringly mentioned that only people with comorbidities are at risk. I'm not sure people realize that more than 50% of the US population falls into at least one of the comorbidity categories.
-I see a lot of debate about 'which sources to trust' / how to interpret the science. When it comes to regular media, regardless of its political background, I'd say "don't ascribe to malice that which is adequately explained by incompetence". While there's no doubt some selectiveness and bias in what gets reported by whom, what doesn't, and how it's presented, the sad reality is that across the journalistic spectrum almost no one has even the slightest clue how to read scientific articles, how to interpret them, or what is actually written in them. This isn't new; it was a problem long before COVID became a thing and people suddenly started taking an interest in scientific literature.
-A major problem plaguing pretty much all research on 'population effects' is that it's non-experimental, meaning you have a lot of other variables polluting the data. This comes into play when comparing countries and states, but also different periods in time. So presenting almost anything coming out of a non-experimental study as fact - no matter how much statistics you throw at it - is always going to be problematic at best. It's just that in some situations we simply don't have any alternative.
A good example is comparing data from now to last summer. It doesn't take a rocket scientist to see that people's behavior this summer is vastly different from their behavior last summer; people are pretty much 'done' with COVID, most of the fear is gone, and they behave differently accordingly. How much of a difference does that make when you try to compare the effectiveness of vaccines? We don't know. No one knows.
The same problem shows up when comparing regions or countries. It's really hard to say what effect different policies have when other factors such as population density might magnify, dampen, or completely reverse the result.
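To make that concrete, here's a deliberately oversimplified sketch (made-up numbers and a hypothetical risk model, not taken from any real dataset) of how a single confounder like density can completely flip the apparent effect of a policy:

```python
# Toy model, not real data: a confounder (population density) can make an
# effective policy look harmful in a naive cross-region comparison.

def infection_rate(density, policy_multiplier):
    # Hypothetical model: risk scales with density; an effective policy
    # has a multiplier below 1 (0.5 = halves transmission).
    return 0.02 * density * policy_multiplier

# Region A: dense city with a strict policy that genuinely halves transmission.
# Region B: sparse countryside with no policy at all.
rate_a = infection_rate(density=10, policy_multiplier=0.5)
rate_b = infection_rate(density=2, policy_multiplier=1.0)

print(f"Region A (strict policy): {rate_a:.3f}")  # 0.100
print(f"Region B (no policy):     {rate_b:.3f}")  # 0.040
# Naive reading: 'the region with the policy had 2.5x the infections',
# even though the policy halved transmission. Density did the rest.
```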
-The best 'data' we have is still the clinical trials done on vaccines and whatnot, simply because those are at least properly conducted randomized double-blind studies with control groups and large enough sample sizes, which cuts out a lot of the pollution. If you dig around enough you should be able to find most of these via the medical authorities.
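Continuing the toy model above (still made-up numbers): randomize who gets the 'policy' instead of letting density decide, and the confounder averages out across both arms, leaving the real effect visible:

```python
# Sketch of why randomization helps, using the same hypothetical risk model.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

density = rng.uniform(1, 10, size=n)      # the confounder
treated = rng.random(n) < 0.5             # random assignment: a coin flip
multiplier = np.where(treated, 0.5, 1.0)  # treatment genuinely halves risk
rate = 0.02 * density * multiplier

print(f"mean rate, treated:   {rate[treated].mean():.4f}")   # ~0.055
print(f"mean rate, untreated: {rate[~treated].mean():.4f}")  # ~0.110
# Both arms get the same density distribution on average, so the ~2x gap
# that shows up is the actual treatment effect, not the confounder.
```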
-I hate to say this since it sounds arrogant, but a layperson without any training in the matter trying to interpret statistical analyses is dubious at best. Even within the scientific community, many of the people doing it aren't necessarily all that good at it, and there are often a lot of caveats and limitations to the data, to how you can interpret it, and to how far you can generalize it. This also comes back to the earlier point about the media: most 'normal media' frankly don't have a clue about statistics and research, so they're unable to critically reflect on what they're reading. It's very easy to produce a statistical analysis that looks good to the average person and has awesome significance values, yet is completely meaningless simply because of flaws in the design. Again, this doesn't have to be on purpose; it can simply be an oversight, or due to a limitation that's impossible to avoid because the research can't be done in an experimental fashion.
-Just because I saw it mentioned once and it's my pet peeve... 'Statistically significant' means something very different from what it means in everyday language (where it's treated as 'a noticeably large effect'). When something is statistically significant, it means that the chance of this result arising from random chance alone, under the assumed distribution, is lower than the arbitrary cutoff threshold we picked. It says nothing about the effect size.
Due to how this is calculated, even the tiniest effect becomes statistically significant once the sample gets large enough, just as a small enough sample will render almost any real effect statistically insignificant. A common measure of effect size is R², which is often more informative about how important a variable actually is.
But when the research isn't experimental, there are a ton of other hurdles that tend to muck things up here.
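A quick illustration with simulated data (made-up effect size, nothing from real studies): the exact same tiny effect is 'insignificant' in a small sample and 'highly significant' in a huge one, while the effect size stays negligible throughout:

```python
# Significance vs. effect size: p-values shrink with sample size,
# r^2 (variance explained) does not. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def tiny_effect_sample(n):
    x = rng.normal(size=n)
    y = 0.03 * x + rng.normal(size=n)  # true effect: x explains ~0.09% of y
    return x, y

for n in (50, 500, 50_000):
    x, y = tiny_effect_sample(n)
    r, p = stats.pearsonr(x, y)
    print(f"n={n:>6}  p={p:.4f}  r^2={r**2:.5f}")
# Typical output: p sits above 0.05 at n=50 but collapses toward 0 at
# n=50000, while r^2 stays around 0.001 -- a trivial effect either way.
```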
-Because there are so many limitations to a lot of this research, using any single study to make a point is extremely problematic. For all you know, 21 research groups researched the same thing with a p < 0.05 threshold, and the one that actually got a positive result published it, while the others just moved on because they had nothing to publish (the sketch below shows how easily that happens). This is why work needs to be reviewed within the field and why results need to be corroborated. Academic consensus takes time, and doesn't come from one article or one experiment.
Just looking around for a random article that happens to agree with what you're trying to say might be successful if you're trying to work your way through a bachelor's thesis in the laziest way possible, but it's not good science. Picking and choosing articles à la carte is just not how you get to the truth.
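Here's that 21-groups scenario as a simulation (purely synthetic null data, no real studies involved): every group studies an effect that doesn't exist, and by chance alone one of them will usually get a 'publishable' result:

```python
# 21 groups test a non-existent effect at p < 0.05; only 'positives' get published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_groups, n_per_arm = 21, 100

false_positives = 0
for _ in range(n_groups):
    treatment = rng.normal(size=n_per_arm)  # both arms drawn from the
    control = rng.normal(size=n_per_arm)    # same distribution: no real effect
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        false_positives += 1

print(f"groups with a 'publishable' p < 0.05: {false_positives} / {n_groups}")
# Each null study has a 5% false-positive rate, so the chance that at
# least one of 21 'succeeds' is 1 - 0.95**21, roughly 66%.
print(f"chance of at least one false positive: {1 - 0.95**21:.0%}")
```

If only that one 'positive' study gets cited, the literature looks like it supports an effect that was never there, which is exactly why corroboration and review matter more than any single paper.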