Hydroxychloroquine study

Is anyone good at these though? I thought I read somewhere that humans are just bad at probabilities. I know I'm awful so every time I see it, I have to sit down and write it out.

Of course it would be better to be accurate, but I wonder how damaging it is. For instance, if doctors always underestimate the severity of diseases that are common and easily diagnosable (low false-positive rate), is that bad? I think I want my doctor to overestimate the severity of uncommon and hard-to-detect diseases.

Nobody is born good at them, but if you can a) get the grades to go to med school, b) survive med school, and c) survive a residency + fellowship, you also have the brain power to learn Bayes' rule well enough not to respond "false positive rate is 1%, hence you have the disease with 99% probability"

And a lot of medical expenditure, a lot of suffering, and a handful of severe negative complications are generated from false positives. How many pregnant women had inductions leading to c-sections leading to deaths or debilitation of the mother or child due to doctors not understanding false positives in tests designed to find conditions that suggest an early induction of labor, as one specific example? Now extrapolate to needless biopsies, needless surgery in general, needless additional testing etc etc etc
 

That's a good point. You've convinced me that properly estimating the probability of a bad outcome given a test is important.

However, medical professionals already have to know and understand so many things. Maybe instead of making "false positive rate", "false negative rate", and "population of people with disease Y" the standard and easily available features for doctors to make their predictions, it would be better to provide them with the end probability that they need. That is, provide "P(covid | positive covid test)" instead of "P(negative covid test | covid)", "P(positive covid test | no covid)", and "P(covid)". I understand that it's probably a lot harder than that; there are probably more variables that are important when making this prediction. However, this seems like a great place where technology could help doctors.

Your point that doctors shouldn't say "false positive is 1% so you have the disease with 99% certainty" is very valid. That is certainly bad. They should at least know it isn't that easy.
 
Yes, I think one of the most puzzling features of the already very, uh... puzzling health care sector is the inability or unwillingness to adopt modern information systems to make doctors' work easier, e.g. by providing the relevant probabilities.

Most of the time it wouldn't even be very hard -- they already have to chart your info anyway, the tests are inside an electronic data system, and all the algorithm needs to do is combine the charted data with the test result and reliability data for different populations, and spit out the best guess!
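(A hypothetical Python sketch of what that algorithm could look like -- the risk groups, prevalences, and test numbers below are all made up for illustration, not anyone's actual system:)

```python
# Hypothetical sketch of the "combine the chart with the test" idea.
# Group names, prevalences, sensitivity and specificity are all invented.

PREVALENCE_BY_GROUP = {"low_risk": 0.001, "exposed": 0.05, "symptomatic": 0.30}

def posterior_probability(group: str, test_positive: bool,
                          sens: float, spec: float) -> float:
    """Best-guess P(disease) from a charted risk group plus a test result."""
    prior = PREVALENCE_BY_GROUP[group]
    if test_positive:
        return sens * prior / (sens * prior + (1 - spec) * (1 - prior))
    return (1 - sens) * prior / ((1 - sens) * prior + spec * (1 - prior))

# e.g. a symptomatic patient with a positive result:
print(posterior_probability("symptomatic", True, sens=0.65, spec=0.98))  # ~0.93
```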
 
Your point that doctors shouldn't say "false positive is 1% so you have the disease with 99% certainty" is very valid. That is certainly bad. They should at least know it isn't that easy.

You mean that's not how it works??? :p

Now, the constant sh*t-talking of doctors rubs me the wrong way (especially when followed with something about nurses or NPs being superior), but we have yet to learn of Bayes' rule.

Also @ian The timescales required in translational research for animals to live, grow, and die are often such that the more rigorous a study is, the more time and resources it will take. I'm unclear if there are similar principles in mathematics research.
 

👍 Harder problems usually take more time to solve, but the funds you need to buy stationery (and the occasional laptop) do not depend on the problem. Applied mathematicians may have a different answer though.
 
Yes, I think one of the most puzzling features of the already very, uh... puzzling health care sector is the inability or unwillingness to adopt modern information systems to make doctors' work easier, e.g. by providing the relevant probabilities.

Most of the time it wouldn't even be very hard -- they already have to chart your info anyway, the tests are inside an electronic data system, and all the algorithm needs to do is combine the charted data with the test result and reliability data for different populations, and spit out the best guess!
These are available in many situations. For example, there are a number of tools available to calculate the likelihood of death, or of failure to return to independent life, for patients having certain types of surgery. For frail people or people with significant comorbidities, the numbers are often much worse than you would think.
 
That's a good point. You've convinced me that properly estimating the probability of a bad outcome given a test is important.

However, medical professionals already have to know and understand so many things. Maybe instead of making "false positive rate", "false negative rate", and "population of people with disease Y" the standard and easily available features for doctors to make their predictions, it would be better to provide them with the end probability that they need. That is, provide "P(covid | positive covid test)" instead of "P(negative covid test | covid)", "P(positive covid test | no covid)", and "P(covid)". I understand that it's probably a lot harder than that; there are probably more variables that are important when making this prediction.

You are kind of describing the difference between sensitivity/specificity of a test and positive/negative predictive value of a test result.

Sensitivity: What % of actual positives will the test detect?
Specificity: What % of actual negatives will the test say is negative?

Positive predictive value: Given a positive test, what is the likelihood that the patient is actually positive?
Negative predictive value: Given a negative test, what is the likelihood that the patient is actually negative?

Sensitivity and specificity are characteristics of the test. Predictive value depends on the prevalence of the disease as well as the sensitivity and specificity.

For example, the CV19 test is around 65% sensitive if performed correctly. If there is no known CV19 in a community, a negative result is very unlikely to represent an actual case (it will likely be a true negative). The same result in a CV19 hotspot is not as reassuring (it's significantly more likely to be a false negative), even if exactly the same test was used as in the CV19-free zone.
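(To put rough numbers on that in Python: the 65% sensitivity is the figure above, the 98% specificity is an assumption for illustration:)

```python
def p_infected_given_negative(prevalence: float, sens: float = 0.65,
                              spec: float = 0.98) -> float:
    """P(disease | negative test) via Bayes' rule; specificity is assumed."""
    p_neg_and_disease = (1 - sens) * prevalence   # false negatives
    p_neg_and_healthy = spec * (1 - prevalence)   # true negatives
    return p_neg_and_disease / (p_neg_and_disease + p_neg_and_healthy)

print(p_infected_given_negative(0.001))  # quiet community: ~0.0004
print(p_infected_given_negative(0.30))   # hotspot: ~0.13, same test
```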
 

Making sure I understand, in my notation earlier,
sensitivity = P(test positive | c19),
specificity = P(test negative | no c19),
positive predictive value = P(c19 | test positive),
negative predictive value = P(no c19 | test negative)
prevalence of covid = P(c19)?

If so, I think we're exactly talking about the same thing. In my crappy notation,
positive predictive value = P(c19 | test positive)
= P(test positive | c19) P(c19) / P(test positive)
= P(test pos | c19) P(c19) / [P(test pos | c19)P(c19) + P(test pos | no c19) P(no c19)]
= sensitivity * prevalence of disease / [sensitivity * prevalence of disease + (1-specificity) * (1-prevalence)]


For example, the CV19 test is around 65% sensitive if performed correctly. If there is no known CV19 in a community, a negative result is very unlikely to represent an actual case (it will likely be a true negative). The same result in a CV19 hotspot is not as reassuring (it's significantly more likely to be a false negative), even if exactly the same test was used as in the CV19-free zone.
P(c19 | test neg) = 1-P(no c19 | test neg)
= 1-P(test neg|no c19)P(no c19)/ [P(test neg | c19) P(c19) + P(test neg | no c19)P(no c19)]
= 1 - specificity P(no c19) / [ (1-sensitivity) P(c19) + specificity P(no c19)]

If P(c19) << P(no c19),
P(c19 | test neg) ≈ 1 - spec P(no c19) / [spec P(no c19)] = 0

If P(c19) >> P(no c19),
P(c19 | test neg) ≈ 1 - spec P(no c19) / [(1-sens) P(c19)], which is close to 1 when spec/(1-sens) ~ 1.
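(A quick numeric sanity check of those two limits, with the same made-up 98% specificity:)

```python
# Sanity check of the limiting behaviour derived above; 65% sensitivity
# is from the thread, 98% specificity is assumed for illustration.
sens, spec = 0.65, 0.98
for prev in (1e-6, 0.5, 1 - 1e-6):
    p_neg = (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))
    print(f"P(c19)={prev:.6f}  ->  P(c19 | test neg)={p_neg:.6f}")
# tends to 0 when P(c19) << P(no c19), and to 1 when P(c19) >> P(no c19)
```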

Great! My bad algebra matches common sense.

Apologies if this was painfully obvious (or just wrong...). I probably learned it in school, but I never thought I would use it. CV19 sure showed me! Instructors are going to have a field day in the future if students ever argue that they don't need this stuff in their daily lives. :p
 
Nature Pub on a highly sensitive/specific COVID-19 serology test. Best data I've seen.

https://www.nature.com/articles/s41551-020-00611-x
Also, a brief comment on biological research in general- it is incredibly hard to review, as much of it is novel and you can't say for sure whether the data are real until it's repeated (or not). I've seen some high profile papers fall apart and some big names fall due to dodgy work by students/postdocs. I went to the lab of a Guru in my field to learn some techniques they published in Nature. The author only spoke Mandarin, his notes were all in Mandarin, and it turns out he faked the whole thing, but this didn't come out for YEARS as the studies took that long to try to reproduce.

Science is cut-throat. Too many snouts in too small a trough, too little oversight to catch all the cheats.
 
I think you have written correct mathematical formulae for the terms, although I'm not a mathematician. I'm definitely not gonna mark your algebra- it's making my head hurt! 🤕. @ian- help!

But it does look like you've reached the correct conclusions.

In medicine, the PPV and NPV are the most important values because the info you get (and have to act on) is a test result.

Interestingly, a test is most useful when there is a lot of uncertainty in the diagnosis of a condition.

If you have clinical information (symptoms, examination findings, local prevalence data, etc.) that suggests that the patient doesn't have the disease, but the test is positive, you need a high-specificity test to discount that clinical information (and accept that the test isn't a false positive). The clinical information effectively lowers the PPV of the test.

Likewise, if your clinical information suggests that the patient does have the disease but the test is negative, you need a high-sensitivity test to discount that clinical information (and accept that the test isn't a false negative). The clinical information effectively lowers the NPV of the test.

It's really when you are completely unsure whether or not the patient has the disease that a test (especially a low-sensitivity and/or low-specificity test) is most useful.

Note that in most testing systems, there is a tradeoff between sensitivity and specificity. This tradeoff can be modified by moving the detection thresholds. A higher detection threshold will improve specificity but reduce sensitivity.
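(A toy Python sketch of that threshold tradeoff -- the "test score" distributions are invented for illustration:)

```python
import random

# Made-up continuous "test score" that tends to run higher in disease.
random.seed(0)
diseased = [random.gauss(2.0, 1.0) for _ in range(10_000)]
healthy = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for threshold in (0.5, 1.0, 1.5, 2.0):
    sens = sum(s >= threshold for s in diseased) / len(diseased)
    spec = sum(s < threshold for s in healthy) / len(healthy)
    print(f"threshold={threshold:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
# Raising the threshold trades sensitivity away for specificity.
```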

I just remembered- biostatistics makes my head hurt more than algebra does.
 
Hah, yea, looks generally correct, although I didn’t look at all of it. Good point about using this in an intro probability class.

All these terms, agh! When you were originally talking about false positive rate, I assumed you meant the rate (which I took to mean percentage) of positive tests that are false, which made the initial discussion quite confusing... 😂
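(That ambiguity is easy to see with made-up numbers; only the 65% sensitivity comes from earlier in the thread:)

```python
# 1,000,000 people, 1% prevalence; sensitivity 65%, specificity 98%
# (all numbers assumed except the 65% sensitivity quoted earlier).
pop, prev, sens, spec = 1_000_000, 0.01, 0.65, 0.98

true_pos = pop * prev * sens                 # 6,500
false_pos = pop * (1 - prev) * (1 - spec)    # 19,800
true_neg = pop * (1 - prev) * spec

# Meaning 1: fraction of disease-free people who test positive (= 1 - specificity)
print(false_pos / (false_pos + true_neg))    # 0.02
# Meaning 2: fraction of positive tests that are false -- a very different number
print(false_pos / (false_pos + true_pos))    # ~0.75
```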
 
But it does look like you've reached the correct conclusions.
Success! It's easier to get to the right answer if you already know the right answer!

I just remembered- biostatistics makes my head hurt more than algebra does.
Prob/stats always gave me headaches. Only worse thing I tried was combinatorics. Counting is hard when I run out of fingers.
 
Nobody is born good at them, but if you can a) get the grades to go to med school, b) survive med school, and c) survive a residency + fellowship, you also have the brain power to learn Bayes' rule well enough not to respond "false positive rate is 1%, hence you have the disease with 99% probability"

And a lot of medical expenditure, a lot of suffering, and a handful of severe negative complications are generated from false positives. How many pregnant women had inductions leading to c-sections leading to deaths or debilitation of the mother or child due to doctors not understanding false positives in tests designed to find conditions that suggest an early induction of labor, as one specific example? Now extrapolate to needless biopsies, needless surgery in general, needless additional testing etc etc etc

Let me approach this in a way that educates and does not alienate.

I am all for enhanced education. My sister and my mother, both PhDs in industrial psychology, had much more formal training in statistics than I had. We had discussions such as this on a regular basis. However, if I were to set up a study, I would enlist the help of our statistician friend and not rely solely on my recollections of the two semesters of statistics I had as an undergraduate.

I have cared for women and their pregnancies for 30 years. Beyond medical school and residency, my education did not stop.

Your statement " how many pregnant women....early induction" I find offensive, uninformed and highly biased. I imagine that within certain social echo chambers that people float through daily, that this statement goes unchallenged. But where is your academic rigor? This sounds like the rant of an uneducated lay midwife not some academic imbued with profound insights into statistical methods and a thirst for the truth.

The truth is that I and the vast majority of my colleagues in obstetrics understand, with more acuity than you could even imagine, the limitations of the tests we do and the profound consequences of our actions. To have life in your hands is humbling. To suggest that we are so callous as to not recognize the suffering of our patients is elitist, dehumanizing, bigoted, insensitive and plain wrong.

I think that went well....Beers..
Hopefully that small point is a little clearer.
 
Look, I don't want to throw shade on your medical ethics :) I am sure you are a great doctor.

But when my wife's yoga group has 5 inductions because "the baby is too big" and all 5 turn out exactly at the 50th percentile (with 3 turning into an emergency C-section), and the AMA recommendation is that you do not use weight scans as a criterion for inductions outside of extreme cases, I have to question either the ethics or the medical sophistication of the physicians that these poor women saw.

Also, I read the medical literature on these topics, and the AMA recommendations on many issues do not match what the actual statistical evidence can support. And then we end up with induction and C-section rates a dozen times higher than the WHO says should be the case.
 
Since @VicVox72 mentioned Bayesian statistics

You are kind of describing the difference between sensitivity/specificity of a test and positive/negative predictive value of a test result.

I understand the utility of sensitivity and specificity in a medical context but I find they make the math confusing. Which is to say, I would stop here:

P(c19 | test positive)
= P(test positive | c19) P(c19) / P(test positive)

@rmrf has wittingly, or unwittingly, transcribed the examples in Wikipedia 😝;). So yes! The algebra checks out. While it is interesting that you can derive the positive predictive value using Bayes' rule, I think it confuses what is going on.

The utility of Bayes' rule is that you can update the 'belief' in a proposition based on new knowledge. The canonical form is:

P(A|B) = P(B|A)P(A) / P(B)

The value on the left-hand side of the equation, P(A|B), is called the posterior. It represents the 'belief' we have in the hypothesis 'A' after accounting for the evidence 'B'. The 'belief' P(A), before knowing anything about the evidence 'B', is known as the prior. The important part is to recognise that the posterior is a function of the evidence and the prior. The cool thing about Bayes' rule is that it is recursive: the posterior can act as the new prior for the next round of observations. To bring this beautifully full circle:

this episode shows that the scientific method and peer review process are, in the long run, self-correcting and effective.

You can see the analogue between the scientific method and Bayesian statistics: update your belief when new evidence is available :)


Moving back to the SARS-CoV-2 example: substitute 'A' with 'c19' and 'B' with 'test positive' in the canonical formula above and it is identical to what @rmrf wrote. Despite the more confusing syntax, the same simple principle applies. Bayes' rule allows you to calculate the probability of a person having SARS-CoV-2 given prior knowledge (background prevalence, contact) AND a positive test result (new evidence).
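(To make the recursion concrete, a minimal Python sketch of repeated updating -- the false positive rate and the prior are assumptions, and it treats successive tests as independent:)

```python
def update(prior: float, sens: float, fpr: float, positive: bool) -> float:
    """One Bayes-rule update of P(c19): sens = P(test+ | c19), fpr = P(test+ | no c19)."""
    if positive:
        return sens * prior / (sens * prior + fpr * (1 - prior))
    return (1 - sens) * prior / ((1 - sens) * prior + (1 - fpr) * (1 - prior))

# The posterior after each test becomes the prior for the next test.
belief = 0.01                    # assumed prior, e.g. background prevalence
for result in (True, True):      # two positive results in a row
    belief = update(belief, sens=0.65, fpr=0.02, positive=result)
    print(round(belief, 3))      # ~0.25, then ~0.91
```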

A small weakness of Bayesian methods is that they rely on assumptions. For example, how do you reasonably determine the prior? I don't view this criticism as a major flaw in the method. Modelling is littered with assumptions. To me, the more interesting 'weakness' is computational complexity - in particular the value P(B), which is known as the model evidence. It can only be evaluated analytically (fast) for a limited set of probability distributions. In many complex models, the value has to be approximated numerically (slow). Whether this is really a disadvantage depends on your application and whether or not you can avoid the issue with a more intelligent model construction.
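(For the curious, a toy beta-binomial example where the evidence P(B) can be computed both analytically and numerically -- the uniform prior and the data are made up:)

```python
import math

# Toy model: theta ~ Beta(1, 1) (uniform prior); observe k successes in n trials.
n, k = 20, 14

def log_beta(a: float, b: float) -> float:
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

# Analytic evidence via conjugacy: P(D) = C(n, k) * B(k + 1, n - k + 1) / B(1, 1).
analytic = math.comb(n, k) * math.exp(log_beta(k + 1, n - k + 1) - log_beta(1, 1))

# Numeric evidence: midpoint-rule integral of P(D | theta) * P(theta) d(theta).
steps = 10_000
numeric = sum(
    math.comb(n, k) * t ** k * (1 - t) ** (n - k) / steps
    for t in ((i + 0.5) / steps for i in range(steps))
)

print(analytic, numeric)  # both ~0.0476 (= 1 / (n + 1) for a uniform prior)
```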


Finally, people are naturally bad at statistics. Even when trained, they can be crappy. It is difficult and it can be counterintuitive. To illustrate the point, riddle me this:

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

It is a famous statistics puzzle: the Monty Hall Problem. Before clicking on the Wikipedia link, calculate the odds of winning the car if you switch ;)
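(If you'd rather experiment than calculate: a small Python simulation to check your answer against before peeking at Wikipedia. It just encodes the rules of the puzzle above:)

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Estimate the win probability for the stay/switch strategies."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)     # door hiding the car
        pick = random.randrange(3)    # contestant's first pick
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in (0, 1, 2) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in (0, 1, 2) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", monty_hall(switch=False))
print("switch:", monty_hall(switch=True))
```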
 
All the statistics words people use confuse the hell out of me. Bayesian blah blah 😱. But then when someone says "P(A|B) = P(B|A)P(A) / P(B)", I'm like "duh... why does that simple equation even have a name? you can prove it in like half a sentence". Guess this is what math training does to you. Makes simple things confusing and difficult things simple.
 
I never remember P(B|A)P(A) / P(B) = P(A|B). I always have to go from P(A,B) = P(A|B)P(B) = P(B|A)P(A). Just adding that little "P(A,B)" made it make a lot more sense to me.

@rmrf has wittingly, or unwittingly, transcribed the examples in Wikipedia 😝;). So yes! The algebra checks out. While it is interesting that you can derive the positive predictive value using Bayes' rule, I think it confuses what is going on.
Hooray for unintended plagiarism! "Great minds think alike but fools rarely differ." :p

If I remember my goal in the earlier post correctly, I was trying to derive the positive predictive value to see how different it was from the false positive test rate, and then bound the error of reporting the false positive test rate vs reporting the positive predictive value. But I got lazy somewhere. I blame the lack of TeX support!


A small weakness of Bayesian methods is that they rely on assumptions. For example, how do you reasonably determine the prior? I don't view this criticism as a major flaw in the method. Modelling is littered with assumptions. To me, the more interesting 'weakness' is computational complexity - in particular the value P(B), which is known as the model evidence. It can only be evaluated analytically (fast) for a limited set of probability distributions. In many complex models, the value has to be approximated numerically (slow). Whether this is really a disadvantage depends on your application and whether or not you can avoid the issue with a more intelligent model construction.
I agree that it's not a huge problem. The assumption would be there anyway, just not explicit. Maybe if you want to hide the assumptions to manipulate the model, it is a weakness ;)

I'm not clicking on that monte(monty?) hall problem link. I'll un-convince myself of the right answer and need to spend an hour re-convincing myself.
 
All the statistics words people use confuse the hell out of me.

😁 as opposed to surjections this... or homeomorphism that... not to mention all the Heegaard splittings! 🤓


Hooray for unintended plagiarism! "Great minds think alike but fools rarely differ." :p

If I remember my goal in the earlier post correctly, I was trying to derive the positive predictive value to see how different it was from the false positive test rate, and then bound the error of reporting the false positive test rate vs reporting the positive predictive value. But I got lazy somewhere. I blame the lack of TeX support!

:)

All the jargon used in diagnostic testing confuses the **** out of me... My eyes usually glaze over at that point. So it was actually really cool to see the positive predictive value be related to sensitivity and specificity. Thanks for that! Hehe... who would have thought? KKF needing TeX support! 😂


monte(monty?)

Doh! Indeed. Thanks for that catch... I'll correct my post!
 
With all the math being thrown around here, I’d like to try my luck on a question that I’ve never gotten an answer for.

Once upon a time, a math dude mathematically proved that 1+1=2. He became a national sensation & the king nerd bachelor, had girls writing love letters to him from all over, and eventually married one of them. He made math cool.

I thought 1+1=2 is just a rule, an assumption upon which all other math assumptions were built; thus, by definition, it cannot be proven.

Does anyone know the story?
Can anyone explain how to prove 1+1=2?
 
I'm no mathematician, but I think it depends on what axioms (initial assumptions) you take. I remember the popular math majors proving 1+1=2 as a party trick. I think most of them used Peano arithmetic. I don't know if it's the fastest way or not. I don't know why you can't just define the natural numbers as sizes of sets, like 1=|{a}| and 2=|{b,c}|, and addition as the union of those sets, and just count. There's probably something bad about needing to draw unique elements... I don't know. I wasn't cool enough to be at those parties. 😄
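(For flavor: in the Lean 4 proof assistant, where the naturals are defined Peano-style, the party trick is a one-liner. A minimal sketch:)

```lean
-- Lean's natural numbers are built Peano-style from zero and a successor
-- function, with addition defined by recursion. Under those definitions,
-- 1 + 1 and 2 unfold to the same term, so reflexivity closes the proof.
example : 1 + 1 = 2 := rfl
```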

What are you talking about? Everyone knows what those are.
Injections and surjections I can handle. It's the "one to one function" that I disliked. Why isn't "one to one" a bijection? Also, I hope you have better insight into the prev question.
 
I'm no mathematician, but I think it depends on what axioms (initial assumptions) you take.

That’s right. One of the early pursuits in the 20th century was to try to formalize the logical foundations of math. This involves finding a very small set of “axioms” that you declare are self evident, and then building up all the rest of math just using those axioms. It’s important to keep the list of axioms small to avoid the possibility that some of the axioms contradict each other. So for instance, you shouldn’t have as your axiom system “there’s a number called 1, there’s a number called 2, there’s a number called 3, ..., and these are all the possible ways that addition and multiplication behave”. Modern math usually takes as axiomatic a set of rules for how “sets” behave. For instance, sets can be “elements” of other sets, and there are familiar ways to combine sets (intersection/union). From this you start to understand sets intuitively as groups of objects, although formally they’re just unknowns that behave according to the axioms. You then define what numbers are in terms of sets, and you define addition just in terms of set theoretic operations. So, you’ll have to define what 1 is, and what 2 is, and what arithmetic is, all in terms of operations on sets, and when you do that it’ll follow that you need to prove things like 1+1=2. Depending on how you set things up, this statement could be obvious, or not.

It’s worth mentioning that it’s only a very small subset of mathematicians that think about this stuff. Most of us do more complicated math that just assumes all this stuff works out. But it’s important that it has been worked out.

An early attempt to formalize the logical foundation of arithmetic (not in terms of set theory, I think(?), but some other way) was this text. I think that’s probably what @ma_sha1 was referring to.
 
Heh, just listening to a podcast, with the main interview described thus...

Duke talks with Moon Duchin, a mathematician and professor at Tufts University, about her research into understanding how voting districts work. Through redistricting analysis at the Metric Geometry and Gerrymandering Group, an organization Duchin co-founded, she describes how in this democracy, fairness isn’t always as easy to find when you’re considering where people live and vote.
Made me think of this conversation. I just love the name of her group.
 
That’s right. One of the early pursuits in the 20th century was to try to formalize the logical foundations of math. This involves finding a very small set of “axioms” that you declare are self evident, and then building up all the rest of math just using those axioms. It’s important to keep the list of axioms small to avoid the possibility that some of the axioms contradict each other. So for instance, you shouldn’t have as your axiom system “there’s a number called 1, there’s a number called 2, there’s a number called 3, ..., and these are all the possible ways that addition and multiplication behave”. Modern math usually takes as axiomatic a set of rules for how “sets” behave. For instance, sets can be “elements” of other sets, and there are familiar ways to combine sets (intersection/union). From this you start to understand sets intuitively as groups of objects, although formally they’re just unknowns that behave according to the axioms. You then define what numbers are in terms of sets, and you define addition just in terms of set theoretic operations. So, you’ll have to define what 1 is, and what 2 is, and what arithmetic is, all in terms of operations on sets, and when you do that it’ll follow that you need to prove things like 1+1=2. Depending on how you set things up, this statement could be obvious, or not.

It’s worth mentioning that it’s only a very small subset of mathematicians that think about this stuff. Most of us do more complicated math that just assumes all this stuff works out. But it’s important that it has been worked out.

An early attempt to formalize the logical foundation of arithmetic (not in terms of set theory, I think(?), but some other way) was this text. I think that’s probably what @ma_sha1 was referring to.

Reminds me of a subtle joke in Futurama. The episode where the Brains plan to collect all information in the universe and store it in an Infosphere. The video ought to load at the timestamp where the Infosphere is finishing its knowledge acquisition:

 

That’s right. One of the early pursuits in the 20th century was to try to formalize the logical foundations of math. This involves finding a very small set of “axioms” that you declare are self evident, and then building up all the rest of math just using those axioms. It’s important to keep the list of axioms small to avoid the possibility that some of the axioms contradict each other. So for instance, you shouldn’t have as your axiom system “there’s a number called 1, there’s a number called 2, there’s a number called 3, ..., and these are all the possible ways that addition and multiplication behave”. Modern math usually takes as axiomatic a set of rules for how “sets” behave. For instance, sets can be “elements” of other sets, and there are familiar ways to combine sets (intersection/union). From this you start to understand sets intuitively as groups of objects, although formally they’re just unknowns that behave according to the axioms. You then define what numbers are in terms of sets, and you define addition just in terms of set theoretic operations. So, you’ll have to define what 1 is, and what 2 is, and what arithmetic is, all in terms of operations on sets, and when you do that it’ll follow that you need to prove things like 1+1=2. Depending on how you set things up, this statement could be obvious, or not.

It’s worth mentioning that it’s only a very small subset of mathematicians that think about this stuff. Most of us do more complicated math that just assumes all this stuff works out. But it’s important that it has been worked out.

An early attempt to formalize the logical foundation of arithmetic (not in terms of set theory, I think(?), but some other way) was this text. I think that’s probably what @ma_sha1 was referring to.

Thanks Ian, very good explanations!

When you have to “define what 1 is, and what 2 is”, won't you have to define 2 by duplicating 1? How else could you define 2?

Since you need to put 2 into the axioms as part of its definition, its relationship to 1 is already defined by the rules; it can't be “proven” again by mathematical deduction.

BTW, I was referring to this guy:
https://en.m.wikipedia.org/wiki/Chen_Jingrun
He was all over the Chinese news for proving 1+1=2, & the media taught kids to be nerdy like him & girls will be all over you 😂, totally the opposite of the US.

Now looks like he wasn’t even the one who did it? My whole life has been a lie.
 