Metamath as a medical tool


José Manuel Rodríguez Caballero

Mar 29, 2020, 8:00:28 PM
to Metamath
Hello,
  Disclaimer: this post is a mental exercise; I will not apply this theory in real-life situations.
  Recently I was thinking about how to use proof assistants to substitute, to some extent, for the role of medical doctors. My motivation is obvious: medical doctors are among the main victims of COVID-19, while the pandemic is growing approximately exponentially in many countries (I know that will change... someday). So we need to think about how to survive in a world without medical doctors (this mental exercise may become reality for some people, maybe most people on the planet). The only solution I see is a proof assistant library about medicine: people without access to a medical doctor, but with a computer, would use the library to self-medicate, and the proof assistant would prevent them from making mistakes. Indeed, a file containing a medical proof checked by the proof assistant should be sufficient to confer the right to obtain medication, with the same standing as a medical prescription (again, this is a thought experiment; I am not suggesting we actually do it).
  How could we convince a government that a medical proof (accepted by a proof assistant) has as much value as (or even more value than) a medical prescription? My proposal is to apply a Turing-style test: show that the library's recommendations are statistically indistinguishable from those of a medical doctor.
  My questions are: 

a) How to express medical information in a proof assistant?

b) Could Metamath be useful for such a task?

Kind regards,
José M.

Mario Carneiro

Mar 29, 2020, 8:42:26 PM
to metamath
I don't see how this would be workable. Medical information is not logical deduction, and there is very little logical deduction involved. It is all about statistics, and a proof assistant is not the best way to organize that data. I think a more fruitful use of proof assistants in this space is to have a provably correct statistical analysis tool, where "correct" means that the analysis tool reports error bounds that match the model (or are overestimates, etc.). None of the real-world interactions can be axiomatized in any reasonable way: the correctness of the model, the incoming data, the correct usage of the tool.

How do you expect proofs will help here?


David A. Wheeler

Mar 29, 2020, 10:07:04 PM
to metamath
> On Sun, Mar 29, 2020 at 5:00 PM José Manuel Rodríguez Caballero
> <josep...@gmail.com> wrote: ...
> > Recently I was thinking about how to use proof assistants in order to
> > substitute to some extent the role of medical doctors.


On Sun, 29 Mar 2020 17:42:14 -0700, Mario Carneiro <di....@gmail.com> wrote:
> I don't see how this would be workable. Medical information is not logical
> deduction, and there is very little logical deduction involved. It is all
> about statistics, and a proof assistant is not the best way to organize
> that data.

I agree. Indeed, IBM's experience with Watson may be instructive.
Watson managed to win at "Jeopardy", and that win was rightfully hailed
as a stellar achievement in AI. IBM has since been working to apply
Watson to medical work, and that hasn't had the same level of success.
That doesn't mean it's *impossible*, but it turns out to be *much*
more difficult for machines to reasonably do things like medical diagnosis
in a serious way.

You might use proof tools to prove that a program performs a task.
But these are not the kind of tasks that we know how to mathematically
model (yet). In addition, program proofs take time, and taking extra time
is probably not what you want to do if your focus is COVID-19.

It's perfectly reasonable to say, "I have problem X; could tool Y help?".
But I don't see how to match proof tools to this problem at this time.
I'd be delighted to be proven wrong :-).

--- David A. Wheeler

José Manuel Rodríguez Caballero

Mar 29, 2020, 11:30:16 PM
to meta...@googlegroups.com
David A. Wheeler wrote:
But I don't see how to match proof tools to this problem at this time.
I'd be delighted to be proven wrong :-).  

One way may be to develop a really huge set of propositions, which are "medically true" (maybe using a combination of modal logic, deontic logic, and many other logical systems at the same time). For example,
(1) If the patient has a bacterial infection and the patient is not allergic to antibiotics, then the patient should take antibiotics.

Using (1), we cannot prescribe antibiotics as a treatment for a person who is infected by a virus and has no symptoms of a bacterial infection. There are many people out there who do not know this and think that COVID-19 can be cured using penicillin. Using a proof assistant with a set of "medically true" propositions, these people cannot derive the false statement:

(2) If the patient has COVID-19 then the patient should take antibiotics.

Of course, there are cases where COVID-19 is accompanied by a bacterial infection, but that is not the same as (2). One of the goals of a proof assistant is to keep us from writing nonsense, and for many people, including myself, it is easy to write nonsense about medicine, because it is beyond our field of expertise. So a proof assistant with a really huge library of medically true statements may be the only way to guide us in the absence of a medical doctor, as in the current situation (as a Millennial, I think I am young enough to survive COVID-19 if I catch it in the near future, but it is important to think about the boomers out there).

Another medically true proposition may be:

(3) If the patient is secreting green fluid from his/her nose, then the patient has a bacterial infection.

Using (1) and (3) (setting aside the allergy condition), the patient can deduce by himself/herself:

(4) If the patient is secreting green fluid from his/her nose, then the patient should take antibiotics.
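
To make this concrete, here is a minimal sketch of how the derivation of (4) might look as a Metamath database. Everything in it is invented for illustration: (1) is simplified by dropping the allergy clause (a conjunction would need more propositional machinery), and the syllogism rule is taken as a primitive axiom rather than derived from ax-1, ax-2, and ax-mp as in set.mm:

  $c ( ) -> wff |- green bact anti $.
  $v ph ps ch $.
  wph $f wff ph $.
  wps $f wff ps $.
  wch $f wff ch $.
  wi $a wff ( ph -> ps ) $.

  ${
    $( Syllogism, taken as a primitive rule for this sketch. $)
    syl.1 $e |- ( ph -> ps ) $.
    syl.2 $e |- ( ps -> ch ) $.
    syl $a |- ( ph -> ch ) $.
  $}

  wgreen $a wff green $.  $( "green nasal discharge" $)
  wbact $a wff bact $.    $( "bacterial infection" $)
  wanti $a wff anti $.    $( "antibiotics are indicated" $)

  $( Medical axiom for (3). $)
  medgb $a |- ( green -> bact ) $.
  $( Medical axiom for (1), allergy clause dropped. $)
  medba $a |- ( bact -> anti ) $.

  $( (4) is then a theorem checked by the verifier, not an assertion. $)
  medga $p |- ( green -> anti ) $=
    wgreen wbact wanti medgb medba syl $.

A verifier would accept medga only because it actually follows from the two medical axioms; a false statement like (2) would simply have no proof.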

I am not claiming that this approach is scalable. This is just brainstorming.

Kind regards,
José M.

Mario Carneiro

Mar 30, 2020, 12:11:15 AM
to metamath
This approach sounds like "expert systems" (https://en.wikipedia.org/wiki/Expert_system) from 1980s AI. My understanding is that it's more or less discredited these days, and machine learning systems have little to no resemblance to expert systems. That is not to say that ML is necessarily better, in particular when it comes to deliberate exploitation, but the data is on its side: ML approaches are far more effective on real-world problems with much less human intervention (which is important because it means they scale better). An expert system requires you to collect the knowledge of a lot of experts, which is time-consuming, and the result is still exploitable (after all, the legal system is essentially one big expert system, and it's full of loopholes).

So for my part, although I find ML philosophically unsatisfactory when it comes to explainability, for large-scale numbers games like diagnosing disease I would rather put my life in the hands of a well-trained ML system than a simplistic model with some logical deduction abilities.


David A. Wheeler

Mar 30, 2020, 12:31:46 AM
to Mario Carneiro, metamath
On March 30, 2020 12:11:03 AM EDT, Mario Carneiro <di....@gmail.com> wrote:
> This approach sounds like "expert systems"
> (https://en.wikipedia.org/wiki/Expert_system) from 1980s AI. My understanding
> is that it's more or less discredited these days

I think that is too strong a claim. Expert systems still have their uses. However, they require you to be able to define very clear and unambiguous rules. When humans can provide those rules, and the ruleset is not too large, expert systems can work very well.

So it is not that they are discredited. The problem is that they turn out to be a good but relatively niche solution for a relatively narrow set of problems. When they match the problem space, they are great, but most problems don't map well to them.

> and machine learning systems have little to no resemblance to expert systems.

That is true. A serious advantage of machine learning systems is that they can handle probabilistic results in a cleaner way than some alternatives, and as already noted, they can learn from data sets.

Most machine learning approaches are terrible at explainability. They can give you a prediction, but not a human understandable reason for why they make that prediction. There are a few approaches that provide explainable results, such as decision trees, but they have their own problems. I am skeptical that many would want to apply a machine learning system to medical diagnosis without also being able to provide an explanation.

And that says nothing of the fundamental problem: so far they have not done very well at broad medical diagnosis. There have been some significant successes in very narrow applications, such as looking for tumors in images, so it's not fair to say that machine learning is useless for any medical diagnosis. It's just that current ML technology is still in its infancy when it comes to its application in medicine. Medicine is hard, and while machine learning can sometimes do some impressive things, we are still learning what it is good and not so good at.

In contrast, I think the OpenAI work on generating proofs is a far more amenable task for today's machine learning systems. That's not to say it is easy (it is not), but I suspect it is a far more attainable goal for machine learning as it currently exists. I'm very excited to see what OpenAI publishes.



--- David A. Wheeler

José Manuel Rodríguez Caballero

Mar 30, 2020, 8:26:51 AM
to meta...@googlegroups.com
Mario Carneiro wrote:
This approach sounds like "expert systems"

Thanks for the concept of "expert systems" (my background is pure mathematics, specifically number theory; I am new to the field of computer science, so I am missing some basic concepts). I was thinking about a sort of expert system expressed as a kind of grammar, which I will call a "medical grammar", in order to avoid writing nonsense, e.g., using antibiotics to cure a virus. Nevertheless, unlike genuine expert systems, the goal of this grammar is not to make predictions, but just to avoid writing nonsense; a toy sketch is given below.
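
As a purely hypothetical illustration (all names invented), Metamath's typecode mechanism could enforce such a grammar: if the grammar classifies diseases as bacterial or viral, a claim like "penicillin treats covid" is not even well-formed, so it can never appear in a proof:

  $c wff bactdis virdis abx treats strep covid penicillin $.

  stc $a bactdis strep $.    $( strep is a bacterial disease $)
  cvd $a virdis covid $.     $( covid is a viral disease $)
  pen $a abx penicillin $.   $( penicillin is an antibiotic $)

  $v d a $.
  vd $f bactdis d $.
  va $f abx a $.
  wtr $a wff a treats d $.   $( well-formed only for a bacterial d $)

  $( "penicillin treats strep" parses as a wff: $)
  ex1 $p wff penicillin treats strep $= stc pen wtr $.

  $( "penicillin treats covid" cannot even be written: covid has
     typecode virdis, and no rule builds a wff from abx and virdis. $)

The nonsense is rejected at the grammar level, before any question of medical truth arises.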

David A. Wheeler wrote:
Most machine learning approaches are terrible at explainability. They can give you a prediction, but not a human understandable reason for why they make that prediction. There are a few approaches that provide explainable results, such as decision trees, but they have their own problems. I am skeptical that many would want to apply a machine learning system to medical diagnosis without also being able to provide an explanation.

What about imposing the above-mentioned medical grammar on the artificial intelligence system, so that it can only do things allowed by this grammar, i.e., the right things? This is where I see the role of proof assistants: to avoid making mistakes, not to deduce new results. So proof assistants + artificial intelligence may be an interesting combination to consider during the COVID-19 emergency. I think that the way of thinking during emergency situations (e.g., war, pandemic, asteroid) should not be the same as during normal periods, in the sense that we need more multidisciplinary thinking than usual.

Kind regards,
José M.