Production or Expert

It seems that the assumptions we make in our attempts to model decision-making may be necessary but problematic. Take conditional independence, for example: the idea that many properties and entities simply do not bear on the decision at hand. It is important not only for computational tractability but for model modularity as well, and without it, none of the current techniques have much hope. Yet the assumption is neither apparent in reality nor particularly representative of the way we humans make decisions.
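To make the tractability point concrete, here is a minimal sketch of the standard parameter-counting argument (the numbers and function names are mine, purely illustrative): an unrestricted joint distribution over n binary variables needs exponentially many parameters, while conditional independence, as encoded in a sparse Bayesian network, collapses that to something manageable.

```python
# Illustrative parameter counts (not from the post): why conditional
# independence makes probabilistic models tractable and modular.

def full_joint_params(n: int) -> int:
    """Parameters for an unrestricted joint over n binary variables."""
    return 2 ** n - 1

def bayes_net_params(n: int, k: int) -> int:
    """Rough upper bound when each node has at most k binary parents."""
    return n * (2 ** k)

for n in (10, 20, 30):
    print(n, full_joint_params(n), bayes_net_params(n, k=3))
# At n = 30: over a billion parameters for the full joint,
# versus 240 for the network that assumes sparse dependencies.
```

The modularity benefit falls out of the same structure: each node's conditional table can be estimated and revised locally, without touching the rest of the model.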

Chaos theory suggests that even tiny changes in the relationships between antecedent variables can lead to vastly different consequences. It would seem, then, that even small differences between our models and reality could lead to irreconcilable errors in decision output. On the other hand, decision networks don't quite function like human cognition either. We seem to have concluded that heuristics cause biases and that biases are bad. I want to disagree. Heuristic-driven biases have persisted in human cognition because they are computationally cheap and, for the most part, quite successful. Why do we have availability bias and risk aversion? Because it makes sense to. In the mid-Atlantic states, for example, when you see joint pain, especially in the summer, it is good to think Lyme disease; the likelihood really is high. Even if a physician overestimates the probability, it is better to overestimate the likelihood of a dangerous disease than to underestimate it, fail to treat, and deal with the negative consequences.
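The Lyme example is really an expected-cost argument, and a toy calculation shows why the overestimation bias is cheap. All the numbers below are hypothetical stand-ins, chosen only to make the asymmetry visible:

```python
# Toy expected-cost comparison (all figures hypothetical) showing why
# overestimating a dangerous disease is usually the cheaper error.

p_lyme = 0.10        # assumed probability of Lyme given summer joint pain
cost_treat = 100     # cost of a course of (possibly unneeded) antibiotics
cost_miss = 10_000   # cost of untreated Lyme disease progressing

ec_treat = cost_treat            # always treat: pay treatment cost regardless
ec_ignore = p_lyme * cost_miss   # never treat: pay the miss cost with prob. p

print(f"treat: {ec_treat}, ignore: {ec_ignore:.0f}")
# Treating wins whenever p_lyme > cost_treat / cost_miss (here 0.01),
# so even a heavily inflated estimate of p_lyme rarely changes the
# right action; the bias is safe over a wide range of probabilities.
```

Under these (assumed) costs, the heuristic is not a bug but a reasonable policy: the decision threshold is so low that precision in the probability estimate buys almost nothing.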

Getting back to decision-making and expert systems: with the simplifications we make, whether by minimizing the scope of a model as much as possible or by collapsing the relationships between nodes into discrete types with no room for interpretation, we are perhaps sacrificing too much. Yes, I do think there needs to be a limit to the size and scope of a problem, and it would make no sense to attempt to replicate reality. Not only would such a thing be prohibitively expensive, it also wouldn't necessarily give you the right results; after all, reality isn't deterministic! So perhaps the harder question is how to correctly define the domain of all possible future states, so that a system could assign a probability to each of them.

Problem solving also isn't inherently rational: there is neither a single correct solution nor a single correct way to obtain one. I still find this to be a foundational issue. We are applying a rational method to an often irrational task. We ask: what is the right thing to do at this moment in time, with all of the information I currently have? And maybe that just isn't the right question. Maybe there is something quite unique in human cognition, this way of collapsing the past, present, and future into an understandable frame. Perhaps we can do something similar with machines: a way to code decision-making based not only on facts, logic, and known probabilities, but on a small degree of chance and intuition as well.
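One crude way to sketch that "small degree of chance" (this is my illustration, not a method from the course): instead of always taking the action with the highest expected utility, sample actions from a softmax over utilities. The temperature parameter, my name for it here, controls how far choices stray from pure maximization.

```python
# A hedged sketch of stochastic action selection: sample from a softmax
# over expected utilities rather than always taking the argmax.

import math
import random

def soft_choice(utilities: dict[str, float], temperature: float = 1.0) -> str:
    """Sample an action with probability proportional to exp(u / T)."""
    weights = {a: math.exp(u / temperature) for a, u in utilities.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

# Temperature near zero approaches strict expected-utility maximization;
# larger values occasionally admit lower-utility actions, a very rough
# stand-in for the chance and intuition described above.
print(soft_choice({"treat": 2.0, "wait": 1.5, "refer": 0.5}, temperature=0.5))
```

This obviously isn't intuition in any deep sense, but it does show how little code it takes to move a decision procedure off the strictly rational path while keeping it anchored to the known probabilities.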

Note: Taken from my BIME 550 discussion posting
