I took a decision theory class as part of an Industrial Engineering degree; it was interesting learning quantitative tools to apply to decision making. Those tools don’t map directly onto how we ordinarily make decisions; they relate more to analyzing engineering problems and solutions, but there was some overlap.
It was even more interesting taking logic classes, related to pursuing two separate degrees in philosophy (a long story), and learning to what extent we actually use binary logic as part of our reasoning. Binary logic is also how computers “think,” to some extent; really it’s the underlying basis for hardware operations, which are then designed to run complex forms of programming code. Binary logic is also what is taught in logic classes, in a related but different form, and in that context it is said to tie back to reasoning.
It probably works better to call what computer design is based on “Boolean algebra,” but it’s essentially the same thing, just expressed differently. I took a digital logic class once too, along with a practical lab; fascinating stuff. The logic gates and circuits covered there aren’t completely separate from the argument and derivation forms used in formal logic classes, but they aren’t all that similar either. The notation for both looks like some sort of advanced math script, like what is left on the chalkboard after a calculus class.
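To make the Boolean algebra connection concrete, here is a minimal sketch (my own illustration, not from any course material) of how logic gates reduce to boolean functions. A half adder, a basic building block of binary addition in hardware, is just an XOR gate feeding the sum bit and an AND gate feeding the carry bit:

```python
# A minimal sketch of logic gates as boolean functions: a half adder,
# a building block of binary addition, built from XOR and AND gates.

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    total = a != b      # XOR gate: the sum bit
    carry = a and b     # AND gate: the carry bit
    return total, carry

# Truth table: all four input combinations
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} = sum {int(s)}, carry {int(c)}")
```

Chaining enough of these gates together gives you arithmetic, which is the sense in which Boolean algebra underlies hardware while still looking nothing like everyday reasoning.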
Back to the point: we don’t use ordinary reason in a form similar to either. It’s true that we express truth values as true and false, but a lot of the commonality ends there. Ordinary language can’t be converted into assertions of binary logic, even in discussions or presentations of ideas where it would seem most likely to apply. Someone who had taken formal logic classes might be able to speak in structured phrases, and related derivations, that match that type of form, but ordinary language and reasoning are really quite different. I’ll try to explain why, and then work back to how decision theory tries to land in the middle, applying best to engineering problems rather than everyday life decisions.
Things really do seem to be true or false (facts, propositions, assertions), until you examine them more deeply. Then it turns out that truth values cover a wider range: there is a lot of indeterminate scope, and things can be unknown, or true only in a limited sense, or related to preferences rather than truth. Assumptions frame the context of any line of thought, and that context dependency means most ordinary assertions are only true or false within a limited framework of ideas.
This isn’t only a reference to boundary condition problems, but those are an interesting special case. Is it day or night out? Is it raining or not? Usually that’s clear enough, but there are boundary cases. A lot of ideas and contexts just don’t map onto truth values at all. What should I eat for dinner? Is the person I’m in a relationship with a good partner? What is the best value car to buy, given a set of performance attributes within a certain price range? Which stock should I invest in? The last example is quantifiable, at least; with formal expectations and modeling set up it could be answered, it would just involve guesswork and probability calculation. That’s the kind of thing we worked on in that decision making class, setting up and using models.
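As a rough illustration of what that kind of model looks like (the options, probabilities, and returns here are invented, not from the class), a stock choice framed this way reduces to comparing probability-weighted outcomes:

```python
# A minimal sketch of probability-weighted decision modeling, with made-up
# numbers: each option has guessed-at outcome scenarios, and we compare
# expected returns across options.

options = {
    "stock_a": [(0.6, 0.10), (0.3, 0.02), (0.1, -0.15)],  # (probability, return)
    "stock_b": [(0.5, 0.06), (0.4, 0.04), (0.1, -0.05)],
}

for name, scenarios in options.items():
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # probabilities sum to 1
    expected = sum(p * r for p, r in scenarios)
    print(f"{name}: expected return {expected:.1%}")
```

The calculation itself is trivial; the guesswork is all in assigning the scenario probabilities, which is exactly the point about these models.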
I’ll give an example. If you buy a car you might have expectations and preferences related to a number of different factors (cost, style, fuel efficiency, maintenance expectations, etc.). You could weight those by importance, then score different models against each factor, multiplying scores by weights to get a total for each option. Of course, without writing out any formulas you would be doing the same sort of thing in thinking it all through. If a problem involves multiple variables that require estimating probabilities this kind of approach can help, but otherwise it’s too much extra process for too little benefit over an informal evaluation. In complex projects you tend to need to break options into sets, and then apply financial analysis to the parts that work that way (lease versus buy, related to maintenance expectations, considering functional alternatives, etc.).
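In code, that weighted-attribute scoring amounts to very little; this sketch uses invented weights and scores (on a 0 to 10 scale, with weights summing to 1), just to show the shape of the calculation:

```python
# A minimal sketch of a weighted-attribute scoring model for the car example.
# Weights and scores are invented for illustration; the weights sum to 1 and
# each attribute is scored 0-10.

weights = {"cost": 0.4, "style": 0.2, "fuel_efficiency": 0.25, "maintenance": 0.15}

cars = {
    "model_x": {"cost": 6, "style": 8, "fuel_efficiency": 7, "maintenance": 5},
    "model_y": {"cost": 8, "style": 5, "fuel_efficiency": 6, "maintenance": 8},
}

for name, scores in cars.items():
    total = sum(weights[attr] * scores[attr] for attr in weights)
    print(f"{name}: weighted score {total:.2f}")
```

Again, the formula is the easy part; deciding that cost matters twice as much as style is the real judgment, and it happens informally whether or not you write it down.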
Back to the formal logic versus language idea: it would be natural to have a vague, general impression that when we argue we are using true and false propositions and something like formal logic (derivations, relationships between ideas) to arrive at a conclusion. Not really. Fallacies do point out logical relationships that don’t work (within the scope of rhetoric, a related but separate field), and there are equivalent connections between propositions that do hold up, but it leads back to the theme of context. If there were any underlying, comprehensive, “flat” basis for these ideas, with more extensive connections between related propositions than usually apply, then within that context formal logic, like that binary-valued Boolean algebra, would describe how we think and discuss. Our use of concepts just isn’t like that. The contexts we frame ideas within are complicated, and much of the range doesn’t reduce to true or false states or numerical values.
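For contrast, here is a sketch (mine, for illustration) of what formal binary logic actually does with arguments: exhaustively checking truth values shows that modus ponens (if P then Q; P; therefore Q) always holds, while the fallacy of affirming the consequent (if P then Q; Q; therefore P) doesn’t:

```python
from itertools import product

# A sketch of brute-force validity checking in propositional logic: an
# argument form is valid if the conclusion is true in every case where
# all of the premises are true.

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

def is_valid(premises, conclusion) -> bool:
    for p, q in product((False, True), repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: (P -> Q), P, therefore Q -- valid
print(is_valid([implies, lambda p, q: p], lambda p, q: q))  # True

# Affirming the consequent: (P -> Q), Q, therefore P -- a fallacy
print(is_valid([implies, lambda p, q: q], lambda p, q: p))  # False
```

Ordinary discussion almost never has premises crisp enough to feed into something like this, which is the gap I’m pointing at.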
Lately I’ve been considering the subject of confirmation bias quite a bit. That’s familiar enough: we tend to accept evidence and support for beliefs we already hold, and reject input that doesn’t match our current understanding. This framing implies that we are actually using rational, reason-based processing to evaluate ideas, just with a filtering process for inputs that limits the potential for changing current views. The more I think about it, the more I suspect we aren’t even as rational as that implies.
Biases occur well before any reasoning at all, and well outside its scope, determining how we shape our worldview and arrive at specific conclusions. In most cases there is little evaluation process overlapping with those inputs at all, whether the inputs are flawed or well-grounded. We don’t buy a car based on clear, limited preferences, technical considerations, and other individual expectations; we just like an image, and start within a narrow range. Things get even murkier related to politics, gender issues, or culture and class perspective: all the subjects currently cluttering up Facebook feeds with jarring, one-sided content. Decisions like selecting investments really are a special case. Even there, a subject specialist assigned to provide input would probably narrow the context for us quite a bit, limiting our own use of reason and decision analysis to a very narrow scope.
It’s not really a problem being irrational (essentially). Given the connotation of that term we might be better described as arational, but I typically support ordinary use of language and broad interpretation of terms, so it isn’t necessary to go there. It does help to notice why we are making decisions and holding specific views, and to consider the related preconceptions, in order to “push down” the level of analysis to where it is really occurring anyway.
If it’s any comfort, formal logic classes don’t help; they teach how structured, formal reasoning would work, if we were using it. Ordinary introspection is a much better tool. It’s interesting to try to break down ordinary perspective and reasoning by delving into subjects like psychology and Buddhism (as practical psychology, versus religion), but those approaches involve taking very long paths.
I just wrote a bit more on confirmation bias that might be of interest, mainly related to the subject of tea preference (right, I’m all over the map), adding depth to the part covered here:
On roasting sheng pu'er, and tea culture confirmation bias