Computational theories of the mind seem to be ideally suited to explain rationality. But how can computations be subverted by meaning, emotion and love?
Minds are computational systems, realized by the causal functionality of their computational substrate (such as nervous systems). Their primary purpose is the discovery and exploitation of structure in an entropic environment, but they are capable of something much more sinister, too: they give rise to meaning.

Minds are the solution to a control problem: in our case, this problem amounts to navigating a social primate through a complex open environment in an attempt to stave off entropy long enough to serve evolutionary imperatives. Minds are capable of second-order control: they create representational structures that serve as a model of their environment. And minds are capable of rationality: they can learn how to build models that are entirely independent of their subjective benefit for the individual.

Because we are the product of an evolutionary process, our minds are constrained by powerful safeguards against becoming fully rational in the way we construct these models: our motivational system can not only support our thinking and decision making to optimize individual rewards, but also censor and distort our understanding to make us conform to social and evolutionary rewards. This opens a security hole for mind viruses: state-building systems of beliefs that manage to copy themselves across populations and create the causal preconditions to serve neither individuals nor societies, but primarily themselves.

I will introduce a computational model of belief attractors that can help us explain how our minds can become colonized and governed by irrational beliefs that co-evolve with social institutions. This talk is part of a series of insights on how to use the epistemology of Artificial Intelligence to understand the nature of our minds.
Speakers: Joscha