On the question of free will and pre-determination,
it's easy to look at decisions we make and attribute many of them to semi-automatic functions controlled by a wide range of biases and other influences. However, such decisions are not the only ones we make. Yes, much of what we do is handled by parts of the brain that are simply reacting to stimuli and making virtually automatic choices without rational consideration.
However, we also face many decisions that cannot be made by our brain's primitive limbic system. Here's a very simple example. You are invited to a business lunch, which takes you to an Ethiopian restaurant. There is one of those here in St. Paul, MN, and I was actually invited to a business lunch there. I had never been to a restaurant serving that cuisine, nor did I know in advance where we would be going. So, there I was, confronted with a menu that was completely unfamiliar to me.
I scanned the menu and noted the basic ingredients of the various options, but I was unfamiliar with how those dishes were prepared or seasoned. I could have asked the waiter or the person who invited me, but I decided to choose on my own. So I did. Not at random, of course, but by comparing the main ingredients of the dishes. I made my choices. I'm a fairly adventurous eater, and I enjoyed my lunch. My brain had no prior familiarity with the options; I did not even recognize the names of the dishes on the menu. I ordered after thinking about the ingredients and took my chances on the preparation and seasonings.
A small decision? Yes. But one made without reference to previous knowledge.
This example illustrates why artificial intelligence systems do not function like our brains. I worked for a couple of years developing a chat bot, inspired by the Turing Test. At the time, the technology was limited, so I confined my experiment to a specific situation: the chat bot was designed to participate in discussions in a CompuServe forum where operating systems were discussed. Using a database of language related to that subject, the chat bot accepted existing posts in that forum as input, and then used a complex set of algorithms to reply to those posts.
As an experiment, I revealed to some forum members that the poster was a chat bot under development, so as not to be seen as trolling the forum. That also encouraged challenges to its performance. At the time, the battle between Microsoft Windows and IBM's OS/2 was raging. My bot was a Windows advocate.
Over time, the database containing the language elements grew at an almost exponential rate. That growth was done manually, by me, as the programmer or "creator" of the bot. The algorithms also changed as the experiment continued. Eventually, the bot got good enough to fool some forum members who did not know it was a chat bot experiment. However, it never developed to the point of performing well consistently. It had no ability to respond to new challenges it had not been programmed to deal with. It could carry on in a thread for a long time, doing just fine, but it always failed to react appropriately eventually, and it had no way to correct the error. It did not learn on its own, and it did not think for itself. It was a collection of algorithms and data. It could decide nothing. It had zero free will.
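For readers who have never built one: the essence of that kind of bot can be shown in a few lines. This is only a toy sketch in modern Python (my actual bot was far larger and ran on CompuServe-era technology, and every trigger and reply below is invented for illustration), but it demonstrates the structural limit described above: a fixed table of patterns, and silence in the face of anything not in the table.

```python
# Toy pattern-matching "chat bot": a fixed table of trigger words and
# canned replies. All triggers and replies are illustrative, not from
# the original project.

RULES = {
    "crash": "Windows is perfectly stable if you configure it properly.",
    "memory": "OS/2 may manage memory differently, but Windows has the apps.",
    "driver": "Hardware vendors write Windows drivers first; OS/2 users wait.",
}

def reply(post: str):
    """Return a canned reply if any trigger word appears in the post.

    Returns None for novel input: the bot has no fallback, no way to
    learn a new rule, and no way to notice that it failed.
    """
    text = post.lower()
    for trigger, response in RULES.items():
        if trigger in text:
            return response
    return None

# A post containing a known trigger gets a sensible-looking reply...
print(reply("My system keeps running out of memory."))
# ...but anything outside the rule table leaves the bot with nothing to say.
print(reply("What do you think of preemptive multitasking?"))
```

Growing the real bot meant adding rows to the equivalent of that `RULES` table by hand; nothing in the program could add a row itself, which is exactly why it always hit a wall eventually.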
Further, there was no way to give it that free will. It was a computer program, not a living, thinking being. After about a year, I abandoned the project. By then, if you didn't know it was a bot, it would do a pretty good job of fooling most people. But, it was just a bot. A single question or statement that didn't fit the long list of parameters it could react to would throw it off and leave it unable to respond in any sensible way. That would have always been the case, because it had no way to self-modify or learn. It had no free will.