Observing Thinking


Tuesday, April 9, 2024

The Internet, Fake News, and Confirmation Bias

 



“The spread of fake news is one of the biggest disadvantages of the internet. Since there are limited to no controls in place on a communication forum, website, or social media platform, fake news often goes viral - risking the well-being of entire communities.” (unstop.com)


The quote, “A lie can travel halfway around the world while the truth is still putting on its shoes,” is often attributed to Mark Twain, but its origin is debated among scholars. Others credited as the author include Jonathan Swift and Winston Churchill.

The idea behind the quote is that lies can spread quickly and easily, while the truth can be slow to catch up. This is partly because people are more likely to believe something that is sensational or shocking, even if it is not true. Lies and other fake news can also be spread more easily through social media and other online platforms on (you guessed it) the Internet. And it is precisely fake news that feeds a more primal defect in the human psyche: Confirmation Bias. If you happen to be a human being, you are subject to confirmation bias in your choice of politics, or anything else.


Unfortunately, Confirmation Bias acts like an addiction, and as with any addiction, the first step is awareness --- to acknowledge its existence in much the same manner as Alcoholics Anonymous encourages its members: “My name is _____ and I am an Alcoholic.” My name is Stewart Denenberg and I have Confirmation Bias.


Before proceeding, it would be prudent to be sure that we’re on the same page regarding the definition of “confirmation bias”. I decided to use an AI to do the scut work for me; below is my query, followed by the Microsoft Copilot response:


“What is confirmation bias?”

I asked Copilot the same question once again, “What is confirmation bias?” and got a more detailed response:

  1. Definition and Nature:

  2. Historical Roots:

  3. Examples and Practical Contexts:

  4. Utility or Disutility:

    • Is confirmation bias helpful or harmful? It’s a complex question.

    • On one hand, it can reinforce our convictions and provide cognitive shortcuts.

In summary, confirmation bias is a ubiquitous cognitive tendency that shapes how we perceive the world. Being aware of it can help us approach information more critically and open-mindedly.”

Well, then, I mused, why does Copilot give me different answers to the same question? Who better to ask than Copilot itself? Its response was:

“Copilot, like any other AI language model, generates responses based on a combination of factors. Here are some reasons why you might observe different answers:

  1. Context and Prompt Variability: Copilot’s responses depend on the context and the specific wording of the prompt. Even slight variations in how a question is phrased can lead to different answers. Additionally, Copilot doesn’t have memory of previous interactions, so it treats each prompt independently.

  2. Randomness and Sampling: Copilot uses a probabilistic approach to generate responses. It samples from a distribution of possible answers. As a result, different runs or iterations may yield slightly different responses due to this inherent randomness.

  3. Training Data and Model Version: Copilot’s knowledge is based on the data it was trained on. If there have been updates or changes to the training data or the underlying model, it can affect the responses. Different versions of Copilot might produce varying answers.

  4. Ambiguity and Interpretation: Some questions have ambiguous meanings or multiple valid interpretations. Copilot may choose different interpretations based on the context it infers from the prompt.

  5. Search and External Information: Copilot can search the web for information. If it uses external sources, the results may vary depending on the search results at that moment.

  6. Creativity and Diversity: Copilot aims to be creative and diverse in its responses. It avoids repeating the same answer verbatim. This diversity can lead to different answers even for similar questions.

Remember that Copilot’s responses are not deterministic, and they can vary based on the factors mentioned above. If you encounter different answers, it’s a reflection of the complexity and variability of language generation by AI models.”
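Point 2 above, about randomness and sampling, can be illustrated with a short sketch. The candidate answers and their weights below are invented purely for illustration --- a real language model samples tokens from a learned probability distribution over a huge vocabulary --- but the mechanism of weighted random choice is the same in spirit:

```python
import random

# A toy distribution over whole answers, standing in for the
# probabilities a language model assigns when generating a response.
# These candidates and weights are invented for illustration.
candidates = [
    ("a short definition", 0.5),
    ("a detailed, structured answer", 0.3),
    ("a definition with examples", 0.2),
]

def sample_answer(rng):
    """Pick one answer at random, weighted by its probability."""
    answers, weights = zip(*candidates)
    return rng.choices(answers, weights=weights, k=1)[0]

# Two separate "sessions" with different random states can pick
# different answers to the identical prompt.
print(sample_answer(random.Random(1)))
print(sample_answer(random.Random(7)))
```

The same seed always yields the same answer, but across sessions the seed effectively differs --- which is why the identical question can produce a short reply one day and a structured essay the next.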

In other words, it’s as if I went out and asked several experts to explain some phenomenon. I would expect different honest answers that overlapped, even though each expert might use different styles and examples. The worst-case scenario would be the ancient Indian parable of the elephant:

The parable of the blind men and the elephant is an Indian parable that has been adapted by many religions and published in various stories for adults and children. It is about a group of blind men who attempt to learn what an elephant is, each touching a different part, and disagreeing on their findings.

“In the story, six blind men touch an elephant. Although each man touches the same animal, his determination of the elephant is based only on what he is able to perceive.

The man who touches the elephant's trunk thinks the elephant is a snake. The man who touches the elephant's ear thinks the elephant is a fan. The man who touches the elephant's leg thinks the elephant is a tree. The man who touches the elephant's side thinks the elephant is a wall. The man who touches the elephant's tail thinks the elephant is a rope.

The six men argue about what the elephant is, each insisting that his own perception is correct. None of them are able to see the whole picture, and so they are all wrong.

The parable of the blind men and the elephant teaches us that we should be careful not to make judgments based on limited information. We should also be open to the possibility that we may be wrong, and that there may be other perspectives that are just as valid as our own.”


Sunday, January 21, 2024

More Musings on the Introduction of AI Technology into Society



A recent article in the paper you are now reading, “Chatty robot helps seniors fight loneliness through AI companionship” (PR, page A6, 12/27/2023), triggered a melancholy recollection of a relatively recent visit to an elderly acquaintance (elderly, only in the sense that she was, and still is, older than me). She was proudly in the process of showing off her newest acquisition: cooing and snuggling with a highly realistic, fuzzy little cat doll that mewed and/or meowed when you petted it. However, that first somewhat sorrowful reaction to an adult seduced into childlike behavior over what was no more than a souped-up toy doll --- a child’s toy --- quickly mellowed into a feeling of how cool and lifelike the robot acted and reacted. And the more I thought about it, the more I settled into an attitude of “Why not?” Who am I to be offended by another’s harmless pleasure (an attitude I find offensive when practiced by others)? And how is the mode and delivery of this enjoyable activity different from interacting with a real live pet (which, by the way, doesn't have to be fed and cleaned up after)? The final push off my high horse was to realize that this situation is not so much different from enjoying a good sci-fi novel like the Seven Moons of Mali or viewing a hot game on the Sports Channel. As Tevye in “Fiddler on the Roof” famously replied, “I’ll tell you --- I don’t know” --- but I will tell you what I do know (within reasonable bounds of certainty), and that is that technology, any technology, is born of our curiosity and need for control, in order to make our lives more pleasurable and enjoyable.


All the same, it is useful to also consider the admonition that “Enjoyment and Pleasure are terms often used interchangeably, but they are not the same thing. Pleasure happens to you; enjoyment is something that you create through your own effort. Pleasure is the lightheadedness you get from a bit of grain alcohol; Enjoyment is the satisfaction of a good wine, properly understood. Pleasure is addictive and animal; Enjoyment is elective and human.” (Arthur Brooks, The Atlantic, March 24, 2022)



So using a “chatty robot” or Chatbot as a companion for the potentially lonely came as not much of a surprise, and reminded me of the initial reactions to Joseph Weizenbaum’s computer program “ELIZA”, one of the earliest (mid-60s), if not the first, attempts by AI researchers to simulate and stimulate man-machine conversation. All of this is described brilliantly and lucidly in Weizenbaum’s book, “Computer Power and Human Reason --- From Judgment to Calculation”, so I will have no reservations about quoting directly from this work; what I quote is limited to the Introduction. For those who are a bit more technically adventuresome, there are chapters on “Where the Power of the Computer Comes From” and “How Computers Work” as well as “Artificial Intelligence”.


Early on, Weizenbaum reveals, “I chose the name ELIZA because, like the Eliza of Pygmalion fame, it could be taught to ‘speak’ increasingly well... The script is a set of rules rather like those that might be given to an actor who is to use them to improvise around a certain theme. Thus ELIZA could be given a certain script to enable it to maintain a conversation about cooking eggs or about managing a bank checking account, and so on. Each specific script thus enabled ELIZA to play a specific conversational role. For my first experiment, I gave ELIZA a script designed to permit it to play (I should really say parody) the role of a Rogerian psychotherapist engaged in an initial interview with a patient. The Rogerian psychotherapist is relatively easy to imitate because much of his (sic) technique consists of drawing out his patient by reflecting his patient’s statements back to him.”


The most famous ELIZA transcript was published soon after, in an article Weizenbaum wrote for the January 1966 issue of Communications of the ACM. It is a record of a conversation between the program and a young woman; the program’s responses in the text below are in all caps.


Men are all alike.
IN WHAT WAY
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well, my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
It’s true. I am unhappy.
DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
I need some help, that much seems certain.
WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
Perhaps I could learn to get along with my mother.
TELL ME MORE ABOUT YOUR FAMILY

There is a bit more dialogue but you get the idea...

 You can try it out for yourself at this site: https://psych.fullerton.edu/mbirnbaum/psych101/eliza.htm


If, as I did, you test it out by starting with the prompt in the example above, “Men are all alike,” don’t be surprised to find the conversation pursuing a different path, as some randomness has been injected into the script at this website.
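For the technically adventuresome, the flavor of Weizenbaum’s keyword-and-reflection “script” can be sketched in a few lines of Python. The patterns, canned replies, and pronoun swaps below are my own invented approximations, not Weizenbaum’s actual rules; picking randomly among several replies for a keyword is also how the website above varies its conversations:

```python
import random
import re

# Swap first-person words for second-person before echoing them back.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# Each rule pairs a keyword pattern with one or more canned replies;
# "{0}" is filled with the (reflected) text captured by the pattern.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["I AM SORRY TO HEAR YOU ARE {0}",
      "HOW LONG HAVE YOU BEEN {0}"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["TELL ME MORE ABOUT YOUR {0}"]),
    (re.compile(r"\beveryone|all\b", re.I),
     ["IN WHAT WAY", "CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
]

def reflect(text):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(sentence, rng=random):
    for pattern, replies in RULES:
        m = pattern.search(sentence)
        if m:
            reply = rng.choice(replies)
            return reply.format(*(reflect(g) for g in m.groups())).upper()
    return "PLEASE GO ON"  # content-free default when no keyword matches

print(respond("Men are all alike."))
print(respond("I am unhappy."))
```

The content-free default reply is the heart of the parody: when no keyword fires, the “therapist” simply invites the patient to continue, and the patient supplies all the meaning.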


Weizenbaum goes on to note that, “I was startled to see how quickly and how very deeply people conversing with the computer became emotionally involved and how unequivocally they anthropomorphized it. Once, my secretary, who had watched me work on the program for many months and therefore surely knew it to be a computer program, started conversing with it. After only a few interchanges with it she asked me to leave the room!”


He was also amazed that a number of practicing psychiatrists seriously believed that the program could grow into a nearly completely automatic form of psychotherapy. He quoted from a paper by Colby et al., “Further work must be done before the program will be ready for clinical use... If the method proves beneficial, then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centers suffering a shortage of therapists. Because of the time-sharing capabilities of modern and future computers, several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man (sic!) since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.”


And in a footnote, Weizenbaum adds, “Nor is Dr. Colby alone in his enthusiasm for computer-administered psychotherapy. Dr. Carl Sagan, the astrophysicist, recently commented on ELIZA: ‘No such program is adequate for psychiatric use today, but the same can be remarked about some human psychotherapists. In a period when more and more people in our society seem to be in need of psychiatric counseling, and when time-sharing of computers is widespread, I can imagine the development of a network of computer psychotherapeutic terminals, something like arrays of large telephone booths, in which, for a few dollars a session, we would be able to talk with an attentive, tested, and largely non-directive psychotherapist.’”


In response to Colby and Sagan’s interpretations of ELIZA, Weizenbaum decries, “I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other’s experience of those problems. There are undoubtedly many techniques to facilitate the therapist’s imaginative projection into the patient's inner life. But that it was possible for even one practicing psychiatrist to advocate that this crucial component of the therapeutic process be entirely supplanted by pure technique --- that I had not imagined! What must a psychiatrist think he is doing while treating a patient, that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?”



Whew... not too difficult to ascertain Weizenbaum’s underlying attitude toward the “science” of psychotherapy! Unfortunately, we still observe this phenomenon today when a person proposes absurd solutions to problems far outside their area of expertise --- such as, for example, a politician proposing a solution to a problem in medical science; drinking Clorox to cure Covid comes to mind.



He contends there are important differences between men and machines as thinkers: “I would argue that, however intelligent machines may be made to be, there are some acts of thought that ought to be attempted only by humans... I believe there are limits to what computers ought to be put to do. One socially significant question I thus intend to raise is over the proper place of computers in the social order... We can count, but we are rapidly forgetting how to say what is worth counting and why.”


And with that last sentiment, which captures the very purpose of this column --- to explore the effects of Technology on Society and vice versa --- I couldn't agree more.




