Observing Thinking


Sunday, April 23, 2023

 


March 2023 Chatbots Continued...



While researching this article, which continues our discussion of the newest shiny bauble offered by the Internet, I stumbled upon an interesting opening sentence:

“This sentence was written by an AI—or was it? OpenAI’s new chatbot, ChatGPT, presents us with a problem: How will we know whether what we read online is written by a human or a machine?” (technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/)

This question of “How do we know?” prompted me to comment on the intriguing Editorial in the March 2 edition of the PR. You may recall that its title was “Winter Wonderland,” and that it described in detail all of the wonderful winter activities available here in the North Country...until the final paragraph, which admitted it was written by a Chatbot: “We gave it the prompt ‘write a newspaper editorial praising winter recreation in the North Country of New York including Whiteface Mountain and the Olympic facilities around Lake Placid,’ and this was the result.” Is the author absolved of the sin of plagiarism when the content itself admits that it is plagiarized, in much the same way as putting it in quotation marks and citing the source? Don’t ask me; that’s a mystery to be solved by scholars who investigate ethical issues within the English language...


All that aside, I had written in last week’s column, “Chatbots Arrive,” that there were still many issues yet to be addressed, in the hope that many of the pros and cons could be resolved by the time this column was published.






Boy, was I wrong! If anything, the issues are growing, and had I the temerity to offer another prediction, I would speculate that they will continue to grow and make news. This comes as a mixed blessing: there is now a gusher of new ideas, each with its own pros and cons (destined to be addressed in future articles).


Accordingly, now seems an opportune time to back up a bit on the subject of Chatbots, take a quick breath, and visit some of the underlying theory on how these bots actually work.


To begin, a Chatbot is, like any application on your computer, part of the software that the computer runs. The job of the app is to increase the “intelligence” of the computer, which in turn increases the intelligence of the user by making them a more effective contributor, not only to their own well-being but to all of society. Not coincidentally, this allows all of the individuals comprising society more opportunity to acquire the goods and services provided by more jobs, which is good for the economy at large.
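To make the point that a Chatbot is, at bottom, just software, here is a toy sketch of my own invention: a program that maps text in to text out with a few hand-written rules. Real systems like ChatGPT work nothing like this internally (they use enormous neural networks trained on text), but the outward shape, prompt in, reply out, is the same.

```python
# Toy illustration only: a chatbot, at its simplest, is a program that
# maps a text prompt to a text reply. These hand-written keyword rules
# are my own invention, not how ChatGPT actually works.

def toy_chatbot(prompt: str) -> str:
    """Return a canned reply based on simple keyword matching."""
    text = prompt.lower()
    if "hello" in text:
        return "Hello! How can I help you today?"
    if "weather" in text:
        return "I can't see outside, but I hope it's a fine North Country day."
    # Fallback when no rule matches.
    return "Tell me more about that."

print(toy_chatbot("Hello there"))  # prints: Hello! How can I help you today?
```

The gulf between this toy and a modern Chatbot is precisely the “intelligence” discussed below: the rules here were written by a person, while a modern system learns its behavior from training.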


So, the beginning of the answer to the question “How does a Chatbot actually work?” would be: a Chatbot is created by scientists’ and engineers’ research and eventual application of Artificial Intelligence (AI) theory, provided by the discipline of Computer Science. So what is AI? According to an old computer joke, “Artificial Intelligence is like Artificial Insemination, but not nearly as satisfying” (groan). However, if we expand the meaning of “satisfaction” to include feelings of happiness, achievement, and fulfillment, there is some truth in the comparison. While there is not yet a final, complete, and definitive definition of the term “intelligence,” I propose that we agree to something suitably vague like “the ability to learn or acquire and apply knowledge and skills.”




And this is what a Chatbot, using various educational strategies, is trained to do by an AI. The term “AI” is usually used as if it refers to an object with certain properties, the most important being that it can simulate, or appear to possess, intelligence to a human observer. And that raises a problem that has yet to be solved: is it possible to build a computer with the capability to think? Finding a definitive definition of “think” was (and still is) a harder task than one might have thought. If you search on “think,” be prepared to be overwhelmed by its many definitions, synonyms, and examples.


So, if we want to create an intelligent machine that “thinks” as well as “does,” we already have the “doing” part accomplished, and at electrical speeds.


To sum up: the ability to learn and the ability to be trained both require intelligence. And because a Chatbot is always said to be trained, the distinction between training and learning blurs, but it is certainly pertinent. If we want an intelligent Chatbot, then the AI that drives it must have intelligence also.


The philosophical question, “Can a machine think?” has fascinated us for a long time, and one of the earliest and most venerable attempts to answer it used a straightforward test.




“The Turing test, originally called the imitation game by Alan Turing in 1950,[2] is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech.[3] If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine's ability to give correct answers to questions, only on how closely its answers resembled those a human would give.


The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester.[4] It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[5] Turing describes the new form of the problem in terms of a three-person game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[2] This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".

Since Turing introduced his test, it has been both highly influential and widely criticised, and has become an important concept in the philosophy of artificial intelligence.” (Wikipedia)
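The text-only setup Wikipedia describes can be sketched in code. Everything in the sketch below is my own invention for illustration (the canned replies, the coin-flip evaluator), not a real test: an evaluator receives two anonymous replies, one from a “human” and one from a “machine,” and must guess which came from the machine. Turing’s criterion is that a machine “passes” when the evaluator cannot do reliably better than chance.

```python
import random

# Toy sketch of Turing's text-only imitation game (illustration only).

def human_reply(question: str) -> str:
    return "I think winters here are lovely, though the driving gets rough."

def machine_reply(question: str) -> str:
    return "Winter in the North Country offers wonderful recreation."

def imitation_game_round(evaluator, question: str) -> bool:
    """Run one round; return True if the evaluator spots the machine."""
    repliers = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(repliers)  # hide which channel is which
    replies = [fn(question) for _, fn in repliers]
    guess = evaluator(question, replies)  # evaluator returns index 0 or 1
    return repliers[guess][0] == "machine"

def coin_flip_evaluator(question, replies):
    # An evaluator who cannot tell the difference is reduced to guessing.
    return random.randrange(2)

random.seed(0)  # make the demo repeatable
hits = sum(imitation_game_round(coin_flip_evaluator, "Can machines think?")
           for _ in range(1000))
# When the evaluator cannot reliably tell machine from human, the success
# rate hovers near 50% -- Turing's criterion for "passing" the test.
```

A sharper evaluator (say, one who spots the stilted reply) would score well above 50%, and the machine would fail; the test measures the evaluator’s inability to discriminate, not the machine’s correctness.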

