Observing Thinking


Thursday, June 8, 2023

 On AI Regulation


Passing laws to regulate how an innovative technology will affect society requires a thorough examination of its expected pros and cons, and that includes predicting its future effects on society. There is a Buddhist aphorism that says, “the best way to clear up muddy water is to leave it alone,” and while this insight works for many sorts of problems, it is a very dangerous approach to take toward regulating AI.


Niels Bohr, the Nobel laureate in Physics and father of the atomic model, is quoted as saying, “Prediction is very difficult, especially if it's about the future!”

I was not surprised when I saw the news last week that various titans of the technology industry were summoned to testify before Congress on whether and how AI should be regulated, hearings instigated by the recent arrival of chatbots.


Those whose politics lean to the left believe that if we're going to have a well-run, harmonious society it must be regulated by laws --- but not by too many of them. We must have rules to protect us from each other because we know from much bitter historical experience that we humans are not entirely rational and need to follow some ethical codes to make our lives and the lives of others worth living.



I was also not surprised to read about the House of Representatives starting to investigate the effects of the current advances in AI known as chatbots, and whether the companies that provide them need to be regulated. Additionally, a number of computer scientists who have contributed to the development of AI technology have begun to sound warnings about the dangers inherent in unleashing this technology on an uneducated public. Science fiction novels have long presented robots as potential threats to the human race, leading to its eventual extinction. A recent cartoon appearing in the New Yorker magazine depicts a crew of evil-looking, whip-wielding robots forcing humans to perform arduous manual labor, carrying heavy rocks, and the caption is, “To think this all began with letting autocomplete finish our sentences.” (https://www.newyorker.com/cartoon/a27699)






[Cartoon: robots wielding whips drive humans carrying stones on their backs. Caption: “To think this all began with letting autocomplete finish our sentences.”]


The tendency of humans, knowingly or not, to communicate and spread information as well as misinformation is at the heart of the problem. This is not a new problem; before the Internet arrived on the scene we had plenty of ways and reasons for sharing ideas --- remember books, newspapers, radio and TV? The Internet has just exacerbated the problem of transmitting misinformation --- faster, cheaper, and generally more effectively, affecting more people's opinions and actions. Easier is not always better:


As Woody Allen has quipped, "I took a speed reading course where you run your finger down the middle of the page and was able to read 'War and Peace' in twenty minutes. It's about Russia."

(https://www.nytimes.com/1995/09/03/opinion/l-war-and-peace-in-20-minutes-if-you-care-what-it-says-read-449395.html)


I remember the grudging acceptance of TV as a way of delivering news and other entertainment, and the worry that it had the potential to be addictive and to corrupt not only our minds but those of our youth and thus all future generations. As early as the mid-eighties, Neil Postman examined this problem in his book “Amusing Ourselves to Death.” He argued that the line between “News” and “Entertainment” was becoming blurred, and that all of this was driven not by the benefit of our citizens but by the profit motives of larger and larger corporations and their stockholders.



There was an interesting contrast between the viewpoints of two political cartoonists I follow on where the future development of AI technology will lead. The first, recently published here in the PR, depicts a dystopian rocky landscape in which a caveman-like figure, using some sort of bone as a walking stick, approaches a mysterious, massive plinth inscribed with only the message “AI”. I found it difficult to interpret the meaning: was it to show that humankind had finally destroyed all remnants of civilization? Has a computer with Artificial Intelligence replaced the human race and become the next step in our continuing evolution? It could be interpreted as a portent of the future where, if we are not careful in regulating this new AI technology, there will surely be unintended consequences.


The other view was presented in the May 21 cartoon by Clay Bennett, in which a Raggedy Ann sort of doll says to us, “Hi! I'm an AI. Wanna play?” Its message seems clearer: shouldn't we be very, very, very careful before unleashing a technology that has a disastrous downside?



Any time a new technology appears that has the potential to change the way society works, people are suspicious; they are concerned about disruptive change such as job loss and other unintended consequences. Those on the political Left profess to embrace change and progress, while those on the Right favor conserving what we already have if it's still working OK.


Generally, the Left will propose some sort of government regulation to mitigate unwelcome outcomes, while the Right is more hesitant to allow big government to interfere with the freedoms of individual citizens as well as corporations. Unfortunately, large corporations run by wealthy individuals can make contributions to political candidates of any party, Left or Right, allowing politics and business to exchange “dark” money to mutual advantage. Politicians rely on corporate donations to help them get elected under the reasonable self-delusion that they are good people who can't do all the good things they want for their constituents without the funds to get elected in the first place --- but by doing so they become beholden to their corporate donors. Corporations, in return, can expect favorable concessions from lawmakers when they vote on laws with regulations that will affect the earnings of said corporations. Many (including me) feel that this “You scratch my back, and I'll scratch yours” quid pro quo is not only unethical but should be illegal.


For an in-depth look at this dilemma, see:

https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/economics-of-voting/the-implications-of-corporate-political-donations/


Meanwhile, here is a snippet from the American Bar Association:


“Making corporate political donations seemingly requires complex calculation. Flesh-and-blood political candidates will imperfectly align with the corporation’s stated policy views, values, and long-term commitments. Consider the dilemma faced by a corporate manager weighing whether to allocate corporate assets to support a conservative state legislator. The legislator may favor lighter regulation on the corporation’s activities in the jurisdiction but also favor laws making it difficult for transgender individuals to access public bathrooms or for public schools to provide basic education about gender identity and sexual orientation. To the extent that the corporation has lesbian, gay, bisexual, or transgender stakeholders and a professed commitment to equality, making the donation invites a host of other risks. Even if the donation goes undiscovered, supporting the conservative legislator likely increases the difficulty the corporation will face in recruiting and retaining talented employees to work in the jurisdiction. If later revealed, the donation invites public backlash and boycotts.”




I believe that most reasonable people would agree that we need some level of government regulation for any technology that could cause harm. Many researchers in the field of AI recommend we put the brakes on, or temporarily freeze, development of AI technology until we can agree on a plan for how it would be regulated. Even Sam Altman, CEO of OpenAI, and Elon Musk (Twitter), who stand to make a lot of money by letting this go the way of the “wild west,” agree on an AI development slowdown until a regulation plan, one which balances the needs of innovation, progress, and profit with consumer safety, can be forged.


“TEL AVIV, Israel -- OpenAI CEO Sam Altman said Monday he was encouraged by a desire shown by world leaders to contain any risks posed by the artificial intelligence technology his company and others are developing.”


(https://www.google.com/search?q=CEO+of+OpenAI&rlz=1C1CHBF_enUS1013US1014&oq=CEO+of+OpenAI&aqs=chrome..69i57j0i512l8.544j0j7&sourceid=chrome&ie=UTF-8)


Most accept the need for and usefulness of Federal agencies that regulate our safety and health care. In response to my query, “How many federal agencies are there?” I got this response from my favorite search engine:


“Updated by the Federal Register, includes a list of all 438 agencies and sub-agencies. VP, Secretary of Agriculture, Commerce, Defense, Education, Energy, HHS, Homeland Security, HUD, Labor, State, Transportation, Treasury, Veterans Affairs & Attorney General.Feb 9, 2023”





And while I don't like government laws ordering me around (I remember my 8-year-old son telling his older sister just such an admonition: “You're not the Boss of Me!!!”), I'm willing to accept some (democratically enacted) government regulations whose purpose is to protect everyone in our society equally. I believe that's how Civilization is supposed to work.


I certainly don't want to be on an airplane that does not fully adhere to FAA regulations, or to drive on a road that provides no guidance such as speed limits, traffic lights, and stop signs where needed.




So we have two competing factions --- each telling us what we should and should not do for the betterment of society. It doesn't matter whether they lean conservative or liberal, because the actual problem is that they don't trust one another, they talk past each other, and they don't respect the other side's views --- not a promising scenario. Rather than have an actual conversation to see if some sort of compromise can be worked out, emotions take over as each side refuses to abandon its “moral principles.” Fortunately, not all politicians are so self-righteous; the more practical ones realize that in order to get anything done, they must give in order to get. I think a bit of compromise amongst our representatives, our politicians, is necessary for a political system to work effectively, and indeed, that's what we pay them for.



The good news is that the politicians and the technocrats continue to try to achieve some sort of reasonable balance between freedom and responsibility. We have libel laws which are aimed at curtailing the spread of misinformation. We certainly should be able to apply those laws to any machinery or technology that has the potential to rattle society.

