Slippery Slope of Artificial Intelligence (AI)

AI technology does not scare me.  The unwavering trust and authority people credit it with does.

My new HP41CV calculator saved me in Mr. Stanton's high school calculus class back in 1979. It never failed me. It always provided the correct answer.

Now, 44 years later, I have a new super tool to elevate my intellect to the bar demanded by Mr. Stanton. Like my HP41CV calculator, ChatGPT answers all of my scientific questions with absolute accuracy. It even phrases the answers in a way that I can understand. It magically synthesizes tons of data and produces clear-cut, definitive, authoritative answers in simple English. It doesn't even share its source material like that bothersome Wikipedia site.

So what could possibly be the problem?  

Well, unlike my HP41CV, which stuck to mathematics, ChatGPT claims a much deeper level of expertise.

ChatGPT's answer to the prompt "slippery slope":

The term "slippery slope" refers to a logical fallacy or a rhetorical technique where the argument suggests that a small action or decision will inevitably lead to a series of increasingly negative or extreme consequences. It implies that once the initial step is taken, it becomes difficult to stop or control the subsequent chain of events, leading to undesirable outcomes.

In the context of reasoning or debate, the slippery slope argument typically involves asserting that if a particular action is allowed or accepted, it will set off a sequence of events that ultimately result in a highly undesirable or extreme situation. However, it is important to note that the slippery slope argument is considered a fallacy because it assumes, without sufficient evidence, that a specific cause-and-effect relationship will occur.

The slippery slope fallacy is often used in various domains, such as politics, ethics, and social issues. It is important to critically evaluate the merits and evidence of a slippery slope argument to determine its validity. While some situations may indeed lead to a series of negative consequences, it is essential to assess the specific circumstances and evidence supporting the claimed causal relationship.  
chat.openai.com, May 21, 2023

ChatGPT tells me that Slippery Slope is "a logical fallacy or a rhetorical technique".  

Wow. I've taught my kids all about the slippery slope and how to listen to that inner voice when heading down a slippery path. I've even created artwork and an NFT dedicated to this principle. As a kid I learned the hard way how a path of bad decisions can breed ever-increasing comfort with increasingly horrible decisions, and eventual consequences.


Governing Organizations are the problem,
    not the technology.

The ChatGPT answer for "slippery slope" is derived entirely from the source data (training sets) selected by the governing organization. I suspect that when creating their AI training set they missed this quote:

Psalm 40:2 (NIV) promises, “He lifted me out of the slimy pit, out of the mud and mire; He set my feet on a rock and gave me a firm place to stand.”

Governing Organizations like OpenAI, Microsoft, Google, Apple, and Meta will be determining which data is, and which is not, used in their training sets.

The end user has essentially no way to evaluate an answer because there is no link back to the contributing source data. The end user just gets a well-constructed answer that looks entirely plausible.


The Slippery Slope


The Good News

AI technology combined with blockchain and AR/VR provides an incredible toolbox for Web3.0 developers like myself. As an early adopter of neural-net and rule-based AI technology dating back to the 1990s, I'm incredibly excited about the potential.

The responsibility falls on us, the solution providers, to use this technology for good.

Governing Organizations must clearly state the intent behind their use of this technology. Micro-disclaimers at the bottom, like this one, are not helpful:
"ChatGPT may produce inaccurate information about people, places, or facts"

End-users of the technology need to listen to that inner voice: 
Buyer beware. Trust but verify. If it sounds too good to be true, it might not be.


When a technology tells you that your fundamental beliefs are "a logical fallacy," you had better be prepared for a rocky road ahead.