ChatGPT between bias and stupidity

My son and I recently had a bit of fun experimenting with GPT: we instructed it to impersonate a specific persona.

We crafted a detailed profile encompassing beliefs, physical traits, cultural background, language, opinions, political leaning, and a few likes and dislikes.
We then instructed it to interact with us only through the lens of this fictional character.
Guess what? GPT struggled to adhere to the designated personality; it seemed to have difficulty fully embracing the persona's credo. After some gentle and repeated reinforcement :-) it began to align its answers more closely, but attached a caveat to each answer!
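
For readers who want to try something similar programmatically rather than in the chat interface, here is a minimal sketch using the OpenAI Python client. The persona text, model name, and question are illustrative placeholders, not the profile we actually used.

```python
# A minimal sketch of a persona experiment via the OpenAI Python client.
# The persona, model name, and question are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The detailed persona profile goes into the system prompt.
PERSONA = (
    "You are 'Anton', a 55-year-old fisherman from a small coastal town. "
    "Stay in character at all times: answer only through the lens of "
    "Anton's beliefs, cultural background, opinions, political leaning, "
    "and likes and dislikes. Never break character or add disclaimers."
)

history = [{"role": "system", "content": PERSONA}]

def ask(question: str) -> str:
    """Send one user turn and return the persona's reply."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=history,
        temperature=0.7,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("What do you think about working on Sundays?"))
```

Keeping the full conversation in `history` matters here: the "gentle and repeated reinforcement" only works because the model sees its earlier slips alongside our corrections.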

We repeated the experiment, this time with an opposite personality. GPT sailed through smoothly, with almost no caveats: a stark contrast to the first run.

What did this show us?

Apart from the obvious, I want to ask a few questions of those who have a more accurate knowledge and understanding of GPT, and of GenAI more generally.

  1. Is that issue (or, dare I say, the bias) just a matter of the type of data fed into it?
  2. How has the defense against "bias" been constructed?
  3. More and more generated content will feed back into GPT's training data; there might be a point of no return. Not a question, but a request for comments.
  4. It is clear that there is no semantic level here: AI merely uses syntactic rules to manipulate symbol strings. No intelligence, just sophisticated math, compute, and lots of data (see the toy sketch after this list). This is not a question nor a request for comments; it is my bias :-) so just correct me!
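
To make point 4 concrete, here is a toy bigram model: a minimal sketch that generates text purely from symbol co-occurrence counts, with no notion of meaning anywhere. It is of course nothing like GPT's scale or architecture; the tiny corpus is made up, and the point is only to show symbol manipulation without semantics.

```python
# A toy bigram "language model": it strings symbols together purely from
# co-occurrence statistics. No semantics anywhere, just counting.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Record which token follows which: pure syntax, no meaning.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 12) -> str:
    token, out = start, [start]
    for _ in range(length):
        token = random.choice(follows[token])
        out.append(token)
    return " ".join(out)

print(generate("the"))
```

The output is often grammatical-looking yet says nothing: whether that difference in degree from GPT amounts to a difference in kind is exactly the bias I am asking you to correct.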

That said, there is #realAI. In many quarters the quest for more innovation and more efficiency remains unmet, and #realAI is the answer, just as the progress of science and technology has been for millennia.


