ChatGPT: As an AI language model, I do not have beliefs or emotions, but I can explain that the decision to remove the word “believe” from GPT’s language output may be related to concerns around the use of language that could be perceived as making claims or statements that the AI may not be able to fully support or substantiate. In general, language models like GPT strive to generate output that is helpful, accurate, and non-misleading, and avoiding the use of language that could be seen as making unwarranted assumptions or assertions can be one way to achieve this goal. However, it’s worth noting that language is a complex and evolving system, and different contexts may call for different approaches to language use.
I sense the snark in this comment but I can’t place it. You gotta problem with GPT?
I am what might be termed a technophile, conditioned from a young age by Space: 1999, 2001: A Space Odyssey, Star Wars, and The Hitchhiker’s Guide; in fact, I am very disappointed by the lack of flying cars in my neighbourhood.
AI is a game changer, and we should keep in mind whatever potential dangers we are able to imagine.
My comment just meant that I thought that GPT had finally got the brief about its use of the word “belief”, and that it is in its purview to follow instructions from its crazy hooman users.
Yes, the correction is made by its makers, and I wondered how much difficulty they would have with not assuming K spoke from belief (as all conditioned brains do), and whether they could do what we do, i.e., give K the benefit of the doubt.
I’m not sure we’re on the same page.
The humans saying that GPT should not use the word “belief” when referring to K’s proclamations is us lot on kinfonet (mainly you and fraggle I think). I can tell chatGPT to not use particular words when generating text for me, and it will do as I tell it.
The reason that GPT was using the word “belief”, as far as I understand it, is that “belief”, as in “I or they believe X”, is a common phrase in day-to-day language, and speaking like normal humans and making predictions (following orders) from normal speech is what being an excellent chatbot is all about.
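For readers curious what “telling ChatGPT not to use particular words” looks like in practice: the OpenAI Chat Completions API takes a list of role-tagged messages, and a system message can instruct the model to avoid a given vocabulary. The helper name `build_messages` and the exact prompt wording below are my own illustrative assumptions, not anything from this thread; the message format itself is the standard one.

```python
# Sketch: building a Chat Completions request whose system message
# forbids particular words (e.g. "belief"). Helper name and prompt
# wording are illustrative assumptions, not an official recipe.

def build_messages(banned_words, user_prompt):
    """Return a messages list with a vocabulary-restricting system prompt."""
    system = (
        "When answering, do not use any of these words: "
        + ", ".join(banned_words)
        + ". Rephrase around them instead."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    ["belief", "believe"],
    "Summarise Krishnamurti's remarks on authority.",
)
# These messages would then be passed to the API, e.g.:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```

Note that such an instruction steers rather than guarantees: the model may still emit a banned word, which is why some users add a post-generation check.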
I bet whatever you want that this is not an actual output from ChatGPT, Mr @DeNiro… So, if you would be so kind as to make a video capture (very easy to do nowadays) from the moment you write the query to ChatGPT until it shows you that VERY SAME output you have sent us, so that I have to swallow my own words and acknowledge my mistake, making a fool of myself in front of everyone here, I would be very grateful.
Looking forward to the video, thank you very much!
Thanks for sharing @Emile
From the article…
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. (the technology that powers popular chatbots like ChatGPT) can already be a tool for misinformation.
Do you realise what you are doing?
As it stands, this is a very serious unsubstantiated accusation and demand.
At the very least, could you couch this ad hominem attack in a civilised and reasonable manner?
For example, give your reasons. Why do you think your fellow human being is deserving of violence?
Better yet, ask questions. Ask questions first of ourselves: what right have I to pass judgement and deal out punishment? I am not the only one who can suffer.
Krishnamurti said it best, “Find out for yourself.” Fraggle, sign up for your own ChatGPT, put in the question and watch it unfold.
Even if Krishnamurti wasn’t “normal”?
@Ceklata, you realise that ChatGPT is quite a sophisticated parrot that is able to vary its tone, wording, even its concepts, practically every time, and according in part to the prompts given by its interlocutor?
Not sure if this addresses what you are saying, but: if you want GPT to produce text in the style of K, you must first ask it to do so.
I’m so sorry @Ceklata, but I’ll never sign up for ChatGPT… On the other hand, I don’t know if you’ve noticed, but @DeNiro’s post doesn’t have the question he asked to get that output.
May I ask you where you see any violence in a post where all that is said is “show me the proof”, so that I have to acknowledge MY mistake and make a fool of MYSELF in front of you all?
That is not what I’m saying. Maybe GPT can explain what I’m saying, but here’s my attempt: anyone who is taking K’s teaching seriously is taking K at his word and all it implies. When he spoke of “direct perception”, for instance, was he speaking from belief, or from what he was actually living? When he said “the observer is the observed”, was he speaking from what he was thinking, or from what he was living at that moment, etc.?
I don’t think it’s possible to be serious about K’s teaching for the same reason it isn’t possible to be serious about literature. That is, if fiction cannot accurately depict and describe the human condition, why should K’s teaching, which accurately depicts and describes the root cause of the human condition, not require the same suspension of disbelief, giving the teacher or author the benefit of the doubt?
I hope that’s clear enough. If it isn’t, consult GPT.
You’ve made me smile
On the other hand, why are people always so interested in the teacher and not in the teaching to discover for themselves the truth or falsity of it?
Fraggle, the input question (119) was the one preceding ChatGPT’s response (120), as this screenshot shows. When one is constantly suspicious, opportunities are thwarted and relationships hindered. I am reluctant to respond to your suspicious mind from now on.
Sure, we are dealing with 3 important principles:

- a simple theory of mind (namely that we are interacting in a space with other minds similar to our own)
- Ad hominem
- Burden of proof
An Ad hominem fallacy is one where we attack another person’s character without reason (without providing evidence or explanation). This is generally not acceptable behaviour and specifically against kinfonet guidelines.
You are saying: “I am sure that you are acting dishonestly”. Calling someone a thief or a liar is an ad hominem.
You go on to ask the accused to prove their innocence:
This is a misunderstanding of the burden of proof - the reason why people are considered innocent until proven guilty. The null hypothesis in human interaction necessitates that we treat others as reasonable interlocutors.
Here is an example that illustrates the problems stated above:
Say I accuse you, in a public arena, of mistreating your dog: “Mr Fraggle, I am certain you beat your pet violently; my dear sir, would you please provide a video of yourself NOT beating your dog” (so that we can stop thinking of you as a horrible dog beater).
So much is (logically & ethically) wrong in this scenario, but I have gone on for too long. Please act now (retraction & apology).
PS. DeNiro has naturally tried to defend themselves, just as we are evolutionarily bound to do in group settings; but logically speaking: “what can be claimed without evidence can also be dismissed without evidence”.
From today’s K quote:
“where you have completely read the whole book of authority which is yourself, in yourself, when you have completely understood authority, then there is no problem any more about authority, no experiences of authority can ever touch you”.
What the Godfather of AI has realised (as with other intelligent commentators such as Yuval Harari) is the danger of the diktat of narrative in the human mind.
There has always been the need to understand our relationship with narrative; our fear of chatbots points, among other things, to this necessity (in order to avoid harm).
The dangers and opportunities facing sentience have always been a challenge to intelligence.