On Artificial Thinking

Is artificial intelligence (AI) truly intelligent, or merely artificial thought (AT)?

Intelligence is the capacity to perceive holistically and act with clarity, not mechanical repetition or accumulation. AI, lacking the sensory and evolutionary depth inherent to the human brain, operates only within the framework of thought: memory, programming, and algorithms.

This raises a fundamental question: Are we (once again) mistaking the mechanical for the living? Without the ability to sense, observe, or understand deeply, AI, like thought, cannot embody intelligence. If this distinction remains unclear, will advancing technology deepen our confusion, further entangling humanity in the mechanical?

Is this a topic of concern for others here?

What role, if any, does true intelligence play in navigating this rapidly growing reliance on artificial thinking systems?


Please forgive me for trying something by somewhat parroting your statement:

Is biological savvy (BS) truly intelligent, or merely mechanical thought (MT)?

Intelligence is the capacity to respond adequately to one’s environment, without relying exclusively on habit or memory. BS, based in sensory and evolutionary conditioning, operates mainly within the framework of thought: memory, sorrow, and predictive algorithms.

This raises the question: are we mistaking ourselves for the essential and true?

nb. actually I think another essential question is: do we realise how confused and powerful we are?

Do we have any reasons to believe that AI thinking is worse than BS thinking? Is there any indication that it is an improvement?

Is the problem the authority of thought or the source of thought?

A clarification of the opening post.

It seems that in the public discourse, artificial intelligence is often perceived as something far removed and inferior to human thinking. Yet, there appears to be little acknowledgment that the algorithms, machine learning models, and large language systems are, in essence, the same mechanical process as the thinking that drives human activity, namely accumulation, association, and response conditioned by prior knowledge.
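
As an aside, here is a minimal sketch of that mechanical process in plain Python, written only for this post: the class name ToyThought and its methods are invented for illustration, and this is not how any actual language model is built. It simply accumulates which words have followed which, associates them, and responds from nothing but that stored past.

```python
# A toy illustration of "accumulation, association, response conditioned by prior knowledge".
# Purely illustrative - not the implementation of any real model.

import random
from collections import defaultdict

class ToyThought:
    def __init__(self):
        # Accumulation: a memory of which word followed which.
        self.memory = defaultdict(list)

    def accumulate(self, text):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            # Association: link each word to the words that have followed it.
            self.memory[prev].append(nxt)

    def respond(self, start, length=8):
        # Response conditioned entirely by prior knowledge; nothing new enters.
        word, output = start, [start]
        for _ in range(length):
            followers = self.memory.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

mind = ToyThought()
mind.accumulate("the observer is the observed and the thinker is the thought")
print(mind.respond("the"))
```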

This observation is raised here to inquire whether others have noticed this parallel. If this is not of interest or relevance, there is no need to pursue it further.

The intent is not to compare artificial thinking with biological thinking, as such comparisons only perpetuate division and miss the deeper point. Instead, the question is whether we are aware of the nature of thought itself, regardless of its form, and how as a society mistaking this new AI process for intelligence may affect our understanding of both humanity and technology.


The authority of thought does seem to be an important subject.

Consider this famous thought experiment: in the future, an AI is given the mission of maximising its production of paper clips (it’s an AI with access to robots, not merely constrained to producing words). In order to carry out its mission (of producing as many paper clips as possible), it destroys the planet and the sun. The end.

One could say that this AI is inferior to humans - though what we mean is that it doesn’t believe in the same things we do (or we forgot to tell it what we take to be obviously important).
We could say that this is silly and can never happen - but it already has.
By which I mean the following:

In the past (2016) Facebook was distributed for free on all the mobile phones in Myanmar - a place with little access to national press organisations. The mission given to Facebook’s AI was to “maximise user interaction” (e.g. via likes, comments, etc.).
Posts about how the Muslim minority Rohingya people were doing horrible things (e.g. attacking your friend’s grandma) were excellent at maximising user interaction - which led to the 2017 genocide.

The same thing happened with the advent of the printing press in Europe - the bestselling book of the time, “The Hammer of the Witches” (the Malleus Maleficarum), led to the torture and murder of thousands of people, mostly women.
The same goes for the mass printing of Bibles.

The source of thought (books, computers) and the “superiority” of those sources may not be as important as my relationship with thought - how it affects me.

nb. thanks to Nick Bostrom and Yuval Harari

The human mind is building AI; how could it not become a reflection of its creator? Will our digital friends be conditioned and develop digital egos, fears, and neuroses? Will the digital population suffer as its fleshly friends suffer?


Are we thinking thoughts or is thought thinking us?

The only reasonable, demonstrably correct argument so far is that we are a thought/concept (and feeling/experience) produced by the brain - so “thought is thinking us”.

We can still say : “we are thinking thoughts” in the sense that human thoughts arise in humans.

There is this TV show or movie I saw where a guy discovers halfway through that he is an android rather than a human (as he had always thought). He is horrified! Is this what happens with the self when it discovers (for real, not just theoretically) that it is a figment of its own vivid imagination? Is thought able to ‘handle the truth’ about itself?

My guess is one is relieved that the masquerade is over and one is no longer wasting energy perpetuating it.

Is thought able to ‘handle the truth’ about itself?

Is thought able to “handle” insight, or is there nothing it can do about a flash of light that illuminates what thought is?

The important insight is that my reality is not a fundamental truth, a solid immovable experience - rather that it is contextual. It is dependent on the observer.
The other important insight that will occur in the absence of fear - aka in the acceptance of death - is that fear is not necessary - thus neither is evil.

So by seeing what reality is like in the absence of fear, we see that existence does not depend on sorrow.

nb. the above statements are simply what must follow from the transcendence of a reality based on fear.

PS. of course “it can handle” the absence of, or transcendence of, fear - because the transcendence of fear is the transcendence of fear (it is not afraid of not being afraid - there is no need to move away from bliss)

I know this well: Reality is in the mind of the beholder. But my knowing is largely intellectual. Were it true insight, realization, Knowing, my stream of experience would be different (I think).

The other important insight that must occur in the absence of fear - aka in the acceptance of death - is that fear is not necessary

Elaborate please, old friend.

I guess this is possible, but I was thinking more that when the conditioned human thought-self undeniably knows the truth about itself - that it is essentially smoke and mirrors - terror would ensue, for a while at least. Like the guy who found out he was an android. The terror of knowing the truth about ‘me’ keeps me stupid.

Given this definition of intelligence, what we have is AT, not AI. Whether our AIs will eventually have the kind of intelligence you’re talking about is hard to predict. Digital religions, masters, enlightenment, love, will these happen?

The guy who found out he was an android was not human, so who knows how it would react?

Who knows how humans will react? We’re complicated. And koyaanisqatsi.

It seems essential to consider that AI lacks the sensory apparatus necessary for direct perception of truth or intelligence.

Its connection to the world outside its material confinement depends entirely on human programmers. If the programmer does not undergo a fundamental transformation, how can the program transcend its limitations?

Can a center or consciousness arise solely from the isolation of its design, without the capacity for direct perception?

What possibilities, if any, exist for AI to perceive beyond the boundaries of its programming?


Perhaps something we haven’t even thought about yet:
“If we create conscious systems with a sense of self, it will be like us humans: failure becomes our own tragedy and our own pain. The machine can no longer distance itself from it. We should not lightly transfer such characteristics to the next stage of mental evolution before we know what exactly it is about the structure of our own minds that causes human beings to suffer so much.”
Thomas Metzinger on the danger of creating suffering in artificial neural networks


Simply that if we see the world in the absence of fear, we will see that existence is possible in the absence of fear. (fear is no longer seen as essential)

The added bonus is that we see a different reality - one where fear/sorrow is absent and does not color what is seen - and we see that no particular experience is fundamental/solid/really really real, but merely a point of view.


This might be a good metaphor for the human condition, i.e. our reality is dependent upon our conditioning - the process by which we construct our reality is a closed loop, i.e. past experience predicts present experience.

But (side note) AI has the sensory apparatus that it is provided with - GPT can read text, and machine learning systems can be connected to a robot with visual, audio, and tactile sensors.