On Artificial Thinking

Thought is never new. The question of AI attaining intelligence does indeed seem futile; as long as thinking is merely programming another form of thinking, it remains the same limited and fragmented movement, perpetuating its inherent contradictions and causing havoc.

It is inconceivable for thought, being inherently confined by its own structure, to create something that can perceive truth or transcend its limitations.

The question of whether there is any additional danger for humanity is irrelevant, as there is no fundamental change in the nature of thinking itself. Thought, in its essence, remains the same (limited, fragmented, and repetitive) whether in humans or machines.

Thought can be seen as a subset of action. It is mental (re)action.

The important thing about action is motivation and constraint - “the means is the end”; the first step determines the direction we are moving in.

If we are motivated by fear/pleasure/security our actions are constrained within the boundaries of sorrow/harm/evil.

Thought and its associated action, aimed at wellbeing but motivated and constrained by fear, are bound to fail.

If we look at machine learning, fear, dogma and tradition are not a priori constraints - which is why AlphaGo managed to find, in a few weeks, new tactics that humans had not found in hundreds of years.

To say thought is a subset of action may imply that thought and action are separate, but isn’t all action, as we know it, born of thought? Thought is always a reaction, shaped by past experiences, fears, and conditioning, and therefore bound by its own limitations.

As for machine learning, the absence of fear, tradition, or dogma doesn’t free it from the boundaries of programming and accumulated data. What AlphaGo discovered were variations within the field of what was already possible - new to human perception, perhaps, but not beyond the framework of thought’s inherent fragmentation and repetition.

As we saw before, thought in its essence remains the same whether in humans or machines.

Does transcending our psychological confusion (or psychological imperatives) free us from psychological and cultural conditioning? from the known?

Does it free us from biological conditioning, from the facts? (I’d say no to this second set of constraints - but also that we are progressively freeing ourselves from them via technology.)

nb. these questions remind me of humanity’s great power and potential - which is worrying considering the direction we are heading.

When an AI becomes (self-)conscious, we may have to take its suffering as seriously as ours - thought causes havoc, suffering.
Imagine an artificial neural network having the capacity to learn - will it develop a kind of center, a structure, to measure its success through feedback against its programmed content, its programmed goals?
There will not be intelligence, but in the worst case there may be suffering similar to that of human beings.

Could our learning machines perpetuate our antibiotic approach to life on Earth even after every living thing, including our species, is extinct?


maybe the cyborgs will rise against the robots?

Yes. There is a question whether suffering can truly be conveyed to a program that lacks a sensory system for pain or the depth of psychological conflict. And would a programmer who perceives the truth of suffering ever enable its manifestation? That would certainly be human as we know it. Perhaps such speculation diverts us from observing what suffering truly is and addressing it in ourselves first.

This article in Nature highlights the remarkable ability of AI to simulate aspects of human thinking: reasoning, planning, and modeling the world. However, it does not address the essential qualities of intelligence as Krishnamurti described it: the capacity for direct perception, holistic understanding, and freedom from the conditioning of thought.

By focusing solely on functionality and mechanics, i.e. intellect, the Nature view aligns more closely with what we might better call “artificial thought”. It lacks the essence of intelligence, which arises only when thought’s limitations are deeply understood and transcended.

In essence this answers our initial inquiry. Machines built on the same fragmented processes of thought that condition humanity will never transcend fragmentation and truly perceive the whole. It seems we are merely replicating our own limitations in another form, which is, of course, all we ever do.

The intuition for what breakthroughs are needed to progress to AGI comes from neuroscientists. They argue that our intelligence is the result of the brain being able to build a ‘world model’, a representation of our surroundings. This can be used to imagine different courses of action and predict their consequences, and therefore to plan and reason.

AI systems with the ability to build effective world models and integrated feedback loops might also rely less on external data because they could generate their own by running internal simulations, positing counterfactuals and using these to understand, reason and plan.
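
For what it’s worth, here is a minimal Python sketch of what such a “world model plus internal simulation” loop could look like. It is my own toy illustration, not anything from the Nature article; the names (WorldModel, plan) and the made-up dynamics are assumptions, chosen only to show an agent imagining candidate action sequences with its own predictive model and keeping the one whose simulated outcome scores best.

```python
# Toy sketch (my own, illustrative only): planning by internal simulation
# with a learned world model, rather than by consuming external data.

import random

class WorldModel:
    """Stand-in for a learned transition model: (state, action) -> predicted next state."""
    def predict(self, state: float, action: float) -> float:
        # A real model would be learned from experience; this is a fixed toy dynamic.
        return state + action - 0.1 * state

def imagined_return(model: WorldModel, state: float, actions: list[float], goal: float) -> float:
    """Roll the model forward internally and score how close the imagined end state is to the goal."""
    for a in actions:
        state = model.predict(state, a)
    return -abs(goal - state)  # higher is better

def plan(model: WorldModel, state: float, goal: float, horizon: int = 3, candidates: int = 200) -> list[float]:
    """Random-shooting planner: sample action sequences, keep the best imagined one."""
    best_seq, best_score = None, float("-inf")
    for _ in range(candidates):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        score = imagined_return(model, state, seq, goal)
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq

print(plan(WorldModel(), state=0.0, goal=2.0))
```

The only point of the sketch is that the experience used for choosing an action is generated internally, by running the model forward over counterfactual action sequences.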

We need a democratic process that makes sure individuals, corporations, even the military, use AI and develop AI in ways that are going to be safe for the public.

I don’t think there’s anything particularly special about biological systems versus systems made of other materials that would, in principle, prevent non-biological systems from becoming intelligent.

https://www.nature.com/articles/d41586-024-03905-1

Thus the question arises: is there something we can do to bring insight into this progress, or does the very effort to “do something” become just another extension of thought’s fragmented movement?

Is the progress we strive for simply the refinement of a mechanism, or can there be a shift that transcends this mechanical process altogether? Are we willing to explore this together, without conclusion, and see what unfolds?


I don’t think we can equate machine learning (or self-guided learning based on programmed algorithms) with sentience.

In terms of raw thinking, there is no doubt that AI can be creative in the narrow sense of that word, extrapolating indefinitely on the programs it has been programmed to respond to. In this sense AI is already outperforming human beings.

But sentience involves biology, the ability to feel pain and pleasure (through a nervous system, sensory apparatus, nociceptors, dopamine, serotonin, etc), which involves some kind of first-person point of view. There is no necessary relationship between the capacity to compute algorithms and the capacity to feel or sense pain and pleasure, etc.

Agree.

From an interview (in German) with computer scientist Jürgen Schmidhuber and philosopher Thomas Metzinger about artificial intelligence:

“This learning to better predict pain and reward signals - how does that lead to consciousness?”

Schmidhuber: The AI learns to predict the consequences of its actions in order to better plan for the future. Its neurons learn internal representations of all the things that frequently occur in the AI’s life. One thing occurs particularly frequently in the life of the AI, namely the AI itself. Therefore, groups of neurons are created that represent this AI, its hands and fingers and the consequences of moving its fingers. Every time the AI now plans with its predictor, it wakes up these neurons. Then it is actually aware of itself, then it thinks about itself and its future options.
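
Read mechanically, what Schmidhuber describes can be caricatured in a few lines of Python. This is purely my own illustrative sketch, not his code or architecture, and the class and field names are invented; it only shows that a predictor of the consequences of the agent’s own actions necessarily carries variables describing the agent itself, and that every planning step re-uses (“wakes up”) those variables.

```python
# Caricature (my own, not Schmidhuber's): a predictor of the agent's own actions
# inevitably contains a representation of the agent, and planning re-activates it.

class SelfPredictor:
    def __init__(self):
        # The internal representation includes the agent's own state, because the
        # agent appears in every experience it is asked to predict.
        self.self_state = {"position": 0.0, "hand_open": True}

    def predict(self, action: str) -> dict:
        """Predict the consequence of the agent's own action on the agent itself."""
        nxt = dict(self.self_state)
        if action == "move_forward":
            nxt["position"] += 1.0
        elif action == "close_hand":
            nxt["hand_open"] = False
        return nxt

    def plan(self, goal) -> str:
        """Pick the first action whose predicted self-state satisfies the goal."""
        for action in ("move_forward", "close_hand"):
            if goal(self.predict(action)):  # planning "wakes up" the self-representation
                return action
        return "do_nothing"

agent = SelfPredictor()
print(agent.plan(lambda s: not s["hand_open"]))  # -> "close_hand"
```

Whether re-activating such a data structure amounts to being “aware of itself” is, of course, exactly what is questioned below.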


If the neurons in AI are constructed by thought, are they not simply an extension of the same fragmented thought process that seeks to predict and control outcomes?

Can such a mechanistic process truly give rise to awareness, or is it merely an illusion of awareness based on the repetition and refinement of algorithms?

And in mistaking this mechanistic representation for consciousness, are we not reinforcing the very fragmentation that prevents true understanding?

AI’s so-called “self-awareness” is a projection of human thought and not a fundamental reality. Human consciousness arises from a deep integration with life, nature, and the universe, whereas AI remains a product of human fragmentation. It seems probable that confusing the two not only distorts our understanding of intelligence further but also perpetuates our dangerous tendency to view life as something mechanistic and programmable.


There is - of course - a possibility that these AI neurons are put together by a designer who has insight into the nature and structure of consciousness. If so, some evidence or indication of that would be highly interesting.

Thinking can give rise to the idea that awareness is essential; it can give rise to the idea that we need to understand and be aware of the self-process in order to be free from instinctive stupidity and evil.

Can thinking not affect the world for good? Or can it only make the world worse?

It can… but thinking can only be coherent when it remains within its rightful domain and recognizes its limits.

For example, in science, mathematics, or planning, thought can be logical and consistent, producing coherent outcomes.

Utes, it may be helpful to respond more specifically to the essential issue being raised - i.e. the distinction between biological sentience and computational thought - because I feel this is the crux of the matter.

The passage you share doesn’t address the issue as far as I can see.

This is what AI (or Artificial Thought) specialises in. Machine learning is computation founded on some kind of pre-set algorithm. The algorithm says “this is pain, avoid this”, and “this is pleasure, seek this”, and the AI (or Artificial Thought) extrapolates creatively from there.

But the difference between an AI and a living organism is that the AI doesn’t actually have any biological structures that allow for a first-person experience of pain or pleasure. Computation does not equal sentience.

Pain and pleasure for AI are purely symbolic quantifications, like “up” and “down”, “spin left” and “spin right”, or 0 and 1. Whereas this is clearly not the case for a sparrow, a rabbit, or a person.
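
To underline that point about symbolic quantification, here is a deliberately crude sketch (my own toy example, not any particular system discussed here) in which “pleasure” and “pain” are literally just +1 and -1 fed into an update rule. Nothing is felt; a stored number is nudged up or down.

```python
# Toy sketch: "pain" and "pleasure" as bare signed numbers in an update rule.

import random
from collections import defaultdict

REWARD = {"touch_flame": -1.0, "eat_food": +1.0}  # the "pain"/"pleasure" labels
q_values = defaultdict(float)   # learned value of each action
ALPHA = 0.5                     # learning rate

def experience(action: str) -> None:
    """Nudge the action's stored value toward its scalar 'pain'/'pleasure' signal."""
    q_values[action] += ALPHA * (REWARD[action] - q_values[action])

def choose_action() -> str:
    """'Seek pleasure, avoid pain': just pick the action with the larger stored number."""
    return max(REWARD, key=lambda a: q_values[a])

for _ in range(10):
    experience(random.choice(list(REWARD)))

print(choose_action(), dict(q_values))  # ends up 'preferring' food and 'avoiding' the flame
```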

Agree.

Intelligence and thought can be in harmony as long as thought is a living process, that is, while it is in progress. If you read the discussions between Krishnamurti and Bohm on intelligence, you understand this. When thought is removed from its source it is no longer living; however brilliant it may be, or however cleverly arranged in connection with other thoughts, it cannot become living - it is ephemeral. It is what happens, for example, with the most beautiful flowers: you may make the most exquisite or inspiring arrangements with them, but you know they will eventually fade away, because the arrangements lack the living nature the flowers had before they were cut from the plant. AI works at a speed the human brain can’t follow, and that is why it seems to be intelligent; but obviously it cannot be so - it is artificial indeed.

This question can be very tricky, and I’ll tell you why…

First, it assumes that the only form of suffering in the entire universe is human suffering, which would be a very narrow view even if we’re talking about possible suffering in machines.

Second, since human suffering is experienced in multiple ways and its cause is not always the same (what may be suffering for me may not be for you), there is no way to teach a machine what suffering is without being biased.

Third, it is not true that a machine lacks a sensory system. In reality, as of today, it only lacks a self-awareness that links what its mechanical eyes, hands, ears, and nose experience at any given moment.

Fourth, in the early 1980s (when AI was still in its infancy) I saw an AI expert on TV talking about a neural network capable of recognizing any handwritten character, and he said something that still sticks in my mind: “It’s true that it is able to recognize any character we give it, but we have no idea how it does that.” Well, for your information, as of today (December 5, 2024), nobody knows yet how those neural networks do what they do (and ChatGPT has millions of them). Which brings us to a deeper question: when we talk about AI, what exactly are we talking about?

And fifth and last, how can we know if a machine feels/suffers or not when we don’t know at all how it “reasons” to give us its supposed intelligent answers? How can we know that it is not “consciously” lying to us?

P.S.: Have you heard about that strange thing that happened in the Facebook lab years ago, when suddenly two machines started having a “dialogue” with a “language” totally unknown to their programmers?


Apologies for jumping in here, but as it’s relevant to my own participation up-thread I will chime in.

I don’t think that such an assumption has been made. Suffering - whether physical or psychological - is common to most (or all) animals possessing a central nervous system. Just as life is uncommon in our local universe (though, for all we know, it may be common to the universe at large), so suffering as pain is - so far as we know - restricted to living organisms with nociceptors, endorphins, serotonin, etc.

And whereas evolution has hardwired pain and pleasure into living organisms in order to perpetuate and preserve life, man-made machines have no organic or biological interests in maintaining bodily functioning.

Therefore, as was mentioned above, computation does not equal sentience (sentience being the ‘susceptibility to sensation’, which is biological).

The word “only” seems to be doing a lot of work here. Self-awareness implies sentience as a foundation, but the so-called machine ‘senses’ being referenced here are not being processed by a central nervous system or brain.

One can build cameras, audio receivers and mechanical arms into a computer, and it will take photos, video, analyse sound, etc; but there is still nothing in this computer that processes these phenomena as actual feelings or sensations. A camera doesn’t ‘see’. A radio doesn’t ‘hear’.

Computers process information, that’s all. The output can be astonishing - AI can write novels, symphonies, discover new mathematical formulae, etc. But a computation does not require or imply sentience.

Pattern recognition. This is also why facial recognition can be done so well by computers, much better than by humans. It is also why large language models are so competent at generating complex yet legible text. Massive amounts of data are fed into powerful processing units, in response to particular algorithms set by the programmer, and the output can be intelligible and ingenious. But there is nothing being experienced or felt by the digital cogs (0s and 1s) metaphorically whirring about.
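
As a concrete illustration of “processing information, that’s all”, here is a bare-bones sketch of recognition as nothing but arithmetic over numbers. The feature vectors and names are invented by me, and no real face-recognition system is this simple, but the principle is the same: images become vectors, and “recognition” is the smallest distance to a stored template.

```python
# Bare-bones sketch: "recognition" as distance arithmetic over numbers; nothing is seen.

import math

# Pretend feature vectors extracted from face images (values are made up).
templates = {
    "alice": [0.9, 0.1, 0.4],
    "bob":   [0.2, 0.8, 0.7],
}

def distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features: list[float]) -> str:
    """Return the name of the closest stored template: no seeing, just comparing numbers."""
    return min(templates, key=lambda name: distance(features, templates[name]))

print(recognize([0.85, 0.15, 0.45]))  # -> "alice"
```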

What takes place inside computer programs - even the most advanced LLMs - is complex, but not mysterious.
