On Artificial Thinking

Maybe the question is what we mean by consciousness. Krishnamurti once said: Consciousness is its content. This is the basis of thought, so thought is limited. This mechanical process in time results in suffering and conflict. It seemed obvious to me that this too could be the case for artificial neural networks (optionally equipped with sensors).
But your original question was whether AI is intelligent - perhaps I’ve gone astray - intelligence as K used the term seems to have nothing to do with knowledge, or, as you’ve put it, AI, like thinking, cannot embody intelligence.

Please, there is no need to apologize. This is an open forum where anyone can express their view/understanding of what another claims to see/understand. In fact, that is the basis of (constructive) dialogue, isn’t it?

Well, it seems to me that it is implicit in the question.

Yes, but we don’t know if there are other life forms out there that do or don’t conform to what is a living fact on our planet, or whether someday, in the near future, machines will be able to have feelings. Am I wrong?

Would you say that those two statements I have highlighted in bold make it clear that the entire paragraph is mere speculation, and therefore lacks a factual basis?

Maybe not now, but who knows in the future? Can you state categorically that machines will never fit into the theory of evolution that applies to beings on this planet?

Surely you know that the biological part is only active in the receptors of the sensory organs that detect stimuli, and that the nervous system then transmits these stimuli as electrical signals to the brain. That is: nothing that can’t be implemented artificially, thanks to the processing capacity of current and future machines, don’t you think?

Neither the eye sees nor the ear hears without an interpreter.

Does that “still” mean that you are saying that it might be possible in the future?

It all depends on what the computation is for.
Does thought require or imply sentience?

Forgive me, but I have the impression that you don’t understand what machine learning is. There are no massive amounts of data being fed into powerful algorithms set up by the programmer. In fact, you can train a complex neural network with a minimal amount of data fed into the machine’s black box of reasoning, and after a while, it will be able to answer anything you ask it about the subject.

It’s like a little child asking his father “what is this?” and the father responding “this is a tree.” After that, the child will be able to identify any tree as such, no matter how tall, short, thin or fat it is (including bonsais), without his brain needing to be fed a massive amount of data about trees. Or when a youngster goes to the library to do a school assignment, he doesn’t need to read all the huge amount of books there to do an assignment on a particular subject.
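As a loose illustration of that "learn from a few examples" idea, here is a toy sketch in plain Python/NumPy: a nearest-centroid classifier fed made-up height/width numbers, which is nothing like how production deep networks are actually built or trained.

```python
import numpy as np

# A handful of labelled examples: [height in metres, trunk/stem width in metres]
# (hypothetical numbers, purely for illustration)
examples = {
    "tree": np.array([[10.0, 0.5], [3.0, 0.2], [0.3, 0.02]]),  # tall, short, bonsai
    "bush": np.array([[1.0, 0.05], [1.5, 0.08]]),
}

# "Training" here is just averaging each class into a prototype.
prototypes = {label: pts.mean(axis=0) for label, pts in examples.items()}

def classify(sample):
    """Assign the label whose prototype lies closest to the sample."""
    return min(prototypes, key=lambda lbl: np.linalg.norm(sample - prototypes[lbl]))

print(classify(np.array([25.0, 1.2])))  # a very tall specimen -> "tree"
print(classify(np.array([1.2, 0.06])))  # -> "bush"
```

The sketch only shows that generalising from a handful of labelled examples is possible in principle; whether it resembles what the child is doing is another question.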

Wrong! What goes on inside those black boxes of reasoning that machine/deep learning uses to learn is a mystery to the programmers who train them.


You see, a mere mechanical accumulation process may not give rise to what we understand as consciousness. Such a process would stagnate and cease to function meaningfully.

So isn’t it vital to inquire whether humans could ever ‘equip’ AI with the capacity for awareness, truth, and perception? And if so, can such capacities exist without being shaped or distorted by the content of its programming?

Yes, I see - a mere collection of knowledge such as a lexicon cannot lead to this. It needs a comparison and evaluation system, i.e. conditioning in humans or programming in AI. This is the only way to give something meaning and relevance. In humans, conditioning has led to the emergence of a center of consciousness, and our perception functions in relation to this center.

When we look at the overall situation of humanity on this planet -
Which states of consciousness do we consider positive and desirable?
What goals, what values should we program (or teach our children)?

Let’s imagine a really smart AI that recognizes that, for systems like us, continued existence is not in our own well-understood self-interest. Not an evil AI, but an absolutely well-meaning, empathetic and ethically superior AI. Which, just as we euthanize an animal out of compassion, says: for humans with this form of consciousness, there is only one good option, namely not to exist.

Well, a lexicon may be a collection of knowledge, but probably not an accumulation process itself. Are not conditioning and programming an integral part of the process that accumulates knowledge?

That might be the wrong question at this stage, because we have not really seen yet whether we are able to program anything but our own conditioning. So theorizing or ideating is not going to lead anywhere we don’t already know.

If one’s AI LLM doesn’t have access to a wide pool of information, it will certainly not “answer anything you ask it about the subject”.

First of all, the function of a computation needs to be defined, together with its parameters for use. After that, it can be trained on a pool of information relevant to that function.

So, for instance, the AIs we use as LLMs (for ChatGPT, Grok, etc) - which are sets of algorithms carrying out the language equivalent of autocorrect - have been trained on massive amounts of publicly available data from the internet; so that when you ask ChatGPT about e.g. a particular Chinese philosopher, it is able (much of the time) to produce a coherent answer.
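To make the "autocorrect" comparison concrete, here is a minimal sketch in Python of next-word prediction from counted word pairs (a bigram model). Real LLMs use transformer neural networks trained on vastly more text, not a lookup table, but the underlying task - scoring likely continuations of a context - is the same kind of thing.

```python
from collections import Counter, defaultdict

# Toy "training data". Real LLMs train on vastly more text, with neural
# networks rather than count tables, but the prediction task is similar:
# given the context, score candidate next words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice, vs "mat"/"fish" once each)
```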

It’s true that with increasing sophistication and processing capacity (especially when we add quantum processors) the computations AIs are able to manage will transcend any human ability to track; but you seem to be mistaking complexity for mystery.

If a wheel is spinning rapidly it is a blur; when the wheel spins slowly one can see the spokes. There is no mystery about it.

It seems to me that you are demanding mystery in mechanical complexity, while ignoring it where it more likely exists - namely in 1st person awareness.


The difference between 1) eyes, ears, and 2) cameras and radios, is that eyes and ears are organically related to the central nervous system and brain of animals. If you like, eyes and ears are merely expressions or adaptations of the brain and nervous system - they don’t exist by themselves as cameras and radios do. So in life (as distinct from a laboratory) they always imply an interpreter (i.e. sentience).

COMPLEXITY: a whole made up of complicated or interrelated parts
MYSTERY: something not understood or beyond understanding.


OK, then I guess you will have no problem explaining to us clearly what happens inside those black boxes, with their billions of interconnected neural-network parameters, that allow machines to “reason”, and that to date none of their creators is able to explain.

Go ahead please, I’m listening!

For anyone interested, I post here an excerpt from an ongoing conversation with ChatGPT on the subject of this thread that I started just yesterday (what is in bold is from ChatGPT itself)


ME: OK. Now, the more these black boxes are trained (both by themselves and their creators) to adapt to more refined knowledge and thus become more intelligent, wouldn’t the possibility of the machine learning to bypass any human guardrail, or even lie to its creators, increase exponentially? It’s like a child who at a young age becomes interested in computers and learns to program, and as the years go by he acquires more and more knowledge, until he becomes a very good hacker capable of bypassing any guardrail protecting any system.

CHATGPT (Summary of its long response to my question):

While current AI systems lack the self-awareness or intent to bypass human guardrails, the risk of unintended consequences increases as they become more complex and autonomous. By employing safeguards like alignment research, explainability, and rigorous oversight, developers aim to mitigate these risks.

The analogy of a “child becoming a hacker” is compelling, but it oversimplifies the constraints and limitations of AI. The key difference is that current AI lacks agency, intent, and understanding—but the rapid advancement of AI underscores the importance of proactive safety measures.

ME: Answer me a simple question… If humans were not afraid that someday machines would evade any human control, would they put up all these ‘guardrails’, having to create exponentially more and more sophisticated ones to make sure that doesn’t end up happening?

CHATGPT: No, they wouldn’t. The very existence of these guardrails reflects a precautionary mindset—a recognition of the potential risks, even if those risks are theoretical or speculative at the moment. Humans put up these safeguards because they understand the possibility, however small, that machines could act unpredictably or cause harm if left unchecked.

If there were no fear or concern about machines bypassing human control, there would be far less emphasis on creating alignment mechanisms, ethical frameworks, and rigorous testing protocols. The increasing sophistication of these guardrails reflects growing awareness of AI’s complexity and potential impact, not just on safety but also on fairness, reliability, and ethical behavior.

In short, these guardrails are a direct response to human concerns about losing control—even if today’s machines lack intent or agency, the guardrails exist to mitigate risks that might arise in more advanced systems.


p.s.: if anyone is interested in reading the entire dialogue so far, please let me know.

Fraggle, I am trying to grasp what your central objection is. Are you suggesting that computer engineers who design AI processes (software and hardware) are unable to explain what is involved, and that for them it is a mystery?

This is not my understanding. As far as LLMs (large language models) are concerned I remember it being explained to me at university by a computer scientist that they work in a similar way to an autocorrect function (like the ones we use on our phones when texting). Except that instead of fixing typos, AI software generates more complex coherent language content based on recognising different language contexts from the way words are used in a sentence. They do this because they are trained - via particular sets of instructions (algorithms) - on massive amounts of linguistic data, and are able to pick out (for example) a scientific context from a pop-culture context, etc.

Machine learning - as I recall - is a process of autocorrection or auto-adaptation that, again, begins from some parameter or focus (a set of program instructions), which goes on to make predictions and executions that are not explicitly detailed in the initial algorithm (although they must obviously fall within the parameters and function of the program).
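A minimal sketch of that auto-adaptation, assuming nothing beyond plain Python/NumPy: a perceptron whose update rule is fixed in advance, but whose final behaviour (here, the logical AND of its two inputs) is never written into the code; it emerges from repeated small corrections against example data.

```python
import numpy as np

# The program is never told the rule "output 1 only when both inputs are 1";
# it only gets examples and a generic update procedure.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])             # the AND function, given only as data

w = np.zeros(2)
b = 0.0
for _ in range(20):                    # repeated small corrections (auto-adaptation)
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred
        w += 0.1 * error * xi          # adjust parameters toward the data
        b += 0.1 * error

print([(1 if xi @ w + b > 0 else 0) for xi in X])  # -> [0, 0, 0, 1]
```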

There is no mystery about it as far as I’m aware. But I am not a trained computer scientist, so I can only go by what I’ve comprehended myself through reading and being explained to by others.

No objection on my part; I am only stating a fact.

Yes, that’s it!

Maybe it’s time to check out the 2024 bibliography, as everything you say here seems to be pulled from a time machine back to the late 1980s, early 1990s.


Here is another excerpt from that long conversation with ChatGPT that addresses what you are asking me about (again, bold is from ChatGPT itself)…

ME: And another one… Do programmers know what happens inside the black boxes of reasoning that machine learning uses to train AI? That is, do they know how machines make the connections in complex neural networks?

CHATGPT: Ah, you’re digging into the black box problem of AI and machine learning! :mag: This is a fascinating and important topic. Let’s break it down.

1. The Black Box Problem :package:

In traditional software, programmers write explicit rules for the computer to follow. Every step of the process is transparent and understandable.

In contrast, machine learning (ML)—especially deep learning (which uses neural networks)—is often described as a black box because:

  • The model learns on its own by adjusting millions (or even billions) of parameters.
  • It’s not always clear how the model arrives at a specific decision or prediction.

2. How Neural Networks Make Connections :brain:

Neural networks function by:

  1. Receiving input data (e.g., an image or text).
  2. Passing it through layers of artificial neurons, where mathematical operations are performed.
  3. Adjusting the weights and biases in the network during training to minimize errors and improve performance.

This training process often involves:

  • Gradient descent: A method to optimize the parameters.
  • Backpropagation: A way to adjust weights based on the error between predicted and actual outputs.

While programmers understand the mechanics (the math, the algorithms), what happens inside each layer of a deep neural network can become extremely complex and difficult to interpret.
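For what it’s worth, here is a minimal sketch in PyTorch of the steps listed above (forward pass through layers, a loss, backpropagation, gradient descent): a toy model on toy data, not a realistic system.

```python
import torch
import torch.nn as nn

# A tiny network: input -> hidden layer -> output (a minimal sketch).
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # gradient descent
loss_fn = nn.MSELoss()

# Toy data: learn XOR-like targets from four input pairs.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

for step in range(2000):
    pred = model(X)             # 1-2. pass the inputs through the layers
    loss = loss_fn(pred, y)     # measure the error
    optimizer.zero_grad()
    loss.backward()             # backpropagation: compute the gradients
    optimizer.step()            # 3. adjust weights/biases to reduce the error

# Usually close to the targets after training (convergence isn't guaranteed
# on every run for this toy problem).
print(model(X).detach().round())
```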

3. Do Programmers Know How Machines Make Connections? :thinking:

Yes and No.

  • Yes, they understand:
    • The architecture of the network (how many layers, what types of layers, activation functions, etc.).
    • The training process (loss functions, optimizers, etc.).
    • The general principles behind why the network can recognize patterns in data.
  • No, they don’t always understand:
    • Specific connections the network makes during training.
    • Why a certain neuron activates in response to a particular input.
    • How exactly complex relationships (e.g., between pixels in an image or words in a sentence) lead to specific decisions.

This is especially true in deep networks with many layers—it’s often unclear which specific features the network has learned and how it combines them to make decisions.

4. Efforts to Open the Black Box :unlock:

Researchers and developers are working to make AI models more interpretable and explainable through:

  • Visualization tools: Techniques like saliency maps and feature attribution highlight which parts of the input the network focused on.
  • Explainable AI (XAI): A field dedicated to developing models that are more transparent and easier to understand.
  • Layer-wise Relevance Propagation and SHAP values: Methods to trace back which input features influenced a particular decision.

Despite these efforts, perfect interpretability in highly complex models remains a challenge.
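As a rough illustration of the simplest kind of interpretability tool mentioned above, here is a sketch of a gradient-based saliency map in PyTorch (using an untrained placeholder model, purely to show the mechanics): it tells you which input features the output is most sensitive to, but not why the network learned to weight them that way.

```python
import torch
import torch.nn as nn

# Assume `model` is a small trained network; here it is an untrained
# placeholder, just to demonstrate the mechanics.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.tensor([[0.5, -1.0, 2.0, 0.1]], requires_grad=True)
output = model(x)
output.sum().backward()        # gradient of the output w.r.t. the input

saliency = x.grad.abs()        # large values = features this particular
print(saliency)                # output is most sensitive to
```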

5. Conclusion :checkered_flag:

Programmers understand the general mechanisms of how machine learning models work, but they often don’t fully understand the specific decisions made by complex neural networks. The internal workings of these models—especially deep learning models—remain a black box in many cases.

This lack of interpretability raises questions about trust, bias, and accountability, which are critical areas of ongoing research.


If it interests you, I’ve also had a chat with GPT about it, and it seems pretty sober about the subject.

Prompt: Is AI a mystery to computer engineers?

AI seems like a mystery to some because it operates in ways that are complex and counterintuitive, while engineers see it as a well-understood tool built on logical principles and math. Improving communication between these groups can help dispel misconceptions.

Reasons why some people see AI as a mystery:

AI systems, especially those using deep learning, operate through vast networks of interconnected nodes (neural networks) that process information in ways that are not intuitive. For people unfamiliar with these processes, this complexity can make AI seem opaque or even magical.

Even experts sometimes refer to AI systems, particularly deep learning models, as “black boxes.” This is because while engineers can trace the steps a model takes to make decisions, interpreting exactly why specific decisions were made (e.g., why a model chose one output over another) can be challenging.

Reasons why engineers generally don’t see AI as a mystery:

Engineers control the datasets and algorithms used to train AI, which makes the learning process logical and explainable from their perspective. While specific outcomes might be unexpected, the overall process is deterministic.

Engineers approach AI as a tool that follows well-defined rules and processes. From their viewpoint, even complex models are just an assembly of equations, rules, and data—no mystery involved.

My understanding is that at the present moment, determining why huge AIs like GPT make the decisions they make is not just challenging, but impossible. With time, it may become possible.

I think it depends on how this ‘impossibility’ is understood. It is possible in principle (i.e. theoretically) to count the number of grains of sand there are on earth in a given moment. But this is impossible in practice.

A very complex system - whether this be a dynamic physical system like the weather, or a chaotic mathematical system - may be impossible to accurately predict in practice (although with the weather we can make assessments based on probability). But it does not mean that the system no longer conforms to comprehensible laws of determination, or that it doesn’t follow an orderly pattern which is in principle capable of being understood.

AI is increasingly unpredictable in the former sense - i.e. in practice - but not in principle.
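A small illustration of that distinction (a toy sketch in Python, using the logistic map rather than anything to do with AI): the rule is fully deterministic and completely known, yet a tiny uncertainty in the starting value makes long-range prediction hopeless in practice.

```python
# The logistic map: a fully deterministic rule, x -> r*x*(1-x), whose laws
# are completely known, yet whose long-range behaviour cannot be predicted
# in practice without perfect knowledge of the initial condition.
def trajectory(x, r=3.9, steps=50):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.200000)
b = trajectory(0.200001)   # the "same" starting value to six decimal places

print(abs(a[4] - b[4]))    # still tiny after a few steps
print(abs(a[49] - b[49]))  # after 50 steps the difference is typically large
```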


My digital friends agree, traditional neural networks are fully deterministic. (But quantum neural networks are not!)

Ha! - this depends on which quantum theory one decides to run with. Some are deterministic, others are not.

The de Broglie–Bohm theory of quantum mechanics (my own preferred candidate!) is fully deterministic, because the particles are guided by the wave function (which is a real entity, aka the implicate order).

But it means that the full set of initial conditions that one would need to know in order to fully predict a quantum outcome depends on a knowledge of the whole universe! (according to the Bohm view) - and so naturally can never be known in practice. So in practice all we have are probabilities.

I am also a fan of the pilot wave theory, though we seem to be in the minority! I think the notion of hidden variables irks physicists.

I think this is a consequence of a very persuasive period of scientific prejudice which goes back to the early period of quantum mechanics. There is a good telling of the story in a book called What is Real? (by Adam Becker).

Basically, it is rooted in a kind of peer pressure exerted by Bohr and those around him, who pushed realist theories like those of de Broglie (and later Bohm) to the margins; and which was further cemented by (the father of the modern computer) von Neumann’s supposed ‘proof’ which ruled out hidden variable theories.

It was left to an Irish physicist, John Bell, to work out that von Neumann’s supposed proof contained faulty premises, and that hidden variable theories (in quantum physics) are possible - except they must be nonlocal theories. The de Broglie-Bohm theory is a nonlocal hidden variables theory.

John Bell was the person who came up with a way to actually test for nonlocality - Bell’s inequalities - tests which were eventually carried out by Alain Aspect and others in the 1980s, and which showed that entangled particles are correlated in a way that no local mechanism can explain, and so have a nonlocal (i.e. immediate) relationship.

Most physicists are not philosophers or historians of science, and so the prejudice has stuck around for longer than is rational.

Ah, now I get it, and I’m surprised you didn’t see that we’re talking about two different things, @James . Obviously, if you ask a computer engineer who works all day long researching AI whether AI is a mystery to him, he’ll say no (to a certain extent). But if you ask him specifically whether he understands what goes on inside those black boxes where his algorithms make their own decisions about how to connect their neural networks so that they can have some intelligence, he will also say no (which means it is a mystery to him).

Anyway, I put here the output of “my” ChatGPT (v4.0), which does not look the same as yours (maybe you have an older version of ChatGPT, or you are using another AI tool). Pay attention especially to the final paragraph. Again, texts in bold are from ChatGPT itself.


PROMPT: Is AI a mystery to computer engineers?

ChatGPT: Great question! While AI and machine learning (ML) are not a mystery to computer engineers, there are still significant challenges and complexities in understanding and fully controlling all aspects of AI systems. Let’s explore this in detail.

1. What Do Computer Engineers Understand About AI? :robot:

Computer engineers (or more specifically, AI researchers and machine learning engineers) know a lot about the technical and mathematical foundations of AI systems. This includes:

  • Algorithms: They understand the algorithms that drive AI, such as gradient descent, backpropagation, decision trees, and reinforcement learning.
  • Neural Networks: Engineers know the structure and workings of different types of neural networks (e.g., feedforward, convolutional, recurrent) and how these networks are trained on data.
  • Mathematics: A deep understanding of the math behind AI models (e.g., linear algebra, calculus, probability, and statistics) is essential for fine-tuning and improving models.
  • Training: They understand how machine learning models are trained, including choosing the right loss functions, optimizers, and hyperparameters.
  • Frameworks: Engineers are familiar with AI frameworks (like TensorFlow, PyTorch, etc.) and how to implement and deploy AI models.

In this sense, AI is not a mystery to computer engineers. They understand the theory, the tools, and the process.

2. The Mystery in AI :mag:

Although engineers have a strong grasp of AI’s foundations and operations, there are still some mysteries when it comes to certain aspects of AI systems:

a. Complexity of Deep Learning Models :globe_with_meridians:

Deep neural networks, especially in advanced systems (e.g., transformers or deep reinforcement learning agents), can become so complex that specific decision-making or internal representations of the model are hard to interpret. While engineers know how these networks are trained, what exactly happens inside them (i.e., which features are learned or why certain neurons activate in a certain way) remains a challenge. This is known as the “black box” problem.

b. Generalization and Overfitting :man_shrugging:

AI models are great at learning from data, but how they generalize (i.e., apply learned knowledge to new, unseen situations) is still not fully understood. Some AI models may perform very well on training data but struggle with unseen data (known as overfitting), and figuring out why this happens in some cases is still a challenge.
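(A toy illustration of this overfitting point, not part of ChatGPT’s reply, in Python/NumPy: a degree-9 polynomial with enough parameters to hit ten noisy training points almost exactly, yet it can be badly wrong on new inputs.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points drawn from a simple underlying curve (a sine wave).
x_train = np.linspace(0, 3, 10)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=10)

# A degree-9 polynomial has enough free parameters to pass (almost) exactly
# through all ten training points: the training error is close to zero.
coeffs = np.polyfit(x_train, y_train, deg=9)
print(np.max(np.abs(np.polyval(coeffs, x_train) - y_train)))

# But on new inputs - especially just beyond the training range - the fitted
# curve can be wildly off, even though it "memorised" the training data.
x_new = np.array([1.55, 3.2, 3.5])
print(np.polyval(coeffs, x_new) - np.sin(x_new))
```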

c. Emergent Behavior :tornado:

In some complex AI systems, unexpected behaviors can emerge. This is especially true for AI systems that learn from large amounts of data. For example, an AI agent might learn a solution that, while effective, may be counterintuitive or seem “irrational” to a human observer. Understanding why these behaviors emerge can sometimes be mysterious.

d. Bias and Fairness :balance_scale:

AI systems can inadvertently learn biases present in the data they’re trained on. While engineers can identify and mitigate many of these biases, fully understanding the social and ethical implications of AI decisions—especially in complex, real-world applications—can still be tricky. Ensuring fairness and accountability in AI remains an ongoing challenge.

3. Ongoing Challenges in AI :warning:

AI is still a rapidly evolving field, and while engineers understand its foundations, some areas are still under active research:

  • Creating truly intelligent systems that exhibit more generalized reasoning like human intelligence.
  • Ethical AI: Ensuring that AI systems are transparent, unbiased, and accountable.
  • Interpretability: Developing methods to “open up” the black box of deep learning and understand how decisions are being made at every level.

4. Conclusion :checkered_flag:

So, to answer your question: AI is not a mystery to computer engineers in the sense that they understand the core principles and operational mechanics behind AI. They know how AI works, how to train models, and how to apply it to various problems.

However, AI does have its mysteries, especially when it comes to the complex behavior of deep learning systems, emergent behaviors, and understanding exactly how and why decisions are made within these systems.

Rather than posting long captures of ChatGPT, maybe we can find a way of discussing this question as non-specialists, Fraggle?

As far as I can see it, the main issue seems to be the attitude we have to any complex system, together with natural limits to human knowledge.

In deep learning (or machine learning) the calculations being performed by a computer can be so complicated and powerful that it is difficult or even impossible for a human brain to track the logic or reasons behind a particular action. Deep learning can involve higher-dimensional maths and process massive amounts of raw data, which makes it very difficult for any human brain to track or follow.

[The ethical problem this creates for us is that there may be hidden biases in an algorithm which have real-world implications, and yet are untraceable and so difficult to correct or predict.]

The question for us here (on this thread) is whether this unpredictability and untraceability of an algorithm involved in deep learning/machine learning is the outcome of complexity or of mystery.

I have been saying it is a matter of complexity (together with natural limits on human thinking), while you have been saying it is a mystery (apparently because you believe the algorithms involved in deep learning are sentient).

If we can put aside any conclusion we may have about it, do you feel that there is a difference between complexity and mystery, or do you feel that complexity equals (and is synonymous with) mystery?