Musings

I think what he meant was: when there is no ‘I’, what happens is all there is, since there’s nothing to react to it.

And resistance to (‘minding’) it means that the law of cause and effect is not recognized by the conditioned brain?

If the law of cause and effect is more than a concept, the free, unlimited brain has foresight enough to see where things are going and is not reactive or resistant to what happens.

That may be it. The self-image acts as a judge of what is happening. It cannot not ‘mind’ what is happening. It is a ‘basket’ of likes and dislikes, beliefs, opinions, knowledge, conclusions, approvals and disapprovals, comparisons, condemnations, etc. With it not present, there would be no one to question or resist a ‘cause and effect’ fact such as ‘when it rains, the streets get wet’! Nothing to ‘mind’ what happens!

Peace befalls the mind that has nothing—to think about; and because it has nothing to think about this mind is free—to think about anything; nevertheless, it thinks about one and the same thing only.

And what is the one thing it thinks about, Manuel?

Thank you, danmcderm, for asking. Your question seems simple and innocent, but is it valid? Surely, a truth in one’s heart is no truth in any other heart. Whoever would give you an answer to this question would be corrupting you.

Thanks Manuel, I wouldn’t want that!


I wonder, is it possible for an AI to be unconditionally free?



Words, ideas, theories, views, teachings can be like opiates: dull the pain of neurosis. Transform it into pleasure even.


‘Unconditional freedom’ may be a red herring here. Maybe we can consider an ordinary sense of freedom first?

In order for there to be a sense of freedom (in the ordinary sense) there has to be ‘something it is like’ to feel free.

Is there anything it is ‘like’ to be a computational device?

Present day AI is a (very powerful) algorithmic computation.

So, for instance, ChatGPT and other language programs are, essentially, sophisticated and powerful auto-correct (or auto-complete) functions; that is, they correlate patterns between large amounts of pre-given (language) data.
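To make the ‘auto-complete’ picture concrete, here’s a toy sketch of my own (not how ChatGPT actually works inside - real models are vastly more sophisticated): count which word tends to follow which in some pre-given text, then ‘complete’ by suggesting the most frequent successor. The corpus and function names are just illustrative.

```python
from collections import Counter, defaultdict

# Toy "auto-complete": tally which word follows which in a small corpus,
# then always suggest the most frequent successor. The principle is the
# same one described above - correlating patterns in pre-given data.
corpus = "when it rains the streets get wet when it rains we stay in".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def complete(word):
    # Return the word most often seen after `word`, or None if unseen.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(complete("it"))    # "rains" - both occurrences of "it" precede it
print(complete("wet"))   # "when"
```

Everything the ‘model’ knows is in the counts it was given - there is no understanding of rain or streets anywhere in it.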

So to infer sentience (feelings or consciousness) in a software program is akin to mistaking the TV images of people for the actual people themselves.

There is a complete difference between this computational (AI) “intelligence” and the intelligence of an embodied consciousness (or sentient being).

All feelings involve chemistry (for example serotonin, dopamine, adrenaline and oxytocin).

And so feeling a sense of freedom, or anything at all - whether pleasure or pain, desire or suffering, etc. - requires a sensory nervous system and brain that have evolved for purposes of survival (or ‘will-to-live’).

AI, however, being nothing more than a computer algorithm, has no chemical and sensory nervous system with which to feel anything (and no intrinsic ‘will-to-live’).


Good, well thought out, though my intuition says the gap between human and AI intelligence will get fuzzier and fuzzier, narrower and narrower. (Though, in the long run, it may, I think, get bigger and bigger, when the AIs really ‘come into their own’ and leave us behind in the dust!)

But what I was trying to get at was more of: can AIs ever be utterly (or nearly) bias-free - the equivalent, kinda sorta, of the unconditioned mind?

The same question asked differently is: can an algorithm ever be bias-free?

The answer is in the question.

An algorithm is a procedure (a set of rules) that is used for solving a problem or performing a computation.

Without the program - i.e. the specific set of rules that align with the particular problem to be computed - there can be no algorithm.

Makes sense. The algorithms, however, can be optimized for as little bias as possible. And it wouldn’t take 50 years of diligent ‘practice’ to get there like it does for us non-digitals!

True. But then digitals have the distinct advantage of having no desires or fears to worry about!!

Algorithms are naturally only as biased as the set of rules they have been programmed to execute (or as unforeseen errors that occur during the computation make them).
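A small sketch of that point (my own toy example): the ‘bias’ of an algorithm lives entirely in the rules it was given. Rank the same data under two different rules and you get two different orderings - neither one neutral, each simply reflecting its rule.

```python
# The same data ranked under two different rules. The "bias" is not an
# accident; it is exactly the rule the programmer chose.
candidates = ["Ada", "Grace", "Alan", "Kurt"]

by_length = sorted(candidates, key=len)  # rule: shorter names first
by_alpha = sorted(candidates)            # rule: alphabetical order

print(by_length)  # ['Ada', 'Alan', 'Kurt', 'Grace']
print(by_alpha)   # ['Ada', 'Alan', 'Grace', 'Kurt']
```

Swap the rule and the ‘judgment’ changes with it - which is the sense in which an algorithm can never be rule-free.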

Assuming all AIs rely to a significant degree on algorithms (and from my research they do; it’s hard to imagine AIs that would be algorithm-free), then they are all driven significantly by rules, and where there are rules there is conditioning. (I’m using the word in a general sense.) Which makes the notion that an AI could be unconditioned tougher to entertain. (Though I’m doing my best!) :wink:

What about randomizing the AI’s decisions? Sure, it would still be conditioned with respect to the randomization algorithm and the rule that says “randomize all decisions.” But other than that it would be beyond conditioning? (Trade in Krishnamurti for a Magic Eight Ball?)

At any and every moment, to be aware of what happens is more important than what happens.


More important for humans because, as far as we know, we’re the only species who resists choiceless awareness.

If “what is” is a distortion of perception—the observer is the “what is”—and perception an illusion of the senses, then “what is” is also an illusion and nothing is what it seems, i.e., all is perceptual illusion, and on the basis of illusion knowledge is built. Upon becoming aware of this as a fact, I wonder, what else is there for me to know about myself?

…considering how little I actually know and how much I believe…

I may say that I understand conflict based on what I have heard and read about conflict; but if I still suffer frustration, annoyance, loneliness, dejection, anger, etc., then I have not become aware of, and understood, the true nature of my own conditioning, and without that understanding choiceless awareness cannot take place.


What if I suffer excitement, adrenaline rush, joy, etc ?