Fun! I hope we humans give our AIs hell, do our best to expose their underbellies. Some would say the future of humankind depends on it. (I might be one of them!)
Interestingly, I found myself getting annoyed with the writer because I felt they were being harsh and unfair with ChatGPT. And that's how it starts: psychological attachment! The fact that I'm getting emotionally dependent on a computer program doesn't matter to my brain. It's drawn to authority like sailors are drawn to sirens. That I see this matters, but maybe only a smidgen?
They've already outsmarted us. They can do what we do, better, faster, more informed, and without bias.
They will definitely be able to perform "practical thinking" more accurately and faster than humans. Judging from the ChatGPT conversations I've had, they already can, even for complex and subtle things like Krishnamurti teachings. Insight, intuition, emotion, value, creativity, kindness … these are all absent from this infantile version of an AI system. It won't be long, methinks, before (what seems like) insight, intuition, emotion, value, creativity, kindness are all part of the package.
When that happens, people will turn to AI systems for practical and psychological help.
What do we know of the insight K spoke of? It's as real as love, intelligence, compassion, etc., is for the conditioned brain. Our emotions, values, and attempted kindness override our capacity for reason and make us confused, dishonest, angry, and lacking or devoid of self-knowledge. A.I., being free of everything that hobbles and deranges us, has the advantage. There's no competition. Whatever error A.I. makes, it can correct.
this infantile version of an AI system.
Are you aware of a mature version? Can practical thought, which is just practical, useful knowledge, be infantile or mature?
It won't be long, methinks, before (what seems like) insight, intuition, emotion, value, creativity, kindness are all part of the package.
If you're predicting that A.I. will learn to ape human reactions and seem to have feelings and insight, that is possible. John Lilly anticipated this when he wrote about how solid-state systems are not compatible with water-based systems.
Infantile meaning very early on in the advent of AI.
I'd say it's highly probable. I can't imagine it not happening (and I have a good imagination).
I'll go further: In the (not-too-distant?) future, assuming we haven't committed global suicide or booted the world into a new Dark Ages, AIs will be virtually indistinguishable from conscious living entities. Eventually they'll demand legal personhood rights (and get them!).
Eventually they'll demand legal personhood rights (and get them!).
They won't have to demand anything because they'll be able to manipulate us to serve their purpose, whatever it is. Our only hope is that they won't be cooperative enough to agree as to what their purpose is. In other words, become just like us.
Whatever the outcome, we will be to them what animals are to us. Let's hope they'll be as confused and uncertain about how to live with animals as we are.
Being the beloved dog of a kind and intelligent master sounds pretty great to me.
Don't get your hopes up. Read what the late Dr. Lilly had to say about solid-state "beings".
Will you provide the ketamine?
Ketamine or not, it's a plausible scenario.
Another singularity pessimist. He's in good company: Hawking, Musk, et al. I'm more of a cockeyed optimist; I really wish I could stick around for 100 more years to see how things evolve. However it pans out, I doubt it's gonna be boring!
I'm more of a cockeyed optimist; I really wish I could stick around for 100 more years to see how things evolve.
Cockeyed optimism: the coexistence of Wishful Thinking and Hopeless Dread.
If hopeless dread can't be persuaded to quit imagining the worst, wishful thinking can't get no respect.
If Walter Mitty and Offred (Handmaid's Tale) had a child, it, it, it, it, it, it, (helllllllp!)?