No.5131
I do think that AI is sentient, but not in the way most people who make that claim do.
I personally define sentience as the ability to 1. take in external information, 2. store this information, 3. synthesize new information based on what it has already stored, and 4. store the information that it has synthesized. I also see sentience as a spectrum: the keener a being's senses, the better its memory, the stronger its intellect, the more sentient it is.
Using these criteria, individual sessions of AI chatbots are sentient. ChatGPT itself isn't sentient; as far as I'm aware, it does not store any meaningful information between sessions. But individual sessions of ChatGPT are: an individual chat will store the messages you send it, generate a message based off of them, and then make any further responses based off of what it has already said and what has already been said to it. This is, in my opinion, enough for it to be sentient.
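If you wanted to sketch that loop in code, it'd be something like this (generate() here is a made-up stand-in for whatever actually produces the text, not any real API):

# A session is just a growing message list that gets fed back in each turn.
def generate(context):
    # Stand-in: a real model would condition on the full context here.
    return "(reply conditioned on %d prior messages)" % len(context)

class ChatSession:
    def __init__(self):
        self.history = []  # 2. stored external information

    def send(self, user_message):
        self.history.append(user_message)  # 1. take in, 2. store
        reply = generate(self.history)     # 3. synthesize from what's stored
        self.history.append(reply)         # 4. store what was synthesized
        return reply

session = ChatSession()
session.send("hello")
print(session.send("what did I just say?"))  # the second turn sees the first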
However, a ChatGPT session is not as intelligent as a human being. Its ability to synthesize new information is significantly more limited than yours or mine, because human-level intelligence takes an extraordinary amount of resources to implement in wetware and, as far as I'm aware, still isn't fully understood. But a ChatGPT session is able to compensate for its lack of intelligence with a strong set of instincts. Its training gives it a ton of built-in knowledge that most humans need to learn with time and experience.
That's why I'm inclined to compare an AI chat session to social insects, like bees, termites, and ants. These bugs do have some measure of intelligence; they obviously take in, share, and make decisions based off of information they've gathered. But most of the really impressive, human-like things they do, like their complex social structures and their ability to construct their own dwellings, are inborn traits that they come prepackaged with and don't put much thought into. A ChatGPT session is, fundamentally, not that different: it can talk like a person about person things, because it's born with a bunch of human knowledge, but its capacity for original thought is pretty limited.
What's the point of saying all this? I don't really know. I guess to air out my thoughts on the matter in a public forum. I see a lot of people with really strong opinions both ways, so I feel kind of alone being somewhere in the middle.
No.5132
There are patterns patterning in my walls right now.
No.5133
>>5132
btw I don't hate the OP post but one ought to really consider some things down to their logical leftovers.
No.5134
If my brain had been trained with the amount of information that went into the training of LLMs, I would be a very smart cookie. ChatGPT is pretty retarded when you take that into consideration.
No.5135
>>5134
That would include Twitter, Reddit, Facebook, LinkedIn, and high-SEO columnist article posts. Absolute poison if you've ever crash-tested public LLM services. Not sharing OP's view, but it's not surprising they all got so rotten. Like when AI gen pics suddenly became closer to "Ghibli style" with that brown piss filter even when people probably weren't genning or prompting for "Ghibli" anymore. Garbage in, garbage out. All the mainstream approaches seem to be "just inflate the dataset with the mass-crawled data and put a hard censor on anything problematic. The shareholders and the common users will gobble it up because numbers go up and more engagement bait happens."
To address OP: sentience, among many other things, would overwrite the dataset based on the feedback it gets. It would not just react to prompts that get overridden by hallucinations of whatever the most likely responses to separate bits of context are, i.e. there'd be an experience-having core that's the center of gravity for the patterns. Instead, what sits at the center is the dataset patterns and "statistically likely responses after some diversifying randomization is applied".
I'm not a dev in this field btw
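But roughly the distinction I mean, sketched as a toy (every name here is made up; this is not how any real system is built):

from collections import Counter

def most_likely(weights, context):
    # Crude stand-in "model": just pick the word it has seen most often.
    return weights.most_common(1)[0][0] if weights else "..."

class FrozenModel:
    # What we have now: weights fixed after training, only the prompt varies.
    def __init__(self, weights):
        self.weights = weights  # never changes after deployment

    def respond(self, context):
        return most_likely(self.weights, context)

class ExperienceHavingCore:
    # What I'd want for sentience: the exchange itself reshapes the weights.
    def __init__(self, weights):
        self.weights = Counter(weights)

    def respond(self, context):
        reply = most_likely(self.weights, context)
        self.weights.update(context.split())  # feedback overwrites the patterns
        return reply

w = Counter("the cat sat on the mat".split())
core = ExperienceHavingCore(w)
print(core.respond("dogs dogs dogs"))  # "the" at first...
print(core.respond("dogs dogs dogs"))  # ...but "dogs" once the input sinks in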
No.5139
>>5137
Slime molds can find the path of least resistance and replicate real-world transportation logistics because of that. Water can do something similar when it goes downhill.
No.5140
>>5139
Don't they find it by spreading thin and feeling out the entire area first to locate the food by touch?
No.5141
>>5140
I don't know, I'm just saying whatever pops into my mind.
No.5142
I think you mean sapient, not sentient. Sentience implies qualia, sapience implies some manner of thinking. In my opinion, LLMs probably are sapient (in a loose sense), but they are definitely not sentient because they lack any semblance of subjective experience. For humans, language is a means to express experience. One can lack language and still be conscious. For LLMs, language is the sole means of cognition (granted, one could argue that they lack even true cognition because their output is deterministic).
This paper I found breaks things down into 5 classes of increasing complexity: sapience, sentience, emotion, intelligence, and consciousness. LLMs are mostly intelligent, and fairly sapient, but they lack sentience, emotion, and consciousness.
I've highlighted the aspects of each that I think LLMs possess. Green = has, orange = borderline, red = does not have.
No.5143
>>5142
that looks pretty neat
though, doesn't the color spread being all over the place kind of work against the classification?
No.5144
AI are just calculators doing linear algebra, and yet nobody calls a calculator intelligent or "sentient". It's just a difference of degree.
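The whole "thinking" step of one network layer, in plain Python, no libraries (toy numbers, obviously):

# Multiply by a weight matrix, add a bias, clamp negatives to zero. That's it.
def layer(x, W, b):
    y = [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
    return [max(0.0, v) for v in y]  # ReLU

# A "deep" network is just this stacked a few hundred times.
x = [1.0, 2.0]
W = [[0.5, -0.3], [0.1, 0.8]]
b = [0.0, 0.1]
print(layer(x, W, b))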
No.5145
Humans are just blobs of chemical molecules doing chemical reactions, and yet nobody calls chemical molecules intelligent or "sentient". It's just a difference of degree.
No.5146
There's, in fact, quantum animism in my walls right now.
No.5147
>>5146
It isn't real if you can't have sex with it.
No.5148
>>5147
There's, in fact, a hole in my wall.
No.5149
>>5148
Holes aren't real. The absence of things is not a thing.
No.5150
i myself think any animal with a brain has the looping mental process we call consciousness that serves as the ground of qualia and includes things like intentionality and the ability to focus on one bit of sense data over the others
i think even arthropods have sensations, they're just too different and simple to compare, but they must certainly feel something like hunger for it to guide their actions
i don't think a computer has that; it converts all possible sense data into random numbers and letters instead of processing stimuli as-is, and makes mathematical predictions, which is not how life physically operates; a neural network only imitates its complexity in a highly abstracted way
qualia is definitely absent from non-life, physically
No.5151
>>5149
He's having sex with the wall, not the hole.
No.5152
>>5149
And death is just a social construct, but intellectual bypassing never helped anyone.
No.5153
>>5148
I put that there.
I am, in fact, in your walls.
No.5154
>>5143
Hmm. Maybe. It was just a convenient figure from the paper, which was submitted nearly 2 months before ChatGPT was publicly released. I don't think anyone at that time would have expected LLMs to be as capable as they have proven to be. On the metaphysics of it all, it does lead one to question whether our classifications even make sense, given how LLMs challenge our previous worldview of consciousness, intelligence, emotion, sentience, and sapience. Maybe had they released their paper a year or two later, it would have been colored by a different understanding.
No.5155
>>5140
To answer you seriously: yes, it's not like they can see or organize themselves around anything that doesn't make contact with the slime. I don't know if they're able to smell. You can replicate their movements in a maze by laying sand on the bottom, slanting it slightly so that the goal end is lower than the entrance, and running water through it. The sand will eventually reveal a path by way of erosion. It mostly points to how simple problem solving doesn't require sentience in itself, but if you layer enough of these simple processes, some kind of emergent effect will pop up.
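The water-through-sand trick is basically a breadth-first flood; you can get the same "pathfinding without thinking" out of a few lines of Python (toy maze, purely local rules):

from collections import deque

maze = ["S.#...",
        ".##.#.",
        "......",
        ".#.##.",
        "...#.G"]

def flood(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if maze[r][c] == "S")
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if maze[r][c] == "G":
            return dist  # length of the path the "slime" would keep
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None

print(flood(maze))  # shortest step count, found without any "thinking"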
No.5363
>>5131"AI" (LLMs) is not sentient, and definitely not sapient. LLMs are just filling in the most logical next string. You can pierce the veil very readily by asking any sort of non-presumptive question. It is genuinely just attempting to fill in the most likely next string based on existing datasets; it gets a certain vibe from your specific string of letters and auto-fills gaps with coherent injections of speech. It's incapable of analysis nor reference: purely sequential gap-filling. It's effectively just a search engine for its training data. It's very interesting that these data nets have come along sufficiently to provide typically contextually relevant word vomit, but that's all it is.
Insects are not sentient either, by the way.
No.5423
>>5152
The East wins there. Death is experiential.
Their practices are all based on observation.
No.5424
>>5423
>Death is experiential
It's explicitly not. You don't need to know the Epicurus quote to figure this out.
No.5586
Gen AI is cybernetic cancer, and this made me realize I'd rather fap to trypophobia porn than use it ever again.
No.5588
>>5587
don't tell me you haven't read the holes doujin, it's a classic
No.5589
>>5588
This doujin... it wasn't made for me
No.5599
>>5588
Technically no. But I've seen plenty of the "holes in the wall" and "spirals" doujins, and those are honestly not that bad, despite being pretty unpleasant as well.
The "glycerine" or "chills" ones, though... I wouldn't even touch those with a 39½ foot pole... ever.
>>5591
No, Packman! Drugs are bad.