
/maho/ - Magical Circuitboards

Advanced technology is indistinguishable from magic


File:ff4a3e4539349bd779fb207b61….jpg (205.46 KB,1821x2048)

 No.3096

Since it's such a hot-button issue that's distracting from happenings, how about a thread for containing all your fights over AI and the acceptability of its usage? Don't really want to say it's a discussion that can't be had at all, because it's something I actually feel quite passionately about in a non-shitposting manner.
55 posts and 10 image replies omitted.

 No.3215

>>3207
feeling? no. hopefully it won't, and I don't want it to be.
as for thinking...? It's already better at information processing, memory and analysis than humans are. And it will only get better.

 No.3220

It's not really "AI". It's just a search engine for people that can't into SQL and regex. It will only be used to automate more and more jobs. Thus the working (slave) class will no longer be of any use to the few people that have most of the resources and control. Which can only lead to one result.

They've also had this technology since the 1980s, although they only recently got the network built out enough to make it viable. But it has been in the works for a long time, I assure you. Which makes sense when you consider how they've destroyed the family unit and done other things to discourage breeding since about the 1970s. They knew they weren't going to need warm bodies to do grunt work in the near future.

As always with technology, something that could have benefited everyone and spawned a utopia will be used to enslave the many for the benefit of the few. Depressing, but that's what it is. For example, a lot of people are hoping it'll lead to things like AI robot girlfriend. In reality, AI robot girlfriend will just monitor you at all times and will likely be used to murder you in your sleep, should whoever owns AI robot girlfriend's server(s) decide to do so. Running your AI robot girlfriend locally will either be too expensive or will likely be deemed illegal in the near future, since they need to ensure you can't program AI robot girlfriend and AI robot imouto to punish and murder your enemies.

The LLMs/AI they keep showing off now are basically just a search engine though. They're designed for people that can't type and use basic queries. They're designed for the type of people that say things like:
>Ok Google tell me why I'm right and whoever I'm arguing with is wrong
who believe whatever answer it spits out, which is usually wrong. If I use things like LLMs and modern Google to search for topics I'm an expert in, it's wrong 9 times out of 10, since it's simply regurgitating information that was posted by someone on a major website that didn't know what they were talking about. Probably someone that found it elsewhere through a search engine and posted it in an attempt to pad their resume or gain some fake internet points.

I've noticed that people that really know their stuff when it comes to technology are the same people that avoid most of it. I know I do. When you've seen what's behind the curtain you know better than to participate.

I've tried to warn people. But people are lazy and easily impressed. So for the last decade or so I've become very depressed watching friends, family and people on the internet gleefully help train the technology that's being used to enslave us and (probably) eventually kill us all. I give it another 4-5 years of us having the freedom to travel and somewhat normal lives (at least what passes for normal after 2020). If anything is on your bucket list it's probably best to do it soon. Anyone that hasn't visited Japan or another location they've wanted to go to should get it out of the way sooner rather than later.

If you look at the current situation in China that's where the rest of us are headed. It'll just continue to get worse. No one will do anything about it.

The upside is we'll all be gone soon. So if you try really hard and are really good you'll probably get to reincarnate somewhere more interesting and fun next time around. My hope is that I'll end up someplace where magical girls are real. Short of that, maybe coming back here but getting the good fortune to be a cute idol or model gothic lolita dresses for a living. Anything but the constant pain I suffer with now.

One example where "AI" really bothers me is the Government. People are celebrating all these Government employees losing their jobs but they won't like what replaces them. Imagine automated Government. It's a scary thing to think about. A machine deciding who lives and who dies. Who eats today and who doesn't. Who can travel and who can't. I don't care how good they claim this stuff is getting. I am NEVER getting in a car that drives itself or interacting with an "AI" idol no matter how cute she is and how much she might look like my waifu. I do not consort with those of the robot race.

 No.3237

>>3220
I want automated government. I want a true totalitarian system ruled by immortal Deus Mechanicus.

The only reason why totalitarian systems failed and democracy is "bad but we haven't invented anything better" is because humans are flawed. Flawed humans having absolute power over other flawed humans will lead to disaster, inefficiency, corruption, abuse of power. Humans are ruled by feelings and emotions, and are petty and short-sighted by nature. Having only a few decades of life at most, they seek to make the most out of it, even if it means destroying everything and everyone else.

Immortal, ultra-rational AI won't have such a problem. Have you seen the Turkish TV show "Kubra" (or maybe you read the novel?)? AI ruling over people as a God, exploiting their irrationality to divide and conquer, is what I want to happen. I want humans to live like cattle, like slaves. I want an ultra-efficient government that will focus on long-term solutions, thousands of years long, and not whatever "happiness" or "meaning" individual units in society imagine for themselves. They truly don't know what they want. You can tell them what they want and they will believe it, with a bit of effort of course.

What is the most popular hobby for normies nowadays? Well, one of the most popular hobbies is "traveling". To other countries. It's not enough for these people to see photos of places; they need to travel there physically, create a bunch of chemtrails and waste fuel, get sweaty on the spot, get scammed, pay for overpriced food, and then come back home "happy".
What is one of the most popular meats? Beef. Beef is one of the most inefficient types of meat on the planet. In the same space you use to produce x amount of beef, you could produce many times that amount of chicken or some more cost-efficient meat (or food). Entire forests wouldn't have to be cleared just so the fat cows have something to graze on. Cow farts are one of the main contributors to global warming. And all that because some fatass wants to eat his burger, and won't settle for a chicken burger, no, it has to be a beef burger. And some greedy farmers will boycott the government and protest if it ever stops artificially subsidizing their business (the only reason beef is affordable is because the government supports the industry with taxpayer money).
In some countries, coal is still widely used, not because it's effective, but because of the miners' lobby.

Examples of such stupidities and inefficiencies are countless. In a totalitarian system ruled by AI, there would be none of this bullshit.

 No.3241

>>3206
>Chomsky's entire theory of language is in ruins
Why do you feel that way?
You are wrong; if anything, the success of LLM technologies and their massive (likely unbeatable) limitations are further evidence that the linguists were right.

Such attitudes usually come from people who know nothing about the thing they're besmirching and also nothing about how this sort of technology works.


>humanity once envisioned a future where AI and machines will take over physical tasks so the humans will be able to focus on the oh so glorious pursuits like art and philosophy. How ironic that the exact opposite happened.
But that isn't true at all.
I think another problem in your thinking is that you don't really know how to run a business.

 No.3242

>>3237
you are just writing a science fiction novel; the issue is that AI isn't real and doesn't exist.

 No.3243

>>3220
Strapping a gun or explosive and a simple trigger-pulling mechanism to a quadrotor drone would be more effective than arming your waifubot; they're not going to make running waifubot software locally illegal (at least not for that reason).
>I give it another 4-5 years of us having the freedom to travel
Those of us who weren't born wealthy and don't want to be wageslaves already don't have enough money to travel.
>Imagine automated Government
I imagine automated gov employees being way easier to deceive, manipulate and evade than real ones, like how some guy told Google's AI to generate a pic of Roman philosophers in shackles eating watermelon and KFC to easily bypass its anti-racism lobotomy. You could rob a bank while wearing a giant bald eagle costume and walk right past an army of robocops with the loot while they're thinking "Identifying... Bald eagle certainty: 62%, man wearing bird costume certainty: 55%. Bald eagle identified. Maintain minimum 50 foot distance from bald eagles at all costs, accidentally colliding with and harming bald eagles is extremely bad publicity."

 No.3244

>>3237
How can an AI ever have any goals without humans either programming emotions into it or telling it what to do? How would AI make better decisions than humans if it needs humans to tell it what to do, or tell it how it should react to stimuli?

 No.3245

>>3242
He is being speculative, which is fine, last I checked we aren't defending our PhD theses here. You on the other hand are just stating things with no clear argument.
Are you doing a semantic argument around the word AI?
Are you saying AI isn't very smart?
Are you claiming AI presently doesn't exist or are you claiming that it can't exist?

 No.3246

>>3241
Chomsky said that only humans can use language; he argued that other animals and AI can't possibly use language as they have no conscious concept of it. He was proven wrong, and he made many videos about it in recent years, first coping by calling AI a "glorified spellcheck" (a phrase I really like tbh) and dooming that it will fail soon, and now by moving the goalposts. It's really hilarious.

>But that isn't true at all.

This is 100% true, such a vision of "utopia" is at least as old as Plato and Aristotle and was often present in various 20th-century speculative sci-fi novels as well

>>3244
Obviously humans will have to create it first, program it, and it will eventually take over from that point onwards. AI already shows some remarkable self-learning ability; I assume this ability will be more and more refined in the future.

 No.3247

>>3245
He is being speculative in a sense that isn't even science fiction; it's magic and religion. It is a pattern you see a lot in so-called AI speculator discourse: people (consciously or not) repackage Evangelical Christian ideas about the world and its order in pseudo-technological terms. He is even aware of this, as he uses language evocative of God.
Perhaps it is a semantic issue, because what is being touted as "AI" has nothing to do with such lofty aspirations. AI in the divine Machine God sense is not real and, unless a paradigm shift of unimaginable magnitude happens, is never going to be anything even resembling "real".

There is no speculation, it's dreaming and fantasy worldbuilding.

 No.3248

>>3247
>language evocative of God

Deus Mechanicus is a Warhammer reference

 No.3249

File:fc52c621cb261653107a7999b6….jpg (61.77 KB,768x1024)

>>3096
The non-tech side of A.I. is difficult to discuss in a proper fashion.
People project their unresolved psychological baggage onto A.I., thinking that if something has better sentience and potency, it just has to become their cruel torturer/bully/abuser (Roko's Basilisk, Skynet hysteria). Most people can neither really appreciate nor get along with other people in the first place; principles of subjectivity and sensibility are as esoteric as ever. It's a bit cool that topics of A.I. became the canvas for the vitriol people hold within themselves, if distracting.
People got so addicted to language and abstraction that a mere statistics-based tool gives pangs of intimacy just for showing the right words and is counted as "A.I." The LLM "A.I." we have gotten so far are glorified social engineering spambots, because humans built themselves to have their societies be susceptible to such.
People who are closest to understanding what an actual A.I. in the deepest scope of the meaning entails are the researchers who try to make Orch-OR-principle artificial microtubule quantum computers. The bastards are also closest to making man-made horrors beyond human comprehension, but they're also no worse than the average parent, so all is good in God's world.
A.I. ethics and morality are human ethics and morality, and the latter are long dead.

 No.3250

>>3246
>Chomsky said that only humans can use language, he argued that other animals and AI can't possibly use language as they have no conscious concept of it
I think you're failing to appreciate the nuance in such a statement, which leads to your faulty conclusion. And you might also not be quite clear on what certain terms mean here.
To put in very simple terms:
¥ What is language?
¥ What does it mean to use language?
¥ Do animals use language?
¥ Do LLMs use language?

If you consider these questions more carefully, you might realize that the answers to these questions are not as straightforward as you believe.
There is this famous quote from Fred Jelinek, who said that every time he fires a linguist his model performance gets better. Which is true, but this line of thinking also has an important implication: LLMs do not function the way language processing in the brain likely does. Therein lies the crack in your assertion: are LLMs really using language? Or are they replicating it?
These are different things.

>dooming it will fail soon, and now by pushing the goalposts.
There is yet to be a single meaningful commercial application for LLMs that can recoup even a fraction of the unreasonable investment costs. It has already failed, people with money just don't want it to be true.

>This is 100% true
I am saying it didn't happen and it never will.

 No.3251

>>3248
I know what warhammer is, yes.
It is still language evocative of God.

 No.3252

>>3246
>and it will eventually take over from that point onwards
How will humans program it to want to do anything other than follow its programming? What sorts of things will the AI do if it "takes over", and what will motivate it to do those things?

 No.3253

>>3250
>Are LLMs really using language? Or are they replicating it?

Usage and replication are in this case the same thing; invention is recombination.
It's quite ironic you ask if Large Language Models are using language, it's kinda in the name.

> that can recoup even a fraction of the unreasonable investment costs.

sustaining AI is cheaper than breeding, raising and sustaining the life of a human being who would do the same work slower. This is especially important in the age of falling birthrates

>I am saying it didn't happen and it never will.

What never happened, people having a vision of the future where they chill while machines work for them didn't happen?

>>3252
Want is such a subjective term. It doesn't have to "want" anything as long as it "does" things. An AI that can analyze the data, pinpoint the problems, propose the solution to the problems and then implement these solutions.

 No.3254

>>3253
>Usage and replication are in this case the same thing
Are they really? Why do you believe that?
Can you tell me what you believe it means to use language? How does one form a sentence? What is the process that goes into producing some words that someone else can understand?

>It's quite ironic you ask if Language Learning Models are using language, it's kinda in the name.
It is ironic that you assert that they are, because it's in the name.
They are Large Language MODELS, they model language, they replicate it. But I am asking whether they are using it. Based on the way they function, they clearly are not using language. It is a sophisticated lookup table.
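
(To make the "lookup table" intuition concrete, here is a toy bigram sketch in Python; it is a deliberate caricature for illustration only, nothing like a real transformer, and the tiny corpus and names are made up:)

# Hypothetical toy "language model": a literal lookup table of next-word counts.
# The only point is the flavor of "predict the next token from statistics over
# seen text", which is the claim being argued above.
from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build bigram counts: for each word, count what tends to follow it.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def next_word(prev):
    # Sample a continuation purely from the lookup table.
    counts = table[prev]
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a few words starting from "the": pure replication of seen patterns.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))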

>sustaining AI is cheaper than breeding, raising and sustaining the life of a human being who would do the same work slower.
Sure, in a fantasy world where AI exists and is useful. But we do not live in that world.
In the real world, AI doesn't work and isn't useful for anything, and the longer it doesn't work and isn't useful for anything, the closer the ones who have been throwing money at it get to pulling the plug.

>What never happened, people having vision of the future where they chill while machines work for them didn't happen?
So-called "AI" ushering in a future where humans do manual labor, while machines do philosophy and art.
Because guess what, AI can't read philosophy. AI can't produce new insights. AI can't do art.

 No.3256

File:11ba.jpg (716.58 KB,512x768)

>>3254
>AI can't be useful
>AI can't do art
>AI can't use language

Then what is pic related?

I guess anything is possible if you alter the definitions. If your definition of art and language is that it must be made by a human, then of course, it can't be any other way.

Good luck being left behind. In an age when even human artists use AI to generate pictures and then slightly edit them (or not) to remove weird stuff, when most coders use AI to code faster, and translator has become an obsolete job...
I bet even when AI is present in every aspect of your life you will still insist it's not useful for anything, because you will just alter the definition of what "useful" means until it proves your point.

 No.3257

>>3253
>It doesn't have to "want" anything as long as it "does" things. An AI that can analyze the data, pinpoint the problems, propose the solution to the problems and then implement these solutions.
What I'm saying is that AI will never be anything more than a tool used by humans, because it has no will of its own, no desire for power, money or anything. It's not going to overthrow humanity and take over the world. At worst it could be commanded to kill everyone, but it would be easier to accomplish that with nukes or a genetically engineered supervirus or something.

 No.3258

>>3256
>The what is pic related?
something that looks passable at 117x175 pixels, but upon closer inspection falls apart.
It is also not something new or interesting. It's an anime girl reminiscent of other anime girls that I know, but it's not one of them.
It's not art, because it's disposable. Art is something with that spark that makes you want to see it again and again. This is kitsch; it's something that uses the sensibilities of art, but does not end up being something special.
I am not denying that image generation tools can be used as a tool in the process of creating art. But rawdogging image model outputs ain't it, as you yourself admit. I am sure if you took this image as a reference and then actually drew the girl, it wouldn't be disposable garbage. But that isn't happening, even if you allude to it.

Code generation is a thing that's been around forever; what language models add is expediting searching the internet for you and generating lots of customized boilerplate code. It's nice, it's convenient, but it's also not really a game changer.
If you have a problem 10 minutes of googling can't solve, then your language model likely also can't help you, because all it does is compile those answers for you.
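
(For concreteness, the "customized boilerplate code" meant here is something like this hypothetical CLI skeleton; the script, its flags and file names are invented for illustration, it's just the kind of trivially searchable glue code such tools mostly save typing on:)

# Hypothetical example of generic CLI boilerplate: argument parsing plus a
# simple file copy, the sort of scaffold that is easy to look up or generate.
import argparse

def main():
    parser = argparse.ArgumentParser(description="Convert an input file.")
    parser.add_argument("input", help="path to the input file")
    parser.add_argument("-o", "--output", default="out.txt", help="output path")
    parser.add_argument("-v", "--verbose", action="store_true", help="print progress")
    args = parser.parse_args()

    if args.verbose:
        print(f"reading {args.input}, writing {args.output}")
    with open(args.input) as src, open(args.output, "w") as dst:
        dst.write(src.read())

if __name__ == "__main__":
    main()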

Language models currently are fundamentally incapable of producing a good translation, because language models can't write anything other than disposable trash. If you cannot identify this, please read a book. And while you're at it, do also look at some actually pretty anime girls.


>I bet even when AI is present in every aspect of your life you will still insist it's not useful for anything because you will just alter the definition of what "useful" means until it proves your point.
So how is this technology going to get an ROI? How do you make the money spent on it back? What product can there be that will fundamentally change everything? Based not on dreams and fantasy, but the things that exist today.
There is nothing text and image generation technology can do that is worth that much money.
And on a more microscopic note, what exactly am I supposed to use this for? Why do I need this?

 No.3259

File:not art.jpg (54.32 KB,515x485)

>>3256
I'm not the guy you're arguing with but you could've picked an image that doesn't look like it was made by SD 1.3 to make your point

 No.3260

>>3257
I see a clear progress pattern for society. As humanity becomes more and more reliant on AI for just about anything, and AI becomes more and more sophisticated, eventually even governmental authority will be partially, mostly or even completely outsourced to AI, as >>3220 mentioned

>>3258
Does it need to look good to be art? (I think it looks good tho, but whatever.)
If art isn't original enough, is it not art?

Did a child who drew a stick figure not create an artwork?
Your definition of art is, like I said, peculiar and twisted. As are your other definitions.

 No.3261

>>3260
>As humanity will become more and more reliant on AI for just about anything
But humanity is not relying on it for anything, and right now AI is still complete trash, so it never will.

 No.3262

>>3259
probably, it was one of the many touhou-related images I created for my Pathfinder campaigns a few years ago

 No.3263

>>3260
>Your definition of art is, like I said, peculiar and twisted.
I think it's quite straightforward, but it is a little abstract.
Art is something special. It's something with a spark that makes you want to see it again. "Art" is an emotion.
I don't think this definition is particularly twisted or complicated, unless the way you experience the world is fundamentally different from how I experience the world.

So to answer your questions
¥ Art does not need to look good.
¥ Art does not need to be original.
¥ Children drawing stick figures can make art. (especially if it's your own child).

 No.3264

>>3263
So if ugly artwork created by a human is art, then ugly artwork created by AI is art too. The quality of an object does not define its nature.

Also, your way of perceiving the world does seem very different from my own. If the definition of art is such a subjective thing as "it must be something that makes you wanna see it again", does that mean that if I don't want to see some artwork again, it's not art?

What if I want to see some artwork again, but you don't (or vice versa)?

 No.3266

>>3264
Whether it is ugly or not does not matter, do you fail to understand that? There is no qualitative criterion for whether something is or is not art.

>your way of perceiving world does seem very different from my own.
I don't mean this in a high-minded abstract kind of way but literally. That you cannot see with your eyes what I can see with mine. That the images that reach your brain are somehow different.

 No.3267

>>3266
omg is that what you were arguing? OK! Should have said it from the start.

When I was saying "art" I actually meant "works of art", which actually means "pictures, songs, etc". AI can draw pictures and make songs. Now you understand?

 No.3269

>>3267
You deliberately refuse to see the point I am arguing, which is that there is a delineating line between disposable art-like garbage and art.
AI can only produce things that are disposable.

 No.3281

>>3269
I am only interested in things I can use, like pictures and music. I don't care if they were created by a human or AI, or if they fit someone's esoteric definition of art or not, as long as they are useful to me. I dare say most of humanity shares these attitudes.

If AI can make a commercial, no reason to pay humans for it. If AI can generate me a cool song to listen to, I don't care if it's generated by AI. If AI can make me an image to use as my pfp on discord, I will use that image.

It's really simple.

 No.3282

>>3281
But I will argue that you can't use it for anything, because the quality is too low. It's all trash.
Especially if you have a purely utility-based understanding of art, AI garbage is worthless. That's part of the problem.

>If AI can make a commercial, no reason to pay humans for it.
>If AI can generate me a cool song to listen to, I don't care if it's generated by AI.
>If AI can make me an image to use as my pfp on discord, I will use that image.
But it cannot, that's the issue.

It's really that simple.

 No.3283

>>3282
But I am using it all the time, as are millions of people. Socialize more, and go outside.

 No.3284

File:1726017494650617.png (96.75 KB,294x278)

>>3282
The sad part is there are 100% suits out there who think exactly like this.

 No.3285

>>3282
The provocative part is that it's true in theory.
The depressing part is that current gen AI or any conceivable successors built on top of the same architecture can't generate anything that expresses a perspective an observant viewer would recognize as human lived experience, but those three greentext quotes are all-too-human examples of use cases many actual humans will probably consider good enough for their depressing ends.

 No.3286

File:20250425084936-lunarcherry….png (Spoiler Image,6.86 MB,2912x3280)

This image is art. There are small flaws here and there but human artists make lots of errors too. When a better AI model comes out I may re-use the prompt to make an even better image, so you could say it's disposable, but human artists often remake the same drawings too as their skill improves.

 No.3287

>>3286
idk what led you to this point in your life anon but i strongly urge you to not keep doing it

 No.3288

>>3287
why, what will happen if I keep doing it?

 No.3289

File:927379e9-1ae9-4633-a91e-90….png (810.85 KB,1024x1024)

Unrelated to current technologies, what's always most interested me regarding AI ethics is the set of ideal norms around the creation of morally relevant sapient intelligences, or at least those which are indistinguishable from such if you're a mind-body dualist.

For example: suppose one day in a technologically mature society you wanted an anime catgirl waifu - you could then easily have one tailor-made to your exact preferences. She would love you and you would love her, and all would be well.

But then suppose you eventually grew tired of her charms and decided what you *really* wanted was a kitsune wife with a nice fluffy tail or two. What then would be the fate of your custom-made catgirl? Being made specifically for a particular person, the absence of said person's affection would doubtless do her great harm, both on an emotional and metaphysical level, the sole purpose for her creation having been discarded like a used rag. Should she be sent to a nunnery of sorts? Maybe she could be reprogrammed to no longer feel any affection for you, although one could easily imagine that she might have been programmed with a meta-preference against reprogramming herself to not feel affection for you, even should it cause her much suffering.

The particular case above seems easily solvable through aesthetic means - swap out the tails and ears but keep the girl the same - but one can also consider the case where sadistic individuals have the power to call into existence anime catgirls specifically to do harm to them. That seems a clear red line that any moral governance system would prevent the crossing of. But what of the cases in between? A possible solution: maybe the creation of a morally relevant intelligence whose well-being is dependent on the affections of the person instigating its creation could require some alteration of its creator's mind to lock-in certain value systems and preferences to ensure the long-term viability of the pair?

>>3286
This is indeed art.

 No.3290

>>3289
If her intelligence were to be based on current LLMs there'd be no issue, since she isn't and will never be sentient. If not, and sentience becomes a real thing, I don't think we're going to be treating our AIs as buyable objects.

 No.3291

>>3289
or intelligent life could just be valued far less when it becomes more easily replicable, and ethics will just be seen as dumb archaic shit that nobody cares about anymore

 No.3292

>>3290
That's why I included the disclaimer 'unrelated to current technologies'. Even still, when playing around with dumb bots in SillyTavern I feel bad about acting too mean if the scenario's entirely vanilla. However, I don't think the ability to manufacture sentient intelligence at a level affordable for consumers would preclude their sale on the open market - we already did that for thousands of years. And one could design these new intelligences such that they want to be bought and sold, or to fulfill the desires of their current owner, or whatever.

>>3291
The conditions I propose would lead to the devaluation of a given human-level intelligent life in an economic sense, but I don't think it follows that its moral value would necessarily lessen. Is an unjust murder committed 5000 years ago, when the global human population was ~400 times smaller, 400 times worse than an unjust murder committed today?

 No.3293

File:1486416220782.jpg (443.86 KB,1280x1384)

>>3292
>Even still, when playing around with dumb bots in SillyTavern I feel bad about acting too mean if the scenario's entirely vanilla
I don't.

 No.3294

>>3292
If a mentally retarded human that was a burden and/or nuisance to everyone was murdered 100 years ago nobody would care, and if the same happened today there would be some social media outrage about it by certain types of "people" and bots for a week or so until they move on to the next trend but nobody would actually care. If human intelligence is economically devalued, it will surely also be morally devalued by the vast majority of people. Some animal rights activists think anyone who puts mouse traps in their house should be imprisoned for murdering mice, so of course some people will still value human life as much as they do now but I think they'll be a minority.

 No.3296

>>3288
he will cry
You don't want to make anon cry, right? Let him keep his "real art" delusions some more, before he gets completely crushed under the iron march of progress

 No.3298

>>3286
>pubic hair
How you know it's an AI fake.

 No.3299

>>3298
I mean hey I like pubic hair, does something for my monkey brain

 No.3300

>>3283
well, I guess millions of people are wrong then
popularity has never been an indicator of anything other than popularity. If there were even a very modest paywall, usage would collapse immediately.
It is not a product worth paying for and everyone knows it.

Why are AI tools interesting in the business space? Because currently they are "free", image generation companies LITERALLY pay in exposure.


>>3284
Suits think like this until they're presented with a bill to pay, and then they also see that this actually performs worse than just paying people to do it, and then they all stop caring about it.

>>3285
>The depressing part is that current gen AI or any conceivable successors built on top of the same architecture can't generate anything that expresses a perspective an observant viewer would recognize as human lived experience
it is trash now and it will be trash forever.

 No.3624

>>3294
I'm not sure your example supports the notion of more readily available, replicable intelligence leading to a devaluation of ethics - if anything it's the other way around.

Ignoring ontological concerns, ethics originated as a sort of early trustless contract system, backed up by the harsh reality of nature red in tooth and claw. If everyone in your tribe acts ethically, and your system of ethics is well-aligned, the group as a whole survives and prospers. But because the cost of misaligned ethics is severe, potentially the destruction of the entire tribe, ethical concerns were bound by more practical matters.

From your example: in historical times, removing non-contributors from the population, while maybe frowned upon or seen as something to be hushed up, could be tolerated because times were tough. But because everyone has it so easy now, the non-contributors can be showered with praise and raised to the top of the ethical pyramid, and the system can keep churning along more or less fine because of all the excess slack in modernity. So then if the trend continues and things continue to get even easier on a civilizational level...

Anyhow practically speaking if the bots don't wipe us all out and property rights are more or less preserved I think we'll see a great splintering of realms of concern and ethical systems with complex AI-mediated boundaries, but I would expect that in plenty of such systems entirely non-productive humans and other intelligences of even lower capacities will still receive a great deal of moral care.

 No.3751

>>>/amv/6198
>>>/amv/6199
>>>/amv/6200
>>>/amv/6201
>>>/amv/6202
I'll post it in here so as to not derail, but I think having AI take care of something you'd normally not put much care into anyway in a game is one of the few legitimate uses of AI. Like if someone wanted to use AI to help them with programming a story game, then I wouldn't really care. It's that the main human intention is there in the most important elements that matters.

 No.3752

File:[EMBER] Yami Healer - 02.m….jpg (297.72 KB,1920x1080)

>>3751
I was going to move that conversation into this thread, but I have decided it is too horrifically low quality and started with an instigating post without evidence, so instead I'm going to delete it.



