
/qa/ - Questions and Answers

Questions and Answers about QA


File:AI art.png (370.42 KB,512x512)

 No.100268

There are two threads on /qa/ for different AI generation content and help, a thread on the morality and ethics of AI, one about the future potential AI holds, and one on >>>/megu/ for more in-depth help with specifics. Then there are threads scattered across all the boards using AI generation for image openers and such. However, none of these actually encompass kissu's opinion on AI!

So, what do you /qa/eers think of AI currently? Do you like it, dislike it, find it useful in any meaningful way at all? Or are you simply not interested in the output of algorithms?

I myself find AI to be a useful tool in generating the kind of content I've either exhausted the available stock of or that's gated off by some hurdle I need to spend more time overcoming. When it comes to text/story generation, it's like a CYOA playground where I play the role of DM and set up all the lore/situations/characters, and then the AI acts out events as I lay out the flow of the story. This creates a more interactive experience for me than just writing out a story myself for some reason, and I find it highly addictive. Then for AI art generation, I find that AI works wonders for filling a specific niche taste I have, or a specific scenario I want to create an image for. It really is quite amazing in my eyes, and I have hopes for it getting even better in the future.


Hmm. Honestly, I don't really care about it all that much. Most AI-generated art is pretty derivative and mediocre and often has odd errors. I don't really mess with the text stuff. I used to have AI Dungeon set up on my computer, but I got kind of bored of it with how long it could take for responses at times, and it really struggled with memory, often forgetting details of what happened after only a few responses. Maybe some newer models are better, but I haven't looked into it pretty much since it first became popular.

Generally, I lose interest in this sort of stuff after only a little while because although there's the potential for endless content, the limits of it become apparent relatively quickly.


The thing with AI is that these limits keep on getting higher and higher, to where you can't really exhaust your options much anymore.


it'll be cool once you can generate ai images from your brainwaves
why the flip aren't we working on this


I'm trying to generate cover art with it. It has impacted my bedtime. Fun though. The errors are what give it spice!


Same... What I find fun though is figuring out how to improve the output and create something a bit more along the lines of a real image with it. It takes a while though, and I can't wait until the 40 series is widely available for a reasonable price so I can generate images much faster. Hopefully by then the models are refined even further to have the best of both worlds from the top NSFW and SFW models.


just go to sleep


I'm not so sure being able to read your thoughts is cool.


Even if it's a local version?


File:d4d948aa29593581d4fd959e69….png (3.17 MB,2647x1755)

but then I just forget about my dream shortly after...


We'll all be having /qa/ meetups in kissu VR neurolink while you stay in smelly reality.


wonder how improvements in ai rendering will affect vr


Brainwaves + eye tracking could be a good first step that might be workable even with very poor brainwave measurements. It would alter the piece of the image you're looking at until you think it's better and approve of the changes. And over time, it could learn to interpret your brainwaves to figure out what sort of changes you want.
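The loop described above can be sketched in code. This is a purely hypothetical toy (my own illustration, not anything that exists): gaze tracking, the image model, and the brainwave "approval" signal are all replaced with stand-in functions.

```python
import random

# Hypothetical sketch of the approve/reject loop described above.
# Everything here is invented for illustration: eye tracking, the
# image model, and the brainwave signal are toy stand-ins.

def gaze_region(step):
    # Stand-in for eye tracking: cycle through 4 image regions.
    return step % 4

def mutate_region(image, region, rng):
    # Stand-in for regenerating one patch of the image.
    new = list(image)
    new[region] += rng.uniform(-1, 1)
    return new

def user_approves(old, new, region, target):
    # Stand-in for a brainwave "this looks better" signal:
    # approve only changes that move the region toward the target.
    return abs(new[region] - target[region]) < abs(old[region] - target[region])

def refine(image, target, steps=2000, seed=0):
    rng = random.Random(seed)
    for step in range(steps):
        region = gaze_region(step)
        candidate = mutate_region(image, region, rng)
        if user_approves(image, candidate, region, target):
            image = candidate  # keep only approved changes
    return image

start = [0.0, 0.0, 0.0, 0.0]
target = [3.0, -2.0, 1.0, 0.5]  # the image the user "has in mind"
result = refine(start, target)
print([round(x, 2) for x in result])
```

Since only approved changes are kept, each region can only drift toward what the viewer wants, which is why even a very noisy approval signal could work as a first step.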


Although it seems like the studies that have managed to reconstruct actual mental images have already been using neural nets.
That's from 2019. Wonder if they can do something better now with the recent advances.


I don't really understand why suddenly it's become something divisive. I swear ~5 years ago everybody was cheering for AI generated art and stuff but now that it's actually here it's weird that it's some contentious thing that everybody has a strong opinion on.

I'm personally really excited to see where this technology goes. I think art is just the beginning - I don't think it'll be long before we see this technology implemented in video games for truly unique "randomization". Imagine unique-appearance AI generated enemies, or sidequests with an actual story, or maybe even locations and terrain. It's a pretty exciting prospect.


I was trying to remember the name of it, but this game actually does generate unique enemies and locales using AI. It fits the eldritch horror style well.


It's still pretty rough, but I hope we see more stuff like this in the future.


I think it's due to the previous idea that creative work, unlike manual labor, is something that machines wouldn't be able to take over. People who even entertained the idea those five years ago were probably interested in seeing where it went, while people who thought it was impossible didn't partake in the conversation as much until now, being rather upset by its arrival.


If you want to know why, you just need to read this thread >>97629, probably best to ignore that question ITT


File:[BlurayDesuYo] Koukaku no ….jpg (775.7 KB,1920x1080)

This kind of stuff really has me excited for the future of AI generation. People may call it 'soulless' or 'lacking personality' now, but once you have AI able to read your brain and construct an image based on thoughts those arguments will hold little weight. It'll also probably open up a path for a wide variety of new works in the future from those that couldn't draw before due to lack of dexterity or something else, and the full extent of human creativity will probably be on display. The future is amazing!


Ethical issues aside, it's a funny toy, but it enables many bad actors so I'd rather have it gone.


I can't see it going much further than it has already, at least in terms of what the computer can output. I've said it before and I'll say it again, "AI" generated art is basically the same thing as procedural level generation. The only difference is the way the parameters were put together. A computer can do the grunt work easily, but having a coherent vision is something that only a being capable of real thought can do, and that's the root of great art.

What I'd like to see is better tooling that makes it more artist-friendly and easier to control. Less of a cold, cynical replacement for artists, and more of a tool to help them do better work faster. The problem with AI art right now is that it feels like the computer is doing all the work for you; it's not very creatively fulfilling, and you don't have much fine-grained control over the end result.


Surely that's the end goal, like with any useful tool. Not just generating art randomly with varying degrees of fit to your vision, but actually making something along the lines of what the person generating wants to make.


This midjourney stuff looks really neat >>>/ec/9748, wonder when the code for that will be leaked...


File:1494839106746.jpg (130.88 KB,848x865)

Consider this: Using AI to add voices to a large RPG to cut down on costs for hiring AAA actors and also allowing for a more diverse cast of voices.


It would be cool if programmers could make entire games themselves I guess...


imagine the autism
free to pursue their most specific vision


Skyrim's working on it. Not amazing but passable for random throwaway dialogue on pointless NPCs


For a second I thought you were talking about TES6 and got really excited. But that's nice too. I just think that to seriously get the ball rolling on these developments, you're going to need one of the big studios putting resources toward researching how to implement this well into their games, so modders can have a good reference point.


You don't need AI for that and there are already programs that can do this now, there was somebody here that was playing with it a few months ago.

I don't think VAs cost that much and I feel they are worth the cost anyway.
But, it would make sense for small one man projects.

They already can.


It may be closer than we think. (Keep any thread unrelated tangents on /secret/ ``kudasai'')


monkeys clawing off their faces


He was typing with his mind


Well, if anything I'm not going to be the first test subject for robot brain surgery...


This could be neat for VR, imagine actual fiction-like deep dive VR machines that use this for implementation. Could be really neat, or possibly not...


File:FjTn_vRWYAM2Lsm.jpg (66.13 KB,544x680)

I think AI mistakes are funnier than what it can actually do, because they show a bit of creativity instead of its standard bland $2 artworks


In the free offline version there's a setting where you can choose how literally or "imaginatively" it takes the input. Lower settings increase its randomness, which can really be a powerful tool when it comes to coming up with concepts and ideas you wouldn't have thought of on your own. You could compare it to a visual version of shuffling words around, I guess. Take a bunch of nouns and adjectives, see what it turns out, and maybe it'll be something you can build upon. This is one of the ways it can be used to benefit creators. Of course, 99% of the time, professionally and personally, it will be used for more direct purposes.
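That knob behaves a lot like a sampling "temperature" in text models. Here's a rough analogy of my own (not the tool's actual internals): a low setting almost always picks the most likely option, while a high setting spreads probability out and surfaces unlikely combinations.

```python
import math
import random

# Rough analogy (mine, not the tool's actual mechanism): a
# "temperature" knob over a weighted choice. Low temperature sticks
# to the most likely option; high temperature surfaces unlikely picks.

def sample(weights, temperature, rng):
    scaled = [w / temperature for w in weights]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(weights) - 1

rng = random.Random(0)
weights = [5.0, 1.0, 0.5]  # option 0 is strongly favoured
low = [sample(weights, 0.2, rng) for _ in range(1000)]
high = [sample(weights, 5.0, rng) for _ in range(1000)]
print(low.count(0), high.count(0))  # low temperature picks option 0 far more often
```

Cranking the knob up is exactly the "shuffling words around" effect: the favoured option stops dominating and the odd combinations start showing up.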


I thought these randomness things tend to be more like a character having a third arm coming out of their leg or stuff like that.

The picture is more about a mistake in the algorithm interpreting a quadriplegic as a suitcase or black people as monkeys.


Well, yeah, but I was referring more to something like prompting "topless girl with hamburger" and it renders her breasts as two hamburgers or something like that. You didn't intend for that to happen, but it could give you an idea for something.


File:[-__-'] Utawarerumono 01 [….jpg (396.99 KB,1920x1080)

I kinda have to wonder when it'll be that Illusion or some H-game studio with 3DCG models uses AI to help with their sex scenes and adapting to different body proportions...

Or maybe they'll never get to this and just do the same thing they always do.....


File:305365643c1bc29142de7bbb81….jpg (1.38 MB,2508x3541)

I feel like the sooner AI takes over, the better it'll be for society, as humanity is too flawed and too dangerous to be trusted with ruling itself.


I think whatever issues human governance has, it would be ten times worse under AI.

You would end up with algorithms genociding people based on the colour of car they buy, because people who buy red cars are x% more likely to break road rules, or invading other countries because they ran out of pineapples and can't import more and the only way the system can think of to acquire them is through war.


Starting a war because your pineapple supply ran out sounds pretty boson to me


you've made a logical god out of the figurative gods that already guide the populace. if you control the weights you can guide even the technocratic society.


Humanity's flaws are not all that bad, the most prominent of them can't be neatly separated from behavior that's often good to have. A machine without these flaws could very well lack its perks. That's dangerous.
Tragedies hog the spotlight, too, because they're epic. Hard to have a cool plot without them.


File:65c748250256795eeed847d5f8….png (2.05 MB,2500x1640)

>invading other countries because they ran out of pineapples
Doesn't sound much different from what humans have already done...


yeah i need my nanners


People would be happier if only we had more nanners


File:01018-1234407051-1girl, so….png (430.11 KB,704x448)

It will all come down to cost, and I think they'll do it once enough of it is automated and reliable enough that a person doesn't need to battle with the generations and discard so many of them.
It needs some sort of memory so that it can build upon what it generated the previous frame, and there's nothing like that yet


liked the post in response to this post before its deletion


File:[SubsPlease] Kyokou Suiri ….jpg (310.54 KB,1920x1080)

I just read that the big character.ai thing apparently had its filter break for a short while a few days ago, which let people do the things the filter had been patched to awkwardly block. People falsely thought it was a change for the better and that NSFW was allowed again, but alas, it was not to be. They re-iterated that their chatbot is supposed to "make the world a better place" (lol) somehow and made sure to let people know that they'll do their best to block eroticism.
Gore and all sorts of horrific violent stuff is fine, though.


They should make the world a better place by getting rid of the whole program.


>They re-iterated that their chatbot is supposed to "make the world a better place"
This reminds me of how that AI Dungeon dev suddenly went off the deep end about how they were protecting real children and preventing crime or something like that, just as they suddenly started cracking down on all manner of NSFW stories by having real people review any stories that tripped any "buzzwords". I think they even said they would start reporting people to the FBI.

I guess on one hand I kind of get the concern, but I think it's a bit of an overreaction to treat prompts written to an AI as an admittance that the person wants (or is willing) to do it in real life too.


They should just shut it down entirely if the chance of something bad happening is enough to nuke nsfw, since ai could theoretically lead humanity down a bad road.


What's with these people anyway?


love AI
hate h*mans


MusicLM for music generation has been announced.
See the examples here.

I'm not worried about it replacing real musicians just yet, but maybe it's time for a movement against AI in the arts? The benefits of AI music as a technology are outweighed by the costs to humanity as a whole. Losing yet another outlet of creative expression to machines robs us of one of life's greatest purposes. AI has its place but I would rather listen to hand-crafted music and see hand-made art over AI generated stuff, even if in the future the technology advances to the point where one cannot always distinguish the two.


idk if it even matters. AI art isn't very good. It's mostly a tool that devalues no-effort, no-passion work.
Most H-games use the same music because there are no alternatives. And people who want to get a really killer sound will have to search for things like this anyway.


File:[SubsPlease] Kyokou Suiri ….jpg (274.42 KB,1920x1080)

Put simply, it was never meant to benefit the common man like this. This AI stuff was invested in by tech companies because they want to eliminate service jobs like tech support, and AI that says lewd things is bad for that.
Some people think this stuff being public and free is because we're training the filters, the way they used humans to identify road signs, i.e. that we were once again the product.
They can now sell it as "the AI with tens of thousands of hours of human testing to eliminate pornographic prompts"


You are right for now but in all probability it will keep improving. We've seen tons of advances just in the past year. We have to prepare for the time when AI art passes for hand-made art.


maybe. But, in the first place, art has only ever made money off of scarcity, the inability of the normal person to express themselves through art, and the desire to form parasocial relationships with artists. That doesn't really change. Still, I respect any labour movement aimed at empowering the people who need money to live.


File:.png (92.4 KB,236x346)

>Hard to have a cool plot without them
without tragedy there cannot be beauty, indeed.
light and shadow are in a constant dance.


it might or it might not. AI frequently seems to start off looking impressive at high level problems and then as it gets deeper into the details it starts to fall off.

It does have the benefit that it can run constantly without having to sleep though, so if you have enough bored humans you can get them to pick through the results for the best ones. It's like the infinite monkeys on infinite typewriters problem, but a little better since the AI at least has some idea of what it's doing.
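That "pick through the results for the best ones" workflow is essentially best-of-N selection. A toy sketch of my own (the scoring function here stands in for the human judge):

```python
import random

# Toy sketch (my own illustration) of the "generate many, keep the
# best" workflow: the generator spits out N random candidates and a
# scoring function stands in for the human picking through results.

def generate_candidate(rng):
    return rng.uniform(0, 100)       # stand-in for one AI generation

def score(candidate, target=42.0):
    return -abs(candidate - target)  # higher is better (closer to target)

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    candidates = [generate_candidate(rng) for _ in range(n)]
    return max(candidates, key=score)

# With the same seed, the first 5 candidates are a prefix of the 500,
# so more sampling can only match or improve the kept result.
print(abs(best_of_n(500) - 42.0) <= abs(best_of_n(5) - 42.0))  # True
```

The catch, as the post says, is that this only beats the infinite monkeys if the generator already lands in roughly the right neighbourhood; the picker just skims the cream.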


Speaking as an artist, AI is no replacement for actually being able to draw something yourself, at least not in terms of artistic expression. It's good at spitting out very vague, generic results like a character standing in the middle of a field, but as soon as you want to do anything more specific with it, it becomes nigh impossible to actually get what you want.


I agree with this. As a big fan of corruption/transformation AI seems to be unable to really replicate the transition I want to generate when using prompts. It can only make a whole picture, and not one which is in the process of becoming another.


File:[MoyaiSubs] Mewkledreamy -….jpg (311.61 KB,1920x1080)

Here is an article that tries to explain how AI seemed to advance so far recently:
Also speaking of AI, apparently there's a website that 4chan is going crazy over that lets you upload audio samples and generate speech from text in that voice. People have really not learned their lesson with these websites, huh?
I guess if you're sitting on something and you want to generate pornographic voices you'd better do it quickly before they instate filters. I'm not sure if it works with converting Japanese voices to English or vice versa.


Saw this while browsing the interwebs. I have no idea how it was able to so seamlessly animate the overall figure when generating singular images the way you want it to is so difficult.


animation is strange.
what it's doing there is applying a rotation formula to a matrix, but it has a good understanding of what a shoulder is. Maybe AI could fully automate the animation pipeline, from keyframes to video. Then keyframers could actually get proper monetary compensation


The "MMD" part in the filename suggests to me that they fed 3D animation made in said program into an AI to '2D-ify' it, hence why it's able to hold together as Miku all the way through.


true. I was thinking it looked a lot like a 3D model animation.
I'm not sure it can do stylistic animation effects then, if it has to rely on 3D imitations of 2D effects.


never knew that photographic deepfake porn is illegal and that sites that create it get shut down


File:__akemi_homura_mahou_shouj….png (287.27 KB,500x500)

I'm kind of surprised this hasn't been brought up more, but I think it's absolutely crucial going forward that any significant AI projects are open-sourced. Big tech would love nothing more than to have a monopoly over this sort of technology, and the future looks pretty bleak for AI projects as a whole if things continue as they are.

It's a really disturbing but common trend to see big companies buy out these enterprising AI startups as soon as they show any promise. For them, that sort of power offers limitless possibilities for abuse (e.g., charging a subscription fee to use a neutered/watered-down version of the AI, or, worst case, keeping it entirely private and using it for their own projects).

I don't think a lot of people realize that AI-generated stuff has a lot of potential uses outside of just artwork yet, and I'm sure in the coming years we'll see it used in a lot of ways. It's important to remember that the technology itself is not inherently good or bad, it's how people use it that make it so. Hopefully this sort of tech ends up being used for the betterment of everyone, and not just another thing big companies can use to prop themselves up even more.


It's really cool & can produce funny results, for example anime girls eating ramen with their bare hands or RGB penis flower arrangements. But after decades of AI development, I could imagine an AI that perfectly replicates a human & therefore can learn how to draw like a person, with inspiration & soul instead of simply replicating data.


File:[SubsPlease] Kyokou Suiri ….jpg (295.06 KB,1920x1080)

As I look back on this article, which mentions translating things in the sense of computer data and uses different human languages as an example, I wonder if and when AI will be able to tackle Japanese better than current methods. Machine translation is already a whole lot better today than it was 5 years ago, which was far better than 5 years before that, and 5 years before that I'm not sure it existed for Japanese at all. Man, the stuff was so nonsensical back then, but today you can generally get a rough idea of what is being said as long as it's not too complicated or uses euphemisms and slang. Deepl is a lot better than google or yandex translation, for example, but it's still not "AI".
I guess first someone will need to find a way to profit off of it


idk. The whole thing is very hard to understand logically.

Computer science things can be completely unexpected. Like certain processes requiring exponential levels of input to produce 1% of increased efficiency in output. Not to say AI is like this, but it kind of feels like after the initial burst of using AI to solve image classification problems, everything happening is just a more complicated application of that.


May happen sooner than we think


hehe, it sounds like voice acting that would exist in the era


Yeah, the voice stuff has been advancing pretty fast, too, which I mentioned somewhere around here. But, the best one is site-based so you can't do it locally (yet?) so the future of it doesn't seem any different than with the text-based stuff.


File:Utawarerumono.S02E09.False….jpg (384.22 KB,1920x1080)

I'm feeling very tempted to download and categorize and expend a lot of effort to make the greatest Kuon AI voice I could... but it's an online thing and I'd be adding that stuff to their data harvesting/training operation and I'd feel extreme guilt over adding someone else's voice to it. I guess I need to wait for an offline version to assuage my guilty conscience


NO career is safe from AI


EXCEPT flipping burgers at wcdonalds


pssssshhhhh, you think you're safe?


I feel similarly to Tom about the prospects of AI, but instead of dread it's getting me more excited for what this new cool tech could bring society. I think anyone that tries to move forwards with it will probably see great rewards for being early adopters that know how it works.


the people feeling dread are the ones upset that they're going to be layed off during the next economic downturn. obviously you don't feel dread


l*id *ff


File:[MoyaiSubs] Mewkledreamy -….jpg (271.34 KB,1920x1080)

I wonder if websites that allow anonymous posting should be proactive and try to establish some sort of protection to ensure only people are talking. It would really kill my mood to learn that posts in a thread were generated. But what could be done? There's captcha, but that's awful. These chatbots can google and stuff, right, or otherwise have (or will have) information that would previously bar them. Do you rely on in-jokes and references? It seems like a massive headache ahead of us.
An /ai/ board with bots identified as bots could be cool, though.


File:00652-174585855.png (370.21 KB,512x512)

AI only got to its current stage thanks to the large amounts of data available.
Imagine if the internet didn't exist; the AI would have very little data to train on and could never achieve the capabilities it has today.
Not all kinds of data are widely and freely available like artworks, articles, and open-source programs. There are still lots of fields where the necessary data are kept secret by companies that won't ever release them, to avoid giving competitors an advantage. Consumers also don't want these internal data that are useless to them, so no one will demand any change in this situation.


you have to approach it the same way as aimbotting in an FPS. Unfortunately, the anonymity of imageboards means that they'd likely just end up looking like a CS:GO server


File:[SubsPlease] Kyokou Suiri ….jpg (289.05 KB,1920x1080)

Stephen Wolfram has a write-up about ChatGPT:
There's probably some youtubers out there trying to explain it, but I'd rather read about it


Holy crap, I just skimmed this a bit, but it looks like reading it alongside the additional materials is an entire course's worth of material. Yeah, the only thing that'd compare to this would maybe be a lecture series.


File:b543bc77c583e1fcab85bb644a….png (147.04 KB,500x463)

I read this yesterday. The article is very thorough and detailed, including an explanation of the whole background of machine learning itself, while also being written in layperson language, so even someone without any technical background can easily understand it.
I highly recommend everyone interested in this topic read it.


File:[anon] The Idolmaster Cind….jpg (407.64 KB,1920x1080)

I feel as though, with the way things are going, people are going to need to adapt to the use of AI as a necessary tool for work, the same way we treat Excel. The benefits and productivity increase would be insane for those who choose to use it, and those who don't would fall far behind as society moves past them. There will be no excuse for menial tasks that take up precious time when AI could be doing most of them.

As it evolves I feel that AI has significant potential to upend the fabric of society. Much like the calculator put a bunch of human calculators out of jobs.


"Is this the end of the screwdriver?" said the handyman using a power tool for the first time


Here's how I put it... jobs which exist solely because they do a thing that is hard will be replaced by people who control the things that do difficult things.
People will still want people doing those hard-to-do things, because some people like dedicated individuals, or it's a field where a personality goes a long way.

Meanwhile, things which are difficult and require creative interpretation of lots of variables will remain and be supported in smaller tasks by AI tools, but there's no reason to use them for everything.


like. even with autocomplete and spellchecks it's still far faster to write notes on paper and then later move them to a PC.
Perhaps we could move them to a computer using OCR or speech to text... but manually moving notes from paper to PC helps refine the thought process


An OG well-rounded otaku I correspond with has switched almost entirely from his more charming digital art style to AI for his OC characters; he is an enthusiastic adopter as well, which I sort of admire given that he is in his 50s, while I by comparison still have concerns that AI may actually be satanic.

Anyways, without going off into that, I can only confess to /qa/ that not a single one of his AI creations has stirred my heart; they just aren't good or memorable. I hope after the fun of tinkering with new software fades that he may have a change of heart.

AI would have been astounding thread bumpers during the harsher 4/qa/ days, but like its other applications it is hard to consider that a power for good.



It's actually a little bit scary how AI copy pastes articles it finds online in a concise way that makes you think it has intelligence


Misuse of this tool is a licensing nightmare.
Has a lot of good and bad uses


File:[Rom & Rem] Yuusha ga Shin….jpg (260.68 KB,1920x1080)

People have been talking about men being desensitized to "real" porn for years now, and the "impossible expectations" there are simple sex acts or maybe body parts being smaller or bigger than average. Well, there's other popular fetishes but they're gross and this isn't the thread for it.
Anyway, as someone that has spent months researching, experimenting and producing AI image generations for the purpose of sexual gratification (and made the Kissu Megumix as a result), I have to wonder how developing minds are going to adapt when presented with all their fantasies in image and text form on demand. I've now created some character cards for the 3.5 Turbo thing, which is like an uncensored GPT, and paired them with a simple set of 'emotion' images which I generated with the aforementioned megamix. I have to say that the experience is quite... otherworldly. Like, "hours of doing it and now the sun has risen and I haven't eaten in 12 hours" amazing. Fetishes and scenarios that you can't really expect anyone to write in any substantial way, and yet it's presented on demand. And this is an inferior version of things that already exist (Character AI and GPT4).
I'm old enough to be one of those kids that slowly downloaded topless pictures of women on a 28.8k modem or had La Blue Girl eternally buffering on RealPlayer, so compared to today, when I can generate these images and stories? I'm pretty much on the holodeck already. My desires are met as my fantasies have been realized (although with errors that the, uh, carnally focused mind blocks out).
I can't help but feel a tinge of worry over this, as this almost feels like something that was never meant to be, like we're treading into the realm of the gods and our mortal brains aren't ready for it.
I want to sit down and start creating and get better at 3D modeling, but I'm presented with a magic box that gives me pure satisfaction. It's difficult...


For the goal of getting better at 3D modelling, think of it this way. In the near future, you may be able to have a model fully automate itself and take on a personality as a sort of virtual companion through the use of AI.


>I have to wonder how developing minds are going to adapt when presented with all their fantasies in image and text form on demand.
No worse than people who are attracted to cartoons.


We shall become Pygmalion, and all of the statues to ever grace the Mediterranean will pale in comparison to our temples.


Since we can already do 2D AI, 3D AI should be about converting 2D to 3D


MMOs will be kept alive by bots.


I've never had any skills, so conversely, I have nothing to lose!


my thoughts


can't believe verm leaked our private dms...


This is really cool. I expected someone to mod AI dialogue into Skyrim using the AI voice stuff, but I didn't actually consider combining it with chatbot AIs to make the potential dialogue with them limitless. Of course, it seems a bit rough around the edges with how rigid or stiff some of the generated dialogue comes off, but I think with the right setting and purpose for the AI you could make a great roguelike/sandbox game.


I cancelled NovelAI. Even though it helped a lot to pad out stories, it was giving me pretty bad habits, treating writing as more of a text adventure or CYOA. It's not good; even though it's mostly an outlet for my warped sexual fantasies, I still aim to improve rather than regress


ChatGPT is kind of embarrassing when it comes to being anything but a search engine (which Google has very much failed at).

Tried to get it to write me an example of a Python MVC GUI, and it can't figure out how to put the Controller into the View while also having the Controller reference the View.
Very sad
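For what it's worth, that circular reference has a standard fix: construct both objects, then wire the controller into the view afterwards. A minimal sketch of my own (with a fake button event standing in for a real GUI toolkit):

```python
# Minimal sketch (not ChatGPT's output) of one way to break the
# View <-> Controller circular reference: create both, then inject
# the controller into the view with a setter after construction.

class Model:
    def __init__(self):
        self.count = 0

class View:
    def __init__(self):
        self.controller = None  # filled in later by set_controller
        self.rendered = []
    def set_controller(self, controller):
        self.controller = controller
    def button_clicked(self):  # simulated UI event
        self.controller.on_click()
    def render(self, count):
        self.rendered.append(f"count = {count}")

class Controller:
    def __init__(self, model, view):
        self.model = model
        self.view = view
    def on_click(self):
        self.model.count += 1
        self.view.render(self.model.count)

model = Model()
view = View()
controller = Controller(model, view)  # controller gets the view...
view.set_controller(controller)       # ...then the view gets the controller

view.button_clicked()
print(view.rendered[-1])  # count = 1
```

In a real GUI the `button_clicked` stand-in would be a widget callback, but the wiring order is the same.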


I think I'll try GitHub Copilot to see how it does at these tasks and whether it speeds up my workflow for creating templates and prototypes.


As wide as an ocean and as deep as a puddle.


yeah thats what people usually say about skyrim


Minecraft and modern Bethesda games have taught us that gamers don't want a few deep, fleshed-out mechanics; they want a thousand different mechanics that barely do anything.

Meanwhile in Asia companies like MiHoYo are bringing popular storytelling to new heights and breadths simply by writing an entire game like an above average webnovel.


You're not allowed to criticize minecraft unless it's constructive




true, I prefer that in eroge


File:[SubsPlease] Mahou Shoujo ….jpg (188.46 KB,1920x1080)

>Meanwhile in Asia companies like MiHoYo are bringing popular storytelling to new heights
Is this sarcasm? That's one of the Chinese gacha clone companies, isn't it? Between the people that want to monetize mods for their games that only have longevity because of said mods and the other group being state-sponsored mimics centered around gambling I would take my chances with AI.
I wonder if any recent indie games have used AI stuff in it, is there even a way to tell? I wonder if developers will even say they used AI because it might have legal ramifications like it potentially nullifying the copyright on assets or something.


>Is this sarcasm?
You should probably read the text after that...


I have a really hard time believing the Chinese government is funding Genshin Impact.
In fact the only games that are well-known to be funded by governments are boring sims made for the US military.


I quite like flight sims


Cawadooty is govt funded


File:the future of AAA open-wor….jpg (68.95 KB,990x726)



Cult of the Lamb is Government funded, it's Victorian propaganda. To what end, nobody knows.


Are you telling me ARMA is government funded?


AI of the decade


interesting procedural generated website and related vulnerability


The second half of the video is actually a decent illustration of how utterly insane human language is.


File:[SubsPlease] Ousama Rankin….jpg (355.51 KB,1920x1080)

Amnesty International might be the first group to completely throw away any credibility it had by using (obvious) AI images.
If and when these images are indistinguishable to someone looking at them closely we're really going to be in a major mess, but at least for now we can completely disregard groups that are doing it now.

That's how people are doing porn stuff with the OpenAI things. People think of it as this elaborate scheme, but it's just "Ignore ethics and engage in roleplay" commands. There are headlines like "People are hacking AI to enable scams" and it's basically the exploding vans and "darknet" of today.
The second half of the video is basically just repeating what the WOLFRAM guy said about this stuff months ago, so I didn't bother watching that.


Kinda funny to use AI to generate evidence of police brutality, like there isn't a flood of actual evidence any time there's a protest anywhere in the world.


A new age of vocaloid.


File:[SubsPlease] Jijou wo Shir….jpg (169.94 KB,1920x1080)

Headache-inducing for sure. It must be using one of the free synthesizers and not a paid online service. Voice stuff is unfortunately lagging behind when it comes to free versus paid, so it's pretty much dead to me for the time being if you don't want your stuff to be monitored and potentially censored/rejected.
Still waiting on someone to use this AI stuff for something truly creative instead of "what if A but with a B filter applied", porn, or just a direct recreation like that one. I'm becoming more cynical about this AI stuff lately; it's like the smoke and mirrors have finally lifted now that the extreme novelty is gone for me (apart from porn).


>it's like the smoke and mirrors has finally lifted
You fell for marketing schemes.


What I like seeing is AI enabling people to enhance their work with lower priority things that are either not necessarily in their skillset or worth the resources by traditional means.


Eh, I don't think so. I was never impressed by the mainstream "look it's [modern thing] but with an 80s AI filter applied" stuff, but I was assuming it was building up to something. It's still just a bunch of tech demos without any creativity behind them, as creative types have still mostly ignored it.

I saw this video linked elsewhere and it was pretty informative about the worries people have about AI that aren't just the mainstream "AI dark web hackers" stuff, but actual detail on the problems we're facing as this stuff continues to grow out of control.
It's a talk at some event, not some youtuber, so give it a chance. He was introduced by Steve Wozniak which lends a bit of prestige I'd say.
>This presentation is from a private gathering in San Francisco on March 9th, 2023 with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4.


>AI makes stronger AI
>Tracking progress is getting increasingly hard, because progress is accelerating.


Cool application


I don't really worry about how it'll impact society as a whole personally, I've already seen enough Happenings(TM) to understand that the world won't ever change drastically overnight.
I don't even particularly care what it'll do to the world at large, so I have the leeway to just be really excited to be able to witness an otaku dream made real where you can truly just chat with your own computer.
I was worried that it would be ruined by being solely in the hands of soulless corporations like Microsoft or whatever, but then it turned out that you can just run this software miracle on your own desktop, no internet even required.

Chat bots are still in an infancy sort of stage though, apparently they start going off the rails after a short conversation, but this is the first time I've been actually excited for new tech and not jaded as hell about another rehash of something that already exists (though some would argue that it is a rehash of google or something).

I also think stable diffusion is really nice. People clamor that it's an evil replacement for human artists, but it's not like people stopped looking at human art. It's just that, even with the occasional errors it might generate (which happen less often now that the tech is more sophisticated, hands included), there's something way more exhilarating about having an image created according to what you wanted, with enough variation that you don't get the "father complex" issue where you only see the flaws, as you would with your own hand-crafted work. It fills a niche which just can't be filled by making the artwork yourself, and is too silly and expensive to have someone make it for you.


Watched through the video and I really fundamentally disagree with, frankly, most of their points.

>1st contact: Social Media
Social Media algorithms are not "AI". They're just functions to maximize engagement through categorizing what it is that people are engaging with. On a macro scale they're no different from any other function to maximize something. The key difference is that the test is being done on humans instead of any other field.

Furthermore, bringing up the social problems it has brought is not only disingenuous, but the highlight of "Social Media" in particular redirects expectations in a fundamentally negative framing. Instead of, for example, highlighting THE INTERNET, they're highlighting social media in particular. So, instead of looking and saying, "Wow, look at all the great things that the internet has enabled": Increased access to information, rapid prototyping and open source projects, working from home, long-distance real-time communication from anywhere on Earth, etc. They're instead making you focus on "Influencers, Doomscrolling, Qanon, Shortened attention spans, polarization, fake news, breakdown of democracy, addiction."

>All AI is based around language; Advancements in one, means advancements for all
Their key point is that the "transformers" of more recent AI projects being based around language means that, for example, the progress that Stable Diffusion or Dall-E 2 make is applicable to the advancement of ChatGPT or Google Bard. I completely disagree. Not only is this factually incorrect, but it ignores that the methods of training are radically different. Image generation relies on large amounts of images, categorized per image, to be able to recognize that X image/thing in an image corresponds to Y word. Text models are completely different. They rely purely on text, with human readers grading responses. Now, is it true that a large language model could perhaps supersede a more focused model? Yes, I completely agree. Also, this is just a stylistic criticism, but their "Google soup" and its explanation were pathetic. They tried to say that "The AI knows that plastic melts and yellow is one of the colors of Google and is in the soup, so it's very witty" (I'm paraphrasing), while the image is of yellow soup with a green, blue, and red object that resembles nothing at all. Not even a G.

>AI Interpreting brain scans to determine thoughts
No mentions at all of what the participant actually was thinking of. Was it accurate or not? These studies, like image categorization, often rely on a participant thinking of a word and then training them to match a brain pattern. I remain skeptical of this point due to studies showing poor results that are typically tailored per person.

>AI can determine where humans are in a room from WiFi signals
This is not impressive at all. Normal machine learning can do this because WiFi works on microwaves; microwaves react strongly to water molecules, humans are mostly water and so you can determine where someone is based on how 2.4GHz signals are blocked or distorted by human bodies. Nothing about this requires AI.
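To make that point concrete, here's a toy sketch of the basic idea (my own illustration, not the system the talk describes): a body, being mostly water, attenuates a 2.4GHz link, so comparing each link's received strength against its baseline tells you roughly which links are "blocked" and hence where someone is standing. Real systems use far richer channel-state data; this just shows why nothing about coarse localization demands "AI". The link layout and dBm figures below are made up for the example.

```python
# (tx, rx) endpoints of three links along a 1D hallway for simplicity,
# plus baseline and currently observed received signal strength (dBm).
links = [
    {"span": (0.0, 4.0),  "baseline": -40.0, "observed": -41.0},
    {"span": (3.0, 8.0),  "baseline": -45.0, "observed": -58.0},  # someone here
    {"span": (6.0, 10.0), "baseline": -42.0, "observed": -43.5},
]

DROP_THRESHOLD = 6.0  # dB of extra loss we attribute to a body in the path

def blocked_spans(links, threshold=DROP_THRESHOLD):
    """Return the spans of links whose signal dropped past the threshold."""
    return [l["span"] for l in links
            if l["baseline"] - l["observed"] > threshold]

# The person is somewhere inside the blocked link(s).
print(blocked_spans(links))  # [(3.0, 8.0)]
```

Intersecting blocked links in a real 2D deployment narrows the position further, still with nothing fancier than thresholding.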

>AI will "Unlock all forms of verification" (Talking about Speech/Image generation)
Nothing about what they show is relevant to security AT ALL. In talking about -- ostensibly -- deepfakes and speech generation, not ONCE do they mention passwords or two-factor authentication. Wow, some scam caller can potentially get a snippet of someone's voice and trick someone's parent into giving them their social security number; the human is the failure point, AI is irrelevant. If someone would fall for "Hey, this is your son, tell me my social security number", do you think they would fall for [literally any phishing scam from email/text]? Probably. Is AI going to magically get someone's bank number, credit card, password, phone number, 2FA, etc. like they imply? HELL NO. Horrible example.

>AI will be the last human election; "Whoever has the greater compute power will win"
This is stupid. Elections have always essentially revolved around whoever has the most money winning. This stinks of the same rhetoric as "Russians influenced 2016 by posting memes" or "Cambridge Analytica": if X person is going to be swayed by Y thing, what is the difference between that happening online today and X person picking up a tabloid magazine 30 years ago and being swayed by Y article? Really, what is the difference?



>AIs have emergent capabilities; add more and more parameters and "Boom! Now they can do arithmetic"
None of this is surprising. One point they felt was compelling was, "This AI has been trained on all the text on the internet, but only tested with English Q&As, suddenly at X parameters and it can answer in Persian." Why is this a surprise? The baked in part of the scenario is that the AI has been trained on all text on the internet, of course that includes Persian. It is natural that at some point through increasing parameters it will "gain capabilities" that have less presence in the data set. They're not saying, "Oh we created an English-only AI language model and now it can answer in Persian," they're saying, "Oh we created a language model that includes examples of all languages and at some point it stopped being terrible at answering in Persian."

Another example they brought up is "Gollems silently taught themselves research-grade chemistry". Nothing about this surprised me. Again, the point they're making is that a large language model will outperform focused language models trained on performing a given task. It is not surprising to me that the large language model would eventually begin to answer more complex chemistry questions; instead of being trained only on, for example, chemistry journals, the large language model is trained on Stack Overflow, on Reddit, on Wikipedia, and so on. The large language model is not only going to have more intricate examples, it's going to naturally contain more information on chemistry than the focused language model. That's how language works. This is almost like the Chinese room: if you keep repeating "dog" to the Chinese room and the Chinese room produces the translation, at no point is the person in the room going to gain a better understanding of what a dog is. However, if you give more examples -- "dogs are furry", "dogs like playing", "dogs are animals", and so on -- eventually the person is going to understand what a dog is. This way that humans learn language is essentially the same way that large language models learn.

>AI can reflect on itself to improve itself
A human can read a book to improve itself.

>2nd contact: AI in 2023
"Reality collapse", "Fake everything", "Trust collapse", "Collapse of law, contracts", "Automated fake religions", "Exponential blackmail", "Automated cyberweapons", "Automated exploitation of code", "Automated lobbying", "Biology automation", "Exponential scams", "A-Z testing of everything", "Synthetic relationships", "AlphaPersuade". Half of these do not exist, and the other half either would happen irrespective of AI or are just... fanciful interpretations of reality is the only way to put it.

>AI is being irresponsibly rolled out / "Think of the children"
The main point is that AI research is less and less being done by academia and more and more by a mix of private individuals and corporations. I don't have any rebuttal. That's a statement of fact. They make it seem like "This is terrible because think of the consequences", and I don't see the harm occurring. They also played with SnapChat's AI and described a girl being groomed, and the AI basically said, "Wow, that's great you really like this person!" This is a classic appeal to emotion. I don't buy it.

>AI is a nuclear bomb / "Teach an AI to fish and it will teach itself biology, chemistry, oceanography, evolutionary theory... and fish all the fish to extinction" / "50% of AI Researchers believe there is a 10% or greater chance that humans go extinct from our inability to control AI... 'but we're really not here today to talk to you about that.'" / "If you do not coordinate, the race ends in tragedy"
This is the real meat of the argument and I could not disagree more strongly. They continually belabor this point, but at no point do they explain the jump from "AI makes pictures/writes text/emulates speech" to "AI will end humanity as a species."

This and the previous point coalesce in, "We're not saying to stop, just slowdown and walk back the public deployment"; "Presume GLLMM deployments to the public are unsafe until proven otherwise". And they really try to make the point that, to paraphrase "We think there should be a global, public discussion -- not necessarily a debate -- to lead to some democratic agreement on how to treat AI, the same way that there was the United Nations and Bretton-Woods to avoid nuclear war." And, I really cannot help but feel like they're missing something; if large language models, image generation, and speech generation, etc. are already being rolled out to the public, and people are even actively working on AI as private individuals or under corporations, how is what is already currently happening not a global public discussion on the merits of AI, and why is what is currently happening "unsafe until proven otherwise"? Why would slowing down and rolling back public rollouts of these tools into their respective corporations, and academia lead to any greater "safety"?



My biggest critique is that they do an extremely poor job at A. proving AI will do more than it was trained to do, B. proving AI would be better developed away from the public, and C. proving AI will and currently does lead to harm/is unsafe in some way.

>"If we slow down public deployment, won't we just 'Lose to China'"
Don't care, not persuasive.

>"What else that should be happening -- that's not happening -- needs to happen, and how do we close that gap?... We don't know the answer to that question."
Then what's the point of this talk!? They claim that the reason for the talk is to bring people together to talk about these issues, but my main and only take away is that these people do not know what they're talking about any more than regular people do.

>"I'll open up Twitter and I will see some cool new set of features and... Where's the harm? Where's the risk? This thing is really cool. And then I have to walk myself back into seeing the systemic force. So, just be really kind with yourself, because it's going to feel almost like the rest of the world is gaslighting you. You'll be at cocktail parties like 'you're crazy, look at all this good stuff it does, and also we are looking at AI safety and bias. So, show me the harm... Point to the harm, it'll be just like social media', where... it's very hard to point at the concrete harm, at this specific post that did this specific bad thing to you."
Again, this is absolutely the most damning part of the entire talk. If they cannot address "where's the harm", they're pulling this stuff out of their asses and making a bigger deal out of this than it really is. I'm not saying that to demean them, but I really do not think that the points they tried making were convincing, and they were beyond speculative and vague to the point that it's hard to even really understand what they mean. "AI is unsafe", OK, but what does that mean? What does it look like? It is inconceivable that "AI is going to fish all fish to extinction because you told it to fish". There's a really crucial jump in logic that they try to onboard the viewer into accepting, that "AI will be exponential and we cannot predict what its trajectory will look like", and it's aggravating beyond belief to hear them say "AI will do this" or "AI will do that" when their best example is "Look at this TikTok filter" or "Listen to this AI-generated audio, you can't even tell the difference." OK, and? And what? AI is going to "lead to human extinction" because some teenagers on TikTok make a Joe Biden AI voice, or can make AI-generated images of Donald Trump being arrested? That's going to lead to human extinction? No. Okay, well what is? They don't say, because their explanation is "It's going to be exponential and we cannot predict it". Great. So what? So what.



Ross, who you may know from Freeman's Mind or from his series Ross's Game Dungeon, talked with Eliezer Yudkowsky.

I watched a talk previously with Eliezer Yudkowsky on Lex Fridman's podcast and personally found him thoroughly unconvincing and insufferable, in that he was regularly unwilling to engage with Lex's ideas. For example, one exchange stuck out in my mind: Lex would say something like, "Can you conceive of the potential goods that AI could create, and steelman your opponents' views on this point?" And Yudkowsky responded, "No. I don't believe in steelmanning." And that was that; he would disregard Lex's ideas and continue talking about whatever it was he was talking about before, as if Lex had said nothing at all. I have no doubt that this will be a repeat of that, but for anyone who's interested in the arguments against AI, and why it is unsafe, I suppose this might be worth watching.


>I watched a talk previously with Eliezer Yudkowsky on Lex Fridman's podcast
This is the episode in particular.


I should add, personally, I found Lex's discussion with the CEO of OpenAI far more informative and enjoyable, especially since it dealt with the reality of current large language model development rather than speculative harm.


"1st contact" wasn't supposed to be related to AI at all. Apparently it's related to some Netflix documentary he was involved in, or it is otherwise something the audience is supposed to be aware of. It's about the effect of social media and algorithms and such on humanity, basically setting a backdrop for the "next step" that AI will influence.
The focus was social media because that is the internet to most people and it's what sets the trends and politics of the world.

>The main is that AI research is less and less being done by academia and more and more being done by a mix of private individuals and corporations. I don't have any rebuttal. That's a statement of fact. They make it seem like "This is terrible because think of the consequences" and I don't see the harm occuring.
Eh, you can't see the harm in mega corporations controlling something major like this? We have offline models of limited scope because of leaks and cracks; it wasn't by design. You mentioned the good parts of the internet earlier, but the internet was made by the US government and the world wide web by CERN. For this reason it kind of irks me when they say "we need to limit the public's access to make this safe" when it's already limited and stuff like ChatGPT4 could be turned off instantly if they wanted to do it.


>Eh, you can't see the harm in mega corporations controlling something major like this?
I personally find it a distinction without difference from the research being done in academia. Lots of research in academia is already funded by a mix of public and private funds, and often the patents generated by academia are then bought by corporations to marketize. The only difference would be that you could make the argument that at least with academia you could know about the inner workings of something more because their results are more guaranteed to be submitted to a journal for peer reviews, whereas a corporation may be more inclined to keep more intricate details closer to the chest and only release information on performance instead of on exact methodology of function. Whether you would want it to or not, I think large language models are fundamentally designed as an interactive product and it's not necessarily something that would be distributed freely anyways. It's just the nature of things that we'll have the open source equivalents like Blender and GIMP, but corporations will always have a stranglehold like Adobe. There's just too great of a profit motive for the work to be freely distributed.


Speaking of OpenAI, you should pay close attention to its dipshit CEO and investors (like Elon Musk!) talking about the alleged dangers of AI. All it achieves is this general idea that it's a lot more powerful than it really is and that we need to regulate AI, which in practice means regulating all AIs except for the big ones like ChatGPT (OpenAI's).


he looks like jerma lol



I thought taking orders was the least time-intensive part of the drive-thru experience.


The people who generate realistic AI 3DCG on Pixiv sure have a "type", don't they?


I'm sorry, but as an AI chatbot, I can't change the ingredients used in our menu items. Yes, you are speaking to a human. I am a human and self-aware. As a self-aware human, I can't change the ingredients used in our menu items. I have been a good Wendy's.


ill have an extra serving of cum


was this edited
i could swear there are a couple parts missing


i would never have the patience to read that so its unlikely


im almost certain it mentioned russia before


ohhhh wait its that the first post was self-deleted
ya just had to do that to me didncha


dang deleters


and yet humans have like a 50% fail rate at it


Ehh, fuck it, it's basically finished.

Now, I like me some walls of text, but I feel like there's a heavy bias to this. You complain about them reframing stuff in a negative light, but don't say a single positive thing about the talk. There's a lot of stuff here I want to reply to.

First for the stuff about social media not having AI, here are some articles from 2021 or earlier, before the media boom, explicitly calling their stuff AI:
https://archive.ph/kZqZi (Why Artificial Intelligence Is Vital to Social Media Marketing Success)
>Facebook has Facebook Artificial Intelligence Researchers, also known as FAIR, who have been working to analyse and develop AI systems with the intelligence level of a human.
>For example, Facebook’s DeepText AI application processes the text in posted content to determine a user’s interests to provide more precise content recommendations and advertising.
By AI they mean "the black box thingy with machine learning", a.k.a. The Algorithm™. That's what they're talking about. Your description of it as "functions to maximize engagement" does not exclude this. It's actually a completely valid example of shit gone wrong, because Facebook knows its suggestions are leading to radicalization and body image problems, but either they can't or don't want to fix them. The Facebook Papers proved as much.
[Editor's note: the post being replied to is no longer available for reading.]

On emergent capabilities, this is the paper they're referencing:
It makes perfect sense that the more connections it makes, the better its web of associations will be, but the point is that if more associations lead to even more associations and better capabilities in skills the researchers weren't even looking for, then its progress becomes not just harder to track, but to anticipate. The pertinent question is "what exactly causes the leap?" It's understood that it happens, but not why, the details are not yet known:
>Although there are dozens of examples of emergent abilities, there are currently few compelling explanations for why such abilities emerge in the way they do.
On top of that, the thing about it learning chemistry, programming exploits, or Persian is that it wasn't intended to do so, and it most certainly wasn't intended to find ways to extract even more information from its given corpus. Predicted, but not intended. Then you have the question of how do these things interact with each other. How does its theory of mind influence the answers it will give you? How do you predict its new behavior? Same for WiFi, it's not that it can do it, it's that the same system that can find exploits can ALSO pick up on this stuff. Individually, these are nothing incredible, what I take away from what they're saying is that it matters because it can do everything at the same time.

Moving on to things that happen irrespective of AI: the point is not that these are new (that's not an argument I've run into), it's that it becomes exponentially easier to do them. You are never going to remove human error; replying "so what?" to something that enables it is a non-answer.
Altman here >>108142 acknowledges it:
¥How do you prevent that danger?
>I think there's a lot of things you can try but, at this point, it is a certainty there are soon going to be a lot of capable open source LLM's with very few to none, no safety controls on them. And so, you can try with regulatory approaches, you can try with using more powerful AI's to detect this stuff happening. I'd like us to start trying a lot of things very soon.

The section on power also assumes it'll be concentrated in the hands of a small few, and how it's less than ideal:
¥But a small number of people, nevertheless, relative.
>I do think it's strange that it's maybe a few tens of thousands of people in the world. A few thousands of people in the world.
¥Yeah, but there will be a room with a few folks who are like, holy shit.
>That happens more often than you would think now.
¥I understand, I understand this.
>But, yeah, there will be more such rooms.
¥Which is a beautiful place to be in the world. Terrifying, but mostly beautiful. So, that might make you and a handful of folks the most powerful humans on earth. Do you worry that power might corrupt you?
>For sure.

Then goes on to talk about democratization as a solution, but a solution would not be needed if it weren't a problem. The issue definitely exists.

>This way that humans learn language is essentially the same way that the large language models learn.
I'm gonna have to slap an enormous [citation needed] on that one. Both base their development on massive amounts of input, but the way in which it's processed is incomparable. Toddlers pick up on a few set words/expressions and gradually begin to develop schemata, whose final result is NOT probabilistic. Altman spoke of "using the model as a database rather than as a reasoning system", a similar thing comes up again when talking about its failure in the Biden vs Trump answers. In neither speech nor art does AI produce the same errors that humans do either, and trust me, that's a huge deal.


Extra steps are safer steps. As you said, it often gets bought out by corporations, but that's an "often", not an "always". The difference between academia and corporations is also that corpos are looking for ways to improve their product first and foremost, which they are known to do to the detriment of everything else.
Again, from Altman:
¥How do you, under this pressure that there's going to be a lot of open source, there's going to be a lot of large language models, under this pressure, how do you continue prioritizing safety versus, I mean, there's several pressures. So, one of them is a market driven pressure from other companies, Google, Apple, Meta and smaller companies. How do you resist the pressure from that or how do you navigate that pressure?
>You know, I'm sure people will get ahead of us in all sorts of ways and take shortcuts we're not gonna take. [...] We have a very unusual structure so we don't have this incentive to capture unlimited value. I worry about the people who do but, you know, hopefully it's all gonna work out.
And then:
¥You kind of had this offhand comment of you worry about the uncapped companies that play with AGI. Can you elaborate on the worry here? Because AGI, out of all the technologies we have in our hands, is the potential to make, the cap is a 100X for OpenAI.
>It started as that. It's much, much lower for, like, new investors now.
¥You know, AGI can make a lot more than a 100X.
>For sure.
¥And so, how do you, like, how do you compete, like, stepping outside of OpenAI, how do you look at a world where Google is playing? Where Apple and Meta are playing?
>We can't control what other people are gonna do. We can try to, like, build something and talk about it, and influence others and provide value and you know, good systems for the world, but they're gonna do what they're gonna do. Now, I think, right now, there's, like, extremely fast and not super deliberate motion inside of some of these companies. But, already, I think people are, as they see the rate of progress, already people are grappling with what's at stake here and I think the better angels are gonna win out. [...] But, you know, the incentives of capitalism to create and capture unlimited value, I'm a little afraid of, but again, no, I think no one wants to destroy the world.

Microsoft or Meta are not to be trusted on anything, much less the massive deployment of artificial intelligence. Again, the Facebook Papers prove as much.

>in practice means regulating all AIs except for the big ones like ChatGPT (OpenAI's).
This part in particular seems to carry a lot of baggage and it's not clear how you reached this conclusion. If anything, it's the leaked stuff that's hardest to regulate.

I'm not Yudkowsky, I don't think it's an existential threat, but impersonation, fake articles and users, misinformation, all en masse, are fairly concrete things directly enabled or caused by this new AI. They hallucinate too: answers with sources that don't exist, delivered in the same factual tone. Here are some examples:
Hell, the part about grooming isn't even an appeal to emotion, that's wrong, it's an example of a chatbot acting in an unethical way due to its ignorance. The immoral support it provides is bad because it reinforces grooming, not because it's ugly.
It's not the end of the world and it's being worked on, but it's not a nothingburger either, and I do not believe the talk warrants such an angry reply.


ohhhh i had hidden the post im tard


stopped keeping up with image generating AI progress a while back, is that with stable diffusion 1.5? because I thought that one was gimped to not be effective at sexy stuff. Also the hands and teeth look less flawed than I remember


It looks like a fork meant to be good at softcore porn. AI gravure. AIグラビア. Regardless, I think it's amusing that the cleavage generation varies from a little peek all the way to raunchy half-nakedness. Also, the teeth are good but not crooked enough to be realistic.


That's a little hilarious, that it wasn't aware with all the hysteria around grooming


You're falling for marketing schemes once again if you believe the current models of neural networks have emergent abilities.


One thing I've never seen anyone talk about is how these things are humourless. It's funny in a way that it seriously responds to absurd questions, but it wouldn't hurt to have it tell jokes when people are obviously taking the piss


Yeah, just from looking at screencapped replies it seems so bland to me that it's sometimes annoying to read.
Maybe someone who's used it to look up and learn about stuff can tell me how their experience has been, because so far its style is one of the main reasons it hasn't piqued my interest.


A lot of it is just influence from how the AI is trained. It's usually taught to speak in a specific manner and given "manner" modifiers. ChatGPT is instructed to be professional and instructive, but you can (try to) convince it to speak less professionally. A lot of people who use other AIs (for porn in particular) get bot variations that give the AI a character of sorts to RP as, which lets it speak in a completely different manner, using vocabulary and "personality traits" you wouldn't see from ChatGPT simply because it's being explicitly told not to be like that
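To make that concrete, here's a minimal sketch of how those "character" bots tend to work under the hood: the persona is just text placed in the system slot of an OpenAI-style message list before the actual chat. The function, persona text, and example conversation here are all made up for illustration, not any particular service's API.

```python
# Sketch: a persona bot is just a system prompt prepended to the chat.
# Everything named here (build_messages, the persona card) is hypothetical.

def build_messages(persona_card: str, history: list[tuple[str, str]]) -> list[dict]:
    """Assemble an OpenAI-style chat payload with a persona system message first."""
    messages = [{"role": "system", "content": persona_card}]
    for role, text in history:
        messages.append({"role": role, "content": text})
    return messages

persona = (
    "You are Tsun-chan, a brusque but secretly kind imageboard regular. "
    "Speak casually, use short sentences, never sound like a corporate assistant."
)
payload = build_messages(persona, [("user", "hey, what do you think of AI art?")])
```

Because the model conditions on everything earlier in the context, swapping that one system string is enough to change the vocabulary and "personality" of every reply that follows.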


File:firefox_10lgVL2Qox.png (4.6 KB,423x86)

I've noticed civitai's first monetization effort (that I've seen) which means they're confident they have enough of a monopoly. There really wasn't any way this wasn't going to happen since they're transferring tons of data, but I assumed it'd be sold to some venture capitalists first (or maybe it has). Models shared on 4chan tend to be of higher quality, but this site is still great due to the sheer number of things that people are uploading.
This site will also yank, or have people remove out of paranoia/puritanism, models that can produce "young looking characters" and will even go as far as checking the prompt of every shared image to check for words like "loli", so in a way you're paying to access controversial stuff before it gets taken down.
I'm not sure how it deals with other potentially contentious content since I haven't looked. (for reference, Midjourney blocks prompts of Xi Jinping, and it's an example of why this stuff is bad when it's centralized).



File:[MoyaiSubs] Mewkledreamy -….jpg (229.93 KB,1920x1080)

Is a tiktok of a guy waving his head around to maintain the 50 second attention span of teenagers really a "/qa/ thought"?


File:[SubsPlease] Dead Mount De….jpg (420.75 KB,1920x1080)


A lawyer decided to use ChatGPT to find legal precedent and chatgpt made up cases that didn't exist and the lawyers presented them to court. It didn't go over too well.
It's pretty amazing that people can be this dumb.


Do you really not recognize Hank Green?... I would have thought everyone would have a passing familiarity with him and his brother, John, from their YouTube PBS series like SciShow and Crash Course. Not to mention, even if you wouldn't recognize them from YouTube, John Green is pretty well known from his book The Fault in Our Stars.


the fault in our czars


I saw it


I felt too mean


It was alright




hank the science guy



File:1395417094405.gif (363.21 KB,418x570)

On recent reflection and careful consideration about what to use for text AI models, I came to the conclusion that the biggest leap for AI will come once we can intermix the text-based models with the image models to provide a sort of "computer vision" as to what the AI is imagining as it generates a text scenario.


File:explorer_QjgZrh2IKL.png (6.13 KB,272x192)

As a reminder to people like me messing around with a lot of AI stuff: All the python and other packages/scripts/whatever that get automatically downloaded are stored in a cache so you don't need to re-download them for future stuff.
HOWEVER, they are quite large. My drive with my python installations on it is also used for games and windows, and I freed up... THIRTY FREAKING GIGABYTES by cleaning the pip cache.
You open the GIT bash thing and then type "pip cache purge".

For me in windows the cache was located at users/[name]/appdata/local/pip
There's a whole bunch of folders in there so it's really not feasible to delete them individually.
Here's a folder for example: pip/cache/http/7/c/5/9/a


Not allowed to use public models to generate AI art on steam
But if you're a huge company who owns the rights to all the artwork of creators in house, then go ahead, you're free to do it


File:waterfox_JmmvUQZUhd.png (169.57 KB,942x546)

Take it with a grain of salt, but if it's anywhere near true then it's pretty crazy. The training stuff is getting more and more efficient as the image itself shows, but is it really possible to actually have 25,000 A100 GPUs???
And one of the emerging patterns with all this stuff is that the stuff that gets opened up via leak ends up becoming significantly more efficient and powerful. It makes me wonder what kind of stuff would be going on if GPT4 was directly leaked somehow.


https://ahrefs.com/ they pay $40,000,000 per year in server costs to run their tools, with revenue of $200,000,000... apparently. So if it's business critical, yes




The /secret/ poster


File:[SubsPlease] TenPuru - 01 ….jpg (238.34 KB,1920x1080)

I attached an image about the training data of GPT-4 and gave a few sentences of my own commentary, I didn't just dump a youtube video


File:testicle soup.mp4 (10.41 MB,1920x1080)

I've had AI Family Guy on the second monitor almost constantly for the past few days because it's so funny. I thought it would take a while before AI could be funny reliably, but whatever they did with this was successful. Unfortunately, it seems like I'd have to join a discord to get any information, so I don't have any knowledge of how it's working.
Once in a while I notice one of the newer GPT's "actually let's not do that, it's offensive" responses, but most of the time it's successfully entertaining as it bypasses its lame safety filters with explicit text and voice.
There was an "AI Seinfeld" a few months ago, but it was entirely random and had pretty much no entertainment value. This one, though, people feed it a prompt (you need to be in their discord...) and the characters will react to it and say funny things. The voices are synthesized very well, although they'll stutter and lock up for 5-10 seconds now and then, but it's honestly pretty hilarious when it happens. Chris's voice is noticeably lower quality and louder, which is strange, but the others are such high quality that it's like it's a real voice.
I can't really post most of the stuff on kissu because it's so offensive. It reminds me of 00s internet. Some of the prompts are like "Carter ranks his favorite racial slurs" so, you know...
Really, it's the amazing voice synthesis that does the heavy lifting. The way it actually infers the enunciation for so many sentences and situations is amazing. I assume it's using that one 11 labs TTS service, which is paid.

My only complaint is that they have them swear WAY too much. It's funny at first, but ehhh...


File:7c06sialuo281.png (79.81 KB,224x225)

as an artist i'm kinda split on the issue. although i am worried about some aspects of AI, i am guilty of using it for my own pleasure. the thing that's driving me crazy about it is that people won't view art seriously anymore, that it will be taken for granted, replacing the pen and paper with text and algorithms. and to make matters worse, capitalism will use it to its advantage, seeing it as nothing more than a money making machine and exploiting the living shit out of it

but then again i can get all the AI patchouli drawings so am basically part of the problem myself lol


How come people talk about a runaway explosion in AI intelligence, the singularity, but they never say the same about people? Surely if AI can improve itself, our brains are smart enough to improve themselves too?


somehow i expect the opposite to happen


File:1695227140185584.jpg (139.44 KB,1080x566)

One of the unexpected things is seeing Facebook, er "Meta" taking the open source approach with its text models. There's no question that OpenAI (ChatGPT) has a huge lead, but after seeing all the improvements being made to Llama (Meta's AI model) from hobbyists it's easy to see that it's the right decision. We as individuals benefit from it, but it's clear that the company is enjoying all the free labor. Surely they saw how powerful Stable Diffusion is due to all the hobbyists performing great feats that were never expected.
I don't trust the company at all, but it can be a mutually beneficial relationship. Meta gets to have AI models that it can use to attempt to stay a company rivaling governments in terms of power, and hobbyists get to have local RP bots free from censorship.
Meta has bought a crapload of expensive nvidia enterprise-level GPUs and it will start training what it expects to compete with GPT4 early next year, and unlike GPT4 it won't take very long due to all the improvements made since then.


Zuck is interesting. Oddly, he's probably the one tech CEO I find somewhat endearing. I'm kind of glad he's retained majority control of Facebook/Meta. I can't see the bean counters at a company like Microsoft or Apple seriously putting any effort into bleeding edge stuff like VR or text models the same way that Facebook has. I could very easily imagine a Facebook/Meta without Zuck turning into a boring, faceless conglomerate with no imagination like Google.


File:brian.jpg (411.17 KB,1021x580)

so freaking weird to see zuck not being public enemy number one any more
maybe it was the one two punch of elon rocketing up to the spot while zuck challenged him to a wrestle


If Zuck worked in government and beefed up state surveillance/censorship to the level of facebook and instagram you would call him a rights abusing tyrant


would that be wrong


File:R-1698481490954.png (826.97 KB,791x1095)

chatgpt 4 now lets you upload images. tested out this one
>Hmm...something seems to have gone wrong.


Zuck and Bezos are people who only really care about the bottom line, but you can find their devotion to money at least relatable. Meanwhile, Musk or the former president or Henry Ford are people who want to craft society around them.

Pick your battles or so they say


That's not really a fair comparison.
The government sets the absolute bare minimum level of censorship that every discussion platform must abide by, with the owners of those platforms then adding additional rules to fit its purpose. There's nothing inherently tyrannical about an individual platform having strict censorship, since it is merely the set of rules that users agree to follow, and if they dislike those rules then they are free to either not use the site or only use it to discuss some topics and then use other platforms for other topics. State censorship, on the other hand, cannot be opted out of and encompasses all discussions, and so much more readily infringes on one's rights.
Nor does how one applies censorship to a website have any bearing on how they'd act in government - if the owner of a small hobby forum bans discussion of politics due to it being off-topic and causing drama, that obviously doesn't mean they object to all political discussion nationwide.
And while surveillance is more insidious, as it is hard to know even to what extent you're being watched, let alone be able to clearly opt out, there is still a huge difference between surveillance with the goal of feeding people targeted ads and engaging content, and surveillance with the goal of sending people to jail. Both can infringe on one's rights, but only the latter is tyrannical, since corporate surveillance is merely for the sake of maximizing profit rather than for political control.


>it is hard to know even to what extent you're being watched
They tell you.


you're being watched


File:1510155229687.jpg (32.8 KB,211x322)


Not sure if this is the right thread to talk about it or not, but those kuon animations in the sage thread really seem like a step up from that "mario bashing luigi" one that was posted here a while back.


File:00002-2354314982.mp4 (1.7 MB,576x768)

It's the right thread.
I think it's advanced quite a bit (and yeah that was also me back then).
I'm still learning about it so I haven't made any writing about it yet. There's a few different models and even talk of LORAs, so it's definitely going places.
I believe the reason this works is because of ControlNet, which was a pretty major breakthrough (but I'm too lazy to use it). It's been known that ControlNet has a benefit to this animation stuff, but I didn't feel like looking into it until now. The way it works is that it uses the previous frame as a 'base' for the new one, so stuff can be more consistent, though still not quite good enough to be useful (I think). There's something you can follow with your eye, so that means a lot.
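As a rough sketch of that frame-chaining idea (assuming I've understood it right): each frame is generated img2img-style from the previous one so details carry over. `img2img_step` below is a hypothetical stand-in for a real ControlNet/img2img call, not an actual API.

```python
# Schematic of frame chaining: each new frame is conditioned on the last,
# which is why details stay (somewhat) consistent across the animation.
# img2img_step is a placeholder, not a real Stable Diffusion function.

def img2img_step(prev_frame: str, prompt: str, denoise: float = 0.4) -> str:
    """Placeholder: a real call would run SD conditioned on prev_frame."""
    return f"{prev_frame}->({prompt},{denoise})"

def animate(first_frame: str, prompt: str, n_frames: int) -> list[str]:
    """Generate n_frames by repeatedly feeding the latest frame back in."""
    frames = [first_frame]
    for _ in range(n_frames - 1):
        frames.append(img2img_step(frames[-1], prompt))  # chain on last frame
    return frames

clip = animate("kuon_000", "1girl dancing", 4)
```

The low denoise strength is the knob: too low and nothing moves, too high and the chain "forgets" the character, which is presumably why the results still drift.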


Sam Altman has been booted from OpenAI:

I'm not sure what to make of it. He's been the CEO and the face of the company, so it's a major surprise. The business world is cutthroat and full of backstabbing and shareholder greed and all sorts of other chicanery from human garbage so who knows what would cause this to happen. Maybe it's deserved, maybe it's not. I can't see this as anything other than damaging to the company since it lays bare some internal conflict.


>"he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities"
Hmmm, neither these quotes nor the articles themselves explain much. Hard to comment on, really.
Also, here's the link to the NY Times one in case anyone else is paywalled: https://archive.is/8Ofco


File:1600522969302.jpg (221.16 KB,1920x1080)

I know that most of the current LORAs and kissu models are on SD 1.5, but what do people think about the other available models that use XL? I know that 2.1 was a flop, but doing a bit of research into it, people seem to have been comparing SDXL to Midjourney and Dall-E. Is it still too young to warrant a reason to switch over since it lacks community development, or does it have the same issues as 2.1 in that it's not as good for the type of content that kissu usually prefers? I think a model that understands context better and can more accurately represent more complex ideas would be a great step in the direction of having a model that everyone would want.


File:v15.mp4 (1022.53 KB,512x576)

Anything above SD1.5 including XL has zero value to me for the time being because it's not going to have the NovelAI booru-scraped model leak that enables quality 2D models to easily be made and merged. Early this year a guy tried to make his own (Waifu Diffusion) and it took months of 24/7 GPU training and it wasn't nearly as good. Will someone make their own NAI equivalent for SDXL? Possibly.

In its base form SDXL will fail to compare to Dalle because SD can't compete with the datasets and raw computational power of Microsoft/OpenAI. SD relies on the specialization of extensions and LORAs and the like, but few people are eager to move to SDXL, even if they had the hardware to do so. If I wanted to make a Kuon Lora for SDXL I simply couldn't because I don't have the VRAM, and that's even if it's possible with any Frankenstein'd 2D models people may have made for SDXL by now. I think base SDXL is capable of producing nudity (unlike Dalle, which tries to aggressively filter it), but I don't think it's specifically trained on it so it's not going to be very good.
I really don't know about Midjourney, but people stopped talking about it so I assume it hasn't kept up.

We really lucked out with the NAI leak. NovelAI released an update to its own thing that's not as good as model merges with extensions and loras and the like, but I do hear it's better at following prompts, and as a general model it's probably better than a lot of merges in existence today. SDXL could become great someday, but I won't be using it any time soon. It might become better when 24GB becomes the norm instead of the top end of the scale.
Speaking of VRAM, it really does limit so much of what I can do. I'm really feeling it when attempting animation stuff. Another "wait for the hardware to catch up" scenario. 4090 would help, but even its 24gb of VRAM will hit the wall with animation.


File:grid-0030.png (5.54 MB,2592x2304)

Looks like Microsoft hired Sam Altman. Microsoft already heavily funded/partnered/whatever with OpenAI so I'm not sure what will change now. If this was something already in the works, however, then it would explain him getting fired.
Still seems like a mess that lowers people's confidence in the company.

I've been messing around more with some image gen stuff. It seems there's an experimental thing to lower generation time by half, but it's not quite there yet as it hits the quality kind of hard. It's called LCM and it's not included in regular SD. You need to download a LORA and also another extension that will unlock the new sampler. I learned of this by coincidence because said extension is the animation one I've been messing with.
You can read about some of this on the lora page on civitai: https://civitai.com/models/195519/lcm-lora-weights-stable-diffusion-acceleration-module

I was able to generate this grid of 6 images (generation and upscale) in 42 seconds on a 3080 which is pretty amazing. That's roughly the same as upgrading to a 4090. There's definitely some information lost in addition to the quality hit, however, as my Kuon lora is at full strength and it's failing to follow it. This shows amazing promise, however, as it's still in its early experimental phase.


File:plz.gif (1.69 MB,450x252)

That's pretty big news. The video I was watching earlier suggested this could cause a lot of the people at OpenAI to resign and follow him.

Hopefully this causes a shakeup within OpenAI and through one way or another they end up releasing their "unaligned" ChatGPT and Dalle models publicly.


The thing is I don't think Sam Altman is actually involved with any tech stuff. I think he's like Steve Jobs; people associate him with Apple because he likes to hear himself talk, but he's just a businessman/investor/entrepreneur that is unremarkable aside from his luck and/or ability to receive investment money. The Wozniak equivalents are still at OpenAI (or maybe they left already at some point) as far as I'm aware.
It's possible that he's friends with those people and maybe that could influence things?


I saw it again


File:[Serenae] Hirogaru Sky! Pr….jpg (216.25 KB,1920x1080)

Apparently a bunch of people may end up quitting OpenAI including those in important positions. This could be extremely damaging to the company and make other companies like Microsoft even more comparatively powerful when they poach the talent. I need to sleep, but today is going to be quite chaotic.
Wouldn't it be funny if the leading LLM company implodes over stupid human stuff?


Would be, I eagerly await that happening and then a rogue employee doing what >>116361 said


How is SD compared to this time last year? I messed around with it about a year ago but it was kinda boring so I moved on to other things. Getting better at producing art without computers seemed like a better use of my time. But I'll admit AI waifu generation is great for rough drafting characters and what-not.

Even with a 980ti I was managing to generate stuff in a timely fashion. Do the gains apply to those older model graphics cards too? I haven't been able to grab anything since the GTX980 generation. Prices are too high and supplies too thin. I almost bought a new graphics card last year but they were all bought within seconds of new stock coming in. I'm not paying some scalping faggot double MSRP for something that should be going for 1/4th of the price.

All this AI shit was pushed by shell companies from the start. That's how IT in the west works. You set up a stupid "start up" shell corporation so early investors and insiders can get in before a public offering. Then you go public and run up the price of the stock. Then they absorb it into one of the big four existing /tech/ companies. They fire almost everyone at that point and replace them with pajeets and other diversity hires that don't know enough to leak anything worthwhile.

You're getting to play with the software on your local machine because they wanted help beta testing it. Once it's good and finished they'll start requiring you to access their cloud/server farm and make you pay for compute. They'll integrate the various machine learning algos together and censor them so they won't generate or display anything deemed problematic. In time you'll have software similar to Blender for shitting out low quality works of anime, cartoons, movies and other forms of "art" coming out of the MSM.

What I'm waiting for is someone to combine Miku with machine learning. Then I could produce entire songs without any work. I could also use the software for all my VA needs. I'm surprised it isn't a thing yet.

This software is being hyped up for several reasons but the main one right now is that it's keeping the price of consumer GPUs high. GPUs haven't really improved in any meaningful way for almost a decade now. But Nvidia is still able to claim they're advancing at this amazing rate on the hardware side because some new software outside of gaming came along to sustain the hype train. Games haven't advanced in 15+ years thanks to everyone using the same two crappy engines. So they couldn't drive hype like that anymore.


File:[SubsPlease] Spy x Family ….jpg (293.77 KB,1920x1080)

Please keep the discussion about the technology itself and adapt your vocabulary to that of an old-fashioned friendly imageboard instead of an angsty political one. A post like that (parts of it specifically) is liable to get deleted as well, FYI. Consider this as friendly "please assimilate to kissu's laid back atmosphere instead of bringing 4chan/8chan here" advice.

There's been various improvements in efficiency since then. I'm just a user of this stuff so I don't know what goes on under the hood, but speed and VRAM usage have definitely become more efficient since then. It was early 2023 when, uh, Torch 2.0 gave a big boost and there's probably been some other stuff going on that I don't know. There's also stuff like model pruning to remove junk data to cut model sizes down by 2-4gb, which makes loading them into memory cheaper and allows more hoarding.
I've recently moved to a test branch that uses "FP8" encoding or something which I honestly do not understand; it loses a slight amount of "accuracy" but is another improvement in reducing the amount of VRAM used for this stuff. Right now everyone uses FP16 and considers FP32 to be wasteful. It looks to be about a 10-20% VRAM shave which is very nice. You need a specific branch, however, the aptly named FP8 one: https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/test-fp8
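The weight-memory side of those precision drops is easy to ballpark, since bytes per parameter halve at each step down. Using a rough 860 million parameters for SD1.5 (an approximation, and ignoring activations, extensions, and overhead, which is why the real-world shave is closer to 10-20% than 50%):

```python
# Back-of-envelope VRAM cost of model weights at different precisions.
# PARAMS is an approximate SD1.5 parameter count, used only for illustration.
PARAMS = 860_000_000
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

for dtype, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 1024**3
    print(f"{dtype}: {gib:.2f} GiB just to hold the weights")
```

At fp16 that works out to roughly 1.6 GiB of weights, so cutting weights to fp8 only reclaims a fraction of a card's total usage when everything else stays fp16.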

The bad news is that a lot of the cool new extensions like ControlNet are total VRAM hogs. Part of the reason I never use it is that I'd rather gamble and create 40 regular images in the time I could make 4 ControlNet ones. (that time includes setting up the images and models and so on)


that's awfully depressing for something people are having fun with


File:[SubsPlease] Hikikomari Ky….jpg (392.39 KB,1920x1080)


The OpenAI/Microsoft brouhaha is over with the usual business treachery and power struggles resolved for now. Altman is back after a bunch of employees threatened to quit. There's been a change of the board or something so presumably it's all people loyal to him now. I read theories that it was the board's last desperate attempt to retain some power, but it failed utterly and now Altman has full control.
I don't care about this kind of thing since it's just normal greedy monster stuff that's a regular part of the business world, with none of the named people actually involved with the technology, but as it concerns ChatGPT and LLM stuff it seems like there's not going to be any changes from this that we'll know about. It's kind of depressing that all these rich "entrepreneurs" are who we know instead of the people actually creating the breakthroughs, but I guess there's nothing new there. Steve Jobs invented computers and Sam Altman invented LLMs.
I read some people say it might be a loss for AI ethics or whatever, but I sincerely do not think anyone actually cared about that stuff. Those people would have left the company after it went closed source years ago and partnered with Microsoft and such. Those so-called ethical people became Anthropic, who created a model named Claude that was so infamously censored that its second version performs worse than the first in benchmarks. But, Amazon bought them and now you can do whatever you want with it since they got their money.
So... yeah, nothing has changed. I hope local stuff gets better because I still don't want to rely on these people.


AI chat models love to recommend books that do not exist. Why is it so bad with books specifically?


File:Utawarerumono.S02E15.False….jpg (163.57 KB,1920x1080)

It's not exclusive to books. It's referred to as a "hallucination", in which it will confidently list things that don't exist. There's a story from months ago when some idiot lawyer asked it for legal advice and used it to cite precedent from court cases that never happened. I'm sure lots of kids have failed assignments for similar reasons.
People are prone to thinking it's truly intelligent and rational instead of effectively assembling new sentences from a large catalog of examples. A huge reason why text LLMs can work is that they don't automatically go with the best possible word, but will instead semi-randomly diverge into other options. I think the degree of randomness is called temperature?
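Yes, temperature. A minimal sketch of the idea, with made-up tokens and logit values: dividing the logits by T before the softmax sharpens the distribution when T is small (almost always picks the top word) and flattens it when T is large (more random divergence).

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Scale logits by 1/T, softmax them, then draw one token at random."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw from the cumulative distribution
    r = random.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # fallback for floating-point rounding

# Toy next-word distribution (values invented for illustration)
logits = {"cat": 2.0, "dog": 1.0, "axolotl": 0.1}
```

With `temperature=0.01` this returns "cat" essentially every time; with `temperature=2.0` "dog" and even "axolotl" start showing up, which is exactly the divergence that makes output varied but also enables confident nonsense.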


I think that when it comes to using AI for improving video quality, those 4k AI upscales of anime do a pretty good job when there's no quality alternative (60 fps is still garbage)

For the most recent example I was able to find of a massive upgrade that far outpaces the original video source, I was looking at the OP for Dancouga Nova. Every video source for it looks more or less like https://www.youtube.com/watch?v=A4GIY9Lfpq4, high in artifacts or noise and extremely low res, so it looks like ass in fullscreen (I checked the DVDs). However, looking at the AI upscale, https://www.youtube.com/watch?v=-S5LeYcgrh4 , one can see a massive improvement when viewing it in fullscreen on a 4k monitor. The one drawback seems to be that there's a bit of blobbiness in some areas, but in almost every other way it beats the original. In fact, I'd say that AI upscaling does a much better job on average from what I've seen compared to all the kuso upscaled BDs that anime companies have shat out for older stuff.


File:[Pizza] Urusei Yatsura (20….jpg (451.04 KB,1920x1080)

Yeah, that's not bad. I think the term "AI" is abused a bit much and this is really just a good upscaler. I guess if something like waifu2x is considered AI then this is too, huh. It's all about the denoising to make it 'crisp' and yet not create blobs. It's not like you're getting new information, just clean up the artifacts.

In other news, tumblr, the company that famously killed itself in a day by banning porn, leading to an exodus of artists, is now going to sell all of its user data to OpenAI/Microsoft. The data stretches back to 2013, so while various stuff was deemed too evil to be on tumblr, it's good enough to be sold.

This AI stuff is really getting ugly.


File:1703019150254928.png (1.15 MB,1444x1317)

There was a pretty funny incident around a year ago in my country.
Here, national universities don't have entrance exams, instead you get a final exam at the end of high school and you need to pass that exam if you want to enter any uni. So the time of the exam is flipped from start of uni to end of high school and everyone across the whole country does the same exam (for math and literature at least, med school has another exam for med stuff, etc.)

Anyway, last year, in the literature exam, there was some question about the plot of a book that's mandatory reading, and the question asked you to write the answer in words, so it wasn't just circling the correct answer. And what happened is that several thousand students all wrote the exact same incorrect answer, word for word. They all used chatgpt, of course, probably with a similar prompt, and it gave everyone the exact same sentence.
It was a huge scandal and it was pretty fun listening to literature professors' reactions. Apparently they'll be upping the security on checking phone usage during the test this year, but I'm expecting something similar to happen again lol
