[ home / bans / all ] [ qa / jp ] [ maho ] [ f / ec ] [ b / poll ] [ tv / bann ] [ toggle-new / tab ]

/qa/ - Questions and Answers

Questions and Answers about QA


File:AI art.png (370.42 KB,512x512)

No.100268

There are two threads on /qa/ for different AI generation content and help. A thread on the morality and ethics of AI. One about the future potential AI holds. One on >>>/megu/ for more in-depth help with specifics. Then scattered across all the boards are some threads using AI generation for image openers and such. However, none of these actually encompass kissu's opinion on AI!

So, what do you /qa/eers think of AI currently? Do you like it, dislike it, find it useful in any meaningful way at all? Or are you simply not interested in the output of algorithms?

I myself find AI to be a useful tool in generating the kind of content I've either exhausted the available stock of or that's gated off by some hurdle I need to spend more time overcoming. When it comes to text/story generation, it's like a CYOA playground where I play the role of DM and set up all the lore/situations/characters, and then the AI acts out events as I lay out the flow of the story. This creates a more interactive experience for me than just writing out a story myself for some reason, and I find it highly addictive. Then for AI art generation, I find that AI works wonders for filling a specific niche taste I have, or a specific scenario I want to create an image for. It really is quite amazing in my eyes, and I have hopes for it getting even better in the future.




The /secret/ poster


File:[SubsPlease] TenPuru - 01 ….jpg (238.34 KB,1920x1080)

I attached an image about the training data of GPT-4 and gave a few sentences of my own commentary, I didn't just dump a youtube video


File:testicle soup.mp4 (10.41 MB,1920x1080)

I've had AI Family Guy on the second monitor almost constantly for the past few days because it's so funny. I thought it would take a while before AI could be funny reliably, but whatever they did with this was successful. Unfortunately, it seems like I'd have to join a discord to get any information, so I don't have any knowledge of how it's working.
Once in a while I notice one of the newer GPT's "actually let's not do that, it's offensive" responses, but most of the time it's successfully entertaining as it bypasses its lame safety filters with explicit text and voice.
There was an "AI Seinfeld" a few months ago, but it was entirely random and had pretty much no entertainment value. This one, though, people feed it a prompt (you need to be in their discord...) and the characters will react to it and say funny things. The voices are synthesized very well; they'll stutter and lock up for 5-10 seconds now and then, but it's honestly pretty hilarious when it happens. Chris's voice is noticeably lower quality and louder, which is strange, but the others are such high quality that it's like a real voice.
I can't really post most of the stuff on kissu because it's so offensive. It reminds me of 00s internet. Some of the prompts are like "Carter ranks his favorite racial slurs" so, you know...
Really, it's the amazing voice synthesis that does the heavy lifting. The way it actually infers the enunciation for so many sentences and situations is amazing. I assume it's using that one 11 labs TTS service, which is paid.

My only complaint is that they have them swear WAY too much. It's funny at first, but ehhh...


File:7c06sialuo281.png (79.81 KB,224x225)

as an artist am kinda split on the issue. although i am worried about some aspects of AI, i am guilty of using it for my own pleasure. the thing that's driving me crazy about it is that people won't view art seriously anymore, that it will be taken for granted, replacing the pen and paper with text and algorithms. and to make matters worse, capitalism will use it to its advantage, seeing it as nothing more than a money making machine and exploiting the living shit out of it

but then again i can get all the AI patchouli drawings so am basically part of the problem myself lol


How come people talk about a runaway explosion in AI intelligence, the singularity, but they never say the same about people? Surely if AI can improve itself, our brains are smart enough to improve themselves too?


somehow i expect the opposite to happen


File:1695227140185584.jpg (139.44 KB,1080x566)

One of the unexpected things is seeing Facebook, er "Meta" taking the open source approach with its text models. There's no question that OpenAI (ChatGPT) has a huge lead, but after seeing all the improvements being made to Llama (Meta's AI model) from hobbyists it's easy to see that it's the right decision. We as individuals benefit from it, but it's clear that the company is enjoying all the free labor. Surely they saw how powerful Stable Diffusion is due to all the hobbyists performing great feats that were never expected.
I don't trust the company at all, but it can be a mutually beneficial relationship. Meta gets to have AI models that it can use to attempt to stay a company rivaling governments in terms of power, and hobbyists get to have local RP bots free from censorship.
Meta has bought a crapload of expensive nvidia enterprise-level GPUs and it will start training what it expects to compete with GPT4 early next year, and unlike GPT4 it won't take very long due to all the improvements made since then.


Zuck is interesting. Oddly, he's probably the one tech CEO I find somewhat endearing. I'm kind of glad he's retained majority control of Facebook/Meta. I can't see the bean counters at a company like Microsoft or Apple seriously putting any effort into bleeding edge stuff like VR or text models the same way that Facebook has. I could very easily imagine a Facebook/Meta without Zuck turning into a boring, faceless conglomerate with no imagination like Google.


File:brian.jpg (411.17 KB,1021x580)

so freaking weird to see zuck not being public enemy number one any more
maybe it was the one two punch of elon rocketing up to the spot while zuck challenged him to a wrestle


If Zuck worked in government and beefed up state surveillance/censorship to the level of facebook and instagram you would call him a rights abusing tyrant


would that be wrong


File:R-1698481490954.png (826.97 KB,791x1095)

chatgpt 4 now lets you upload images. tested out this one
>Hmm...something seems to have gone wrong.


Zuck and Bezos are people who only really care about the bottom line, but you can at least find their devotion to money relatable. Meanwhile, Musk or the former president or Henry Ford are people who want to craft society around them.

Pick your battles or so they say


That's not really a fair comparison.
The government sets the absolute bare minimum level of censorship that every discussion platform must abide by, with the owners of those platforms then adding additional rules to fit its purpose. There's nothing inherently tyrannical about an individual platform having strict censorship, since it is merely the set of rules that users agree to follow, and if they dislike those rules then they are free to either not use the site or only use it to discuss some topics and then use other platforms for other topics. State censorship, on the other hand, cannot be opted out of and encompasses all discussions, and so much more readily infringes on one's rights.
Nor does how one applies censorship to a website have any bearing on how they'd act in government - if the owner of a small hobby forum bans discussion of politics due to it being off-topic and causing drama, that obviously doesn't mean they object to all political discussion nationwide.
And while surveillance is more insidious, as it is hard to know even to what extent you're being watched, let alone be able to clearly opt out, there is still a huge difference between surveillance with the goal of feeding people targeted ads and engaging content, and surveillance with the goal of sending people to jail. Both can infringe on one's rights, but only the latter is tyrannical, since corporate surveillance is merely for the sake of maximizing profit rather than for political control.


>it is hard to know even to what extent you're being watched
They tell you.


you're being watched


File:1510155229687.jpg (32.8 KB,211x322)


Not sure if this is the right thread to talk about it or not, but those kuon animations in the sage thread really seem like a step up from that "mario bashing luigi" one that was posted here a while back.


File:00002-2354314982.mp4 (1.7 MB,576x768)

It's the right thread.
I think it's advanced quite a bit (and yeah that was also me back then).
I'm still learning about it so I haven't made any writing about it yet. There's a few different models and even talk of LORAs, so it's definitely going places.
I believe the reason this works is because of ControlNet, which was a pretty major breakthrough (but I'm too lazy to use it). It's been known that ControlNet has a benefit for this animation stuff, but I didn't feel like looking into it until now. The way it works is that it uses the previous frame as a 'base' for the new one, so stuff can be more consistent, though still not quite good enough to be truly useful (I think). Still, there's something you can follow with your eye, and that means a lot.
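The frame-chaining idea can be sketched in a few lines. This is a toy mock-up, not real diffusion code: `denoise_toward_prompt` is a made-up stand-in for an img2img step (in practice something like a diffusers img2img pipeline with ControlNet conditioning), and frames are just lists of numbers.

```python
import random

def denoise_toward_prompt(frame, strength=0.4, seed=None):
    """Stand-in for an img2img step: keeps (1 - strength) of the
    previous frame and mixes in new 'generated' detail."""
    rng = random.Random(seed)
    return [(1 - strength) * px + strength * rng.random() for px in frame]

def animate(first_frame, n_frames, strength=0.4):
    """Each new frame uses the previous frame as its base, which is
    what keeps the sequence temporally consistent."""
    frames = [first_frame]
    for i in range(1, n_frames):
        frames.append(denoise_toward_prompt(frames[-1], strength, seed=i))
    return frames

frames = animate([0.5] * 8, n_frames=4)
print(len(frames))  # 4 frames, each derived from the one before it
```

The `strength` knob is the tradeoff: low strength means frames barely change (consistent but static), high strength means lots of new detail but more flicker.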


Sam Altman has been booted from OpenAI:

I'm not sure what to make of it. He's been the CEO and the face of the company, so it's a major surprise. The business world is cutthroat and full of backstabbing and shareholder greed and all sorts of other chicanery from human garbage so who knows what would cause this to happen. Maybe it's deserved, maybe it's not. I can't see this as anything other than damaging to the company since it lays bare some internal conflict.


>"he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities"
Hmmm, neither these quotes nor the articles themselves explain much. Hard to comment on, really.
Also, here's the link to the NY Times one in case anyone else is paywalled: https://archive.is/8Ofco


File:1600522969302.jpg (221.16 KB,1920x1080)

I know that most of the current LORAs and kissu models are all on SD 1.5, but what do people think about the other available models that use XL? I know that 2.1 was a flop, but doing a bit of research into it, people seem to have been comparing SDXL to Midjourney and Dall-E. Is it still too young to warrant a reason to switch over since it lacks community development, or does it have the same issues as 2.1 in that it's not as good for the type of content that kissu usually prefers? I think a model that understands context better and can more accurately represent more complex ideas would be a great step in the direction of having a model that everyone would want.


File:v15.mp4 (1022.53 KB,512x576)

Anything above SD1.5 including XL has zero value to me for the time being because it's not going to have the NovelAI booru-scraped model leak that enables quality 2D models to easily be made and merged. Early this year a guy tried to make his own (Waifu Diffusion) and it took months of 24/7 GPU training and it wasn't nearly as good. Will someone make their own NAI equivalent for SDXL? Possibly.

In its base form SDXL will fail to compare to Dalle because SD can't compete with the datasets and raw computational power of Microsoft/OpenAI. SD relies on the specialization of extensions and LORAs and the like, but few people are eager to move to SDXL, even if they had the hardware to do so. If I wanted to make a Kuon Lora for SDXL I simply couldn't because I don't have the VRAM, and that's even if it's possible with any Frankenstein'd 2D models people may have made for SDXL by now. I think base SDXL is capable of producing nudity (unlike Dalle, which aggressively tries to filter it), but I don't think it's specifically trained on it so it's not going to be very good.
I really don't know about Midjourney, but people stopped talking about it so I assume it hasn't kept up.

We really lucked out with the NAI leak. NovelAI released an update to its own thing and it's not as good as model merges with extensions and loras and the like, but I do hear it's better at following prompts, and as a general model it's probably better than a lot of merges in existence today. SDXL could become great someday, but I won't be using it any time soon. It might become better when 24GB of VRAM becomes the norm instead of the top end of the scale.
Speaking of VRAM, it really does limit so much of what I can do. I'm really feeling it when attempting animation stuff. Another "wait for the hardware to catch up" scenario. A 4090 would help, but even its 24GB of VRAM will hit the wall with animation.


File:grid-0030.png (5.54 MB,2592x2304)

Looks like Microsoft hired Sam Altman. Microsoft already heavily funded/partnered/whatever with OpenAI so I'm not sure what will change now. If this was something already in the works, however, then it would explain him getting fired.
Still seems like a mess that lowers people's confidence in the company.

I've been messing around more with some image gen stuff. It seems there's an experimental thing to lower generation time by half, but it's not quite there yet as it hits the quality kind of hard. It's called LCM and it's not included in regular SD. You need to download a LORA and also another extension that will unlock the new sampler. I learned of this by coincidence because said extension is the animation one I've been messing with.
You can read about some of this on the lora page on civitai: https://civitai.com/models/195519/lcm-lora-weights-stable-diffusion-acceleration-module

I was able to generate this grid of 6 images (generation and upscale) in 42 seconds on a 3080 which is pretty amazing. That's roughly the same as upgrading to a 4090. There's definitely some information lost in addition to the quality hit, however, as my Kuon lora is at full strength and it's failing to follow it. This shows amazing promise, however, as it's still in its early experimental phase.


File:plz.gif (1.69 MB,450x252)

That's pretty big news. The video I was watching earlier suggested this could cause a lot of the people at OpenAI to resign and follow him.

Hopefully this causes a shakeup within OpenAI and through one way or another they end up releasing their "unaligned" ChatGPT and Dalle models publicly.


The thing is I don't think Sam Altman is actually involved with any tech stuff. I think he's like Steve Jobs; people associate him with Apple because he likes to hear himself talk, but he's just a businessman/investor/entrepreneur that is unremarkable aside from his luck and/or ability to receive investment money. The Wozniak equivalents are still at OpenAI (or maybe they left already at some point) as far as I'm aware.
It's possible that he's friends with those people and maybe that could influence things?


I saw it again


File:[Serenae] Hirogaru Sky! Pr….jpg (216.25 KB,1920x1080)

Apparently a bunch of people may end up quitting OpenAI including those in important positions. This could be extremely damaging to the company and make other companies like Microsoft even more comparatively powerful when they poach the talent. I need to sleep, but today is going to be quite chaotic.
Wouldn't it be funny if the leading LLM company implodes over stupid human stuff?


Would be, I eagerly await that happening and then a rogue employee doing what >>116361 said


How is SD compared to this time last year? I messed around with it about a year ago but it was kinda boring so I moved on to other things. Getting better at producing art without computers seemed like a better use of my time. But I'll admit AI waifu generation is great for rough drafting characters and what-not.

Even with a 980ti I was managing to generate stuff in a timely fashion. Do the gains apply to those older model graphic cards to? I haven't been able to grab anything since the GTX980 generation. Prices are too high and supplies too thin. I almost bought a new graphics card last year but they were all bought within seconds of new stock coming in. I'm not paying some scalping faggot double MSRP for something that should be going for 1/4th of the price.

All this AI shit was pushed by shell companies from the start. That's how IT in the west works. You set up a stupid "start up" shell corporation so early investors and insiders can get in before a public offering. Then you go public and run up the price of the stock. Then they absorb it into one of the big four existing /tech/ companies. They fire almost everyone at that point and replace them with pajeets and other diversity hires that don't know enough to leak anything worthwhile.

You're getting to play with the software on your local machine because they wanted help beta testing it. Once it's good and finished they'll start requiring you to access their cloud/server farm and make you pay for compute. They'll integrate the various machine learning algos together and censor them so they won't generate or display anything deemed problematic. In time you'll have software similar to Blender for shitting out low quality works of anime, cartoons, movies and other forms of "art" coming out of the MSM.

What I'm waiting for is someone to combine Miku with machine learning. Then I could produce entire songs without any work. I could also use the software for all my VA needs. I'm surprised it isn't a thing yet.

This software is being hyped up for several reasons but the main one right now is that it's keeping the price of consumer GPUs so high. GPUs haven't really improved in any meaningful way for almost a decade now. But Nvidia is still able to claim they're advancing at this amazing rate on the hardware side because some new software outside of gaming came along to sustain the hype train. Games haven't advanced in 15+ years thanks to everyone using the same two crappy engines. So they couldn't drive hype like that anymore.


File:[SubsPlease] Spy x Family ….jpg (293.77 KB,1920x1080)

Please keep the discussion about the technology itself and adapt your vocabulary to that of an old-fashioned friendly imageboard instead of an angsty political one. A post like that (parts of it specifically) is liable to get deleted as well, FYI. Consider this as friendly "please assimilate to kissu's laid back atmosphere instead of bringing 4chan/8chan here" advice.

There have been various improvements in efficiency since then. I'm just a user of this stuff so I don't know what goes on under the hood, but speed and VRAM usage have definitely become more efficient since then. It was early 2023 when, uh, Torch 2.0 gave a big boost, and there's probably been some other stuff going on that I don't know. There's also stuff like model pruning to remove junk data to cut model sizes down by 2-4GB, which makes loading them into memory cheaper and allows more hoarding.
I've recently moved to a test branch that uses "FP8" encoding or something, which I honestly do not understand; it loses a slight amount of "accuracy" but is another improvement in reducing the amount of VRAM used for this stuff. Right now everyone uses FP16 and considers FP32 to be wasteful. It looks to be about a 10-20% VRAM shave, which is very nice. You need a specific branch, however, the aptly named FP8 one: https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/test-fp8
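The arithmetic behind those precision savings is simple to sketch. Back-of-envelope Python only: the ~860M parameter count for the SD 1.5 UNet is approximate, and real VRAM use also includes activations, other model components, and framework overhead, which is why the observed saving is 10-20% rather than the naive 50% you'd get from halving the weights alone.

```python
def model_vram_gb(n_params, bits):
    """Rough VRAM needed just to hold the weights (ignores
    activations, text encoder, VAE, and framework overhead)."""
    return n_params * (bits / 8) / 1024**3

sd15_unet = 860_000_000  # approximate parameter count of the SD 1.5 UNet

for bits in (32, 16, 8):
    print(f"FP{bits}: {model_vram_gb(sd15_unet, bits):.2f} GiB")
```

So FP32 weights alone are over 3 GiB, FP16 halves that, and FP8 halves it again; the rest of a card's VRAM budget is eaten by everything else, which is why the end-to-end saving is smaller.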

The bad news is that a lot of the cool new extensions like ControlNet are total VRAM hogs. Part of the reason I never use it is that I'd rather gamble and create 40 regular images in the time I could make 4 ControlNet ones. (that time includes setting up the images and models and so on)


that's awfully depressing for something people are having fun with


File:[SubsPlease] Hikikomari Ky….jpg (392.39 KB,1920x1080)


The OpenAI/Microsoft brouhaha is over with the usual business treachery and power struggles resolved for now. Altman is back after a bunch of employees threatened to quit. There's been a change of the board or something so presumably it's all people loyal to him now. I read theories that it was the board's last desperate attempt to retain some power, but it failed utterly and now Altman has full control.
I don't care about this kind of thing since it's just normal greedy monster stuff that's a regular part of the business world, with none of the named people actually involved with the technology, but as it concerns ChatGPT and LLM stuff it seems like there's not going to be any changes from this that we'll know about. It's kind of depressing that all these rich "entrepreneurs" are who we know instead of the people actually creating the breakthroughs, but I guess there's nothing new there. Steve Jobs invented computers and Sam Altman invented LLMs.
I read some people say it might be a loss for AI ethics or whatever, but I sincerely do not think anyone actually cared about that stuff. Those people would have left the company after it went closed source years ago and partnered with Microsoft and such. Those so-called ethical people became Anthropic, who created a model named Claude that was so infamously censored that its second version performs worse than the first in benchmarks. But Amazon poured money into them and now you can do whatever you want with it since they got their money.
So... yeah, nothing has changed. I hope local stuff gets better because I still don't want to rely on these people.


AI chat models love to recommend books that do not exist. Why is it so bad with books specifically?


File:Utawarerumono.S02E15.False….jpg (163.57 KB,1920x1080)

It's not exclusive to books. It's referred to as a "hallucination", in which it will confidently list things that don't exist. There's a story from months ago when some idiot lawyer asked it for legal advice and used it to cite precedent from court cases that never happened. I'm sure lots of kids have failed assignments for similar reasons.
People are prone to thinking it's truly intelligent and rational instead of effectively assembling new sentences from a large catalog of examples. A huge reason why text LLMs can work is that they don't automatically go with the best possible word, but will instead semi-randomly diverge into other options. I think the degree of randomness is called temperature?
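Yes, temperature is exactly that knob. It can be sketched in a few lines of pure Python: toy scores ("logits") over three fake tokens, where a real model does this over a vocabulary of tens of thousands. Low temperature sharpens the distribution toward the single best word; high temperature flattens it so runner-up words get picked more often.

```python
import math, random

def sample_token(logits, temperature=1.0, rng=random):
    """Softmax with temperature, then a weighted random pick."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# three candidate "tokens" with the first clearly favored
logits = [2.0, 1.0, 0.1]
random.seed(0)
picks = [sample_token(logits, temperature=1.5) for _ in range(1000)]
# near temperature 0 the model always emits token 0;
# at 1.5 the other tokens show up a noticeable fraction of the time
```

This is also why hallucinations happen so fluently: the sampler happily wanders onto a plausible-but-wrong word, and everything after it is generated to be consistent with that wrong word.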


I think that when it comes to using AI for improving video quality, those 4k AI upscales of anime do a pretty good job when there's no quality alternative (60 fps interpolation is still garbage)

For the most recent example I was able to find of a massive upgrade that far outpaces the original video source, I was looking at the OP for Dancouga Nova. Every video source for it looks more or less like https://www.youtube.com/watch?v=A4GIY9Lfpq4, high in artifacts or noise and extremely low res, so it looks like ass in fullscreen (I checked the DVDs). However, looking at the AI upscale, https://www.youtube.com/watch?v=-S5LeYcgrh4 , one can see a massive improvement when viewing it in fullscreen on a 4k monitor. The one drawback seems to be that there's a bit of blobbiness in some areas, but in almost every other way it beats the original. In fact I'd say that AI upscaling does a much better job on average, from what I've seen, compared to all the kuso upscaled BDs that anime companies have shat out for older stuff.


File:[Pizza] Urusei Yatsura (20….jpg (451.04 KB,1920x1080)

Yeah, that's not bad. I think the term "AI" is abused a bit much and this is really just a good upscaler. I guess if something like waifu2x is considered AI then this is too, huh. It's all about the denoising to make it 'crisp' and yet not create blobs. It's not like you're getting new information; it's just cleaning up the artifacts.
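The "no new information" point is easy to see with plain interpolation, which is the classical core of any upscaler before the learned denoising/sharpening is layered on: every inserted sample is just a blend of existing neighbors. Toy 1-D example in pure Python (a learned upscaler differs in that its invented detail comes from its training data, not the source):

```python
def upscale_2x_linear(scanline):
    """Double a 1-D 'scanline' by inserting the average of each
    neighboring pair -- no value outside the original data appears."""
    out = []
    for a, b in zip(scanline, scanline[1:]):
        out.extend([a, (a + b) / 2])
    out.append(scanline[-1])
    return out

row = [10, 20, 40, 40]
print(upscale_2x_linear(row))  # [10, 15.0, 20, 30.0, 40, 40.0, 40]
```

Every output value stays inside the range of the original samples, which is exactly why naive upscales look soft and why the denoise/sharpen step matters so much.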

In other news, tumblr, the company that famously killed itself in a day by banning porn leading to an exodus of artists is now going to sell all of its user data to OpenAI/Microsoft. The data stretches back to 2013 so while various stuff was deemed too evil to be on tumblr it's good enough to be sold.

This AI stuff is really getting ugly.


File:1703019150254928.png (1.15 MB,1444x1317)

There was a pretty funny incident around a year ago in my country.
Here, national universities don't have entrance exams, instead you get a final exam at the end of high school and you need to pass that exam if you want to enter any uni. So the time of the exam is flipped from start of uni to end of high school and everyone across the whole country does the same exam (for math and literature at least, med school has another exam for med stuff, etc.)

Anyway, last year, in the literature exam, there was some question about the plot of a book that's mandatory reading, and the question asked you to write the answer in words, so it wasn't just circling the correct answer. And what happened is that several thousand students all wrote the exact same incorrect answer, word for word. They all used chatgpt, of course, probably with a similar prompt, and it gave everyone the exact same sentence.
It was a huge scandal and it was pretty fun listening to literature professors' reactions. Apparently they'll be upping the security on checking phone usage during the test this year, but I'm expecting something similar to happen again lol


File:1585150848080.png (29.11 KB,186x183)

When I was young and got my first computer, even a little bit before that, I always had an infatuation with the idea of chatting with an AI. I narrowly avoided turning out an Applefag, I asked for an iPhone for one of my birthdays exclusively because of siri. It was too expensive at the time so I was spared an unfortunate alternative future, but I know for sure I'd be talking to my phone for hours on end even if it's a bad facsimile of the real deal.
I'm pretty happy with how things are going these days, to say the least. Lots of people are throwing around doomsday scenarios about how the hidden shadow elite will cull humanity using magic lizard methods activated via 5G or something, but I don't really care if AI is going to have a negative impact on society. I'm just content I get to actually try out a childhood dream I had, even if I grew out of that fascination over the years.



File:[Serenae] Wonderful Precur….jpg (365.3 KB,1920x1080)

For those unaware, the go-to joke for GPT3 was "What did the fish say when it hit a wall?" or however it went.
That 2023 state of affairs is no longer entirely true, although it's up to opinion. Claude3 is pretty good at humor stuff and it makes you wonder where it's scraping the data from (there's obviously lots of 4chan and forum stuff). It's a weird situation because it can't actually be novel since it's an LLM, and an important thing about humor is novelty. Basically, it's funny to you as long as the data it's referencing isn't directly known to you.

I'll be able to show some examples soon, I think...


File:e1dbc920aa.png (428.88 KB,1300x1265)

Some people said that Claude isn't good at coding stuff, but I like its tech analysis more


File:12f6f99659.png (170.03 KB,1324x418)


<- Opus


I wonder how I could feed it some information about newer tech problems I have a hard time understanding and have it digest them into youtuber-tier explanations


File:20240519_000805.jpg (438.66 KB,1922x2048)

One downside of learning JP to read VNs/manga and watch anime in harmony is that what I post while doing so will be inaccessible to a fair bit of people. So seeing the advancements of AI in making translation more real-time and convenient makes me happy and hopeful that in the future we'll have on-the-fly OCR translations that people who don't know JP can use.


File:[SubsPlease] Spice and Wol….jpg (227.71 KB,1920x1080)

Yeah, "live" OCR stuff is nothing new and people have been doing it for nearly a decade now, but having far faster stuff that's also a bit better (but still not great, contextual language and all that stuff) is really quite amazing. I didn't stop the Nosuri playthrough I was doing because of the translation, but because of the font being unreadable with OCR...
Well, maybe AI OCR stuff will progress, too. I don't think I could get away with sending GPT-4o thousands of screenshots without paying


I like the AI/live-OCR stuff but I worry about people using it to churn out lazy translations they don't bother to check. We're already seeing a lot of that and now some companies are trying to cash-in.

But I think it would be a very valuable tool for learning a second language, as long as it doesn't teach you bad habits. What I'm really looking forward to is live speech translation improving. Picking up kana and some basic kanji didn't take me that long, but learning how to speak like a native speaker and being able to understand a native speaker are very different matters. Especially when you do not have access to one IRL to practice with. Even then they're usually speaking slowly and not teaching you certain words and concepts (like internet slang). No Japanese teacher in an institution of learning is going to cover subjects like common otaku slang or curse words.

Then there is the issue of dialects. You could spend years learning one dialect and be totally unable to understand someone speaking the language in a dialect common just an hour outside of the major cities. The main barrier I had learning how to speak basic Japanese was the fact that our teacher couldn't understand our local English dialect well and we could barely understand her Engrish. Every lesson was incredibly frustrating, especially with a classroom of idiots making fun of her daily.


File:SUCKS.jpg (258.58 KB,1280x720)




