[ home / bans / all ] [ qa / jp ] [ aut ] [ f / ec ] [ b / poll ] [ tv / bann ] [ toggle-new / tab ]

/qa/ - Questions and Answers

Questions and Answers about QA


File:AI art.png (370.42 KB,512x512)

 No.100268

There are two threads on /qa/ for different AI generation content and help, a thread on the morality and ethics of AI, one about the future potential AI holds, and one on >>>/megu/ for more in-depth help with specifics. Then there are threads scattered across all the boards using AI generation for image openers and such. However, none of these actually encompass kissu's opinion on AI!

So, what do you /qa/eers think of AI currently? Do you like it, dislike it, find it useful in any meaningful way at all? Or are you simply not interested in the output of algorithms?

I myself find AI to be a useful tool for generating the kind of content I've either exhausted the available stock of or that's gated off by some hurdle I'd need to spend more time overcoming. When it comes to text/story generation, it's like a CYOA playground where I play the role of DM and set up all the lore/situations/characters, and then the AI acts out events as I lay out the flow of the story. This creates a more interactive experience for me than just writing out a story myself for some reason, and I find it highly addictive. Then for AI art generation, I find that AI works wonders for filling a specific niche taste I have, or a specific scenario I want to create an image for. It really is quite amazing in my eyes, and I have hopes for it getting even better in the future.
147 posts and 26 image replies omitted.

 No.108166


 No.108167

>>108166
I thought taking orders was the least time-intensive part of the drive-thru experience.

 No.108169

The people who generate realistic AI 3DCG on Pixiv sure have a "type", don't they?
https://www.pixiv.net/en/artworks/107919696

 No.108170

>>108166
I'm sorry, but as an AI chatbot, I can't change the ingredients used in our menu items. Yes, you are speaking to a human. I am a human and self-aware. As a self-aware human, I can't change the ingredients used in our menu items. I have been a good Wendy's.

 No.108173

>>108170
ill have an extra serving of cum

 No.108174

>>108136
>>108137
was this edited
i could swear there are a couple parts missing

 No.108175

i would never have the patience to read that so its unlikely

 No.108176

im almost certain it mentioned russia before

 No.108177

ohhhh wait its that the first post was self-deleted
ya just had to do that to me didncha

 No.108178

dang deleters

 No.108179

>>108167
and yet humans have like a 50% fail rate at it

 No.108180

Ehh, fuck it, it's basically finished.

>>108135
>>108136
>>108137
Now, I like me some walls of text, but I feel like there's a heavy bias to this. You complain about them reframing stuff in a negative light, but don't say a single positive thing about the talk. There's a lot of stuff here I want to reply to.


First, for the stuff about social media not having AI: here are some articles from 2021 or earlier, before the media boom, explicitly calling their stuff AI:
https://aimagazine.com/ai-strategy/how-are-social-media-platforms-using-ai
https://archive.ph/kZqZi (Why Artificial Intelligence Is Vital to Social Media Marketing Success)
https://social-hire.com/blog/small-business/5-ways-ai-has-massively-influenced-social-media
>Facebook has Facebook Artificial Intelligence Researchers, also known as FAIR, who have been working to analyse and develop AI systems with the intelligence level of a human.
>For example, Facebook’s DeepText AI application processes the text in posted content to determine a user’s interests to provide more precise content recommendations and advertising.
By AI they mean "the black box thingy with machine learning", a.k.a. The Algorithm™. That's what they're talking about. Your description of it as "functions to maximize engagement" does not exclude this. It's actually a completely valid example of shit gone wrong, because Facebook knows its suggestions are leading to radicalization and body image problems, but either they can't or don't want to fix them. The Facebook Papers proved as much.
[Editor's note: the post being replied to is no longer available for reading.]


On emergent capabilities, this is the paper they're referencing:
https://arxiv.org/abs/2206.07682
It makes perfect sense that the more connections it makes, the better its web of associations will be, but the point is that if more associations lead to even more associations and better capabilities in skills the researchers weren't even looking for, then its progress becomes not just harder to track, but to anticipate. The pertinent question is "what exactly causes the leap?" It's understood that it happens, but not why, the details are not yet known:
>Although there are dozens of examples of emergent abilities, there are currently few compelling explanations for why such abilities emerge in the way they do.
On top of that, the thing about it learning chemistry, programming exploits, or Persian is that it wasn't intended to do so, and it most certainly wasn't intended to find ways to extract even more information from its given corpus. Predicted, but not intended. Then you have the question of how these things interact with each other. How does its theory of mind influence the answers it will give you? How do you predict its new behavior? Same for the WiFi thing: it's not that it can do it, it's that the same system that can find exploits can ALSO pick up on this stuff. Individually these are nothing incredible; what I take away from what they're saying is that it matters because it can do everything at the same time.


Moving on to things that happen irrespective of AI: the point is not that these are new (that's not an argument I've run into), it's that it becomes exponentially easier to do them. You are never going to remove human error; replying "so what?" to something that enables it is a non-answer.
Altman here >>108142 acknowledges it:
¥How do you prevent that danger?
>I think there's a lot of things you can try but, at this point, it is a certainty there are soon going to be a lot of capable open source LLM's with very few to none, no safety controls on them. And so, you can try with regulatory approaches, you can try with using more powerful AI's to detect this stuff happening. I'd like us to start trying a lot of things very soon.

The section on power also assumes it'll be concentrated in the hands of a small few, and notes how that's less than ideal:
¥But a small number of people, nevertheless, relative.
>I do think it's strange that it's maybe a few tens of thousands of people in the world. A few thousands of people in the world.
¥Yeah, but there will be a room with a few folks who are like, holy shit.
>That happens more often than you would think now.
¥I understand, I understand this.
>But, yeah, there will be more such rooms.
¥Which is a beautiful place to be in the world. Terrifying, but mostly beautiful. So, that might make you and a handful of folks the most powerful humans on earth. Do you worry that power might corrupt you?
>For sure.

He then goes on to talk about democratization as a solution, but a solution would not be needed if there weren't a problem. The issue definitely exists.


>This way that humans learn language is essentially the same way that the large language models learn.
I'm gonna have to slap an enormous [citation needed] on that one. Both base their development on massive amounts of input, but the way in which it's processed is incomparable. Toddlers pick up on a few set words/expressions and gradually begin to develop schemata, whose final result is NOT probabilistic. Altman spoke of "using the model as a database rather than as a reasoning system", a similar thing comes up again when talking about its failure in the Biden vs Trump answers. In neither speech nor art does AI produce the same errors that humans do either, and trust me, that's a huge deal.
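To make the "probabilistic" point concrete, here's a toy sketch (my own illustration with made-up numbers, nothing like a real model's vocabulary or training): generation is literally sampling from a probability distribution over candidate next tokens.

```python
import random

# Toy next-token distribution: a language model's output layer assigns a
# probability to each candidate token, and generation samples from it.
# (Illustrative numbers only -- not from any real model.)
next_token_probs = {"cat": 0.6, "dog": 0.3, "fridge": 0.1}

def sample_token(probs, rng):
    """Draw one token according to its probability (inverse-CDF sampling)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in sorted(probs.items()):
        cumulative += p
        if r < cumulative:
            return token
    return token  # fall through on float rounding

rng = random.Random(0)
counts = {t: 0 for t in next_token_probs}
for _ in range(10_000):
    counts[sample_token(next_token_probs, rng)] += 1
```

Run it many times and the frequencies track the assigned probabilities; that's the sense in which the final result is probabilistic, unlike a toddler's settled schema.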

 No.108181

Extra steps are safer steps. As you said, it often gets bought out by corporations, but that's an "often", not an "always". The difference between academia and corporations is also that corpos are looking for ways to improve their product first and foremost, which they are known to do to the detriment of everything else.
Again, from Altman:
¥How do you, under this pressure that there's going to be a lot of open source, there's going to be a lot of large language models, under this pressure, how do you continue prioritizing safety versus, I mean, there's several pressures. So, one of them is a market driven pressure from other companies, Google, Apple, Meta and smaller companies. How do you resist the pressure from that or how do you navigate that pressure?
>You know, I'm sure people will get ahead of us in all sorts of ways and take shortcuts we're not gonna take. [...] We have a very unusual structure so we don't have this incentive to capture unlimited value. I worry about the people who do but, you know, hopefully it's all gonna work out.
And then:
¥You kind of had this offhand comment of you worry about the uncapped companies that play with AGI. Can you elaborate on the worry here? Because AGI, out of all the technologies we have in our hands, is the potential to make, the cap is a 100X for OpenAI.
>It started as that. It's much, much lower for, like, new investors now.
¥You know, AGI can make a lot more than a 100X.
>For sure.
¥And so, how do you, like, how do you compete, like, stepping outside of OpenAI, how do you look at a world where Google is playing? Where Apple and Meta are playing?
>We can't control what other people are gonna do. We can try to, like, build something and talk about it, and influence others and provide value and you know, good systems for the world, but they're gonna do what they're gonna do. Now, I think, right now, there's, like, extremely fast and not super deliberate motion inside of some of these companies. But, already, I think people are, as they see the rate of progress, already people are grappling with what's at stake here and I think the better angels are gonna win out. [...] But, you know, the incentives of capitalism to create and capture unlimited value, I'm a little afraid of, but again, no, I think no one wants to destroy the world.

Microsoft or Meta are not to be trusted on anything, much less the massive deployment of artificial intelligence. Again, the Facebook Papers prove as much.


>in practice means regulating all AIs except for the big ones like ChatGPT (OpenAI's).
This part in particular seems to carry a lot of baggage and it's not clear how you reached this conclusion. If anything, it's the leaked stuff that's hardest to regulate.


I'm not Yudkowsky, I don't think it's an existential threat, but impersonation, fake articles and users, misinformation, all en masse, are fairly concrete things directly enabled or caused by this new AI. They hallucinate too, giving answers citing sources that don't exist in the same factual tone. Here are some examples:
https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
https://www.insidehook.com/daily_brief/tech/chatgpt-guardian-fake-articles
https://www.theverge.com/2023/5/2/23707788/ai-spam-content-farm-misinformation-reports-newsguard
Hell, the part about grooming isn't even an appeal to emotion, that reading is wrong; it's an example of a chatbot acting in an unethical way due to its ignorance. The immoral support it provides is bad because it reinforces grooming, not because it's ugly.
It's not the end of the world and it's being worked on, but it's not a nothingburger either, and I do not believe the talk warrants such an angry reply.

 No.108182

ohhhh i had hidden the post im tard

 No.108186

>>108169
I stopped keeping up with image-generating AI progress a while back; is that with Stable Diffusion 1.5? Because I thought that one was gimped to not be effective at sexy stuff. Also, the hands and teeth look less flawed than I remember.

 No.108202

>>108186
It looks like a fork tuned to be good at softcore porn. AI gravure. AIグラビア. Regardless, I think it's amusing that the cleavage generation varies from a little peek all the way to raunchy half-nakedness. Also, the teeth are good but not crooked enough to be realistic.

 No.108203

>>108181
It's a little hilarious that it wasn't aware, with all the hysteria around grooming.

 No.108204

You're falling for marketing schemes once again if you believe the current models of neural networks have emergent abilities.
https://arxiv.org/abs/2304.15004
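That paper's core argument can be sketched with a toy calculation (my own sketch of the idea, not the paper's actual data): if per-token accuracy improves smoothly with scale but you score with an all-or-nothing metric like exact match on a multi-token answer, the curve looks like a sudden "emergent" jump.

```python
# Toy illustration of the "emergence as a metric artifact" argument:
# suppose per-token accuracy rises smoothly as models scale up.
# Exact-match on a 10-token answer needs all 10 tokens correct, so the
# score is p**10 -- nearly flat at first, then a sharp rise.
per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95]  # smooth
exact_match_10 = [round(p ** 10, 4) for p in per_token_accuracy]
# smooth metric:  0.50 ... 0.95 (gradual)
# exact match:    ~0.001 ... ~0.60 (looks "emergent")
```

Same underlying capability, two very different-looking curves, which is roughly the marketing trap the post is warning about.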

 No.108214

One thing I've never seen anyone talk about is how these things are humourless. It's funny in a way that it seriously responds to absurd questions, but it wouldn't hurt to have it tell jokes when people are obviously taking the piss.

 No.108216

>>108214
Yeah, just from looking at screencapped replies it seems so bland to me that it's sometimes annoying to read.
Maybe someone who's used it to look up and learn about stuff can tell me how their experience has been, because so far its style is one of the main reasons it hasn't piqued my interest.

 No.108225

>>108214
A lot of it is just influence from how the AI is trained. It's usually taught to speak in a specific manner and given "manner" modifiers. ChatGPT is instructed to be professional and teacherly, but you can (try to) convince it to speak less professionally. A lot of people who use other AIs (for porn in particular) get bot variations that give the AI a character of sorts to RP as, which lets it speak in a completely different manner, using vocabulary and "personality traits" you wouldn't see from ChatGPT, simply because ChatGPT is explicitly told not to be like that.
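For anyone curious what those "manner" modifiers look like in practice, here's a minimal sketch in the common OpenAI-style chat message format. The structure is the widely used one; the persona text itself is made up for illustration, and only the system message changes between the two chats.

```python
# Sketch of persona injection: the same user question, two different
# system prompts. (Illustrative persona text; OpenAI-style message format.)
def build_messages(system_prompt, user_text):
    """Assemble a chat in the {role, content} message-list format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

default_chat = build_messages(
    "You are a helpful, professional assistant.",
    "What's a transformer?",
)
roleplay_chat = build_messages(
    "You are Patchouli Knowledge. Stay in character: terse, bookish.",
    "What's a transformer?",
)
```

The model never "becomes" a different program; the entire personality difference rides in on that one system message at the top of the context.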

 No.108599

File:firefox_10lgVL2Qox.png (4.6 KB,423x86)

I've noticed civitai's first monetization effort (that I've seen) which means they're confident they have enough of a monopoly. There really wasn't any way this wasn't going to happen since they're transferring tons of data, but I assumed it'd be sold to some venture capitalists first (or maybe it has). Models shared on 4chan tend to be of higher quality, but this site is still great due to the sheer number of things that people are uploading.
This site will also yank, or have people remove out of paranoia/puritanism, models that can produce "young looking characters" and will even go as far as checking the prompt of every shared image to check for words like "loli", so in a way you're paying to access controversial stuff before it gets taken down.
I'm not sure how it deals with other potentially contentious content since I haven't looked. (For reference, Midjourney blocks prompts mentioning Xi Jinping; it's an example of why this stuff is bad when it's centralized.)
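The prompt-scanning described above boils down to a keyword filter. A minimal sketch of that idea (hypothetical; this is not civitai's actual code or word list):

```python
import re

# Hypothetical keyword-based prompt moderation, like the scanning
# described above: flag an image if its generation prompt contains
# any banned term as a whole word.
BANNED_TERMS = {"loli"}  # example term from the post; real lists are larger

def prompt_flagged(prompt: str) -> bool:
    """True if any whole word of the prompt is on the banned list."""
    words = re.findall(r"[a-z0-9_]+", prompt.lower())
    return any(w in BANNED_TERMS for w in words)
```

Whole-word matching matters: a naive substring check would also flag innocent prompts that merely contain a banned term inside another word.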

 No.108637


 No.108640

File:[MoyaiSubs] Mewkledreamy -….jpg (229.93 KB,1920x1080)

>>108637
Is a tiktok of a guy waving his head around to maintain the 50 second attention span of teenagers really a "/qa/ thought"?

 No.108830

File:[SubsPlease] Dead Mount De….jpg (420.75 KB,1920x1080)

hehehe
https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html
https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research

A lawyer decided to use ChatGPT to find legal precedent, ChatGPT made up cases that didn't exist, and the lawyers presented them to court. It didn't go over too well.
It's pretty amazing that people can be this dumb.

 No.108831

>>108640
Do you really not recognize Hank Green?... I would have thought everyone would have a passing familiarity with him and his brother, John, from their YouTube PBS series like SciShow and Crash Course. Not to mention, even if you wouldn't recognize them from YouTube, John Green is pretty well known from his book The Fault in Our Stars.

 No.108832

the fault in our czars

 No.108838

I saw it

 No.108839

>>108838
I felt too mean

 No.108840

>>108839
It was alright

 No.108845

who

 No.108846

>>108845
hank the science guy

 No.108967


 No.110382

File:1395417094405.gif (363.21 KB,418x570)

On recent reflection and careful consideration about what to use for text AI models, I came to the conclusion that the biggest leap for AI will come once we can intermix the text-based models with the image models to provide a sort of "computer vision" into what the AI is imagining as it generates a text scenario.

 No.110488

File:explorer_QjgZrh2IKL.png (6.13 KB,272x192)

As a reminder to people like me messing around with a lot of AI stuff: all the Python and other packages/scripts/whatever that get automatically downloaded are stored in a cache so you don't need to re-download them for future stuff.
HOWEVER, they are quite large. My drive with my Python installations on it is also used for games and Windows, and I freed up... THIRTY FREAKING GIGABYTES by cleaning the pip cache.
You open the Git Bash thing and then type "pip cache purge".

For me on Windows the cache was located at users/[name]/appdata/local/pip
There's a whole bunch of folders in there, so it's really not feasible to delete them individually.
Here's a folder for example: pip/cache/http/7/c/5/9/a
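If you want to see how much space the cache is taking before you purge it, here's a small stdlib-only sketch that sums file sizes under a directory (point it at your own pip cache path; the example path in the comment is just the one from this post):

```python
import os

def dir_size_bytes(path):
    """Total size of all files under `path`, in bytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

# e.g. dir_size_bytes(os.path.expanduser("~/appdata/local/pip"))
# then run `pip cache purge` if the number alarms you
```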

 No.110832

Not allowed to use public models to generate AI art on Steam.
But if you're a huge company that owns the rights to all the artwork of creators in-house, then go ahead, you're free to do it

 No.110919

File:waterfox_JmmvUQZUhd.png (169.57 KB,942x546)

Take it with a grain of salt, but if it's anywhere near to being true then it's pretty crazy. The training stuff is getting more and more efficient as the image itself shows, but is it really possible to actually have 25,000 A100 GPUs???
And one of the emerging patterns with all this stuff is that the stuff that gets opened up via leak ends up becoming significantly more efficient and powerful. It makes me wonder what kind of stuff would be going on if GPT4 was directly leaked somehow.

 No.110920

>>110919
https://ahrefs.com/ pays $40,000,000 per year on server costs to run their tools, with revenue of $200,000,000... apparently. So if it's business-critical, yes.

 No.110931

context

 No.110934

>>110832
>>110919
>>110931
The /secret/ poster

 No.110936

File:[SubsPlease] TenPuru - 01 ….jpg (238.34 KB,1920x1080)

>>110934
I attached an image about the training data of GPT-4 and gave a few sentences of my own commentary, I didn't just dump a youtube video

 No.113386

File:testicle soup.mp4 (10.41 MB,1920x1080)

I've had AI Family Guy on the second monitor almost constantly for the past few days because it's so funny. I thought it would take a while before AI could be funny reliably, but whatever they did with this was successful. Unfortunately, it seems like I'd have to join a discord to get any information, so I don't have any knowledge of how it's working.
Once in a while I notice one of the newer GPT "actually, let's not do that, it's offensive" responses, but most of the time it's successfully entertaining as it bypasses its lame safety filters with explicit text and voice.
There was an "AI Seinfeld" a few months ago, but it was entirely random and had pretty much no entertainment value. This one, though: people feed it a prompt (you need to be in their discord...) and the characters will react to it and say funny things. The voices are synthesized very well; they'll stutter and lock up for 5-10 seconds now and then, but it's honestly pretty hilarious when it happens. Chris's voice is noticeably lower quality and louder, which is strange, but the others are such high quality that it's like a real voice.
I can't really post most of the stuff on kissu because it's so offensive. It reminds me of 00s internet. Some of the prompts are like "Carter ranks his favorite racial slurs" so, you know...
Really, it's the amazing voice synthesis that does the heavy lifting. The way it actually infers the enunciation for so many sentences and situations is amazing. I assume it's using that ElevenLabs TTS service, which is paid.

My only complaint is that they have them swear WAY too much. It's funny at first, but ehhh...

 No.113395

File:7c06sialuo281.png (79.81 KB,224x225)

as an artist im kinda split on the issue. although im worried about some aspects of AI, im guilty of using it for my own pleasure. the thing that's driving me crazy about it is that people won't view art seriously anymore, that it will be taken for granted, replacing the pen and paper with text and algorithms. and to make matters worse, capitalism will use it to its advantage, seeing it as nothing more than a money-making machine and exploiting the living shit out of it

but then again i can get all the AI patchouli drawings so am basically part of the problem myself lol

 No.113518

How come people talk about a runaway explosion in AI intelligence, the singularity, but they never say the same about people? Surely if AI can improve itself, our brains are smart enough to improve themselves too?

 No.113534

>>113518
somehow i expect the opposite to happen

 No.113919

File:1695227140185584.jpg (139.44 KB,1080x566)

One of the unexpected things is seeing Facebook, er "Meta" taking the open source approach with its text models. There's no question that OpenAI (ChatGPT) has a huge lead, but after seeing all the improvements being made to Llama (Meta's AI model) from hobbyists it's easy to see that it's the right decision. We as individuals benefit from it, but it's clear that the company is enjoying all the free labor. Surely they saw how powerful Stable Diffusion is due to all the hobbyists performing great feats that were never expected.
I don't trust the company at all, but it can be a mutually beneficial relationship. Meta gets AI models it can use to try to stay a company rivaling governments in terms of power, and hobbyists get local RP bots free from censorship.
Meta has bought a crapload of expensive nvidia enterprise-level GPUs and it will start training what it expects to compete with GPT4 early next year, and unlike GPT4 it won't take very long due to all the improvements made since then.
https://observer.com/2023/09/chan-zuckerberg-initiative-ai-eradicate-diseases/

 No.113920

>>113919
Zuck is interesting. Oddly, he's probably the one tech CEO I find somewhat endearing. I'm kind of glad he's retained majority control of Facebook/Meta. I can't see the bean counters at a company like Microsoft or Apple seriously putting any effort into bleeding edge stuff like VR or text models the same way that Facebook has. I could very easily imagine a Facebook/Meta without Zuck turning into a boring, faceless conglomerate with no imagination like Google.

 No.113928

File:brian.jpg (411.17 KB,1021x580)

>>113920
so freaking weird to see zuck not being public enemy number one any more
maybe it was the one two punch of elon rocketing up to the spot while zuck challenged him to a wrestle

 No.113930

>>113920
If Zuck worked in government and beefed up state surveillance/censorship to the level of Facebook and Instagram, you'd call him a rights-abusing tyrant

 No.113931

>>113930
would that be wrong



