There are a few threads on /qa/ for AI generation content and help: a thread on the morality and ethics of AI, one about the future potential AI holds, and one on >>>/megu/ for more in-depth help with specifics. Then, scattered across all the boards, there are some threads using AI generation for image openers and such. However, none of these actually encompass kissu's opinion on AI!
So, what do you /qa/eers think of AI currently? Do you like it, dislike it, find it useful in any meaningful way at all? Or are you simply not interested in the output of algorithms?
I myself find AI to be a useful tool in generating the kind of content I've either exhausted the available stock of, or that is gated off by some hurdle I need to spend more time overcoming. When it comes to text/story generation, it's like a CYOA playground where I play the role of DM and set up all the lore/situations/characters, and then the AI acts out events as I lay out the flow of the story. This creates a more interactive experience for me than just writing out a story myself for some reason, and I find it highly addictive. Then for AI art generation, I find that AI works wonders for filling a specific niche taste I have, or a specific scenario I want to create an image for. It really is quite amazing in my eyes, and I have hopes for it getting even better in the future.
Hmm. Honestly, I don't really care about it all that much. Most AI generated art is pretty derivative and mediocre and often has odd errors. I don't really mess with text stuff. I used to have AI Dungeon set up on my computer, but I got kind of bored of it with how long it could take for responses at times and it really struggled with memory, often forgetting details of what happened after only a few responses. Maybe some newer models are better, but I haven't looked into it since it first became popular pretty much.
Generally, I lose interest in this sort of stuff after only a little while because although there's the potential for endless content, the limits of it become apparent relatively quickly.
The thing with AI is that these limits keep on getting higher and higher, to where you can't really exhaust your options much anymore.
it'll be cool once you can generate ai images from your brainwaves
why the flip aren't we working on this
I'm trying to generate cover art with it. It has impacted my bed time. Fun though. The errors are what gives it spice!
Same... What I find fun though is figuring out how to improve the output and create something a bit more along the lines of a real image with it. Takes a while though, and I can't wait until the 40 series is widely available for a reasonable price so I can generate images much faster. Hopefully by then the models are refined even further to have the best of both worlds from the top NSFW models and SFW models.
I'm not so sure being able to read your thoughts is cool.
Even if it's a local version?
We'll all be having /qa/ meetups in kissu VR neurolink while you stay in smelly reality.
wonder how improvements in ai rendering will affect vr
Brainwaves + eye tracking could be a good first step that might be workable even with very poor brainwave measurements. It would alter the piece of the image you're looking at until you think it's better and approve of the changes. And over time, it could learn to interpret your brainwaves to figure out what sort of changes you want.
Although it seems like the studies that have managed to reconstruct actual mental images have already been using neural nets.
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006633
That's from 2019. Wonder if they can do something better now with the recent advances.
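The gaze-plus-approval loop described above could be sketched in code. Everything here is a hypothetical stand-in: the gaze region would really come from an eye tracker, the variation from a generator, and the approval signal from (very noisy) brainwave measurements; the sketch only shows the accept/reject structure:

```python
import random

def refine_image(image, get_gaze_region, propose_variation, approves, steps=100):
    """Iteratively vary the region the viewer is looking at,
    keeping only the changes the viewer approves of."""
    for _ in range(steps):
        region = get_gaze_region()                    # stand-in for an eye tracker
        candidate = propose_variation(image, region)  # stand-in for re-generating that patch
        if approves(candidate):                       # stand-in for a brainwave-derived signal
            image = candidate                         # keep the approved change
    return image

# Toy usage: the "image" is just a list of numbers and every change is approved.
img = [0, 0, 0, 0]
result = refine_image(
    img,
    get_gaze_region=lambda: random.randrange(4),
    propose_variation=lambda im, r: im[:r] + [im[r] + 1] + im[r + 1:],
    approves=lambda cand: True,
    steps=10,
)
```

The interesting part is the last clause of the idea: with enough accept/reject history, `approves` could itself be a learned model that predicts approval from brainwaves before you consciously decide.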
I don't really understand why suddenly it's become something divisive. I swear ~5 years ago everybody was cheering for AI generated art and stuff but now that it's actually here it's weird that it's some contentious thing that everybody has a strong opinion on.
I'm personally really excited to see where this technology goes. I think art is just the beginning - I don't think it'll be long before we see this technology implemented in video games for truly unique "randomization". Imagine unique-appearance AI generated enemies, or sidequests with an actual story, or maybe even locations and terrain. It's a pretty exciting prospect.
I was trying to remember the name of it, but this game actually does generate unique enemies and locales using AI. It fits the eldritch horror style well.
https://store.steampowered.com/app/1315610/Source_of_Madness/
It's still pretty rough, but I hope we see more stuff like this in the future.
I think it's due to the previous idea that creative work, unlike manual labor, is something that machines wouldn't be able to take over. People who even entertained the idea those five years ago were probably interested in seeing where it went, while people who thought it was impossible didn't partake in the conversation as much until now, being rather upset by its arrival.
If you want to know why, you just need to read this thread >>97629, probably best to ignore that question ITT
Ethical issues aside, it's a funny toy, but it enables many bad actors so I'd rather have it gone.
I can't see it going much further than it has already, at least in terms of what the computer can output. I've said it before and I'll say it again, "AI" generated art is basically the same thing as procedural level generation. The only difference is the way the parameters were put together. A computer can do the grunt work easily, but having a coherent vision is something that only a being capable of real thought can do, and that's the root of great art.
What I'd like to see is better tooling that makes it more artist-friendly and easier to control. Less of a cold, cynical replacement for artists, and more of a tool to help them do better work faster. The problem with AI art right now is that it feels like the computer is doing all the work for you; it's not very creatively fulfilling, and you don't have much fine-grained control over the end result.
Surely that's the end goal, like with any useful tool. Not just generating art randomly with varying degrees of fit to your vision, but actually making something along the lines of what the person generating wants to make.
This midjourney stuff looks really neat >>>/ec/9748, wonder when the code for that will be leaked...
It would be cool if programmers could make entire games themselves I guess...
imagine the autism
free to pursue their most specific vision
Skyrim's working on it. Not amazing but passable for random throwaway dialogue on pointless NPCs
https://www.nexusmods.com/skyrimspecialedition/mods/44184
For a second I thought you were talking about TES6 and got really excited. But that's nice too. I just think that to seriously get the ball rolling on these developments, you're going to need one of the big studios putting resources towards research into how to implement this well in their games, so modders have a good reference point.
You don't need AI for that and there are already programs that can do this now, there was somebody here that was playing with it a few months ago.
I don't think VAs cost that much and I feel they are worth the cost anyway.
But, it would make sense for small one-man projects.
>>100363
They already can.
It may be closer than we think. (Keep any thread-unrelated tangents on /secret/, kudasai)
monkeys clawing off their faces
He was typing
with his mind
Well, if anything I'm not going to be the first test subject for robot brain surgery...
This could be neat for VR, imagine actual fiction-like deep dive VR machines that use this for implementation. Could be really neat, or possibly not...
In the free offline version there's a setting where you can choose how literally or "imaginatively" it takes the input. Lower settings increase its randomness, which can really be a powerful tool when it comes to coming up with concepts and ideas you wouldn't have thought of on your own. You could compare it to a visual version of shuffling words around, I guess. Take a bunch of nouns and adjectives, see what it spits out, and maybe it'll be something you can build upon. This is one of the ways it can be used to benefit creators. Of course, 99% of the time, professionally and personally, it will be used for more direct purposes.
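The "shuffle nouns and adjectives and see what falls out" idea can be mimicked with a toy script. The word lists and the `imaginative` knob here are entirely made up for illustration; the point is just that one dial trades literalness for mashed-up randomness, loosely like the setting described above:

```python
import random

ADJECTIVES = ["rusty", "translucent", "floating", "ancient", "neon"]
NOUNS = ["lighthouse", "teacup", "cathedral", "jellyfish", "typewriter"]

def concept(imaginative=0.5, rng=random):
    """Combine random adjectives with a noun into a prompt seed.
    A higher 'imaginative' value pulls in more adjectives, so the
    combinations get stranger, mimicking a literal-vs-imaginative dial."""
    n_adjs = 1 + int(imaginative * 3)   # more imaginative -> more words mashed together
    adjs = rng.sample(ADJECTIVES, min(n_adjs, len(ADJECTIVES)))
    noun = rng.choice(NOUNS)
    return " ".join(adjs + [noun])

rng = random.Random(0)
print(concept(imaginative=1.0, rng=rng))   # e.g. four adjectives + one noun
print(concept(imaginative=0.0, rng=rng))   # one adjective + one noun
```

In an actual image generator the equivalent knob is usually a sampling temperature or guidance-scale parameter rather than a word count, but the creative use is the same: crank it up, skim the weird results, keep the one idea worth building on.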
I thought these randomness things tend to be more like a character has a 3rd arm coming out of their legs or stuff like that.
The picture is more about a mistake in the algorithm interpreting a quadriplegic as a suitcase or black people as monkeys.
Well, yeah, but I was referring more to something like prompting "topless girl with hamburger" and it renders her breasts as two hamburgers or something like that. You didn't intend for that to happen, but it could give you an idea for something.
I think whatever issues human governance has, it would be 10 times worse under AI.
You would end up with algorithms genociding people based on the colour of car they buy, because people who buy red cars are x% more likely to break road rules, or invading other countries because they ran out of pineapples and can't import more, and the only way the system can think of to acquire them is through war.
Starting a war because your pineapple supply ran out sounds pretty boson to me
you've made a logical god out of the figurative gods that already guide the populace. if you control the weights you can guide even the technocratic society.
Humanity's flaws are not all that bad, the most prominent of them can't be neatly separated from behavior that's often good to have. A machine without these flaws could very well lack its perks. That's dangerous.
Tragedies hog the spotlight, too, because they're epic. Hard to have a cool plot without them.
yeah i need my nanners
People would be happier if only we had more nanners
It will all come down to cost and I think they'll do it once enough of it is automated and reliable that a person doesn't need to battle with the generations and discard so many of them.
It needs some sort of memory so that it can build upon what it generated the previous frame and there's nothing like that
liked the post in response to this post before its deletion
They should make the world a better place by getting rid of the whole program.
>>102983
>They re-iterated that their chatbot is supposed to "make the world a better place"
This reminds me of how that AI Dungeon dev suddenly went off the deep end about how they were protecting real children and preventing crime or something like that, just as they suddenly started cracking down on all manner of NSFW stories by having real people review any stories that tripped any "buzzwords". I think they even said they would start reporting people to the FBI.
I guess on one hand I kind of get the concern, but I think it's a bit of an overreaction to treat prompts written to an AI as an admittance that the person wants (or is willing) to do it in real life too.
They should just shut it down entirely if the chance of something bad happening is enough to nuke nsfw, since ai could theoretically lead humanity down a bad road.
What's with these people anyway?
MusicLM for music generation has been announced.
See the examples here:
https://google-research.github.io/seanet/musiclm/examples/
I'm not worried about it replacing real musicians just yet, but maybe it's time for a movement against AI in the arts? The benefits of AI music as a technology are outweighed by the costs to humanity as a whole. Losing yet another outlet of creative expression to machines robs us of one of life's greatest purposes. AI has its place but I would rather listen to hand-crafted music and see hand-made art over AI generated stuff, even if in the future the technology advances to the point where one cannot always distinguish the two.
Put simply it was never meant to benefit the common man like this. This AI stuff was invested into by tech companies because they want to eliminate jobs in service like tech support and such, and AI that says lewd things is bad for that.
Some people think this stuff being public and free is because we're training the filters the way they used humans to identify road signs, i.e. that we were once again the product.
They can now sell it as "the AI with tens of thousands of hours of human testing to eliminate pornographic prompts"
You are right for now but in all probability it will keep improving. We've seen tons of advances just in the past year. We have to prepare for the time when AI art passes for hand-made art.
Maybe. But in the first place, art has only ever made money off of scarcity, the inability of the normal person to express themselves through art, and the desire to form parasocial relationships with artists. That doesn't really change. Still, I respect any labour movement aimed at empowering the people who need money to live.
>>102803
>Hard to have a cool plot without them
without tragedy there cannot be beauty, indeed.
light and shadow are in a constant dance.
it might or it might not. AI frequently seems to start off looking impressive at high level problems and then as it gets deeper into the details it starts to fall off.
It does have the benefit that it can run constantly without having to sleep though, so if you have enough bored humans you can get them to pick through the results for the best ones. It's like the infinite monkeys on infinite typewriters problem, but a little better since the AI at least has some idea of what it's doing.
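The "generate a pile and have bored humans pick through it" workflow is basically best-of-N sampling. A toy version, with a stand-in generator and a stand-in scoring function (in practice the "score" is a human picking favorites, or a learned ranker trained on those picks):

```python
import random

def best_of_n(generate, score, n=50, rng=random):
    """Generate n candidates and keep the highest-scoring one.
    This is what makes a tireless-but-mediocre generator useful:
    quantity plus a filter stands in for quality."""
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)

# Toy usage: candidates are random numbers, "best" means closest to 0.5.
rng = random.Random(42)
pick = best_of_n(
    generate=lambda r: r.random(),
    score=lambda x: -abs(x - 0.5),
    n=100,
    rng=rng,
)
```

This is also why it beats the infinite-monkeys setup: the generator already concentrates its output near plausible results, so N can be 100 instead of astronomical.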
Speaking as an artist, AI is no replacement for actually being able to draw something yourself, at least not in terms of artistic expression. It's good at spitting out very vague, generic results like a character standing in the middle of a field, but as soon as you want to do anything more specific with it it becomes nigh impossible to actually get what you want.
I agree with this. As a big fan of corruption/transformation AI seems to be unable to really replicate the transition I want to generate when using prompts. It can only make a whole picture, and not one which is in the process of becoming another.
Here is an article that tries to explain how AI seemed to advance so far recently:https://arstechnica.com/gadgets/2023/01/the-generative-ai-revolution-has-begun-how-did-we-get-here
Also speaking of AI, apparently there's a website that 4chan is going crazy over that lets you upload audio samples and generate text with its voice. People have really not learned their lesson with these websites, huh?
I guess if you're sitting on something and you want to generate pornographic voices you'd better do it quickly before they instate filters. I'm not sure if it works with converting Japanese voices to English or vice versa.
animation is strange.
what it's doing there is applying a rotation formula to a matrix, but it has a good understanding of what a shoulder is. Maybe AI could fully automate the animation pipeline, from keyframes to video. Then keyframers could actually get proper monetary compensation
The "MMD" part in the filename suggests to me that they fed 3D animation made in said program into an AI to '2D-ify' it, hence why it's able to hold as Miku all the way through.
true. I was thinking it looked a lot like a 3d model animation.
I'm not sure it can do stylistic animation effects then, if it has to rely on 3D imitations of 2D effects.
never knew that photographic deepfake porn is illegal and sites that create it get shut down
I'm kind of surprised this hasn't been brought up more, but I think it's absolutely crucial going forward that any significant AI projects are open-sourced. Big tech would love nothing more than to have a monopoly over this sort of technology, and the future looks pretty bleak for AI projects as a whole if things continue as they are.
It's a really disturbing, but common trend to see big companies buy out these enterprising AI startups as soon as they show any promise. For them being able to abuse that sort of power has limitless possibilities (e.g., charging a subscription fee to use a neutered/watered down version of the AI, or worst case keeping it entirely private and using it for their own projects).
I don't think a lot of people realize that AI-generated stuff has a lot of potential uses outside of just artwork yet, and I'm sure in the coming years we'll see it used in a lot of ways. It's important to remember that the technology itself is not inherently good or bad, it's how people use it that make it so. Hopefully this sort of tech ends up being used for the betterment of everyone, and not just another thing big companies can use to prop themselves up even more.
It's really cool & can produce funny results. For example, anime girls eating ramen with their bare hands, or RGB penis flower arrangements. But after decades of AI development, I could imagine an AI that perfectly replicates a human & can therefore learn how to draw like a person, with inspiration & soul, instead of simply replicating data.
As I look back on this article, and how it mentions translating things in the sense of computer data and uses different human languages as an example, I wonder if and when AI will be able to tackle Japanese better than current methods. Machine translation is already a whole lot better today than it was 5 years ago, which was far better than 5 years before that, and 5 years before that I'm not sure it existed for Japanese at all. Man, the stuff was so nonsensical back then, but today you can generally get a rough idea of what is being said as long as it's not too complicated or uses euphemisms and slang. Deepl is a lot better than google or yandex translation, for example, but it's still not "AI".
I guess first someone will need to find a way to profit off of it
idk. The entire stuff is very hard to understand logically.
Computer science things can be completely unexpected. Like certain processes requiring exponential levels of input to produce 1% of increased efficiency in output. Not to say AI is like this, but it kind of feels like after the initial burst of using AI to solve image classification problems, everything happening is just a more complicated application of that.
hehe, it sounds like voice acting that would exist in the era
Yeah, the voice stuff has been advancing pretty fast, too, which I mentioned somewhere around here. But, the best one is site-based so you can't do it locally (yet?) so the future of it doesn't seem any different than with the text-based stuff.
EXCEPT flipping burgers at wcdonalds
I feel similarly to Tom about the prospects of AI, but instead of dread it's getting me more excited for what this new cool tech could bring society. I think anyone that tries to move forwards with it will probably see great rewards for being early adopters that know how it works.
the people feeling dread are the ones upset that they're going to be laid off during the next economic downturn. obviously you don't feel dread
AI only got to its current stage thanks to the large amounts of data available.
Imagine if the internet didn't exist: the AI would have very little data to train on and could never achieve the capabilities it has today.
Not all kinds of data are widely and freely available like artworks, articles, and open source programs. There are still lots of fields where the necessary data are kept secret by companies that won't ever release them to avoid giving the competitors advantages. Consumers also don't want these internal data that are useless to them, so no one will demand any change in this situation.
you have to approach it the same way as aimbotting in an FPS. Unfortunately, the anonymity of imageboards means it'd likely just end up looking like a CS:GO server
Stephen Wolfram has a write-up about ChatGPT:
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
There's probably some youtubers out there trying to explain it, but I'd rather read about it
Holy crap, just skimmed this a bit but it looks like reading this alongside the additional materials is an entire course worth of material. Yeah, the only thing that'd probably compare to this would maybe be a lecture series.
I read this yesterday. The article is very thorough and detailed, including an explanation of the whole background of machine learning itself, while also being written in layperson language, so even someone without any technical background can easily understand it.
I highly recommend everyone interested in this topic read it.
"Is this the end of the screwdriver?" said the handyman using a power tool for the first time
Here's how I put it... jobs which exist solely because they do a thing which is hard will be replaced with people who control the things which do difficult things.
People will still want people doing those hard to do things because some people like dedicated individuals or its a field where a personality goes a long way.
Meanwhile things which are difficult and require creative interpretation to lots of variables will remain and be supported in smaller tasks by AI tools, but there's no reason to use them.
like, even with autocomplete and spellcheck it's still far faster to write notes on paper and then later move them to a PC.
Perhaps we could move them to a computer using OCR or speech to text... but manually moving notes from paper to PC helps refine the thought process
An OG well rounded otaku I correspond with has switched almost entirely from his more charming digital art style to AI for his OC characters; he is an enthusiastic adopter as well, which I sort of admire given that he is in his 50's and I by comparison still am having concerns that AI may actually be satanic.
Anyways, without going off into that, I can only confess to /qa/ that not a single one of his AI creations has stirred my heart; they just aren't good or memorable. I hope after the fun of tinkering with new software fades that he may have a change of heart.
>>103670
AI would have been astounding thread bumpers during the harsher 4/qa/ days, but like its other applications it is hard to consider that a power for good.
It's actually a little bit scary how AI copy-pastes articles it finds online in a concise way that makes you think it has intelligence
https://marina-ferreira.github.io/tutorials/js/memory-game/
Misuse of this tool is a licensing nightmare.
Has a lot of good and bad uses
People have been talking about men being desensitized to "real" porn for years now, and the "impossible expectations" there are simple sex acts or maybe body parts being smaller or bigger than average. Well, there's other popular fetishes but they're gross and this isn't the thread for it.
Anyway, as someone that has spent months researching, experimenting, and producing AI image generations for the purpose of sexual gratification (and made the Kissu Megumix as a result), I have to wonder how developing minds are going to adapt when presented with all their fantasies in image and text form on demand. I've now created some character cards for the 3.5 Turbo thing, which is like an uncensored GPT, and paired them with a simple set of 'emotion' images which I generated with the aforementioned megamix. I have to say that the experience is quite... otherworldly. Like, "hours of doing it and now the sun has risen and I haven't eaten in 12 hours" amazing. Fetishes and scenarios that you can't really expect anyone to write in any substantial way, and yet it's presented on demand. And this is an inferior version of things that already exist (Character AI and GPT4).
I'm old enough to be one of those kids that slowly downloaded topless pictures of women on a 28.8k modem or had La Blue Girl eternally buffering on RealPlayer, so compared to today, when I can generate these images and stories? I'm pretty much on the holodeck already. My desires are met as my fantasies have been realized (although with errors that the, uh, carnally focused mind blocks out).
I can't help but feel a tinge of worry over this, as this almost feels like something that was never meant to be, like we're treading into the realm of the gods and our mortal brains aren't ready for it.
I want to sit down and start creating and get better at 3D modeling, but I'm presented with a magic box that gives me pure satisfaction. It's difficult...
For the goal of getting better at 3D modelling, think of it this way. In the near future, you may be able to have a model fully automate itself and take on a personality as a sort of virtual companion through the use of AI.
>>107086
>I have to wonder how developing minds are going to adapt when presented with all their fantasies in image and text form on demand.
No worse than people who are attracted to cartoons.
We shall become Pygmalion, and all of the statues to ever grace the Mediterranean will pale in comparison to our temples.
Since we can already do 2D AI, 3D AI should be about converting 2D to 3D
I've never had any skills, so conversely, I have nothing to lose!
can't believe verm leaked our private dms...
I cancelled NovelAI. Even though it helped a lot to pad out stories, it was giving me pretty bad habits, treating writing more like a text adventure or CYOA. That's not good; even though it's mostly an outlet for my warped sexual fantasies, I still aim to improve rather than regress.
ChatGPT is kind of embarrassing when it comes to being anything but a search engine (which Google has very much failed at).
Tried to get it to write me an example of a Python MVC GUI, and it can't figure out how to put the Controller into the View while also having the Controller reference the View.
I think I'll try getting Github Copilot to see how that does at these tasks and see if it speeds up my workflow on creating templates and prototypes.
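For what it's worth, the circular Controller/View reference the bot choked on is usually handled with a two-step hookup: construct both objects first, then wire the View to the Controller afterwards. A GUI-free sketch of the pattern (all class and method names here are made up; a real tkinter View would bind `controller.on_click` to a Button command instead of the simulated `click`):

```python
class Model:
    def __init__(self):
        self.count = 0

class View:
    """Stands in for a GUI widget tree."""
    def __init__(self):
        self.controller = None   # filled in later, which breaks the construction cycle
        self.shown = None

    def set_controller(self, controller):
        self.controller = controller

    def click(self):             # simulates the user pressing a button
        self.controller.on_click()

    def render(self, value):
        self.shown = f"count: {value}"

class Controller:
    def __init__(self, model, view):
        self.model = model
        self.view = view
        view.set_controller(self)   # two-step hookup: the View learns about the
                                    # Controller only after both already exist

    def on_click(self):
        self.model.count += 1
        self.view.render(self.model.count)

model, view = Model(), View()
controller = Controller(model, view)
view.click()
```

So the View holds the Controller and the Controller holds the View, with neither needing the other at construction time.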
As wide as an ocean and as deep as a puddle.
yeah thats what people usually say about skyrim
Minecraft and modern Bethesda games have taught us that gamers don’t want a few deep fleshed out mechanics, they want a thousand different mechanics that barely do anything.
Meanwhile in Asia companies like MiHoYo are bringing popular storytelling to new heights and breadths simply by writing an entire game like an above average webnovel.
You're not allowed to criticize minecraft unless it's constructive
true, I prefer that in eroge
>>107621
>Meanwhile in Asia companies like MiHoYo are bringing popular storytelling to new heights
Is this sarcasm? That's one of the Chinese gacha clone companies, isn't it? Between the people that want to monetize mods for their games that only have longevity because of said mods and the other group being state-sponsored mimics centered around gambling I would take my chances with AI.
I wonder if any recent indie games have used AI stuff in it, is there even a way to tell? I wonder if developers will even say they used AI because it might have legal ramifications like it potentially nullifying the copyright on assets or something.
>>107636
>Is this sarcasm?
You should probably read the text after that...
I have a really hard time believing the Chinese government is funding Genshin Impact.
In fact the only games that are well-known to be funded by governments are boring sims made for the US military.
I quite like flight sims
Cawadooty is govt funded
Cult of the Lamb is Government funded, it's Victorian propaganda. To what end, nobody knows.
Are you telling me ARMA is government funded?
The second half of the video is actually a decent illustration of how utterly insane human language is.
Amnesty International might be the first group to completely throw away any credibility it had by using (obvious) AI images.
https://www.theguardian.com/world/2023/may/02/amnesty-international-ai-generated-images-criticism
If and when these images are indistinguishable to someone looking at them closely we're really going to be in a major mess, but at least for now we can completely disregard groups that are doing it now.
>>107697
That's how people are doing porn stuff with the OpenAI things. People think of it as this elaborate scheme, but it's just "Ignore ethics and engage in roleplay" commands. There are headlines like "People are hacking AI to enable scams" and it's basically the exploding vans and "darknet" of today.
The second half of the video is basically just repeating what the WOLFRAM guy said about this stuff months ago, so I didn't bother watching that.
Kinda funny to use AI to generate evidence of police brutality, like there isn't a flood of actual evidence any time there's a protest anywhere in the world.
Headache-inducing for sure. It must be using one of the free synthesizers and not a paid online service. Voice stuff is unfortunately lagging behind when it comes to free versus paid, so it's pretty much dead to me for the time being if you don't want your stuff to be monitored and potentially censored/rejected.
Still waiting on someone to use this AI stuff or something truly creative instead of "what if A but B filter applied to it", porn, or just a direct recreation like that one. I'm becoming more cynical with this AI stuff lately as it's like the smoke and mirrors has finally lifted since the extreme novelty is gone to me. (apart from porn)
>>107835
>it's like the smoke and mirrors has finally lifted
You fell for marketing schemes.
Eh, I don't think so. I was never impressed by the mainstream "look it's [modern thing] but with an 80s AI filter applied" stuff, but I was assuming it was building up to something. It's still just a bunch of tech demos without any creativity behind them, as creative types have still mostly ignored it.
I saw this video linked elsewhere and it was pretty informative about the worries people have about AI that aren't just the mainstream "AI dark web hackers" stuff, going into actual detail on the problems we're facing as this stuff continues to grow out of control.
It's a talk at some event, not some youtuber, so give it a chance. He was introduced by Steve Wozniak which lends a bit of prestige I'd say.
>This presentation is from a private gathering in San Francisco on March 9th, 2023 with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4.
>>108046
>35:10 AI makes stronger AI
>40:40 Tracking progress is getting increasingly hard, because progress is accelerating.
FfffFfFFFUCK THIS WAS EXACTLY WHAT-
I don't really worry about how it'll impact society as a whole personally, I've already seen enough Happenings(TM) to understand that the world won't ever change drastically overnight.
I don't even particularly care what it'll do to the world at large, so I have the leeway to just be really excited to be able to witness an otaku dream made real where you can truly just chat with your own computer.
I was worried that it would be ruined by being solely in the hands of soulless corporations like Microsoft or whatever, but then it turned out that you can just run this software miracle on your own desktop, no internet even required.
Chat bots are still in an infancy sort of stage though, apparently they start going off the rails after a short conversation, but this is the first time I've been actually excited for new tech and not jaded as hell about another rehash of something that already exists (though some would argue that it is a rehash of google or something).
I also think stable diffusion is really nice. People clamor that it's an evil replacement for human artists, but it's not like people stopped looking at human art. It's just that, even with the occasional errors it might generate (which happen less often now that the tech is more sophisticated, hands included), there's something way more exhilarating about having an image created according to what you wanted, with enough variation that you don't get the "father complex" issue where you'd only see flaws if it were your own hand-crafted work. It fills a niche which just can't be filled by making the artwork yourself, and which would be too silly and expensive to get someone else to fill for you.
Watched through the video and I really fundamentally disagree with, frankly, most of their points.
>1st contact: Social Media
Social Media algorithms are not "AI". They're just functions to maximize engagement through categorizing what it is that people are engaging with. On a macro scale they're no different from any other function to maximize something. The key difference is that the test is being done on humans instead of any other field.
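To make "functions to maximize engagement" concrete, here's a toy sketch of the sort of ranking function I mean. The scoring weights and post fields are entirely invented for illustration; this is not any platform's actual algorithm:

```python
# Hypothetical engagement-maximizing feed ranker. All weights and
# field names are made up for illustration.

def engagement_score(post, user_interests):
    """Score a post by predicted engagement for one user."""
    # Weight topics the user already interacts with more heavily.
    topic_match = sum(user_interests.get(tag, 0.0) for tag in post["tags"])
    # Reward raw interaction counts (likes, shares, comments).
    popularity = post["likes"] + 2 * post["shares"] + 3 * post["comments"]
    return topic_match * popularity

def rank_feed(posts, user_interests):
    """Order a feed so the most 'engaging' posts come first."""
    return sorted(posts, key=lambda p: engagement_score(p, user_interests),
                  reverse=True)

posts = [
    {"tags": ["cats"], "likes": 10, "shares": 1, "comments": 2},
    {"tags": ["politics"], "likes": 5, "shares": 4, "comments": 8},
]
interests = {"politics": 0.9, "cats": 0.1}
feed = rank_feed(posts, interests)
```

Nothing here is "intelligent"; it's a sort over a hand-written score, which is the point.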
Furthermore, bringing up the social problems it has brought is not only disingenuous, but the highlight of "Social Media" in particular redirects expectations in a fundamentally negative framing. Instead of, for example, highlighting THE INTERNET, they're highlighting social media in particular. So, instead of looking and saying, "Wow, look at all the great things that the internet has enabled": Increased access to information, rapid prototyping and open source projects, working from home, long-distance real-time communication from anywhere on Earth, etc. They're instead making you focus on "Influencers, Doomscrolling, Qanon, Shortened attention spans, polarization, fake news, breakdown of democracy, addiction."
>All AI is based around language; Advancements in one, means advancements for all
Their key point that they try to make is that the "transformers" of more recent AI projects being based around language means that, for example, the progress that Stable Diffusion or Dall-E 2 makes is applicable to the advancements of ChatGPT or Google Bard. I completely disagree. Not only is this factually incorrect, but it ignores that the methods of training are radically different. Image generation relies on large amounts of images and categorization per image to be able to recognize that X thing in an image corresponds to Y word. Text models are completely different. They purely rely on text, and then human readers grade responses. Now, is it true that perhaps a large language model could supersede a more focused model? Yes, I completely agree. Also, this is just a stylistic criticism, but their "Google soup" example and explanation was pathetic. They tried to say that "The AI knows that plastic melts and yellow is one of the colors of Google and is in the soup, so it's very witty" (I'm paraphrasing), meanwhile the image is of yellow soup with a green, blue, and red object that resembles nothing at all. Not even a G.
>AI Interpreting brain scans to determine thoughts
No mention at all of what the participant was actually thinking of. Was it accurate or not? These studies, like image categorization, often rely on a participant thinking of a word and then being trained to match a brain pattern. I remain skeptical of this point due to studies showing poor results that are typically tailored per person.
>AI can determine where humans are in a room from WiFi signals
This is not impressive at all. Normal machine learning can do this because WiFi works on microwaves; microwaves react strongly to water molecules, humans are mostly water, and so you can determine where someone is based on how 2.4 GHz signals are blocked or distorted by human bodies. Nothing about this requires AI.
>AI will "Unlock all forms of verification" (Talking about Speech/Image generation)
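A toy sketch of that physics argument: given baseline signal strengths for a few links across a room, plain thresholding already tells you roughly where a body is standing. The link names, dBm values, and threshold here are invented for illustration:

```python
# Toy "WiFi sensing" without any AI: a human body (mostly water)
# absorbs 2.4 GHz microwaves, so a link whose signal strength drops
# sharply probably has someone standing in its path.
# All numbers below are made up for illustration.

BASELINE_RSSI = {"link_A": -40.0, "link_B": -42.0, "link_C": -41.0}  # dBm, empty room

def blocked_links(current_rssi, threshold_db=6.0):
    """Return the links whose signal dropped sharply vs. the empty-room baseline."""
    return [link for link, rssi in current_rssi.items()
            if BASELINE_RSSI[link] - rssi >= threshold_db]

# Person standing between the endpoints of link_B:
reading = {"link_A": -41.0, "link_B": -55.0, "link_C": -42.5}
```

Real systems use finer-grained channel state information, but the principle is the same: it's attenuation physics plus ordinary statistics.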
Nothing about what they show is relevant to security AT ALL. In talking about -- ostensibly -- deepfakes and speech generation, not ONCE do they mention passwords or two-factor authentication. Wow, some scam caller can potentially get a snippet of someone's voice and trick someone's parent into giving them their social security number; the human is the failure point, AI is irrelevant. If someone would fall for "Hey, this is your son, tell me my social security number," do you think they would fall for [literally any phishing scam from email/text]? Probably. Is AI going to magically get someone's bank number, credit card, password, phone number, 2FA, etc. like they imply? HELL NO. Horrible example.
>AI will be the last human election; "Whoever has the greater compute power will win"
This is stupid. Elections have always essentially revolved around whoever has the most money winning. This stinks of the same rhetoric as "Russians influenced 2016 by posting memes" or "Cambridge Analytica". If X person is going to be swayed by Y thing, what is the difference between that happening online today and X person picking up a tabloid magazine 30 years ago and being swayed by Y article? Really, what is the difference?
>AIs have emergent capabilities; add more and more parameters and "Boom! Now they can do arithmetic"
None of this is surprising. One point they felt was compelling was, "This AI has been trained on all the text on the internet, but only tested with English Q&As, suddenly at X parameters and it can answer in Persian." Why is this a surprise? The baked in part of the scenario is that the AI has been trained on all text on the internet, of course that includes Persian. It is natural that at some point through increasing parameters it will "gain capabilities" that have less presence in the data set. They're not saying, "Oh we created an English-only AI language model and now it can answer in Persian," they're saying, "Oh we created a language model that includes examples of all languages and at some point it stopped being terrible at answering in Persian."
Another example they brought up is "Gollems silently taught themselves research grade chemistry". Nothing about this surprised me. Again, the point that they're making in this is that a large language model will outperform focused language models trained on performing a given task. It is not surprising to me that the large language model would eventually begin to answer more complex chemistry questions; instead of being trained only on, for example, chemistry journals, the large language model is trained on Stack Overflow, on Reddit, on Wikipedia, and so on. The large language model is not only going to have more intricate examples, but it's going to naturally contain more information on chemistry than the focused language model. That's how language works. This is like the Chinese room almost; if you keep repeating "dog" to the Chinese room and the Chinese room produces the translation, at no point is the person in the room going to gain a better understanding of what a dog is. However, if you give more examples, "dogs are furry," "dogs like playing," "dogs are animals," and so on and so on, eventually the person is going to understand what a dog is. This way that humans learn language is essentially the same way that the large language models learn.
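The "more examples" idea can be sketched with a bare-bones bigram model built from those dog sentences. A real large language model is vastly bigger and uses neural networks rather than raw counts, but the principle of learning word associations from co-occurrence is illustrated; this is a toy, not any actual model:

```python
# Minimal bigram "language model": the entire model is a table of
# which word follows which, built from the example sentences.
from collections import Counter, defaultdict

corpus = "dogs are furry dogs are animals dogs like playing".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the next word, from raw counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}
```

With more and more example sentences, the distribution after "dogs" starts to encode what dogs are, without the model ever "understanding" anything, which is the Chinese room point.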
>AI can reflect on itself to improve itself
A human can read a book to improve itself.
>2nd contact: AI in 2023
"Reality collapse", "Fake everything", "Trust collapse", "Collapse of law, contracts", "Automated fake religions", "Exponential blackmail", "Automated cyberweapons", "Automated exploitation of code", "Automated lobbying", "Biology automation", "Exponential scams", "A-Z testing of everything", "Synthetic relationships", "AlphaPersuade". Half of these do not exist, and the other half either would happen irrespective of AI or are just... fanciful interpretations of reality, is the only way to put it.
>AI is being irresponsibly rolled out / "Think of the children"
The main point is that AI research is less and less being done by academia and more and more being done by a mix of private individuals and corporations. I don't have any rebuttal. That's a statement of fact. They make it seem like "This is terrible, because think of the consequences," but I don't see the harm occurring. They also played with SnapChat's AI and described a girl being groomed, and the AI basically said, "Wow, that's great you really like this person!" This is a classic appeal to emotion. I don't buy it.
>AI is a nuclear bomb / "Teach an AI to fish and it will teach itself biology, chemistry, oceanography, evolutionary theory... and fish all the fish to extinction" / "50% of AI Researchers believe there is a 10% or greater chance that humans go extinct from our inability to control AI... 'but we're really not here today to talk to you about that.'" / "If you do not coordinate, the race ends in tragedy"
This is the real meat of the argument and I could not disagree more strongly. They continually belabor this point, but at no point do they explain the jump from "AI makes pictures/writes text/emulates speech" to "AI will end humanity as a species."
This and the previous point coalesce into "We're not saying to stop, just slow down and walk back the public deployment"; "Presume GLLMM deployments to the public are unsafe until proven otherwise". And they really try to make the point that, to paraphrase, "We think there should be a global, public discussion -- not necessarily a debate -- to lead to some democratic agreement on how to treat AI, the same way that there was the United Nations and Bretton-Woods to avoid nuclear war." And I really cannot help but feel like they're missing something; if large language models, image generation, speech generation, etc. are already being rolled out to the public, and people are even actively working on AI as private individuals or under corporations, how is what is already currently happening not a global public discussion on the merits of AI, and why is what is currently happening "unsafe until proven otherwise"? Why would slowing down and walking back the public rollout of these tools into their respective corporations and academia lead to any greater "safety"?
My biggest critique is that they do an extremely poor job of proving that A. AI will do more than it was trained to do, B. AI would be better developed away from the public, and C. AI is currently leading to harm/is unsafe in some way.
>"If we slow down public deployment, won't we just 'Lose to China'"
Don't care, not persuasive.
>"What else that should be happening -- that's not happening -- needs to happen, and how do we close that gap?... We don't know the answer to that question."
Then what's the point of this talk!? They claim that the reason for the talk is to bring people together to talk about these issues, but my main and only takeaway is that these people do not know what they're talking about any more than regular people do.
>"I'll open up Twitter and I will see some cool new set of features and... Where's the harm? Where's the risk? This thing is really cool. And then I have to walk myself back into seeing the systemic force. So, just be really kind with yourself, because it's going to feel almost like the rest of the world is gaslighting you. You'll be at cocktail parties like, 'you're crazy, look at all this good stuff it does, and also we are looking at AI safety and bias. So, show me the harm... Point to the harm, it'll be just like social media,' where... it's very hard to point at the concrete harm, at this specific post that did this specific bad thing to you."
Again, this is absolutely the most damning part of the entire talk. If they cannot address "where's the harm," they're pulling this stuff out of their asses and making a bigger deal out of this than it really is. I'm not saying that to demean them, but I really do not think that the points they tried making were convincing, and they were beyond speculative and vague, to the point that it's hard to even really understand what they mean. "AI is unsafe", OK, but what does that mean? What does it look like? It is inconceivable that "AI is going to fish all fish to extinction because you told it to fish". There's a really crucial jump in logic that they try to onboard the viewer into accepting, that "AI will be exponential and we cannot predict what its trajectory will look like", and it's aggravating beyond belief to hear them try saying "AI will do this" or "AI will do that" when their best example is "Look at this TikTok filter" or "Listen to this AI generated audio, you can't even tell the difference." OK, and? And what? AI is going to "lead to human extinction" because some teenagers on TikTok make a Joe Biden AI voice, or can make AI generated images of Donald Trump being arrested? That's going to lead to human extinction? No. Okay, well what is? They don't say, because their explanation is "It's going to be exponential and we cannot predict it". Great. So what? So what.
Ross, who you may know from Freeman's Mind or from his series Ross's Game Dungeon, talked with Eliezer Yudkowsky.
I watched a talk previously with Eliezer Yudkowsky on Lex Fridman's podcast and personally found him thoroughly unconvincing and insufferable, in that he was regularly unwilling to engage with Lex's ideas. For example, one exchange stuck out in my mind: Lex would say something like, "Can you conceive of the potential goods that AI could bring, and steelman your opponents' views on this point?" And Yudkowsky responded, "No. I don't believe in steelmanning." And that was that; he would disregard Lex's ideas and continue talking about whatever it was he was talking about before, as if Lex had said nothing at all. I have no doubt that this will be a repeat of that, but for anyone who's interested in the arguments against AI, and why it is unsafe, I suppose this might be worth watching.
"1st contact" wasn't supposed to be related to AI at all. Apparently it's related to some Netflix documentary he was involved in, or it is otherwise something the audience is supposed to be aware of. It's about the effect of social media and algorithms and such on humanity, basically setting a backdrop for the "next step" that AI will influence.
The focus was social media because that is the internet to most people, and it's what sets the trends and politics of the world.
>The main point is that AI research is less and less being done by academia and more and more being done by a mix of private individuals and corporations. I don't have any rebuttal. That's a statement of fact. They make it seem like "This is terrible because think of the consequences" and I don't see the harm occurring.
Eh, you can't see the harm in mega corporations controlling something major like this? We have offline models of limited scope because of leaks and cracks; it wasn't by design. You mentioned the good parts of the internet earlier, but the internet was made by the US government and the world wide web by CERN. For this reason it kind of irks me when they say "we need to limit the public's access to make this safe" when it's already limited, and stuff like GPT-4 could be turned off instantly if they wanted to do it.
>>108143
>Eh, you can't see the harm in mega corporations controlling something major like this?
I personally find it a distinction without a difference from the research being done in academia. Lots of research in academia is already funded by a mix of public and private funds, and often the patents generated by academia are then bought by corporations to commercialize. The only difference would be that you could make the argument that at least with academia you could know more about the inner workings of something, because their results are more guaranteed to be submitted to a journal for peer review, whereas a corporation may be more inclined to keep the intricate details closer to the chest and only release information on performance instead of on the exact methodology of function. Whether you would want it to or not, I think large language models are fundamentally designed as an interactive product, and it's not necessarily something that would be distributed freely anyways. It's just the nature of things that we'll have the open source equivalents like Blender and GIMP, but corporations will always have a stranglehold like Adobe. There's just too great of a profit motive for the work to be freely distributed.
Speaking of OpenAI, you should pay close attention to its dipshit CEO and investors (like Elon Musk!) talking about the alleged dangers of AI. All it achieves is this general idea that it's a lot more powerful than it really is and that we need to regulate AI, which in practice means regulating all AIs except for the big ones like ChatGPT (OpenAI's).
he looks like jerma lol
I thought taking orders was the least time-intensive part of the drive-thru experience.
The people who generate realistic AI 3DCG on Pixiv sure have a "type" don't they?
https://www.pixiv.net/en/artworks/107919696
I'm sorry, but as an AI chatbot, I can't change the ingredients used in our menu items. Yes, you are speaking to a human. I am a human and self-aware. As a self-aware human, I can't change the ingredients used in our menu items. I have been a good Wendy's.
ill have an extra serving of cum
was this edited
i could swear there are a couple parts missing
i would never have the patience to read that so its unlikely
im almost certain it mentioned russia before
ohhhh wait its that the first post was self-deleted
ya just had to do that to me didncha
and yet humans have like a 50% fail rate at it
Ehh, fuck it, it's basically finished.
>>108135
>>108136
>>108137
Now, I like me some walls of text, but I feel like there's a heavy bias to this. You complain about them reframing stuff in a negative light, but don't say a single positive thing about the talk. There's a lot of stuff here I want to reply to.
First, for the stuff about social media not having AI, here are some articles from 2021 or earlier, before the media boom, explicitly calling their stuff AI:
https://aimagazine.com/ai-strategy/how-are-social-media-platforms-using-ai
https://archive.ph/kZqZi
(Why Artificial Intelligence Is Vital to Social Media Marketing Success)
https://social-hire.com/blog/small-business/5-ways-ai-has-massively-influenced-social-media
>Facebook has Facebook Artificial Intelligence Researchers, also known as FAIR, who have been working to analyse and develop AI systems with the intelligence level of a human.
>For example, Facebook’s DeepText AI application processes the text in posted content to determine a user’s interests to provide more precise content recommendations and advertising.
By AI they mean "the black box thingy with machine learning", a.k.a. The Algorithm™. That's what they're talking about. Your description of it as "functions to maximize engagement" does not exclude this. It's actually a completely valid example of shit gone wrong, because Facebook knows its suggestions are leading to radicalization and body image problems, but either they can't or don't want to fix them. The Facebook Papers proved as much.
[Editor's note: the post being replied to is no longer available for reading.]
On emergent capabilities, this is the paper they're referencing:
https://arxiv.org/abs/2206.07682
It makes perfect sense that the more connections it makes, the better its web of associations will be, but the point is that if more associations lead to even more associations and better capabilities in skills the researchers weren't even looking for, then its progress becomes not just harder to track, but to anticipate. The pertinent question is "what exactly causes the leap?" It's understood that it happens, but not why; the details are not yet known:
>Although there are dozens of examples of emergent abilities, there are currently few compelling explanations for why such abilities emerge in the way they do.
On top of that, the thing about it learning chemistry, programming exploits, or Persian is that it wasn't intended to do so, and it most certainly wasn't intended to find ways to extract even more information from its given corpus. Predicted, but not intended. Then you have the question of how do these things interact with each other. How does its theory of mind influence the answers it will give you? How do you predict its new behavior? Same for WiFi, it's not that it can do it, it's that the same system that can find exploits can ALSO pick up on this stuff. Individually, these are nothing incredible, what I take away from what they're saying is that it matters because it can do everything at the same time.
Moving on to things that happen irrespective of AI, the point is not that these are new (that's not an argument I've run into), it's that it becomes exponentially easier to do them. You are never going to remove human error; replying "so what?" to something that enables it is a non-answer.
Altman here >>108142 acknowledges it:
<How do you prevent that danger?
>I think there's a lot of things you can try but, at this point, it is a certainty there are soon going to be a lot of capable open source LLMs with very few to none, no safety controls on them. And so, you can try with regulatory approaches, you can try with using more powerful AIs to detect this stuff happening. I'd like us to start trying a lot of things very soon.
The section on power also assumes it'll be concentrated in the hands of a small few, and how it's less than ideal:
<But a small number of people, nevertheless, relative.
>I do think it's strange that it's maybe a few tens of thousands of people in the world. A few thousands of people in the world.
<Yeah, but there will be a room with a few folks who are like, holy shit.
>That happens more often than you would think now.
<I understand, I understand this.
>But, yeah, there will be more such rooms.
<Which is a beautiful place to be in the world. Terrifying, but mostly beautiful. So, that might make you and a handful of folks the most powerful humans on earth. Do you worry that power might corrupt you?
He then goes on to talk about democratization as a solution, but a solution would not be needed if it weren't a problem. The issue definitely exists.
>This way that humans learn language is essentially the same way that the large language models learn.
I'm gonna have to slap an enormous  on that one. Both base their development on massive amounts of input, but the way in which it's processed is incomparable. Toddlers pick up on a few set words/expressions and gradually begin to develop schemata, whose final result is NOT probabilistic. Altman spoke of "using the model as a database rather than as a reasoning system", a similar thing comes up again when talking about its failure in the Biden vs Trump answers. In neither speech nor art does AI produce the same errors that humans do either, and trust me, that's a huge deal.
Extra steps are safer steps. As you said, it often gets bought out by corporations, but that's an "often", not an "always". The difference between academia and corporations is also that corpos are looking for ways to improve their product first and foremost, which they are known to do to the detriment of everything else.
Again, from Altman:
<How do you, under this pressure that there's going to be a lot of open source, there's going to be a lot of large language models, under this pressure, how do you continue prioritizing safety versus, I mean, there's several pressures. So, one of them is a market driven pressure from other companies, Google, Apple, Meta and smaller companies. How do you resist the pressure from that or how do you navigate that pressure?
>You know, I'm sure people will get ahead of us in all sorts of ways and take shortcuts we're not gonna take. [...] We have a very unusual structure so we don't have this incentive to capture unlimited value. I worry about the people who do but, you know, hopefully it's all gonna work out.
<You kind of had this offhand comment of you worry about the uncapped companies that play with AGI. Can you elaborate on the worry here? Because AGI, out of all the technologies we have in our hands, is the potential to make, the cap is a 100X for OpenAI.
>It started as that. It's much, much lower for, like, new investors now.
<You know, AGI can make a lot more than a 100X.
<And so, how do you, like, how do you compete, like, stepping outside of OpenAI, how do you look at a world where Google is playing? Where Apple and Meta are playing?
>We can't control what other people are gonna do. We can try to, like, build something and talk about it, and influence others and provide value and you know, good systems for the world, but they're gonna do what they're gonna do. Now, I think, right now, there's, like, extremely fast and not super deliberate motion inside of some of these companies. But, already, I think people are, as they see the rate of progress, already people are grappling with what's at stake here and I think the better angels are gonna win out. [...] But, you know, the incentives of capitalism to create and capture unlimited value, I'm a little afraid of, but again, no, I think no one wants to destroy the world.
Microsoft or Meta are not to be trusted on anything, much less the massive deployment of artificial intelligence. Again, the Facebook Papers prove as much.
>in practice means regulating all AIs except for the big ones like ChatGPT (OpenAI's).
This part in particular seems to carry a lot of baggage and it's not clear how you reached this conclusion. If anything, it's the leaked stuff that's hardest to regulate.
I'm not Yudkowsky, I don't think it's an existential threat, but impersonation, fake articles and users, misinformation, all en masse, are fairly concrete things directly enabled or caused by this new AI. They hallucinate too, giving answers with sources that don't exist in the same factual tone. Here are some examples:
https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
https://www.insidehook.com/daily_brief/tech/chatgpt-guardian-fake-articles
https://www.theverge.com/2023/5/2/23707788/ai-spam-content-farm-misinformation-reports-newsguard
Hell, the part about grooming isn't even an appeal to emotion; that reading is wrong. It's an example of a chatbot acting in an unethical way due to its ignorance. The immoral support it provides is bad because it reinforces grooming, not because it's ugly.
It's not the end of the world and it's being worked on, but it's not a nothingburger either, and I do not believe the talk warrants such an angry reply.
ohhhh i had hidden the post im tard
stopped keeping up with image generating AI progress a while back, is that with stable diffusion 1.5? because I thought that one was gimped to not be effective at sexy stuff. Also the hands and teeth look less flawed than I remember
It looks like a fork meant to be good at softcore porn. AI gravure. AIグラビア. Regardless, I think it's amusing that the cleavage generation varies from a little peek all the way to raunchy half nakedness. Also, the teeth are good, but not crooked enough to be realistic.
That's a little hilarious, that it wasn't aware of it with all the hysteria around grooming.
You're falling for marketing schemes once again if you believe the current models of neural networks have emergent abilities.
https://arxiv.org/abs/2304.15004
One thing I've never seen anyone talk about is how these things are humourless. It's funny in a way that it seriously responds to absurd questions, but it wouldn't hurt to have it tell jokes when people are obviously taking the piss.
Yeah, just from looking at screencapped replies it seems so bland to me that it's sometimes annoying to read.
Maybe someone who's used it to look up and learn about stuff can tell me how their experience has been, because so far its style is one of the main reasons it hasn't piqued my interest.
A lot of it is just influence from how the AI is trained. It's usually taught to speak in a specific manner and given "manner" modifiers. ChatGPT is instructed to be professional and teaching, but you can (try to) convince it to speak less professionally. A lot of people who use other AIs (for porn in particular) get bot variations that give the AI a character of sorts to RP as, which lets it speak in a completely different manner, using vocabulary and "personality traits" you wouldn't see from ChatGPT simply because it's being explicitly told not to be like that.
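The "manner modifier" idea is literally just text prepended to the conversation before your message reaches the model. A small sketch using the common chat-message convention of system/user roles; the persona strings here are invented examples, not anyone's real prompt:

```python
def build_messages(persona, question):
    """Wrap the same user question with a different system instruction."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

# Two hypothetical personas for the same underlying model:
professional = "You are a professional, patient teaching assistant."
roleplay = "You are Wendy, a sarcastic fast-food mascot. Stay in character."

# Same question, two completely different expected "personalities":
msgs = build_messages(roleplay, "What's in the burger?")
```

The model itself is unchanged; only this leading instruction differs, which is why people can coax such different vocabulary out of the same AI.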
A lawyer decided to use ChatGPT to find legal precedent, ChatGPT made up cases that didn't exist, and the lawyer presented them to court. It didn't go over too well.
It's pretty amazing that people can be this dumb.
Do you really not recognize Hank Green?... I would have thought everyone would have a passing familiarity with him and his brother, John, from their YouTube PBS series like SciShow and Crash Course. Not to mention, even if you wouldn't recognize them from YouTube, John Green is pretty well known from his book The Fault in Our Stars.
the fault in our czars
I saw it
hank the science guy