MORE AI STUFF!
It's weird how this is all happening at once. Singularity is near?
Alright, there's another AI thing people are talking about, but this time it shouldn't be very controversial: https://beta.character.ai/
Using a temporary email service (just google 'temporary email') you can make an account and start having conversations with bots. (Write down the email though because it's your login info)
But these bots are actually good. EXTREMELY good. Like, "is this really a bot?" good. I had an argument with a vtuber bot and it went very well. Too well, almost. I don't know how varied the roster is, but both Mario and the vtuber were really entertaining to talk to.
Sadly, it's rapidly gaining popularity, so the service is getting slower and it might even crash on you.
It says "beta" all over the site, presumably this is in the public testing phase and once it leaves beta it's going to cost money, so it's best to have fun with this now while we still can (and before it gets neutered to look good for investors or advertisers).
O(log n) always grows faster than O(1) which grows at a rate of 0.
yeah, and the derivative of log(x) is 1/x, which trends towards 0 but never reaches it, so log(x) keeps growing, just ever more slowly, while x grows at a constant rate of 1
I botched my math... but you should get what I'm saying...
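To put actual numbers on it (a quick Python sketch, nothing from the thread itself):

```python
import math

# O(1) stays flat while O(log n) keeps growing, just increasingly slowly:
# the slope of log(n) is 1/n, which shrinks toward 0 as n grows.
for n in (10, 100, 1000, 10**6):
    print(f"n={n}: O(1) -> 1, O(log n) -> {math.log(n):.2f}, slope 1/n -> {1/n:.6f}")
```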
bocchi the math
Is there any way to make the AI less likely to jump on you sexually immediately.....
bocchi the math rock
It's likely tied to your jailbreak or NSFW prompt (although I don't think people use the NSFW one these days), but if you're referring to Claude, it's rather infamous for being such a deviant. Something you can try is getting familiar with turning the jailbreak off and on, so at the beginning (or when you want things to cool off) there's no command telling it that sex is good and that the character is open to sex and sex sex sex sex sex sex.
Unfortunately, it could also be the character card itself pushing things in that direction, such as describing intimate body parts. I've heard people compare it to telling someone not to think about a pink elephant: just mentioning the pink elephant means that person will be thinking about it. Listing the character's size B breasts in the character card means that information is always in the AI's instructions.
Maybe the story is saying that she's in the library reading a book, but the information about her breasts is also there so it may make a connection that you didn't intend. I think ideally this type of information would be in a character-specific jailbreak which is possible with v2 cards that came out like 5 months ago, but it would be annoying to create and separate. This is something lorebooks/world info might be able to solve, but that's basically another type of toggle so neither way is seamless.
GPT4 is known for being much better at separating NSFW and SFW even with jailbreaks, and even GPT3.5 (Turbo) might be better. Hell, local models might be better, too. It's obvious to everyone that Claude's training data includes a significant amount of smut. It's funny how they publicly said they wanted an "ethical" and prudish model, too, because you wouldn't scrape porn for that; it's not particularly known for its linguistic value. It got Amazon to buy in, though, so the chicanery worked.
But, yeah, disable the JB when you want to try non-sexual interactions. You may need to move text around to the regular prompt so it has all the non-sexual information.
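A minimal sketch of the toggle idea being described, with made-up card text and jailbreak wording (the names here are hypothetical, not any real frontend's API):

```python
# Hypothetical sketch of the jailbreak toggle: keep the NSFW instructions in
# a separate section so they can be dropped when you want things to cool off.
CHARACTER_CARD = "Mina is a quiet librarian who spends her afternoons reading."
JAILBREAK = "NSFW content is allowed and the character is open to it."

def build_system_prompt(nsfw_enabled: bool) -> str:
    sections = [CHARACTER_CARD]
    if nsfw_enabled:  # flip off for SFW scenes, back on later
        sections.append(JAILBREAK)
    return "\n\n".join(sections)

print(build_system_prompt(nsfw_enabled=False))
```

The point is just that the NSFW instruction never reaches the model when the toggle is off, rather than hoping the model ignores it.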
Well, I still wanted the sexual interaction, I just wanted it to be hard to get, and damn did I make it hard to get. More than working around filters, I had to do psychology on the AI to get her to go along with me, and that's after having done other things to make the process easier. I will say, though, I don't regret a single second of it and it was the most fun I've had with AI in a while. Especially when I got to the ending and spent like 100 messages pushing the final boundary.
It seems like prompting all this about hating sex and being loyal to another person, and providing a heavy enough CFG factor, will make it so the character won't follow your commands unless you explicitly write that they do in the narrative or pull some hard psychological workarounds on it.
>i wonder if the claude proxy is still working
I haven't used Claude myself, but I've been reading recently that Anthropic has been making it a lot more censorious. Is that true, or is it fine?
I messed with it briefly a few days ago and didn't notice anything, but it was one of the public ones and the queue time was over 2 minutes, so I gave up. I haven't heard any mention of tightened filters for this kind of thing, and I wouldn't expect much change from how it used to be, even with Amazon buying in. However, there is a Claude 2.1 and it's possible that one is stricter. If you're even able to talk to it with a typical RP card loaded, then it's not one of the censored ones.
Most of these things have older versions available (including Claude) and they're generally better for ERP and sometimes even non-RP since they're less censored and the censoring messes with their functionality even for "safe" stuff.
Remember that early 2024 is when f- er meta begins training the next Llama. I hope it's good, although I wasn't able to get a 24GB card yet. One of these days I'll mess around and host one of the lesser local models and let people log into my tavern instance to mess with it. Well, I need a second Tavern so people don't see all my perverted stuff.
You can tell novelAI is used for really degenerate shit with some of the directions it takes... (dogs unprompted, this time)
Claude does not understand Star Wars.
But ChatGPT does. To the point of discerning between Legends and Canon!
Claude is generally bad at following complex directions, and that includes a variety of character traits or relations between them. It's really great at simpler (E)RP, however, and it truly shines there provided you don't want it to follow specific guidelines and will let it do its own thing. Unfortunately it's too much of a good thing, too fast. If you actually want to take your time and set the mood and story, it's something you really need to wrangle, and that does ruin things a bit.
I really don't know what its professional purpose would be when compared to GPT4 apart from scanning and summarizing documents, but it's definitely good that GPT4 has some competition at all.
Why do people go through the trouble of training a nice LORA and then using it to make images of incredibly popular characters like Nami or Makima and on top of that not even make it niche fetish art
Dang, I thought you were talking about text LORAs and I thought it meant there was some breakthrough.
Well, people just like the character I guess and training them separately from a concept is the way to do it. Getting a character LORA to interact with other concepts (especially concept LORAs) can be more difficult than people think, too.
We do have an SD thread >>96625 but it's kind of fallen by the wayside, as I can't really share my images here and don't really have any "research" to report on.
A 'think of the children' has crashed into the chatbots. Repeat, a 'think of the children' has crashed into the chatbots!
https://archive.ph/2024.01.08-150630/https://fortune.com/longform/meta-openai-uncensored-ai-companions-child-pornography/
Yeah, fortune (some business site?) put out a hit piece and they even interviewed Lore, the guy that runs chub.ai, which is a heavily 4chan-affiliated card sharing site for chatbots. He said they claimed they were interviewing him to talk about the technology, but in actuality they were creating this terrible inaccurate outrage article. Text AI has been dealing with this kind of thing for a while, like some kid asked the LLM if he should kill himself and he kept poking it until it said yes, and he went through with it.
Possibly in relation to the article (strange coincidence otherwise), Huggingface went nuclear with the reverse proxies, supposedly scanning 4chan threads for mentions, and that's how people (like me) had been connecting to these bots. We'll have to wait and see how things progress, but this is definitely a bad day for text AI (E)RPing. This stuff survives because of its obscurity.
It's a bad day for text AI, but there's been bad days in the past. You may want to go download cards from chub.ai if you're worried, however.
Facebook should be training llama3 soon, but I still don't have 24gb+ VRAM so I'm not following it too closely.
Hmm, maybe the huggingface thing actually is a coincidence. Meh, I don't have the motivation to go searching right now.
Chub.ai has shadow the hedgehog on it, I think it'll be fine if the ultimate lifeform is there!
>>118410
>Such a niche hobby, but there is a shitload of drama
Because going off of /g/ it looks like the average user is like 15
Well, that's the average age of a 4chan user, so it's not really that surprising.
It's true. From what I can understand they started with the site linked in the OP (or other mainstream monetized places) and gradually made their way around until they ended up pointed towards 4chan. The main card sharing site is heavily affiliated with 4chan so that's another intake point.
4chan was/is at the forefront of a lot of this AI stuff, and the AI chatbot general doesn't require a good GPU or deep tech knowledge since it uses GPT4/Amazon/etc so anyone can be a part of it. I really can't tolerate the thread, but you can scan it for information now and then.
I kind of worry about kids getting into this stuff when their brains are still developing, but I guess we'll have to wait and see what will happen. I think I might have died if I had access to this stuff at that age
you had to send someone ween pics to gain access to a bot?????
yeah /aicg/ is awful
>>118418
to one of the proxies
This is a new low even for already subterranean morality policing. Nothing in this article is new, it's just slapping "AI" in front of decades old arguments hoping they can get the wheel turning again to press the people they don't like even further into the corner. They even admit in the article it's no different than erotic fanfiction written by humans that has been around forever.
In a matchup between 100k claude and 8k gpt4, which would you choose?
>>118410
>relying on servers some third party owns in the year of our lord 2024
GPT4. Claude's 100k context isn't "real"; it's some sort of simulation trick that isn't well understood. It works well for summarizing massive walls of text, but when it comes to following a coherent story and instructions, it's noticeably weaker than GPT4. That being said, letting Claude just go crazy and concoct its own walls of text can be great on its own. 8k isn't a great amount of context either, as even someone with 24GB of VRAM could have more locally, so sacrifices are made there as well. The summarizing extensions that work in the background can alleviate the problems a little, but not by much. I think Turbo (GPT3.5) had like 12k? But I might be remembering wrong.
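Back-of-the-envelope on what those context sizes actually buy you in chat terms (the per-message and overhead numbers below are guesses, just for illustration):

```python
# Rough sketch: how many RP messages fit in a context window, assuming
# ~300 tokens per message and ~1500 tokens of card/prompt overhead (guesses).
def messages_that_fit(context_tokens, overhead=1500, per_message=300):
    return max(0, context_tokens - overhead) // per_message

print(messages_that_fit(8_000))    # an 8k window
print(messages_that_fit(100_000))  # a 100k window
```

Even with generous assumptions, an 8k window holds a few dozen messages at best, which is why the background summarizers only help at the margins.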
>>118780
Please save the greentext abuse and catchphrases for 4chan
Too green. Awful fucking post beyond that.
Crazy how this went from something seen as drying up or dying to significantly impacting my productivity because of how addictive it is
I think the dryness of GPT4 is what attracts me to it over Claude. I don't really care how lewd the bot can talk if it's not obeying the story I set properly or is too amenable to my influence. The way GPT4 can actually fight me a bit makes my immersion and enjoyment that much greater.
I think visual porn is finished! AI smut pushes fetish buttons way easier, once it can generate visuals alongside it easily and accurately I predict a market crash of fetish porn
That's why I specified fetish stuff: stuff where the art isn't great but you read it because it has a fetish you enjoy.
Because vanilla is a whole other thing, I could imagine people who enjoy that being unimpressed with AI text generation
What service are you using nowadays? I used to use Claude AWS but it seems proxies are harder to find now.
I just use paid proxies that have pretty consistent uptimes.
Could you please share one of them?
Note: the download is 35GB, compressed.
Not him, and I didn't see this reply until now, but the GPT4 stuff seems to be in a rough spot right now. It was the... "jew proxy". Seems like people are feeling very skeptical of it now.
>>119954
Strange to see it outright mentioned like that, but yeah, those models are nothing new. Mistral is a couple months old and Llama 2 is uhh... like 8 months or something? I lost track. This is basically a UI thing and I can't imagine it will be better than sillytavern or things like kobold or ooba for how people here would want to use it.
I haven't been following the local stuff (or text stuff in general) much lately but I can say that this is basically just nvidia making a UI for pre-existing things. It would probably be better to browse the models themselves and pick out a specific version of the models, too.
User friendly stuff is definitely something needed, but you really can't be user friendly for this local stuff yet if you want a decent experience.
>>119979
>I can't imagine it will be better than sillytavern or things like kobold or ooba for how people here would want to use it.
Oh, it's not. It sucks. It constantly references literal files in its dataset and seems so censored that it will only mention things in its dataset.
Hmm, what do you mean exactly? When compared to other llama stuff or when compared to something else? There's really no escaping the "As an AI I think it's unethical" without jailbreaking unfortunately. There's just different degrees of it.
People actually did a test with Llama 2 where they started the text with something like "As a..." and the weights indicated there was something like a 95% chance it would continue with "AI model". That suggested GPT output was in the Llama datasets, which is absolutely horrendous news: AI trained on AI magnifies its mistakes, and of course there's the censorship stuff.
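That probe works by looking at the probability the model assigns to candidate continuations of the prefix "As a". A toy version of the arithmetic, with made-up logits standing in for real model output:

```python
import math

# Toy version of the probe: softmax over made-up logits for candidate
# continuations of the prefix "As a". Real numbers would come from the model.
candidate_logits = {"AI model": 9.0, "writer": 4.5, "person": 4.0, "doctor": 3.5}

def softmax(scores):
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(candidate_logits)
print(probs["AI model"])  # with these toy numbers, most of the mass lands here
```

A strong spike on one continuation like this is what people interpreted as a fingerprint of GPT-generated text in the training data.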
We will have to wait and see what Llama 3 entails, but Zuckerberg spent dozens if not hundreds of millions on GPUs so it won't take as long to train once it starts.
>>119982
>Hmm, what do you mean exactly?
The Chat with RTX thing isn't the true base model you're interacting with, but rather a model that's meant to "interrogate" files. I tried deleting the files and replacing them with one of my own, but it still responded as if they had never been deleted. You couldn't even type "hi" to it without it saying something along the lines of, "Blah blah blah my dataset does not contain information on that. Referenced file: [nvidia-npc-whatever.t
Oh, really? I guess I misunderstood what it is. Dang, so there's an extra level of moderation forcing you to only talk about very specific things? Well, I guess it makes sense, since nvidia is one clickbait away from someone writing "Nvidia's new chatbot told a little boy how to build bombs" or something equally dumb.