Ehh, fuck it, it's basically finished.
>>108135 >>108136 >>108137
Now, I like me some walls of text, but I feel like there's a heavy bias in this one. You complain about them reframing stuff in a negative light, but you don't say a single positive thing about the talk. There's a lot of stuff here I want to reply to.
First, on the stuff about social media not having AI: here are some articles from 2021 or earlier, before the current media boom, explicitly calling their stuff AI:
https://aimagazine.com/ai-strategy/how-are-social-media-platforms-using-ai
https://archive.ph/kZqZi (Why Artificial Intelligence Is Vital to Social Media Marketing Success)
https://social-hire.com/blog/small-business/5-ways-ai-has-massively-influenced-social-media
>Facebook has Facebook Artificial Intelligence Researchers, also known as FAIR, who have been working to analyse and develop AI systems with the intelligence level of a human.
>For example, Facebook’s DeepText AI application processes the text in posted content to determine a user’s interests to provide more precise content recommendations and advertising.
By AI they mean "the black box thingy with machine learning", a.k.a. The Algorithm™. That's what they're talking about. Your description of it as "functions to maximize engagement" does not exclude this. It's actually a completely valid example of shit gone wrong, because Facebook knows its recommendations lead to radicalization and body image problems, but it either can't or doesn't want to fix them. The Facebook Papers proved as much.
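To make "functions to maximize engagement" concrete, here's a minimal sketch of what a ranker like that boils down to. Everything in it (the feature names, the weights) is hypothetical, not Facebook's actual system; the point is just that the objective is predicted engagement, with no term anywhere for downstream harm:

# Toy engagement-maximizing ranker (Python). All names and weights are made up.
def engagement_score(post):
    # weighted sum of predicted reactions; hypothetical weights
    return (0.4 * post["predicted_clicks"]
            + 0.3 * post["predicted_comments"]
            + 0.3 * post["predicted_shares"])

def rank_feed(candidate_posts):
    # show whatever is predicted to get the most engagement,
    # whether that's cat pics or outrage bait
    return sorted(candidate_posts, key=engagement_score, reverse=True)

If the content most likely to get clicks happens to be the harmful stuff, an objective like this promotes it anyway; "fixing" that means changing the objective, not just tuning the data.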
[Editor's note: the post being replied to is no longer available for reading.]
On emergent capabilities, this is the paper they're referencing:
https://arxiv.org/abs/2206.07682
It makes perfect sense that the more connections it makes, the better its web of associations will be, but the point is that if more associations lead to even more associations and to better capabilities in skills the researchers weren't even looking for, then its progress becomes not just harder to track but harder to anticipate. The pertinent question is "what exactly causes the leap?" It's understood that it happens, but not why; the details are not yet known:
>Although there are dozens of examples of emergent abilities, there are currently few compelling explanations for why such abilities emerge in the way they do.
On top of that, the thing about it learning chemistry, programming exploits, or Persian is that it wasn't intended to do so, and it most certainly wasn't intended to find ways to extract even more information from its given corpus. Predicted, but not intended. Then you have the question of how these things interact with each other. How does its theory of mind influence the answers it will give you? How do you predict its new behavior? Same for the WiFi thing: it's not that it can do it, it's that the same system that can find exploits can ALSO pick up on this stuff. Individually, these are nothing incredible; what I take away from what they're saying is that it matters because it can do all of it at the same time.
Moving on to things that happen irrespective of AI: the point is not that these are new (that's not an argument I've run into), it's that they become exponentially easier to do. You are never going to remove human error, and replying "so what?" to something that enables it is a non-answer.
Altman here >>108142 acknowledges it:
¥How do you prevent that danger?
>I think there's a lot of things you can try but, at this point, it is a certainty there are soon going to be a lot of capable open source LLM's with very few to none, no safety controls on them. And so, you can try with regulatory approaches, you can try with using more powerful AI's to detect this stuff happening. I'd like us to start trying a lot of things very soon.
The section on power also assumes it'll be concentrated in the hands of a small few, and acknowledges that that's less than ideal:
¥But a small number of people, nevertheless, relative.
>I do think it's strange that it's maybe a few tens of thousands of people in the world. A few thousands of people in the world.
¥Yeah, but there will be a room with a few folks who are like, holy shit.
>That happens more often than you would think now.
¥I understand, I understand this.
>But, yeah, there will be more such rooms.
¥Which is a beautiful place to be in the world. Terrifying, but mostly beautiful. So, that might make you and a handful of folks the most powerful humans on earth. Do you worry that power might corrupt you?
>For sure.
He then goes on to talk about democratization as a solution, but a solution wouldn't be needed if there weren't a problem. The issue definitely exists.
>This way that humans learn language is essentially the same way that the large language models learn.
I'm gonna have to slap an enormous [citation needed] on that one. Both base their development on massive amounts of input, but the way in which that input is processed is incomparable. Toddlers pick up on a few set words/expressions and gradually begin to develop schemata, whose final result is NOT probabilistic. Altman spoke of "using the model as a database rather than as a reasoning system", and a similar thing comes up again when talking about its failure in the Biden vs Trump answers. In neither speech nor art does AI produce the same errors that humans do, and trust me, that's a huge deal.
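For reference, this is what "probabilistic" means on the LLM side. A toy next-token step with made-up numbers (not any real model, not real probabilities): the visible "answer" is a sample drawn from a distribution over tokens, not a fact retrieved from a schema, which is exactly why the Biden vs Trump style failures look the way they do:

import math, random

# Hypothetical logits for the next token; the numbers are invented for illustration only.
logits = {"Biden": 2.1, "Trump": 1.9, "banana": -3.0}

# Softmax turns the logits into a probability distribution over the vocabulary.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The output is a sample from that distribution, so the "wrong" token can
# come out some fraction of the time even when the "right" one is more likely.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_token)

A toddler's schema for a word doesn't work like that: once acquired, it isn't re-sampled with some chance of coming out as "banana".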