>>4449
>the nvidia shill
How much do you think nvidia is paying me to call their GPUs ridiculously expensive and to take months beating myself up over wanting to buy one?
You've entered my thread to call me a shill because I'm genuinely excited about a technology and sharing my personal experience with it to help others. Not only that, but you're doing it purely out of ignorance.
Stable Diffusion/Image Generation?
CUDA.
Audio Generation?
CUDA.
Image Captioning?
CUDA... I think? (most likely)
Video Generation? It took me a solid week to get the latest CUDA drivers to work with SageAttention and PyTorch.
Local Text Generation? This one is actually less dependent on CUDA. You can run text gen on a CPU, but there's a large speed penalty. This is how people are running deepseek locally without spending $80,000.
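If you want to see what CPU-only text gen actually looks like, here's a rough sketch using llama-cpp-python. This is just an illustration: the GGUF filename is a placeholder for whatever quant you actually downloaded, and the thread count is something you'd tune to your own CPU.

```python
# Rough sketch of CPU-only local text generation with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/deepseek-q4_k_m.gguf",  # placeholder; point at your own GGUF quant
    n_gpu_layers=0,   # 0 = pure CPU; raise this to offload layers if you have any VRAM
    n_ctx=4096,       # context window; bigger costs more RAM
    n_threads=8,      # roughly match your physical core count
)

out = llm("Explain what CUDA is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

The n_gpu_layers knob is the whole tradeoff: every layer you leave at 0 runs on the CPU, and that's exactly where the speed penalty lives. People running deepseek at home are doing some version of this with a mountain of RAM.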
Everything is built on CUDA and CUDA is nvidia.
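To make that concrete, here's the minimal Stable Diffusion snippet you'll find in basically every guide, sketched with the diffusers library. Note the cuda device that every README assumes you have (the model ID here is the usual SD 1.5 repo, which may have moved hosts by the time you read this):

```python
# Minimal text-to-image sketch with Hugging Face diffusers.
# Nearly every image-gen guide bakes in the "cuda" device like this.
import torch
from diffusers import StableDiffusionPipeline

# Fall back to CPU so the script runs anywhere, but without an
# nvidia card expect minutes per image instead of seconds.
device = "cuda" if torch.cuda.is_available() else "cpu"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # the usual SD 1.5 weights
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```

And the one script isn't even the real pain. The pain is everything downstream: custom CUDA kernels like the SageAttention mess I mentioned above, which have no drop-in AMD equivalent.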
Since you're championing AMD over and over, do you know the name of AMD's CUDA analog?
I'll give you a second to think about it.
It's ZLUDA. It's not even AMD's own project. It's a hack that tries to get code written for nvidia's CUDA to run on an AMD GPU, and people have been working on it for years at this point. AMD quietly funded it for a while, then dropped it; today it's carried by independent developers, not by AMD as a company. AMD forfeited.
If you think I have any loyalty to a monopolistic shit company like nvidia, you are sorely mistaken. I would love it if there were actual competition in AI, but that is not the world we live in.
>what is better, a single 5090 or two R7900s?
For AI, the 5090. Unfortunately it's not even a contest. AMD prices its cards so close to nvidia's that people said AMD blew it this year, choosing to nickel and dime buyers instead of expanding its market share. They joined nvidia in the price gouging. Their "AI" card is being sold at a fraction of the nvidia price because it will be a fraction as useful.