Hi there, I want to share some thoughts and want to hear your opinions on it.
Recently, AI development has been booming in game development too. Take NVIDIA ACE, for example, which would make it possible for NPCs to run an AI model and converse with players. There are also developments toward an AI-based alternative to ray tracing, where lighting, shadows and reflections are generated by AI, which would need less performance while achieving similar visual aesthetics to ray tracing.
So it seems like raster performance is already at a pretty decent level. And graphics card manufacturers are already putting more and more AI processors on their cards.
In my eyes, the next logical step would be to separate the graphics card's work, rasterisation and ray tracing, from the AI workload. This could result in a new kind of PCIe card, an AI accelerator, featuring a processor optimized for parallel processing and high data throughput.
This would allow developers to run more advanced AI models on the consumer's PC. For users without such a card, they could offer a cloud-based subscription system as a fallback, for example.
So what are your thoughts on this?
Your GPU is an AI accelerator already. Running a trained AI model is not as resource-demanding as training one. Unless local training becomes universal, AI accelerators for consumers make very little sense.
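A rough back-of-the-envelope sketch of why inference is so much cheaper than training (the 2N-FLOPs-per-sample rule of thumb and the model/dataset sizes here are assumptions for illustration, not measurements):

```python
# Rule of thumb: a forward pass over a model with N parameters costs
# roughly 2*N FLOPs per sample. The backward pass adds about twice the
# forward cost, so one training step is ~3x a forward pass -- and
# training repeats that over many samples and epochs.

def inference_flops(n_params: int, n_samples: int) -> int:
    """Approximate FLOPs to run inference on n_samples inputs."""
    return 2 * n_params * n_samples

def training_flops(n_params: int, n_samples: int, epochs: int) -> int:
    """Approximate FLOPs to train: forward + backward (~3x forward),
    repeated over the whole dataset for each epoch."""
    return 3 * 2 * n_params * n_samples * epochs

n = 7_000_000_000  # hypothetical 7B-parameter model

one_query = inference_flops(n, 1)
full_training = training_flops(n, 1_000_000, 3)  # 1M samples, 3 epochs

print(f"one inference query: {one_query:.1e} FLOPs")
print(f"full training run:   {full_training:.1e} FLOPs")
print(f"ratio: {full_training // one_query:,}x")
```

By this estimate a training run costs millions of times more compute than answering a single query, which is the gap the reply is pointing at.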
Apple’s been putting AI accelerators in phones for years. They use them for things like real-time face recognition to help the camera decide where to focus.
The GPU can do the same task, but AFAIK it uses something like 20x more power.
I think it totally makes sense to have hardware accelerated AI even if only to free up the GPU for other tasks.
The newest-gen GPUs already have sections dedicated to AI, so we effectively have dedicated AI accelerators already.
Yes, there are, but the OP is talking about discrete AI accelerators…