Hi there, I want to share some thoughts and hear your opinions on them.

AI development has been booming recently, and game development is no exception. For example, NVIDIA ACE would make it possible for NPCs to run an AI model and converse with players. There are also efforts toward an alternative to ray tracing in which lighting, shadows and reflections are generated by AI, which would require less performance while looking similar to ray tracing.

So it seems like raster performance is already at a pretty decent level, and graphics card manufacturers are putting more and more AI processing hardware on their cards.

In my eyes, the next logical step would be to separate the graphics card’s work, namely rasterisation and ray tracing, from the AI workload. This could result in a new kind of PCIe card, an AI accelerator, featuring a processor optimized for parallel processing and high data throughput.

This would allow developers to run more advanced AI models on the consumer’s PC. For compatibility with systems that don’t have such a card, they could e.g. offer a cloud-based subscription service as a fallback.
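
To make the fallback idea concrete, here is a minimal sketch of how a game could prefer a local AI accelerator and fall back to a cloud service when the player’s PC doesn’t have one. Everything in it is hypothetical (the backend names, the endpoint URL, the detection flag); it only illustrates the pattern, not any real API.

```swift
import Foundation

// Hypothetical abstraction: the game only asks for a reply,
// it doesn't care where the inference actually runs.
protocol DialogueBackend {
    func reply(to prompt: String) async throws -> String
}

// Placeholder for inference on a dedicated local AI accelerator card.
struct LocalAcceleratorBackend: DialogueBackend {
    func reply(to prompt: String) async throws -> String {
        // A real implementation would hand the prompt to a model
        // running on the accelerator and return its output.
        return "(local model reply)"
    }
}

// Placeholder for the cloud-based subscription fallback.
struct CloudBackend: DialogueBackend {
    let endpoint = URL(string: "https://example.com/npc-dialogue")! // hypothetical service
    func reply(to prompt: String) async throws -> String {
        // A real implementation would POST the prompt to the service
        // and decode the reply from the response.
        return "(cloud reply)"
    }
}

// Prefer the local card when one is detected, otherwise use the subscription service.
func makeBackend(hasLocalAccelerator: Bool) -> any DialogueBackend {
    if hasLocalAccelerator {
        return LocalAcceleratorBackend()
    } else {
        return CloudBackend()
    }
}
```

The point is just that the same game code could run unchanged whether the player owns the card or uses the subscription.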

So what are your thoughts on this?

  •  thejml (@thejml@lemm.ee), 1 year ago

    Apple silicon (the M1, https://en.wikipedia.org/wiki/Apple_M1#Other_features, the M2, and their variants) has 16+ Neural Engine cores for on-chip AI, separate from the GPU cores. But it’s still a package deal.

    I could see them splitting it out for high-end AI clusters and dedicated servers, but I feel like their current goal is to make sure those cores are included in common hardware, so that everyone can leverage local AI and nobody has to worry about “does this person have the hardware to run this?” issues.

    I think the current industry thinking is that making those cores commonplace does more for the adoption of AI in everyday software than requiring a separate add-on card would.
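
    For what it’s worth, that is roughly how software already targets those cores today. The snippet below uses Core ML’s real MLModelConfiguration API to state a preference for the Neural Engine; “NPCDialogueModel” is a hypothetical class Xcode would generate from a model file bundled with the app.

    ```swift
    import CoreML

    // Ask Core ML to schedule this model on the CPU and the on-chip Neural Engine.
    // (.cpuAndNeuralEngine needs macOS 13 / iOS 16; .all lets Core ML also use the GPU.)
    func loadNPCModel() throws -> NPCDialogueModel {
        let config = MLModelConfiguration()
        config.computeUnits = .cpuAndNeuralEngine
        // "NPCDialogueModel" stands in for the class generated from a compiled
        // .mlmodel file shipped with the game.
        return try NPCDialogueModel(configuration: config)
    }
    ```

    The application only states a preference; the OS decides where the work actually runs, which is exactly why shipping those cores in every chip matters.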