• it’s not the game that does it, it’s literally the graphic cards that does it

    The game is just software; it executes on the GPU and CPU. DLSS (proprietary) and XeSS (OSS) are both libraries for driving the AI bits of the cards for upscaling, because those units weren’t really being used for anything else. Game devs have the skills to use them just like regular AI devs do.

    By AI here I mean what is traditionally meant by “game AI”: pathfinding, decision-making, co-ordination, etc. There is a Counter-Strike bot that uses neural nets (on the CPU), and it’s been around for decades now; it is trained the way normal bots are trained. You can train an AI in a game and then use that AI as NPCs, enemies, etc. (a rough sketch of what that could look like is at the end of this comment).

    We should use the AI cores to do AI.
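
    Roughly what the “train it, then ship it as the NPC brain” idea looks like, as a toy sketch in C++. Everything here is made up for illustration: the feature names, layer sizes, and weights are placeholders for whatever a real training run would produce.

    ```cpp
    // Toy feed-forward "policy net" deciding an NPC action from game state.
    // All feature names, layer sizes, and weights are invented placeholders;
    // a real game would load weights trained offline (or in-game).
    #include <algorithm>
    #include <array>
    #include <cstdio>
    #include <string>

    constexpr int kInputs  = 3;  // e.g. distance-to-player, own health, ammo
    constexpr int kHidden  = 4;
    constexpr int kActions = 3;  // attack, take cover, flee

    // Hypothetical pre-trained weights.
    const float W1[kHidden][kInputs] = {
        { 0.8f, -0.5f,  0.3f}, {-0.6f,  0.9f,  0.1f},
        { 0.2f,  0.4f, -0.7f}, {-0.3f, -0.2f,  0.6f}};
    const float B1[kHidden] = {0.1f, -0.1f, 0.05f, 0.0f};
    const float W2[kActions][kHidden] = {
        { 0.7f, -0.4f,  0.5f, -0.2f},
        {-0.3f,  0.6f,  0.2f,  0.4f},
        { 0.1f,  0.3f, -0.6f,  0.8f}};
    const float B2[kActions] = {0.0f, 0.1f, -0.1f};

    int chooseAction(const std::array<float, kInputs>& features) {
        float h[kHidden];
        for (int i = 0; i < kHidden; ++i) {              // hidden layer, ReLU
            float s = B1[i];
            for (int j = 0; j < kInputs; ++j) s += W1[i][j] * features[j];
            h[i] = std::max(0.0f, s);
        }
        int best = 0;
        float bestScore = -1e30f;
        for (int a = 0; a < kActions; ++a) {             // output layer, argmax
            float s = B2[a];
            for (int i = 0; i < kHidden; ++i) s += W2[a][i] * h[i];
            if (s > bestScore) { bestScore = s; best = a; }
        }
        return best;
    }

    int main() {
        const std::string names[kActions] = {"attack", "take cover", "flee"};
        // Normalised game-state features: player is close, health is low, some ammo.
        std::array<float, kInputs> state = {0.2f, 0.15f, 0.5f};
        std::printf("NPC action: %s\n", names[chooseAction(state)].c_str());
    }
    ```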

      • what is the benefit over just using classical algorithms

        Utilisation. A CPU isn’t really built for deep neural-network inference, so it can’t really do realistic AI within the frame budget left over after everything else; this is famously why games have bad AI. Training NPC behaviour with machine-learning techniques could make the NPCs more realistic or smarter, and offloading the inference to the AI cores would keep it within a reasonable frame budget (see the sketch below).
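
        To make the frame-budget point concrete, here is a toy sketch (C++, numbers made up): give NPC “thinking” a fixed slice of each frame, stop when the slice is used up, and carry on with the remaining NPCs next frame. The 2 ms budget, 500 NPCs, and the dummy evaluateNpc() workload are all assumptions.

        ```cpp
        // Toy frame-budgeted AI update: NPC decisions get a fixed time slice
        // per frame; whatever doesn't fit waits for the next frame.
        #include <chrono>
        #include <cmath>
        #include <cstdio>

        using Clock = std::chrono::steady_clock;

        // Stand-in for per-NPC inference / decision logic.
        float evaluateNpc(int npcId) {
            float acc = 0.0f;
            for (int i = 0; i < 5000; ++i) acc += std::sin(npcId * 0.001f + i);
            return acc;
        }

        int main() {
            const int numNpcs = 500;
            const auto aiBudget = std::chrono::milliseconds(2);  // slice of a ~16 ms frame
            int cursor = 0;                                      // next NPC to update

            for (int frame = 0; frame < 5; ++frame) {
                const auto start = Clock::now();
                int updated = 0;
                // Update NPCs round-robin until this frame's AI budget runs out.
                while (updated < numNpcs && Clock::now() - start < aiBudget) {
                    evaluateNpc(cursor);
                    cursor = (cursor + 1) % numNpcs;
                    ++updated;
                }
                std::printf("frame %d: updated %d/%d NPC brains\n",
                            frame, updated, numNpcs);
            }
        }
        ```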

        • I see. You want to offload AI-specific computations to the Nvidia AI cores. Not a bad idea, although it does mean that hardware that doesn’t have them will have more CPU load, so perhaps the AI will have to be tuned down based on the hardware it runs on…

          • so perhaps the AI will have to be tuned down based on the hardware it runs on…

            Yes, similar to ray tracing, which still needs a traditional pipeline alongside it: with AI you will have an “enhanced” path (neural nets) and a “basic” path (if statements). A sketch of that split is below.
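
            A toy sketch of that split (C++). The hasAiCores() check is a placeholder; a real engine would query the GPU/driver, and the NeuralBrain body here just stands in for actual neural-net inference on the AI cores.

            ```cpp
            // "Enhanced" vs "basic" NPC decision paths behind one interface,
            // chosen at startup based on whether AI hardware is available.
            #include <cstdio>
            #include <memory>

            struct NpcState { float distanceToPlayer; float health; };
            enum class Action { Attack, TakeCover, Flee };

            struct NpcBrain {                     // common interface for both paths
                virtual ~NpcBrain() = default;
                virtual Action decide(const NpcState& s) = 0;
            };

            struct NeuralBrain : NpcBrain {       // "enhanced": would run a trained net
                Action decide(const NpcState& s) override {
                    // Placeholder for inference offloaded to the AI cores.
                    float score = 0.9f * s.health - 0.4f * s.distanceToPlayer;
                    return score > 0.3f ? Action::Attack : Action::TakeCover;
                }
            };

            struct RuleBasedBrain : NpcBrain {    // "basic": plain if statements
                Action decide(const NpcState& s) override {
                    if (s.health < 0.25f) return Action::Flee;
                    if (s.distanceToPlayer < 0.5f) return Action::Attack;
                    return Action::TakeCover;
                }
            };

            bool hasAiCores() { return false; }   // assumption: capability-query stub

            int main() {
                std::unique_ptr<NpcBrain> brain;
                if (hasAiCores()) brain = std::make_unique<NeuralBrain>();
                else              brain = std::make_unique<RuleBasedBrain>();

                NpcState s{0.4f, 0.8f};
                std::printf("decision: %d\n", static_cast<int>(brain->decide(s)));
            }
            ```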