• Training AI models takes a lot of development on the software side and is computationally intense on the hardware side. Loading a shitload of data into the process and letting the training algorithms work out how to value each of billions or even trillions of parameters takes a lot of storage space, memory, and raw computation through ASICs dedicated to that task.

    Using pre-trained models, though, is a far less computationally intensive task. Once the parameters are defined on that huge training set, the model can be applied by software that simply takes the parameters already learned in training and applies them to smaller data sets.
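    To make that concrete, here's a toy sketch of what inference boils down to: applying frozen parameters to new input with a forward pass of multiply-adds. The weights and numbers here are made up for illustration; a real model's parameters come from the expensive training phase.

```python
# A toy "pre-trained model": fixed parameters applied to new inputs.
# These values are invented for illustration; in a real model they
# would be the output of large-scale training.
WEIGHTS = [0.5, -0.2, 0.8]
BIAS = 0.1

def predict(features):
    """Inference: one pass of multiply-adds with frozen parameters.

    No gradients, no optimizer state, no training data -- just
    arithmetic, which is why inference is so much cheaper than training.
    """
    return sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS

result = predict([1.0, 2.0, 3.0])
```

    Scaled up to billions of parameters this is still heavy arithmetic, but it is a fixed, predictable workload that a phone's NPU can be sized for, unlike open-ended training.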

    So I would expect the AI/ML chips in actual phones would continue to benefit from AI development, including models developed many chip generations later.

    • The thing is more complicated than that. Moreover, there is a wish/need to train or fine-tune models locally. This is not comparable to the initial training of ChatGPT-like models, but it still requires some power. Just today I read that some Pixel 8 video-improvement features will not be ported to the Pixel 7 because they need Tensor G3 power.
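    The extra cost of local fine-tuning can be seen even in a toy example: unlike a single inference pass, every update step needs gradients computed over the local data and repeated passes over it. This is a minimal gradient-descent sketch on a one-parameter-pair linear model; all values are invented for illustration.

```python
# Toy local fine-tuning: nudge a small set of "pre-trained" parameters
# with gradient descent on local data. All numbers are made up.
weight, bias = 0.5, 0.1                       # starting ("pre-trained") values
data = [(1.0, 1.2), (2.0, 2.1), (3.0, 3.3)]   # local (input, target) pairs
lr = 0.01                                     # learning rate

for _ in range(100):
    # Each step computes gradients over the whole local set --
    # repeated passes plus gradient bookkeeping that pure inference skips.
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (weight * x + bias) - y         # prediction error
        grad_w += 2 * err * x / len(data)     # d(mean squared error)/d(weight)
        grad_b += 2 * err / len(data)         # d(mean squared error)/d(bias)
    weight -= lr * grad_w
    bias -= lr * grad_b
```

    Even this trivial loop does orders of magnitude more arithmetic than one forward pass, which is roughly why on-device training and fine-tuning features end up gated on newer, beefier NPUs.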