•  etrotta   ( @etrotta@beehaw.org ) 
    5 · 1 month ago

    To be fair, I wouldn’t count “loading the whole model into VRAM” as part of the per-request cost, since they can just keep it resident between requests, and the active parameter count might be down to hundreds of billions or tens of billions instead of trillions. But even after all those improvements, it should still be orders of magnitude more expensive than a normal search, which just makes their decision even crazier.
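
    For what it’s worth, here’s a rough back-of-envelope sketch of that comparison. Every number in it is an assumption I picked for illustration (active parameter count, tokens per answer, GPU throughput and price, and the cost of a classic search query), not a measurement, so the output only shows the shape of the arithmetic, not real costs:

    ```python
    # Back-of-envelope: marginal cost of one LLM-generated answer vs. one
    # classic keyword-search lookup, ignoring the one-time cost of loading
    # the weights into VRAM. All constants below are assumptions.

    ACTIVE_PARAMS = 200e9        # assumed active parameters per token
    TOKENS_PER_ANSWER = 1000     # assumed generated tokens per response
    FLOPS_PER_TOKEN = 2 * ACTIVE_PARAMS  # ~2 FLOPs per parameter per token

    GPU_FLOPS = 4e14             # assumed effective GPU throughput (FLOP/s)
    GPU_COST_PER_HOUR = 3.0      # assumed $/hour for that GPU capacity

    SEARCH_COST = 1e-5           # assumed cost of one classic search query ($)

    llm_flops = FLOPS_PER_TOKEN * TOKENS_PER_ANSWER
    llm_seconds = llm_flops / GPU_FLOPS
    llm_cost = llm_seconds / 3600 * GPU_COST_PER_HOUR

    print(f"LLM answer:   ~${llm_cost:.5f}")
    print(f"Search query: ~${SEARCH_COST:.5f}")
    print(f"Ratio:        ~{llm_cost / SEARCH_COST:.0f}x")
    ```

    With these made-up inputs the LLM answer comes out dozens of times pricier than the search lookup; change the assumptions and the ratio moves around a lot, but the gap stays large either way.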