

That’s my bet too. They weren’t hosting the site itself on GCP, but they were using Google for trust and safety services, and I’d bet one of those services was anti-scraping protection (things like IP blocking and captchas), which would explain why scraping suddenly became a problem for them the day their contract ended. It can’t be a coincidence.
I’m also shocked that this happened the day after their contract with Google for trust and safety services ended. Totally a coincidence, right? One of those services surely couldn’t have been anti-scraper protection, could it?