• I’m not a fan of this. I can see a lot of potential problems with a system designed like this. In particular, a focus on quantifying attributes in an institution creates an incentive to drive the attribute toward a better number without actually looking at the mechanisms underlying the data. This played out in the American education system over the last 20-30 years, with standardized testing driving an entire era of failed policymaking (No Child Left Behind) that resulted in people teaching to the test rather than teaching to the material.

      On a larger scale, this kind of sentiment analysis has exploded in companies over the last 10-20 years, with nearly every retailer now partnering with giants like Gallup and Nielsen to ask you how your experience was. Internal employee polling to gauge engagement has been pushed by the likes of Qualtrics, Gartner, and a slew of new startups focused on measuring and reporting on these kinds of metrics. If you’ve ever worked at a company and dealt with this kind of analysis, you’ll find a distinct lack of higher-level thinking - they hyperfocus on the score and how to increase it, never really paying much attention to what’s driving the score.

      • I am also not a fan. “Teaching to the test, rather than teaching to the material,” as you say, was already a problem before AI, and I agree it is about to get worse with this. Another point is that the psychological and emotional changes that occur during childhood and adolescence may differ from child to child. Will there be some ‘gentle pressure’ to ‘meet’ some data? And what does it do to children if the teacher looks at the data instead of just talking to them?

        I’m not an expert in this, but I feel very uneasy. It seems like an education system preparing you for a job at Amazon or the like.

        • To be fair, I don’t think AI is involved here; it’s a simple Likert scale whose data is aggregated. I could see the company attempting to use AI to sell talking points or more complicated analysis on top of this data, such as ‘proactively identifying’ students who might need more help, but that wasn’t mentioned in the article. Either way, I agree it creates perverse incentives centered on the data points and not the children.