A growing problem at the interface of AI/LLMs & science is that while they raise research productivity (measured by publications & citations), they're also starting to narrow the problems that scientists work on.
It steers them towards areas & issues where there is abundant & reliable data.
When such data is scarce, the productivity gains from using AI/LLMs are smaller, & so this acts as a disincentive in a profession where publication is (often) everything.
This looks dangerous!
