Even if they didn't, seeing the limits of those models in different environments would certainly help the community overall. And if sharing them saved even a tenth of the energy elsewhere that it saves Google, that would be an enormous immediate reduction in energy consumption on a global scale, so it's hard to see what the downside would be.
If one were truly concerned about 'democratizing AI', as companies like Google so often claim, then sharing access to the trained models would arguably be far more effective than just sharing research papers, which many companies don't even know how to implement. In fact, the large majority of companies responsible for energy consumption don't have data at anywhere near Google's scale, so I would go as far as to call this a bad-faith deflection from the beginning.
It's exactly because the majority of companies don't know how to implement those models that sharing them wouldn't do any good. This is the "give me teh codez" of AI. Google's models are specific to Google's environment, and they would be more likely to generate waste (at least in the time and money required to implement them) than to reduce it.