Agreed, and the chance of it backfiring on them is pleasingly high. If the compute moat for initial training gets lower (e.g. ternary/binary models), or distributed training (Hivemind, etc.) takes off, or both, or something new arrives, all bets are off.
Thanks muchly, I appreciate the long-term view so much more; the media gives us a week or so and then the cycle is over. (And from a fellow Oz Linux dev pundit, I also speak from a T580, so "upgrade your desktop…" applies to me as well.) It seems like my next laptop will likely be an AMD 16. I'm in no hurry, but the potential of 8x OCuLink for LLMs is highly attractive. Have you been keeping an eye on that? Local is king…