I spent 8 hours debugging datetime parsing in a PySpark migration. Every approach failed: the built-in functions silently returned NULLs, plain Python UDFs took 6 hours to run, and pandas UDFs still failed on 8% of rows.
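For anyone hitting the same wall, here is a minimal sketch of that silent-NULL failure mode and the usual coalesce-over-formats workaround (the column name, sample rows, and formats are hypothetical, not from the original migration):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample: one column mixing several datetime formats.
df = spark.createDataFrame(
    [("2021-03-01 12:00:00",), ("03/01/2021 12:00",), ("garbage",)],
    ["ts_raw"],
)

# With spark.sql.ansi.enabled=false (the default in Spark 3.x),
# to_timestamp() returns NULL on any row that doesn't match the
# pattern -- the "silent NULLs" described above -- so try each
# known format in turn and only end up NULL if all of them fail.
parsed = df.withColumn(
    "ts",
    F.coalesce(
        F.to_timestamp("ts_raw", "yyyy-MM-dd HH:mm:ss"),
        F.to_timestamp("ts_raw", "MM/dd/yyyy HH:mm"),
    ),
)
parsed.show(truncate=False)  # "garbage" still parses to NULL, but now visibly and by design
```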
Sure, but the harder part is measuring whether the developer is actually 5% faster. Otherwise you can make the same case for every $10/mo subscription service in the world, and so we should all be operating at infinite efficiency.
> Sure, but the harder part is measuring whether the developer is actually 5% faster. Otherwise you can make the same case for every $10/mo subscription service in the world, and so we should all be operating at infinite efficiency.
If each purchase cuts the remaining task time by 5%, then after n such purchases the developer's task time drops to 0.95^n of the original, i.e. a saving of 100*(1-0.95^n)%. That saving only approaches 100% asymptotically, so each additional $10/month improvement buys a smaller absolute gain: diminishing returns, not infinite efficiency.
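To make the compounding concrete, a quick sketch in plain Python (assuming each improvement independently cuts the remaining time by 5%):

```python
# Each "5% faster" purchase cuts the *remaining* task time by 5%,
# so after n purchases the remaining time is 0.95**n and the
# total saving is 1 - 0.95**n.
for n in (1, 5, 10, 20, 50):
    saved = 1 - 0.95**n
    print(f"n={n:2d}: {saved:6.1%} of the original time saved")
# n= 1:  5.0%; n=10: 40.1%; n=50: 92.3% -- each extra $10/mo
# buys a smaller absolute gain, hence diminishing returns.
```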
I agree. I can see these AI assistants becoming a game changer and eventually a requirement for keeping up, but the costs will be prohibitive for engineers in many regions.