I'd like you to think about the TDD article harder -- testing isn't easy, and asserts aren't substitutes for good tests. Just because an ORM solves SQL injection doesn't mean it has protected you against bad data. And how do you gain confidence that pending changes won't regress prod? Asserts are great for bailing out at runtime (which nobody actually wants in practice), but they can't predict which builds will fail. For that you need sample inputs, and those sample inputs are the basis of every test :)
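To make that concrete, here's a minimal sketch of sample inputs driving a test rather than a runtime assert; the normalize_age helper and its validation rules are hypothetical, just standing in for the kind of bad-data check an ORM won't do for you:

```python
import unittest

def normalize_age(raw):
    """Hypothetical validation the ORM won't do for us."""
    age = int(raw)  # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"implausible age: {age}")
    return age

class NormalizeAgeTest(unittest.TestCase):
    # The sample inputs are the test: each one documents a case we claim to handle.
    def test_accepts_plausible_ages(self):
        for raw, expected in [("0", 0), ("42", 42), ("150", 150)]:
            self.assertEqual(normalize_age(raw), expected)

    def test_rejects_bad_data(self):
        for raw in ["-1", "999", "NaN", "forty-two"]:
            with self.assertRaises(ValueError):
                normalize_age(raw)

if __name__ == "__main__":
    unittest.main()
```

The point is that the build fails the moment a change breaks one of those concrete cases, instead of an assert firing in front of a user at runtime.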
Also, coordinating multiple processes/threads/drones/etc. is easy when the mutex is reliable and transactionally safe. Often neither is true, especially for systems that scale to tens of millions of users, with their data partitioned or sitting in volatile containers. What patterns should developers consider?
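One pattern worth considering is optimistic concurrency with a compare-and-swap instead of a distributed mutex. This is only a rough sketch: the in-memory VersionedStore below stands in for whatever partitioned datastore actually provides the atomic conditional write, and all the names are illustrative:

```python
import threading

class VersionedStore:
    """Stand-in for a datastore that offers an atomic conditional write."""
    def __init__(self):
        self._lock = threading.Lock()  # simulates the store's own atomicity
        self._data = {}                # key -> (version, value)

    def read(self, key):
        with self._lock:
            return self._data.get(key, (0, None))

    def compare_and_swap(self, key, expected_version, new_value):
        """Write only if nobody else has written since we read."""
        with self._lock:
            version, _ = self._data.get(key, (0, None))
            if version != expected_version:
                return False                      # lost the race: caller retries
            self._data[key] = (version + 1, new_value)
            return True

def increment(store, key, retries=10):
    # Read-modify-write loop: no lock held between the read and the write,
    # so a stale writer simply fails the CAS and tries again.
    for _ in range(retries):
        version, value = store.read(key)
        if store.compare_and_swap(key, version, (value or 0) + 1):
            return True
    return False  # give up after bounded retries

store = VersionedStore()
threads = [threading.Thread(target=increment, args=(store, "counter")) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(store.read("counter"))  # (8, 8) if every writer eventually won a round
```

The design choice here is to make contention a retry problem for the writer rather than a correctness problem for the lock, which tends to survive partitions and container churn better than hoping the mutex stays healthy.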
http://www.joelonsoftware.com/navlinks/fog0000000262.html