> The UNIX philosophy of small tools that do one thing well is still very applicable. It's easier to deploy small changes quickly than to push massive deploys that can take a significant amount of time.
The UNIX philosophy was conceived in the context of a single computer. With dynamic linking, the cost of splitting out a new binary is close to zero, so making each module boundary an executable boundary makes some degree of sense.
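To make that concrete, here's a minimal sketch (in Go, purely for illustration; the tool and its behavior are hypothetical) of what a UNIX-style module-as-executable looks like. The entire interface is stdin and stdout:

```go
// upper.go: a toy UNIX-style filter. Its whole "API" is text in on
// stdin, text out on stdout; composing it with other tools costs one
// pipe and one fork/exec on the same machine.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		fmt.Println(strings.ToUpper(scanner.Text())) // uppercase each line
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "upper:", err)
		os.Exit(1)
	}
}
```

Deploying a new module here means dropping one more binary on the box and wiring it into a pipeline; there's no network hop, no provisioning, no on-call story.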
In the context of a cloud computing network, there's a lot more overhead to splitting out a million tiny modules. Cloud service provider costs, network latency, and defending against the possibility of downtime and timeouts are all things that UNIX programs never had to deal with, but they are everyday occurrences in the cloud.
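For contrast, here's a sketch of what that same tiny module costs once its boundary becomes a network boundary. Everything here is hypothetical (the endpoint, the retry count, the 500ms deadline); the point is the machinery that a local call never needs:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// callUpperService does remotely what the filter above does for free.
// The endpoint is made up; the deadlines, retries, and failure handling
// are the everyday cloud overhead described above.
func callUpperService(ctx context.Context, input string) (string, error) {
	endpoint := "https://upper.internal.example/v1?s=" + url.QueryEscape(input)

	var lastErr error
	for attempt := 0; attempt < 3; attempt++ { // naive retry against transient failures
		// Each attempt gets its own deadline: the service may be slow or down.
		attemptCtx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
		req, err := http.NewRequestWithContext(attemptCtx, http.MethodGet, endpoint, nil)
		if err != nil {
			cancel()
			return "", err
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			cancel()
			lastErr = err // timeout, DNS failure, connection refused...
			continue
		}
		body, readErr := io.ReadAll(resp.Body)
		resp.Body.Close()
		cancel()
		if readErr != nil || resp.StatusCode != http.StatusOK {
			lastErr = fmt.Errorf("attempt %d: status %d: %v", attempt, resp.StatusCode, readErr)
			continue
		}
		return string(body), nil
	}
	return "", fmt.Errorf("upper service unavailable after retries: %w", lastErr)
}

func main() {
	out, err := callUpperService(context.Background(), "hello")
	if err != nil {
		fmt.Println("error:", err) // in a real system: metrics, alerts, fallbacks
		return
	}
	fmt.Println(out)
}
```

And this is still the optimistic version: it says nothing about auth, service discovery, observability, or what the provider bills for keeping the thing running.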
The trade-offs are very different and so the UNIX philosophy is not equally applicable. You have to weigh the benefits against the risks.
> In the context of a cloud computing network, there's a lot more overhead to splitting out a million tiny modules.
Don't get me wrong: this can be taken to an extreme, and in my experience the places where MS seems to fail (the service goes unused, or is hated by the engineering teams) are the scenarios where it was absolute overkill.
> The trade-offs are very different and so the UNIX philosophy is not equally applicable.
There is a reasonableness standard that exists in both contexts and applies to both in the same way.
> There is a reasonableness standard that exists in both contexts and applies to both in the same way.
It applies to both in the same way only in the sense that each context has a definition of "reasonable" and exceeding it would be bad. But those definitions of reasonable are going to be completely different, because the environments are completely different.
I guess my point is that invoking the UNIX philosophy as such brings to mind mostly tools whose scope would frankly be far too small to justify deploying a whole cloud service. It would be better to treat deployable units on the cloud as something new, rather than trying to treat them as if they were a logical extension of single-node UNIX executables.