> but don't try to think of this stuff as anything on the same level as the sciences.
Depends on what kind of sciences we're talking about. For an enormous span of the sciences, serious replication and funding crises have emerged that place whole fields on the verge of, if not in the middle of, conceptual and practical collapse (hence the continuing brain-drain from science into tech).
At a deeper level, I think you're making a lot of assumptions that don't seem rooted in reality. Worse still, you don't seem to feel any need to root them in reality -- you minimize and diminish uncomfortably sharp edges in a very whiggish kind of historiography. Why is that? Your conjecture is that architecture is not well understood because it is not formalized. What if you're merely conflating cause and effect, and software architecture has not been formalized because it is not well understood? Then formalization is merely a memento of something else -- something far more primal, something far deeper. And I think (though only you can say) that you might be afraid of that.
When we learn how to use powerful tools, we want to use them for everything. That includes philosophical ones. What if you're taking it for granted that architecture will /ever/ reach the level of formalization of the hard sciences? After all, there is something a bit hubristic about that. Wouldn't it be ironic to have such a deep faith in formalization so as to ignore (and even invert) the fundamentally unidirectional flow from phenomena to formal abstraction? But that seems to be what you're doing. After all, you have no proof that formalism, as a tool in and of itself, lends any utility to making sense of and building structures for the world.
It is the observations, the data accumulated in enough curious detail, preserved through painstaking work by the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing. It is onanistic and serves no formal purpose. And frankly, software as a field is very young by human standards. For it not to have the level of formal abstraction that far older human pursuits have is a function of its youth. I think that to love formal abstraction for its own sake is an obscurantism of its own, and this kind of thing has a name: it's called scientism.
Bertrand Russell tried to axiomatize mathematics into logic, and completely failed. And not because he was unintelligent. He failed because the opposite premise was vindicated by reality. What will you do if the same holds true for you here? Even he was so taken and seduced by the elegant possibility of axiomatizing mathematics that he was unable to see the logistical (and finally logically provable) impossibility of doing so. In the end even he had to give up his endeavor as directly fruitless (although the silver lining was that his work did pave the way for a lot of crucial discoveries). Consider whether you're doing the same thing.
Aside from the context of this thread, you've really eloquently put into words something I've started to believe over the last few years. Please never delete this comment ;) I'm sure to bookmark it.
> It is the observations, the data accumulated in enough curious detail, preserved through painstaking work by the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing.
>At a deeper level, I think you're making a lot of assumptions that don't seem rooted in reality. Worse still, you don't seem to feel any need to root them in reality -- you minimize and diminish uncomfortably sharp edges in a very whiggish kind of historiography. Why is that?
What the hell is whiggish historiography? You want me to come up with some textbook historical account of how software design has moved in circles? I assume it's obvious, and it's just generally hard to write about a trend that defies exact categorization. It probably can be done, I just can't spare the effort.
As for your other stuff, I respectfully request that you keep this argument formal rather than comment on my personal character. Nobody appreciates another person speaking about their personal character in a negative and demeaning way. You are doing exactly that, and there's really no point or need unless your goal is to set me off personally and have this argument degrade into something where we both talk about each other personally.
>What if you're merely conflating cause and effect, and software architecture has not been formalized because it is not well understood? Then formalization is merely a memento of something else -- something far more primal, something far deeper. And I think (though only you can say) that you might be afraid of that.
If it's a paradox like that, then there should be existing evidence for your reasoning. Are there any attempts at formalizing software organization in academia? Have those attempts been ignored, or have they actually failed?
Either way, in the industry, it's pretty clear to me that a lot of energy is spent discussing, debating and coming up with design analogies year after year. In my opinion this is actually a misguided attempt at optimization. People think the latest software design analogy or architecture pattern is going to save the world, but it's always just a step to the side rather than forward, and that's because most people in the industry don't even know what it means to truly "step forward."
> What if you're taking it for granted that architecture will /ever/ reach the level of formalization of the hard sciences?
Well, I'm obviously betting that it can. Either way, there's no denying that, given a formal definition of a problem, a single best solution (or several equally good ones) does exist. In other words, there must exist a configuration of assembly instructions that fits a general definition of the best solution to a problem. Whether a formal method other than brute force can help us find or identify this solution remains to be seen, but the existence of a best solution is enough for me to "predict" that a formal method for finding this thing exists.
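Here's a toy sketch of the kind of thing I mean (the three-instruction "machine" and the spec below are invented purely for illustration; real instruction sets and real problems are astronomically larger): given a formal spec, brute force over programs in order of increasing length is guaranteed to find a shortest program that satisfies it.

```python
from itertools import product

# Hypothetical two-register toy machine: each "instruction" maps (a, b) -> (a, b).
INSTRUCTIONS = {
    "INC_A": lambda a, b: (a + 1, b),
    "ADD_B_TO_A": lambda a, b: (a + b, b),
    "SWAP": lambda a, b: (b, a),
}

def run(program, a, b):
    # Execute the program and return the final value of register A.
    for op in program:
        a, b = INSTRUCTIONS[op](a, b)
    return a

def satisfies_spec(program):
    # Formal spec: for every small input pair, register A must end up holding a + b + 1.
    return all(run(program, a, b) == a + b + 1
               for a in range(4) for b in range(4))

def shortest_program(max_len=4):
    # Enumerate programs in order of increasing length, so the first hit is a shortest one.
    for length in range(1, max_len + 1):
        for program in product(INSTRUCTIONS, repeat=length):
            if satisfies_spec(program):
                return program
    return None

print(shortest_program())  # ('INC_A', 'ADD_B_TO_A') with this instruction ordering
```

Obviously this doesn't scale, which is exactly why I'm betting a formal method smarter than brute force exists.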
>It is the observations, the data accumulated in enough curious detail, preserved through painstaking work by the investigator and archiver, which provide material for abstraction to compact into greater expression: succinctness, clarity, and semantic power. But without that raw material, the abstraction is nothing but window dressing. It is onanistic and serves no formal purpose.
You're conflating science and logic. Science is about observing reality and coming to conclusions based on observations and the assumption that logic and probability are real. It involves a lot of statistics.
Logic is just a game with rules. We create some axioms and some formal rules and play a game where we come up with theorems. The game exists apart from reality. We build computers on the assumption that logic applies to reality. But more importantly we use computers on this assumption as well.
Thus the computer is in fact a tool designed to facilitate this logic game. The computer is in itself a simulator of a simple game involving formal rules with assembly language terms as the axioms.
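To make the "game" concrete, here's a minimal toy sketch (the propositions and the single inference rule are made up for illustration, and a real computer is of course vastly more elaborate): axioms plus a mechanical rule, played entirely apart from any observation of reality.

```python
# Axioms of the toy game -- arbitrary strings, deliberately detached from reality.
AXIOMS = {"P", "P -> Q", "Q -> R"}

def modus_ponens(facts):
    # One round of the game: from X and "X -> Y", derive Y.
    derived = set(facts)
    for f in facts:
        if " -> " in f:
            antecedent, consequent = f.split(" -> ", 1)
            if antecedent in facts:
                derived.add(consequent)
    return derived

def play(axioms):
    # Keep applying the rule until nothing new can be derived (a fixed point).
    facts = set(axioms)
    while True:
        new_facts = modus_ponens(facts)
        if new_facts == facts:
            return facts
        facts = new_facts

print(sorted(play(AXIOMS)))  # ['P', 'P -> Q', 'Q', 'Q -> R', 'R']
```

Nothing in that game needs an experiment or an observation to run; that's the sense in which logic is separate from science.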
That's why formality applies to software and we can disregard science for a good portion of computing.
Scientism has nothing to do with this. You are at an extreme point of misunderstanding here.
>Bertrand Russell tried to axiomatize mathematics into logic, and completely failed. And not because he was unintelligent. He failed because the opposite premise was vindicated by reality.
No, he failed because he assumed that a logical system must be both consistent and complete. Others have since shown (Gödel, most famously) that logical systems can still exist within our logical games, so long as they do not hold both of those properties, consistency and completeness, at the same time.
Also the logical premise you are speaking of was not vindicated by reality (aka scientific observations). It was vindicated by additional formal logical analysis by another logician. Reality and logic are two separate things. The main connection in those two areas is that science assumes logic and probability are part of reality. Outside of that assumption the two have zero relation.
> Thus the computer is in fact a tool designed to facilitate this logic game. The computer is in itself a simulator of a simple game involving formal rules with assembly language terms as the axioms.
It's somewhat poetic, beautiful, and accidentally funny that your argument frays at the same blind spots as your tone does: the gap between theoretical and on-the-ground meaning. It is true in a theoretical sense that you could describe a computer as a simulator which can be modeled with formal rules. But that tells me nothing about why the computer, as a mass-market electronic device running mass-market software, occupies the societal role it does, nor where it will go, nor why it was an innovation that transformationally altered society in the same way as the flame, the wheel, the chariot, the bow, the written word, the loom, and the printing press.
Now, if you're not going to try and engage with any of that because "It probably can be done, I just can't spare the effort," what do you think is more likely:
1) your complaint most loudly voices new and fresh insight to the industry which it had previously missed
2) your complaint most loudly voices your own gaps in curiosity and understanding
If it's 2, then I just think that's sad. I see a lot of engineers (frequently early in their careers and suffering from a bit of imposter syndrome) get stuck in a rut where they dogmatically devour formalism as a vehicle to elevate the level and quality of their engineering to the rigor seen in the sciences. But the appropriation of surface-level aesthetics masks a methodological inability to do the dirty work of getting the actual job done via "less sexy" paths (as reality often requires), and to differentiate and harden critical sections when necessary. It's sad because it generally speaks only to their own lack of exposure to just how much power and impact software can be built to have, and it takes the momentary weakness of inexperience and calcifies it into eternal inexperience and terminal juniority, owing to an unrealistic, radioactive hubris far out of proportion with actual capability.
> So? Then tell me how my gap in understanding is reflected in my POINT rather then in my character.
Not only have I already done so, so too have a bunch of other accounts. Why don't you re-read what they've all said and think about it some more? I'm sure you'll be able to find something more useful to add if you do that.
> So what? Who cares if it's me? How does this matter to you and what does this have to do with the topic?
Because if you do that, you will never fully mature as an adult human being. You'll repel others away from you and then feel the pain of isolation and loneliness. You'll watch others live rich lives you wish you had. I think that is a very unfortunate path. I hope you will avoid that pain for yourself before it's too late and you can't redo all the time you will inevitably kick yourself for wasting. Life goes by very quickly and you don't get chances for do-overs.
Thank you for putting up the good fight. It will take a high-profile catastrophe akin to the Challenger disaster for the industry to move on from the current state and formalize even basic things we've known for decades, such as category theory and functional programming. And unfortunately, the results will be heavy regulation. Until then, we will just have to watch history repeat through these types of discussions. [0]
Please don't continue this thread. Whether you agree with me or not, it has devolved into personal and character insults by the other poster. You can continue the discussion with me somewhere else in the thread just not under yowlingcat. Blood has been spilled (figuratively) and tensions are extremely high.
Suffice it to say, I'm not talking directly about category theory; that's just one possible avenue of formalization (and one that seems quite unlikely). Other formalisms that have successfully made it into programming (and have changed programming for the better) include complexity theory, type theory and the Rust borrow checker. Two of those automate correctness and one helps in analyzing execution speed. My proposal is about a formalism for modular organization, and while category theory is a great candidate, it has an incredibly high learning curve, which is why I agree with you that it is an unlikely candidate. I mean, it does sort of exist already as the type checker in Haskell... I'm just saying I don't expect people to actually try to learn CT.
Either way, I believe more formality is possible, and we don't have to force the entire industry to use monads and FP (FYI, I prefer that style myself). If a formal theory of modular organization ever comes into vogue, my guess is that it will exist more popularly as a "design pattern" in the industry rather than as a programming framework or formal method. People can choose to use a pattern, or bend the rules... the main point and the crux of my entire effort is that we need design patterns that provably improve "design." Without proof, most of our efforts seem like steps to the side rather than forward.
If you have a reply, please place it directly under my parent post, not here.
edit:
That github link is a picture-perfect mirror of what I'm talking about in the industry. Thumbs up for bringing it to my attention.
> The main connection in those two areas is that science assumes logic and probability are part of reality.
That's definitely not the case with respect to science, which is empirical and not rationalist. What you're describing in that sentence is Platonic realism, not science.
Examine the quotation in the first section: "Empiricism in the philosophy of science emphasises evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation."
The key word here is "solely," meaning that empiricism involves the addition of observational evidence on top of other forms of analysis.
Additionally, take this sentence: "Empiricism, often used by natural scientists, says that 'knowledge is based on experience' and that 'knowledge is tentative and probabilistic, subject to continued revision and falsification'."
Note that falsification, as used here, cannot exist without assuming logic is true.
True in what sense? I appreciate the rigor you're trying to bring to this discussion. Most of the points regarding logic you raise here and in your other replies on the thread have been comprehensively debunked in Kant's Critique of Pure Reason.
One omission is the nature of human senses and perceptions, the understanding of which eliminates any possibility whatsoever of conclusively distinguishing the statement "logic is true" from "logic appears true to us but its actual truth status is unknowable". This is just a toy explanation; the Critique is far more comprehensive and rigorous.
> empiricism involves the addition of observational evidence on top of other forms of analysis.
That may be the case in circumstances where it's convenient and other forms of analysis serve as useful tools or mental aids. However, empirical observation always trumps other forms of analysis when they're in conflict. Otherwise we're no longer discussing science.
>"logic is true" from "logic appears true to us but it's actual truth status is unknowable".
That's just a paradox. No point in going too deep into the paradox, as it's unresolvable. At best it can be "assumed" that logic is true. Replace "assume" with "pretend"; it's the same thing. Science pretends logic is true. Whether it's actually true or not is the paradox.
>That may be the case in circumstances where it's convenient and other forms of analysis serve as useful tools or mental aids. However, empirical observation always trumps other forms of analysis when they're in conflict. Otherwise we're no longer discussing science.
Sure, but this can only be done in conjunction with logic. If you observe evidence and the evidence leads to a conclusion, the "leading to a conclusion" part is done through logic.
Another way to put it is: if logic wasn't real, no observation would make sense. If I observed a pig, then pigs exist by logic. If logic wasn't real then the observation doesn't imply pigs exist. By logical analysis, an observation is therefore useless without logic.
Suffice it to say, at this level of analysis we can clearly conclude that science "pretends" logic and probability are true. Getting deeper into this dives into paradoxes, which are ultimately uninteresting dead ends to me because they're unresolvable.
What exactly gives you the right to do that? And why is your reason any better than a different one provided by someone else who wants to argue the opposite?
> Replace "assume" with "pretend" it's the same thing.
We've now turned science into dogma. At what point do we then need to become aware that we're pretending? Surely there comes a point where pretending costs us epistemic legitimacy. Where is that point? And what response do we offer to an interlocutor who insists the earth is flat and our logical deduction of its spherical shape is "pretend"?
> Sure, but this can only be done in conjunction with logic. If you observe evidence and the evidence leads to a conclusion, the "leading to a conclusion" part is done through logic.
> If I observed a pig, then pigs exist by logic. If logic wasn't real then the observation doesn't imply pigs exist.
Both of these statements are nonsensical and are debunked in the Critique of Pure Reason.
> Getting deeper into this dives into paradoxes which are ultimately uninteresting dead ends to me because it's unresolvable.
I have a feeling we're dealing with a small bit of motivated reasoning with respect to interestingness here.
>What exactly gives you the right to do that? And why is your reason any better than a different one provided by someone else who wants to argue the opposite?
Why use science if it won't work without assuming the principles it's built on are true? We assume science is true, yet we established that logic cannot be established to be true. Thus if logic cannot be established to be true, then science cannot be established to be true, so why do we use science?
The only other conclusion is we "assume" science is true and therefore "assume" logic is true even though we can't truly know if it's true.
>We've now turned science into dogma. At what point do we then need to become aware that we're pretending? Surely there comes a point where pretending costs us epistemic legitimacy. Where is that point? And what response do we offer to an interlocutor who insists the earth is flat and our logical deduction of its spherical shape is "pretend"?
I'm not a philosopher. I'm not into epistemology, as I'm not even entirely sure what it is. So if you dive into that world too deeply, the argument is over, because I can't argue with something I don't know about. Either you explain your points in layman's terms or the argument can't proceed very far, because I won't be able to understand you.
I'm just saying that "pretending" is the same thing as "assuming" We don't actually know if something is true, but we still use science as if it's true. The contradiction is what allows us to use the word "pretend" we know that it cannot be known yet we act as if it is known. Hence "pretend"
>Both of these statements are nonsensical and are debunked in the Critique of Pure Reason.
Well, declaring a statement nonsensical doesn't mean anything to me without you explaining the reasoning behind your declaration. Citing a book won't really do anything for me, because I haven't read the book. We're at a dead end here. Obviously I won't read the book because it's too long to read right now, and obviously you won't explain the book for the same reason, so for this point the argument is over... we've reached an impasse and can only agree to disagree unless you decide to explain the book to me.
>I have a feeling we're dealing with a small bit of motivated reasoning with respect to interestingness here.
I'm interested up to a point. If the point is a paradox I'm not interested in exploring the paradox. If that's the direction you're taking your argument then it's an impasse. Either way we're just debating nomenclature here.