Design Thinking is a subset of Systems Thinking (this is the polite interpretation).
By its very existence, Design Thinking does what Systems Thinking tried to avoid: it creates another category to put stuff into, divide and conquer. It is an over-simplified version of the original theories.
Better: Jump directly to Systems Thinking, Cybernetics and Systems Theory (and if measurements are more your thing, even try System Dynamics).
I can only recommend that anyone interested in this topic take a look at the work of one of the masters of Systems Thinking, Russell Ackoff:
https://m.youtube.com/watch?v=9p6vrULecFI
This talk from 1991 is several dozen books heavily condensed into one hour.
(Russell Ackoff is considered one of the founders of Operations Research and, ironically, came to be regarded as an apostate when he tried to reform the field he co-founded. He subsequently became a prominent figure in Systems Thinking.)
My 2c. I'll show myself out.
Design Thinking (and more broadly, human-centered design) is a pragmatic framework for doing product design in an effective and productive manner. Systems Thinking is a massively more general superset. I'm not really sure how you'd operationalize that on a design project, except by following first principles, which would essentially get you to DT / HCD.
Taking a theory (Systems Thinking), a mental model whose primary goal is to holistically identify, describe, and understand wholes, and reducing it to a set of methods or a framework for ease of use (the pragmatism) is exactly the wrong approach, in my opinion.
Systems Thinking and all of its application scenarios are based on epistemology. To turn it into a recipe is to do it wrong. The whole notion is that one size does not fit all.
Operationalizing Systems Theory for the case at hand is the responsibility, and the transfer function, of the operator applying it. The process itself yields understanding and should not be abbreviated.
I practiced Design Thinking at IDEO for 10 years, and I can assure you it's not "one size fits all." And you can onboard an intern or a client CEO in days, without requiring them to internalize a very abstract system for decomposing problems.
That may explain your motivation, but even ten years do not make it right, nor does the speed of teaching.
You are saying it yourself: internalising that very abstract system for decomposing problems, and adapting it, has a value of its own that you cannot replicate by pre-solving it. Spinning off Design Thinking only accomplished further segmentation of a space that was already too fractured, and it was a disservice to the field.
I don’t think we will approach a consensus here, and that’s fine.
It's always valuable to have a generalizable skill. But design is fundamentally a craft, an applied art. It's problem-solving. And like any craft, there are tools and techniques that are tried and true. You could approach woodworking with a ground-up Systems Thinking approach, but would you turn down the advice of a carpenter with 30 years of experience? Technically, all you need to understand woodworking is a physics textbook and maybe an organic chemistry textbook.
My guess is you're a software developer (as I am), and in my opinion the fatal flaw of our group is the incorrect belief that we could do anything or solve any problem by simply decomposing it into smaller and smaller components. The thing is, for a big enough problem, there are an almost infinite number of ways to break it down and then build it back up. In optimization terms, complex projects are highly nonlinear problems, so you may be able to understand what the inputs are, but it sometimes takes wisdom and experience to tune the parameters.
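To make that optimization analogy concrete, here is a toy sketch (the cost function and every number in it are invented purely for illustration): plain gradient descent on a function with many local minima converges somewhere different depending on the starting point, i.e. on how you first "break the problem down".

    import numpy as np

    def cost(x):
        # invented "project cost": many local minima on a mild global trend
        return np.sin(3 * x) + 0.1 * x**2

    def grad(x):
        # derivative of the cost above
        return 3 * np.cos(3 * x) + 0.2 * x

    for x0 in (-3.0, 0.5, 2.5):      # three different initial decompositions
        x = x0
        for _ in range(2000):        # plain gradient descent
            x -= 0.01 * grad(x)
        print(f"start {x0:+.1f} -> x={x:+.3f}, cost={cost(x):+.3f}")

Each run is locally "correct", yet they end in different valleys; choosing a good starting point is exactly the wisdom-and-experience part.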
Until around ten years ago, I had been a designer for some 20 years. A strange path then led me to a different place, one intermixed with, or adjacent to, the field of organisational theory.
At that time I decomposed problems too, maybe a bit differently than a developer does, I can't really know. I still decompose; the difference from the past is that analysis now makes up only one part of a larger whole. I knew many designers who never did either.
I agree with you that there are some areas which do not need theory. That depends on where you define the system boundaries. In the example of the carpenter: yes, after 30 years the person indeed knows that stuff. One of the first questions of Systems Thinking, however, would be: what is the reference system, and is his company viable in the future?
I very much believe that anyone applying this to complex projects, to the ‘communication and control’ of an enterprise, should know the backstory.
The reductionist approach got us into these problems; applying reductionism to a theory that tries to overcome reductionism is courageous. In my opinion, the method used to teach must incorporate the principle it is trying to convey. An alternative worldview needs a starting point somewhere, and I like to think it starts with the education, which is not to say that I do not understand the urge to speed up absorption of the theory.
So your argument is: don't use an off-the-shelf tool that gets the job done; build your own tool every time, even though it likely doesn't offer any advantage over the standard tool?
If you think using Design Thinking goes against Systems Thinking, I don't think you really get either.
> So what do you mean by "Design Thinking does with its sole existence what Systems Thinking tried to avoid"?
It's its approach to systems. Take the 5 stages. Why 5, and not 10 or 3? Why stages at all? Who's to say? Why not enable people to create the stages themselves and run from there, or whatever fits their business?
Why not teach methodology instead of method?
> I'm not sure why you think it's relevant here.
I can only repeat myself:
The value is in the process of inquiry itself. Systems Theory is not a set of methods. It is an epistemological based theory and requires a shift in how a person perceives reality, the often-cited worldview. How do you know what you know? By assuming 5 stages? Is that objectively induced? What happens to that when looked at through the lens of radical constructivism? The theory requires one to incorporate multiple worldviews and, with that, negates the assumption of an objective truth.
So your argument is: don't use an off-the-shelf tool (5 stages) that gets the job done; build your own tool (10 or 3 or none) every time, even though it likely doesn't offer any advantage over the standard tool?
I don't think you really get either Design Thinking or Systems Thinking.
What you are talking about here is not Systems Thinking, which is a particular approach to understanding complex problems by viewing everything as systems of systems. Design Thinking is a methodology for approaching the design process, which is quite orthogonal to whether or not you employ Systems Thinking. The more general field of trying to understand how we determine whether something is true and what it means for it to be true is the field of epistemology; "epistemological based theory" is a meaningless description, like "philosophical based worldview".
I'm not arguing anything here, I am trying to help you realize that you aren't talking about what you think you're talking about.
I am not referring to methodology "in the strict sense of science"; indeed, I don't know what "in the strict sense of science" is supposed to mean. I'm using the dictionary definition of methodology: "a body of methods, rules, and ideas that are important in a science, art, or discipline : a particular procedure or set of procedures."
Quite the opposite. You are now trying to reframe your accusations as help.
1. I surely didn't ask for accusations, nor for help with this. If I need any, I will let you know.
2. Your assessment of my situation is incorrect, and as such I'm removing myself from this conversation now. But I want you to have the last word, so go ahead.
> maybe system thinking is really complex and thus hard to convey and use.
I'm pretty sure that's not true. If you can follow how A leads to B, that's about all of it. Systems thinking is the same principle at a larger scale, with interesting side effects at times (e.g. network effects, groupthink, emergent phenomena showing up).
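A minimal sketch of that scaling-up (all parameters arbitrary, nothing more than a toy): every agent applies only a local "A leads to B" rule, copy the majority of five random peers, yet the population as a whole snaps into unanimity. Groupthink as an emergent effect of trivial local causality.

    import random

    random.seed(1)
    # 1000 agents with a slight initial bias toward one opinion
    opinions = [random.random() < 0.55 for _ in range(1000)]

    for step in range(30):
        # each agent adopts the majority opinion of 5 randomly sampled peers
        opinions = [sum(random.sample(opinions, 5)) >= 3 for _ in opinions]
        share = sum(opinions) / len(opinions)
        print(f"step {step:2d}: {share:.0%} hold the majority opinion")
        if share in (0.0, 1.0):
            break

No single rule mentions "consensus", but the feedback between individual choices and the group state produces it within a few steps.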
I've been very interested in cybernetics and systems thinking lately — would you be able to recommend some good books? I'm not afraid of difficult academic or philosophical reading, but I'm looking for stuff that's large in scope, applies to general fields, etc.
It leans a bit more to the cybernetic side, but it gives an overview and has something possibly as important as the text itself: some 7 pages of references.
I started with openly accessible academic papers instead of books. If you find something of interest there, you will have the direct references at hand to proceed further in that direction. Papers are shorter, so you can switch direction more easily. The price you pay is missing the bigger picture a couple of times (which a book may convey) until some loose ends come together and create an aha moment.
(Given what you said, I would steer clear of all reinterpretations/popular-science books. I would read something straight from the source, from the people in the field, in whatever form it may show up.)
Thanks for that paper! It looks like a good jumping-off point. I am primarily interested in the cybernetics side (feedback loops, etc.) as applied to social or economic systems, as well as the intersections with Deleuze-and-Guattari-style process philosophy (looking at how every level of the social system, including individuals, is composed of nodes in a network of flows (of desire/mimesis, knowledge, material resources, ideas) that the nodes are both somewhat composed by and also transform/switch/break).
The social side is, among others, massively covered by Niklas Luhmann, the Zettelkasten guy. [1] is pure 2nd Order Cybernetics. The entry to it is a bit tough; it uses its own language. The precision presented, however, is brutal, and you can't get larger in scope: it encompasses society as a whole, corporations, law, communication, all of it.
Oh, I didn't realize Niklas Luhmann was the Zettelkasten guy! I use an org-mode Zettelkasten (currently migrating to https://iwe.md for reasons) every day for my thoughts and ideas!
Question: what kind of fun are you referring to here?
Since, from the outside, it surely sounds like you get pleasure from inflicting some form of suffering on others. But that hopefully isn't considered fun, is it?
The price, when between the seller's minimum and the buyer's maximum, is a zero-sum game. So while this is definitely screwing with people, the seller gets paid more and the amount of suffering in the world shouldn't really change.
You are falling for the zero-sum fallacy and mixing categories on top of it.
Globally, wealth gets created, which leads to a positive-sum game, not a zero-sum game.
On the other hand, if one quadrillionaire in a city owns all the money available in that system except 100 currency units, then each of the remaining 100 humans possesses exactly 1 currency unit. The suffering is significantly higher for the 100 than for the one, even though this fulfils your premise of a balanced global suffering index.
Before the trade, the value for the seller and the buyer was zero. Whatever the trade involved, the moment the seller's minimum gets hit, it becomes a positive-sum game.
If this were not the case, the long-term rise of stocks would be impossible: a stock's rise would be a redistribution, taking from someone else. So, if the stock market were truly zero-sum, every currency unit earned would require someone else to have lost one.
I am not committing the zero-sum fallacy. Please read what I said again. I said the exact price is zero-sum within the bounds of the deal happening. The wealth creation is caused by the deal happening at all.
> if one quadrillionaire in a city owns all the money
That's a valid risk factor but on a random eBay purchase I think it's fair to say we have no idea if the purchaser or the seller gets more utility out of each dollar.
Then we actually agree on parts? Well, excuse me if I interpreted you wrong.
>we have no idea if the purchaser or the seller gets more utility out of each dollar.
Assumption: the seller opened the auction at his actual hard lower limit; he should be happy with what he gets as soon as that limit is hit.
The original poster said that he essentially altered the bid in favour of the seller. However, the exchange of subjectively equal values is based on the balance between the two parties, and it now gets distorted in favour of the seller and to the detriment of the buyer. That should result in win/lose, if I am not mistaken.
So maybe I am getting you wrong; I am not sure right now.
My argument is that this distortion in favor of the seller isn't really good or bad in a meaningful way. It's just rude.
The seller is happy as long as their limit is hit, the buyer is happy as long as their limit isn't hit. How should the surplus happiness get split? I dunno. So the earlier poster sticking their finger in and shifting the surplus around isn't a particularly moral issue.
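A worked toy example of that split (all numbers invented): the two limits fix the total surplus of the deal; the price only decides how that surplus is divided. That is the sense in which the price is zero-sum between the parties while the deal itself is positive-sum.

    seller_min = 50.0                  # seller's hard lower limit
    buyer_max = 80.0                   # buyer's hard upper limit

    for price in (55.0, 65.0, 75.0):   # any price between the two limits
        seller_surplus = price - seller_min
        buyer_surplus = buyer_max - price
        print(f"price {price:.0f}: seller +{seller_surplus:.0f}, "
              f"buyer +{buyer_surplus:.0f}, "
              f"total {seller_surplus + buyer_surplus:.0f}")

The total prints 30 every time; moving the price shifts surplus between the parties without creating or destroying any.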
Can you give an example? I am looking for a way out.
I kind of self-hosted for decades on a virtual server, until I couldn't keep up with it. So much stuff broke something in the stack, bringing the server down. Often I had to initiate a full lockdown on everything before going up again, consuming a day's effort or two.
If it has to be Windows, just remove all the Win11 shit yourself: set it to unattended installation with a local account, remove the hardware-requirements barrier while you are at it, remove the games, controller add-ons, virus scanner, and whatever else you would like to (the Windows Store?), and create your own LTSC.
This isn't a solution to the problem, and it misses the point of the whole argument. But if it has to be Windows, I would recommend trying it.
I self-hosted for 20 years; it worked flawlessly. I gave up because of security concerns. I would like to go back to it.
Question: how do you manage security on such a box? Is there any simplification I missed?
I couldn't keep up with it. So many patches, unrelated to mail, broke something in the stack, bringing the server into a critical state. Often I had to lock down everything before going up again, consuming a day's effort or two. Those were two days without mail.
Are you talking about the “Thinking Machines” company that shut down in 1994? Took me some digging to figure it out, doesn’t seem well-known enough to be the reason - it’s just a nice (and relatively obvious) name.
Yes. Danny Hillis’ Thinking Machines Corporation, an AI company which created its own massively parallel processing supercomputer hardware.
“We are building a machine that will be proud of us” was their corporate motto. And that was in 1983.
One of those Machines is on view at the Computer History Museum in Mountain View.
Back then, they could be ordered in “Darth Vader Black”, no kidding here. You can also see a couple of them (the CM-5) as the stereotypical supercomputer in the original Jurassic Park.
It may not be a household name like Apple or Microsoft, but its flagship product, the Connection Machine, is somewhat iconic in (super)computing history. The physical design of the machine is cool and unforgettable-looking; plus, recurring HN favorite Richard Feynman contributed to the original architecture.
That is an issue that has been prevalent in the Western world for the last 200 years, beginning possibly with the Industrial Revolution, probably earlier. That problem is reductionism, consistently applied down to the last level: discover the smallest element of every field of science, develop an understanding of all the parts from the smallest part upwards, and, from the understanding of the parts, develop an understanding of the whole.
Unfortunately, this approach does not yield understanding, it yields know-how.
Taking things apart to see how they tick is called reduction, but (re)assembling the parts is emergence.
When you reduce something to its components, you lose information about how the components work together. Emergence 'recovers' that information.
Compare differentiation and integration, which lose and gain terms respectively.
In some cases, I can imagine that differentiating and integrating certain functions would even be a direct demonstration of reduction and emergence.
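A concrete instance of that idea, in plain calculus (nothing here beyond the standard rules): differentiation discards the constant term, and integration can only recover it as an undetermined family:

    \frac{d}{dx}\,(x^2 + c) = 2x
    \int 2x \, dx = x^2 + C

The specific c is exactly the "how the parts sit in the whole" information that reduction loses; integration gives the shape back, but only up to that lost constant.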
Yeah, that's a nice addition. However, remember that reassembling is synthesis, not emergence. Emergence is what you /may/ get by reassembling, but not necessarily. We are talking about systems here, so in the end you are correct. It's just that the terms seem to be a bit muddled.
If I am not mistaken, we are already past that. The pixel, or token, gets probability-predicted in real time. The complete, shaded pixel, if you will, gets computed 'at once' instead of through layers of simulation. That's the LLM's core mechanism.
If the mechanism allows for predicting what the next pixel will look like, which includes the lighting equation, then there is no need for a light simulation anymore.
I would also like to know how Genie works. Maybe some parts do indeed already get simulated, in a hybrid approach.
The model has multiple layers which are basically a giant non-linear equation to predict the final shaded pixel; I don't see how that's inherently different from a shader outputting a pixel "at once".
Correct me if I'm wrong, but I don't see how you can simulate a PBR pixel without doing ANY PBR computation whatsoever.
For example, one could imagine a very simple program computing sin(x), or a giant multi-layered model that does the same; wouldn't the latter just be a latent, more-or-less compressed version of sin(x)?
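That intuition is easy to sketch (a toy, assuming a single tanh hidden layer trained with plain gradient descent; not anyone's actual model): after training, the network produces a sin-like output "at once" from pure matrix arithmetic, with no call to sin() at inference time. A latent, compressed version of the function.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
    y = np.sin(x)                                 # the "simulation" to replace

    H = 32                                        # hidden units (arbitrary)
    W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

    lr = 0.05
    for _ in range(20000):                        # gradient descent on MSE
        h = np.tanh(x @ W1 + b1)                  # forward pass
        pred = h @ W2 + b2
        err = (pred - y) / len(x)                 # loss gradient w.r.t. pred
        dh = (err @ W2.T) * (1 - h**2)            # backprop through tanh
        W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
        W1 -= lr * (x.T @ dh); b1 -= lr * dh.sum(0)

    # inference: no trigonometry anywhere, just the learned weights
    t = np.array([[0.5]])
    print((np.tanh(t @ W1 + b1) @ W2 + b2).item(), np.sin(0.5))

Whether that counts as "doing" the computation or merely caching a compressed approximation of it is, I suppose, the whole disagreement.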
You're saying SV (& co.) should convene some kind of gentleman's agreement that they should all leave a massive, profitable, legal, intellectually interesting niche with a stable customer base, because it's immoral, perhaps.
Can you think of a single other industry where this worked? It seems implausible to me that it would.
Okay, here we go: no, I am not saying any of that.
This was not about a gentleman's agreement at all; that was a rhetorical figure to demonstrate that it is not the game alone which is at fault, it's both: the behaviour of the game and the behaviour of the actors in that game. This, like almost any other game, is not an input -> output scenario; instead, the output of the loop becomes the input of the exact same loop. That is the definition of a feedback loop. It's all recursion.
By shifting the responsibility to the ominous “game”, which is just another term for a system, you exclude the elements of a system from being part of the system itself.
There is a whole branch of science occupied with this: System Dynamics, Cybernetics, Chaos Theory, Systems Theory, and whatnot. The argument that actors in a system are decoupled from the system or its environment, as in a closed-system approach, is factually wrong. Apart from laboratories, there is practically no closed system on this planet.
The phrase "hate the game, not the player" is, cybernetically speaking, nonsense whose sole purpose is giving up responsibility. It does not matter that it gets repeated more often than not; it won't become correct, no matter how many times the figure is used.
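For the feedback-loop point itself, a minimal textbook sketch (the logistic map; it has nothing to do with the market discussion, it just isolates the loop): one rule whose output is fed straight back in as input. Whether the "game" settles, oscillates, or turns chaotic depends entirely on the parameter the actors bring to it, so separating the players from the game is meaningless.

    def run(r, x=0.3, steps=100):
        xs = []
        for _ in range(steps):
            x = r * x * (1 - x)       # output of the loop becomes its input
            xs.append(x)
        return xs

    for r in (2.5, 3.2, 3.9):         # settles / oscillates / goes chaotic
        tail = run(r)[-4:]
        print(f"r={r}:", [round(v, 3) for v in tail])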