This is actually a critique of massive bureaucratic systems, not systems thinking as a practice. Gall's work is presented as an argument against systems thinking, while it's a contribution to the field. Popular books on systems thinking all acknowledge the limitations, pitfalls, and strategies for putting theory into practice. That large bureaucracies often fail to do so is, in my view, an unrelated subject.
This article does not begin to cover systems thinking. Cybernetics and metacybernetics are noticeably missing. Paul Cilliers' theory of complexity - unmentioned. Nothing about Stafford Beer and the viable system model. So on and so forth.
The things the author complains about seem to be "parts of systems thinking they aren't aware of". The field is still developing.
It seems odd to me that someone would write such a polished and comprehensive article and yet completely misunderstand the definition of the central topic.
That happens in system dynamics a lot, actually - there are many independently developed theories in many different disciplines that do not intertwine historically at all. I have met multiple people who work with systems mathematically on a professional level who had no idea about these other things.
Modernizing software systems takes time because of inherent corruption in the procurement process or in the workings of the consulting companies involved. Those problems could be solved much faster and cheaper if a knowledgeable tech person were involved.
Hertz vs. Accenture: In 2019, car rental company Hertz sued Accenture for $32 million in fees plus additional damages over a failed website and mobile app project. Hertz claimed Accenture failed to deliver a functional product, missed multiple deadlines, and built a system that did not meet the agreed-upon requirements.
Marin County vs. Deloitte: In 2010, California's Marin County sued Deloitte Consulting for $30 million over a failed SAP ERP implementation. The county alleged Deloitte misrepresented its skills and used the county as a "training ground" for inexperienced consultants.
> largely outside the typical congressional appropriation oversight channels
I've seen it happen more than a few times that when software needs to get made quickly, a crack team is assembled and Agile ceremonies and other bureaucratic decision processes are bypassed.
Are there general principles for when process is helpful and when it's not?
I like this saying better: every system is perfect until people get involved. People act irrationally because they are reacting to the nonsense that pervades their reality.
I studied biology in college and this has always been obvious to me, and it shocks me that people with backgrounds in e.g. ecology don't understand that living systems are unpredictable auto-adaptive machines full of feedback loops. How a bunch of ecologists could take doomerism based on "world models" seriously enough to cause a public panic about it (e.g. Paul Ehrlich) baffles me.
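The point about feedback loops defeating prediction can be made with a toy example. The logistic map below is a hypothetical illustration, not from the article: a one-line population model with a single feedback loop, yet two starting populations differing by one part in a million diverge completely within a hundred steps.

```python
# Logistic map: next population is a function of the current one,
# with growth damped by crowding (the feedback loop).
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.2, 0.2000001  # two nearly identical starting populations
max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The trajectories separate by orders of magnitude more than the
# initial difference, even though the rule is deterministic and trivial.
print(max_gap)
```

If a three-symbol deterministic model is already unpredictable at this horizon, a "world model" of an ecosystem or an economy has no business producing confident long-range point forecasts.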
Human cultural systems are even worse than non-human living systems: they actively fight you. They are adversarial with regard to predictions made within them. If you're considered a credible source on economics and you say a recession is coming, you change the odds of a recession by causing the system to price in your pronouncement. This is part of why market contrarianism kind of works, but only if the contrarians are actually the minority! If contrarianism becomes popular, it stops being contrarian and stops working.
Tangentially, everything in economics is a paradox. A classic example is the paradox of thrift: if everyone is saving, nobody can save, because for one person to save, another must spend. Pricing paradoxes are another example. When you're selling your labor as an employee you want high wages, high benefits, job security, etc., but when you go shopping you want low wages, low benefits, and a fluid job market... at least if you shop by comparing on price.
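The paradox of thrift falls out of a two-line circular-flow model. This is a deliberately crude sketch with invented numbers: two households whose spending is each other's income, so one side's attempt to save shrinks the other side's income, and vice versa.

```python
# Two-household circular flow: A's spending is B's income and vice versa.
def run(spend_fraction, periods=20):
    income_a = income_b = 100.0
    for _ in range(periods):
        spend_a = spend_fraction * income_a
        spend_b = spend_fraction * income_b
        income_a, income_b = spend_b, spend_a  # my spending is your income
    saving_a = income_a * (1 - spend_fraction)
    return income_a, saving_a

# Thrifty households (spending 80% of income) end up with LESS saving
# than free-spending ones (95%), because incomes collapse each round.
income_hi, save_hi = run(0.95)
income_lo, save_lo = run(0.80)
print(save_hi, save_lo)
```

In this toy, a higher propensity to save per period produces strictly less saving in total, which is the paradox in miniature.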
This essay focuses on a very narrow section of systems thinking and systems theory. There's an entire field, with many different subdisciplines beyond just the Club of Rome stuff (and which influenced them directly) that, quite explicitly also deals with systems that "fight back". In fact, any serious definition of systems thinking usually has said dynamics baked into it—systems are assumed to evolve from the start.
I'd encourage people to look into soft systems methodology, critical systems theory, and second order cybernetics, all of which are pretty explicitly concerned with the problem of the "system fighting back". The article is good, as works in progress articles usually are, but the initial premise and resulting coverage are shallow as far as the intellectual depth and lineage here goes.
Any particular resource to recommend?