04/20/23
The Limit To The Machine Metaphor
notes on emergence, self-organization, and non-linearity
These are Scottish terriers – cute, right?
Here they are “disorganized”
Then comes a perturbation: milk!
Every puppy starts pushing to keep its spot at the milk, and the litter as a whole ends up rotating in one direction. What happens is a pinwheel – an emergent property of the interactions between the puppies. Their only rule is to try to keep access to the milk, and therefore to keep pushing; no individual puppy is trying to make the group spin.
I promise I will come back to this in a second.
The machine metaphor in machine learning suggests that models are like machines or engines: they process and transform information to perform a specific task.
The machine metaphor has been used since Newton as a lens to make sense of our physical and social worlds, including human organizations.
Our traditional views of cause and effect, closely related to the reductionist paradigm of Newtonian science, assume a linear perspective in which the output of a system is proportional to its input. This perspective treats systems as predictable and derives from an additive model in which the system is the sum of its parts (it’s not).
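To see what that linearity assumption buys (and what it misses), here is a minimal Python sketch; the functions and numbers are illustrative toys of mine, not from any real model. A linear system obeys superposition – responses to inputs simply add, and scaling the input scales the output – while even a mildly nonlinear system does not:

```python
# Toy illustration of the additive / proportional assumption.
# "linear" obeys superposition: f(a + b) == f(a) + f(b) and f(2a) == 2 * f(a).
# "nonlinear" (a saturating response) does not.

def linear(x):
    return 3.0 * x             # output strictly proportional to input

def nonlinear(x):
    return x / (1.0 + abs(x))  # saturating response: doubling input does not double output

a, b = 1.0, 2.0

print(linear(a + b), linear(a) + linear(b))          # 9.0 vs 9.0 -> the parts sum to the whole
print(nonlinear(a + b), nonlinear(a) + nonlinear(b)) # 0.75 vs ~1.17 -> the whole is not the sum of its parts
```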
This view has been useful in many domains such as manufacturing, medicine, and organizational management. However, the machine metaphor has its limitations when it comes to understanding complex systems that are nonlinear, unpredictable, and self-organizing. It fails to account for the spontaneous properties of a company’s strategy, an industry’s evolution, a human organ system’s failure, or a black-box algorithm’s decisions.
Complexity science provides a more suitable alternative to the machine metaphor in such cases. Complexity science considers complex systems to be made up of interconnected, nonlinear, and interdependent parts, and therefore, it emphasizes the need to analyze systems in terms of their emergent properties rather than their constituent parts.
It’s important to note that complex ≠ complicated. Something complicated is made of many small parts, all different, and each of them has its own precise role in the machinery. A complex system is made of many similar parts, and it is their interaction that produces a globally coherent behaviour.
Complex systems (like the Scottish terriers) have:
Many interacting parts (the individual puppies)
Simple individual rules (keep access to the milk)
Emergent properties that arise from those interactions (the pinwheel effect): “the whole is greater than the sum of its parts”
The behaviour of the system as a whole cannot be predicted from the individual rules alone – the toy simulation below shows how coherent group motion can fall out of purely local rules.
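To make that last point less hand-wavy, here is a minimal sketch of emergence from local rules. It is not a model of the puppies; it is a standard Vicsek-style simulation, and every parameter value below is an arbitrary choice of mine. Each agent follows one rule – “match the average heading of your neighbours, plus a little noise” – and ordered group motion appears with no leader and no global plan.

```python
import numpy as np

# Minimal Vicsek-style model: each agent steers toward the average heading
# of its nearby neighbours (plus noise). No agent knows about the group,
# yet a globally ordered direction of motion emerges.

rng = np.random.default_rng(0)
n, box, radius, speed, noise, steps = 200, 10.0, 1.0, 0.05, 0.3, 300

pos = rng.uniform(0, box, size=(n, 2))       # random positions in a periodic box
theta = rng.uniform(-np.pi, np.pi, size=n)   # random initial headings

def order_parameter(theta):
    # Length of the mean heading vector: ~0 = disordered, ~1 = everyone aligned.
    return np.abs(np.mean(np.exp(1j * theta)))

print("order before:", round(order_parameter(theta), 2))

for _ in range(steps):
    # pairwise displacements with periodic (wrap-around) boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    neighbours = (d ** 2).sum(-1) < radius ** 2
    # local rule: average heading of neighbours (including self) + noise
    mean_sin = (neighbours * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbours * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + noise * rng.uniform(-0.5, 0.5, n)
    pos = (pos + speed * np.column_stack((np.cos(theta), np.sin(theta)))) % box

print("order after:", round(order_parameter(theta), 2))
```

Nothing in the rule mentions the group, yet the order parameter (how closely all agents point the same way) climbs from near zero toward one.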
The limits of the machine metaphor in machine learning, seen through the lens of complexity science, include:
Emergent properties of a system: Complex systems, such as a flock of birds or the Scottish terriers above, have emergent properties that arise from the interactions between the individual agents in the system. The machine metaphor fails to capture these emergent properties because it focuses on the behavior of individual agents rather than the collective behavior of the system.
Self-organization: Complex systems can self-organize without the need for a central controller or leader. For example, ants can build complex nests and find food sources without any central coordination. The machine metaphor is inadequate for explaining self-organization because it assumes that a system’s behavior is determined by a central controller (see the synchronization sketch after this list).
Non-linearity: Complex systems often exhibit nonlinear behavior, which means that small changes can have large effects, and large changes can have small effects. The machine metaphor is not equipped to handle nonlinearity because it assumes that changes in the input will result in proportional changes in the output (see the logistic-map sketch after this list).
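To make the self-organization point concrete, here is a toy sketch of the Kuramoto model; the example and all parameters are my own illustrative choices, not something from this post’s sources. Each oscillator only nudges its own phase toward the phases it observes, yet the whole population ends up beating in sync, with no conductor setting the rhythm.

```python
import numpy as np

# Kuramoto model: each oscillator drifts at its own natural frequency and is
# gently pulled toward the average phase it observes. There is no central
# clock, yet the population synchronizes.

rng = np.random.default_rng(1)
n, coupling, dt, steps = 100, 2.0, 0.01, 5000

omega = rng.normal(0.0, 0.5, n)          # each oscillator's natural frequency
phase = rng.uniform(0, 2 * np.pi, n)     # random initial phases

def coherence(phase):
    # 0 = phases scattered everywhere, 1 = perfectly in sync
    return np.abs(np.mean(np.exp(1j * phase)))

print("coherence before:", round(coherence(phase), 2))

for _ in range(steps):
    # local rule: drift at your own frequency, pulled toward the mean phase
    mean_field = np.mean(np.exp(1j * phase))
    pull = coupling * np.abs(mean_field) * np.sin(np.angle(mean_field) - phase)
    phase = (phase + dt * (omega + pull)) % (2 * np.pi)

print("coherence after:", round(coherence(phase), 2))
```

Each oscillator only reacts to what it sees; the shared rhythm is not imposed from anywhere.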
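And for nonlinearity, the logistic map is the classic tiny example (again, the starting values and r = 4.0 are illustrative choices of mine): two inputs that differ by one part in a million produce trajectories with nothing in common after a few dozen steps – the opposite of “output proportional to input”.

```python
# Logistic map in its chaotic regime: x_{t+1} = r * x_t * (1 - x_t) with r = 4.0.
# A change of 0.000001 in the input does not produce a proportional change in
# the output: within a few dozen iterations the two trajectories are unrelated.

r = 4.0
x, y = 0.2, 0.200001   # two almost identical starting points

for t in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if t % 10 == 0:
        print(f"step {t:2d}: x = {x:.4f}, y = {y:.4f}, gap = {abs(x - y):.4f}")
```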
The machine metaphor has its limitations when it comes to understanding complex systems, and complexity science provides a more suitable alternative by focusing on emergent properties, self-organization, and nonlinearity. Complexity science also helps explain the coming together of biology and technology: computer technologists are using biology to create software with life-like characteristics (e.g. neural networks). Technological advances, themselves partly owed to the machine metaphor, have been instrumental in enabling us to replicate nature’s fractal patterns, understand the implicit rules that allow flocks of birds to move as one, and explain the seemingly random heart rate variability of healthy humans.
Our comprehension of complexity science today is attributable to the confluence of two factors: the advancements in technology and the growing acknowledgement of the insights gleaned from biological systems. The field, however, is still in its infancy. In many ways, it’s comparable to the early stages of alchemy, which eventually evolved into modern chemistry.
Understanding complex systems as adaptive, emergent, and self-organizing – rather than forcing them into a linear frame – is also key to understanding the implications of AI in the “post-AI” world. How do we make policy and regulatory decisions? Who is responsible for the work that AI does? What parameters should “prompt engineers” adhere to, to make sure the results are ethical?
It is crucial that we collectively shift toward viewing economic systems, healthcare systems, organ systems, and computational black-box algorithms as complex adaptive systems rather than machines. Treating them as machines oversimplifies their behaviour and limits our ability to understand their outcomes fully. Viewing them as complex adaptive systems enables us to design solutions that are more adaptive, resilient, and flexible in the dynamic environments in which they operate.
References:
https://www.napcrg.org/media/1278/beginner-complexity-science-module.pdf
https://www.youtube.com/watch?v=0Y8-IzP01lw&ab_channel=TED
https://en.wikipedia.org/wiki/Emergence
https://en.wikipedia.org/wiki/Self-organization
https://en.wikipedia.org/wiki/Collective_behavior
This exploration is 20/50 of my 50 days of learning. Subscribe to hear about new posts.