The path of complexity

A brief history of complexity

The idea of complexity as a transdisciplinary look at systems is itself an emergent phenomenon: it cannot be traced back to a single individual, study, or event, but instead emerged slowly across fields. In its broadest definition, complexity is a perspective that embraces uncertainty and the need for multidisciplinarity in the face of large interconnected systems. This idea has a long history throughout the world, from classical Eastern philosophy to pivotal figures in Western science. In the final work of René Descartes, “The Passions of the Soul” from 1649, human life itself is described as many parts of different natures interacting, forming networks with emergent properties where local effects can have surprising global consequences. These concepts were not formalized then, but were used as a framework to grapple with complex ideas that defy reductionist description.

The formalization of complexity occurred across fields in the last century. In 1962, Herbert Simon laid out a road map for the study of complex systems in “The architecture of complexity”1. Simon himself was a political scientist who eventually turned to research on organizations and artificial intelligence. A decade later, in 1972, the physicist Philip W. Anderson addressed how this philosophy clashes with the standard reductionist hypothesis in “More Is Different: Broken symmetry and the nature of the hierarchical structure of science”2. In this essay, Anderson argues for the need for multiple perspectives since “the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe.” The deeper we go into fundamentals, the less relevant they appear to be to global, human-scale problems. Similar ideas also emerged in philosophy with the transdisciplinary work of Edgar Morin and his critiques of reductionist and systems theory3. Morin’s formal work on the topic arguably starts in 1978 with “Human Unity” and culminates in 1990 with “Introduction à la pensée complexe.” In his foreword to the latter4, Alfonso Montuori summarizes the paradigm of complexity as “a way of thinking that does not mutilate life… not disembodied and abstract, but rich in feeling, intuition and connection to the larger social and historical context.”

Curiosity about the remarkable complexity of living systems inspired critical developments spread across several disciplines. Having made fundamental contributions to quantum mechanics, Erwin Schrödinger turned to developing a theory of self-replication and heritable information. In doing so, he remarked that fully understanding living systems would likely require new laws of physics5. At the same time, John von Neumann and Stanisław Ulam were developing theoretical machines (which became foundational to the future field of computer science) that could build functional copies of themselves, in the hope that they would eventually evolve ever-increasing levels of biological complexity6,7. Attempting to explain the origin of biological complexity, Per Bak and Stuart Kauffman both arrived at structuralist arguments centering on self-organization, but each followed a distinct intellectual path: Kauffman analyzed coarse-grained models of genetic regulatory networks8, while Bak developed the iconic sandpile model that exhibits self-organized criticality9. The success of this community reminds us that life, like any other complex system, is best studied wholly and from many perspectives. This process of intellectual progress through dialogues between fields was repeated in other disciplines. Networks of all kinds are now analyzed using theories from the social sciences as well as models from physics and mathematics10. As we explore in our inaugural collection, the study of epidemics and misinformation is increasingly turning to a unified toolbox for contagions of a biological or social nature. This dialogue across disciplines and this search for unified models and theories became the ethos of complexity science over the next few decades.
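
For readers encountering self-organized criticality for the first time, a minimal sketch of Bak’s sandpile model may help make the idea concrete. The snippet below is illustrative only (the function name sandpile_avalanches and all parameter values are our own choices, not drawn from any reference above): grains are dropped one at a time onto a grid, any site holding four or more grains topples onto its neighbors, and the resulting avalanche sizes end up spanning many scales without any parameter tuning.

```python
import random
from collections import Counter

def sandpile_avalanches(size=50, grains=20000, threshold=4, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile (illustrative sketch).

    Drop grains one at a time on a size x size grid, topple any site
    holding `threshold` or more grains, and record the number of
    topplings (the avalanche size) triggered by each grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []

    for _ in range(grains):
        # Drop one grain on a random site.
        i, j = rng.randrange(size), rng.randrange(size)
        grid[i][j] += 1

        # Relax the grid: an unstable site sheds one grain to each of its
        # four neighbors; grains falling off the edge leave the system.
        topplings = 0
        unstable = [(i, j)] if grid[i][j] >= threshold else []
        while unstable:
            x, y = unstable.pop()
            if grid[x][y] < threshold:
                continue
            grid[x][y] -= threshold
            topplings += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    grid[nx][ny] += 1
                    if grid[nx][ny] >= threshold:
                        unstable.append((nx, ny))
            # A site that received many grains may need to topple again.
            if grid[x][y] >= threshold:
                unstable.append((x, y))
        avalanche_sizes.append(topplings)

    return avalanche_sizes

if __name__ == "__main__":
    sizes = sandpile_avalanches()
    # After an initial transient, avalanche sizes spread over many orders
    # of magnitude with no characteristic scale: the signature of
    # self-organized criticality.
    histogram = Counter(sizes[len(sizes) // 2:])
    for s in sorted(histogram)[:10]:
        print(f"avalanche size {s}: {histogram[s]} events")
```

The striking point is that nothing in the rule singles out a preferred avalanche size; the scale-free behavior emerges from the local toppling rule alone.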

Complexity as a community

Complexity might not be a science per se, arguably having no specific shared set of systems of interest or methodological tools, but it certainly is a community with a shared approach to science. With members from many disciplines, complexity is a community driven by intellectual curiosity and an openness to engage with new problems and fields.

In fact, over the last decades, complexity science has grown in waves, each reaching new disciplines. With roots in philosophy, economics and physics, complexity was originally a community based on abstract thought experiments and models11. Computer scientists and statisticians joined to help with computational modeling and the processing of big data. Ecologists, anthropologists, and political scientists joined with the complex systems they had been studying for decades: food web data, social support networks, governance systems, or even communication networks of trees! Neuroscientists and biomedical scientists then proposed that complexity science could help us better understand ourselves through complex models of our brain, microbiome, and immune system, among others.

One driver of this growth is that the core message that “more is different” resonates with many scientists12. Across disciplines, moving from an ideal system studied in isolation to an interacting, open population is extremely hard. That is how many academic fields are born, after all: population biology, ecology, statistical physics. Complexity science grows by recognizing that there are lessons to be learned from all of these efforts. That message has echoed over the last decades to form the complexity community of today, especially in recent years, as we celebrated the Nobel Prize of Giorgio Parisi, which highlighted the importance of letting curiosity and real-world serendipity guide even theoretical and fundamental scientific inquiry13.

Interdisciplinary, not any-disciplinary

Research that transcends disciplinary borders faces unique challenges in the traditional publishing system, which is built on disciplinary foundations. The goal of npj Complexity is therefore to publish work contributing to dialogues occurring at the edges of disciplines. Our editorial team is well aware of the traditional challenges in this space and aims to embrace new, weird, and creative perspectives. This will hopefully result in a curated venue where members of our community, regardless of field, can listen to emerging voices or discover new ideas and fields of study.

Currently, research in complexity often has to choose between two imperfect options. Studies can be published in disciplinary venues, from the physical, biological, or social sciences, which requires tailoring the project and the text to a specific disciplinary audience at the risk of losing part of the identity of the research. Alternatively, studies can aim for multidisciplinary journals, which more often than not are large journals that publish work from any discipline rather than journals focused specifically on interdisciplinary work. Offering a home for research that transcends disciplines or builds bridges across them is at the core of the mission of npj Complexity. It is not designed to be the journal of the future. It is, after all, founded in partnership with a traditional publisher and relies on current open-access standards. Yet the journal aims to help fill a longstanding and important gap in the publishing venues for interdisciplinary work.

As a community, including the newcomers interested enough to read this, we aim to push knowledge in new directions at the edges and intersections of many disciplines. This goal means that we at times solve big problems with a unique perspective, and at times reinvent the wheel in a new setting. To distinguish the two, it is critical for complexity scientists to surround themselves with a diversity of experts and listen to knowledge from new disciplines. Without doing so, complexity science risks becoming yet another discipline with a funny name, with its own jargon and problems. It is thus a core requirement of npj Complexity that manuscripts published by the journal be readable by its broad target audience in order for pre- and post-publication peer review to transcend disciplinary boundaries.

To quote the great Murray Gell-Mann, physicist turned complexity scientist14, in his 1969 Nobel Prize speech: “We are driven by the usual insatiable curiosity of the scientist, and our work is a delightful game.” At npj Complexity, research needs to engage with scientists from any discipline by appealing to the inherent curiosity of complexity scientists. It is a difficult and subjective goal, but one that is at the heart of complexity science.

The path of complexity

Complexity science is at times described as weird and unique, but it has many cousins, such as systems theory, cybernetics, ecology, political science, and any other field interested in systems composed of many parts interacting at multiple scales or through diverse mechanisms. The value of using the term “complexity” lies in part in embracing the openness of the community through the vagueness of the term. It is therefore hard to formulate a concrete mission statement for the journal. Yet there is a dire need for a holistic approach to complexity research: from theory, to experiments, to applications, including the philosophy and ethics thereof.

And npj Complexity aims to be such a home for complex systems research, including but not limited to:

  • network science,

  • artificial life,

  • computational social science,

  • systems biology,

  • data science,

  • ecology & evolution,

  • dynamical systems,

  • economics & finance,

  • and social complexity.

Spanning these domains and more, we find that the most pressing problems facing humanity are cross-disciplinary in nature: emerging pandemics, misinformation, climate change, rising global inequality, human rights movements, adaptation to new technologies, and the nonlinear interactions that arise among all of these challenges. None of these problems can be tackled in isolation; they require complex thinking and disciplines working in unison. Research along this path can be challenging for standard peer review practices, as it involves dialogues across fields and expertise, or new language and perspectives. Efforts to rise to these global challenges while embracing their complexity deserve their own venues.
