Why I Am a Pluralist
Dystopias of social collapse or subjugation to centralized artificial superintelligences could give way to an embrace of the power of diverse cooperation.
I increasingly use the terms “pluralist” to describe myself and “Plurality” to describe the direction I want to see technology go in.1 This essay, following up on my “Why I am not” series,2 tries to articulate what I mean by this and where it might lead us.
The piece filters through my academic-technologist lens many things I have learned since Eric Posner and I published Radical Markets, but more than anything what I have learned from my fellow RadicalxChange board members Danielle Allen, Audrey Tang and Christopher Kulendran Thomas.3 While the piece raises many more questions than it answers, my hope is that these will be precise enough, in a way they have not been in the past, to help stimulate a coherent Plurality agenda for technology.
Partly for this reason, I suspect much of this essay will feel slow and belabored to non-technologists. For those not focused on the development of formal institutions and protocols, many of the principles of pluralism will seem natural and not to require such detailed exposition, analysis, or research. Yet the presently dominant directions of information technology development (Artificial Intelligence on the one hand and the cluster around Web 3 and the Metaverse on the other hand) are typically imagined in ways that are in deep tension with pluralism, representing instead totalism and atomism. As obvious as many of the arguments may seem to those trained in fields and modes of thinking that are instinctively sympathetic to pluralism, there is an urgent need to develop a coherent technological agenda for pluralism (which I will call Plurality) to match these opposite tendencies if pluralism is to have a chance to survive the technological transformations of our age.
I begin with definitions and logical contrasts, emphasizing two sides to pluralism: in the design of institutions and patterns of thought. I then illustrate these two forms with a variety of concrete examples and divine from them a series of quasi-formal principles for Plurality. Next, I argue that despite concerns that plural thinking may be too vague to yield any concrete insights on institutional design, in fact it provides an ideal basis for justifying at least one core element of Plurality: the principle of cooperation across diversity. I then turn to practical challenges in making plural institutions work and why I think they can probably be surmounted. I conclude by calling out important areas for research, shaping them towards a technological agenda of Plurality that has the potential to become even more ambitious and transformational than AI or Web 3 — and far more attractive.
Definition
I understand pluralism to be a social philosophy that recognizes and fosters the flourishing of and cooperation between a diversity of sociocultural groups/systems.
I see two sides to pluralism, institutional and epistemic.
Institutional pluralism is a contrast with a broad range of social philosophies that might be described as “monist” (I ironically called these ALONE or Atomized Liberalism and Objectivist Naïve Epistemology in a related piece). Monist philosophies tend to focus either on isolated individuals and/or on a unitary/universal structure in which these individuals reside. The atomistic ideology predicated on the isolated individual is often used to justify capitalism. Institutional pluralism differs because it emphasizes groups as much as individuals: groups are not mere vehicles for individual interests but are of fundamental interest. Meanwhile, the centralistic ideology predicated on a unitary or universal structure is often used to justify populist statism and nationalism. In contrast, institutional pluralism denies the centrality of any one group/collective, such as the nation state, global humanity, etc.
Epistemic pluralism is a contrast to traditions that seek out unitary, commensurable ways of knowing (most prominently technocracy). It does so by denying that any single rational logic or meritocratic scheme can select for optimal social ordering. Instead, it emphasizes the importance of a diverse range of incommensurable collective entities and cultures of knowledge that intersect and collaborate.
Some examples are in order (see “Plural Institutions” and “Plural Thinking” below), naturally drawn from my previous contrasting pieces. But before providing these, I outline below a few additional qualifications.
Qualifications
Pluralism is an element of multiple other philosophies that I do not myself embrace but have learned from. It plays an important role in much conservative, religious and reactionary thought. It also plays an important role in postmodern, cultural relativist and nihilist thought. To clearly delineate my thinking from these schools that I mostly reject, it is important to emphasize three other elements of my thinking that will appear here but which for brevity I do not append to pluralism above.
- The first is what might be called “dynamism” or “liberalism”, namely that I view evolution of the set of social groups as critical to the success of pluralism. This contrasts with conservative and much religious and consociational thought, where the plurality of groups constituting society is often seen as static.
- The second is “self-government” or “democracy”, namely that I think it is crucial for the pluralistic groups to be accountable to the individuals who make them up, without putting those individuals above the groups in fundamental significance. This contrasts with reactionary thought (such as fascism and corporatism), where groups usually have an authoritarian, personalistic and hierarchical structure.
- The last is what might be called “cooperation”, namely that we should aim for cooperation and rough coherence/consensus across social groups towards common goals, which contrasts with much postmodernism, relativism and nihilism.
One might therefore say I am a cooperative liberal democratic pluralist, but the middle two words are quite loaded with a variety of meanings, so I will use shorter terms and hope this clarification suffices.4 It is also important to highlight that pluralism is a much more capacious and ill-defined position than most monist positions (e.g. “majority rule” and “capitalism”), which while not fully prescribing institutions give a sharp sense of what they are after. While this obviously makes the position I am advocating a bit vague and potentially confusable with things I don’t want to defend, I hope the rest of the essay will give enough sense of direction to clarify what I mean.
Plural Institutions
Put this way, vague “cooperative liberal democratic pluralism” just sounds like a bromide. Doesn’t everyone love that?
I therefore want to turn to some sharp contrasts that I hope will illustrate how different plural institutions, according to the definition and qualifications outlined above, would be from both those used in many practical circumstances and those advocated by most reformers in the present political landscape. To make matters concrete, let me illustrate with some existing institutions with explicitly pluralist features.
- The US constitution is famously federalist, both in its division of powers between the states and the national government and in that national representation, in both the Senate and the Electoral College, gives weight based on both states and population.
- So-called “consociational” regimes explicitly represent and seek to balance power along lines of historical divisions, whether political, ethnoracial or linguistic. Examples include Belgium and Switzerland (linguistic), pre-Abiy Ethiopia (ethnic), South Africa (racial) and Colombia and Holland (political). Perhaps the best-known examples are “confessional”, where representation is based on religious affiliation, all positions of power have explicit and constitutionally-derived allocations by confessional group and checks and balances strongly encourage cross-confessional cooperation to achieve political action. Lebanon, Iraq and Northern Ireland are leading cases.
- Corporatist political theory has emphasized pluralism along lines of material/sectoral/occupational experience, expertise and interest. These ideas, which have deep roots in classical (e.g. Platonic) and religious (e.g. Catholic social thought) political theory, have influenced a wide range of political institutions from German corporate co-determination schemes to the Irish senate. Yet they were probably most ambitiously attempted by, and are thus closely associated with, Fascist and Fascist-inspired governments during the first half of the twentieth century. Most prominently known for his work on the Gini coefficient, social scientist Corrado Gini led the design of a system of functional representation, primarily by industry and factor of production, that replaced the national parliament under Mussolini’s rule.
- Most international organizations and confederations (e.g. United Nations and European Union) combine representation of constituent countries based on different principles, including both underlying population and representation of the constituent nations and civil groups that bridge across them. The EU, for example, has a parliament based roughly on population proportional representation, a Council based primarily on national representation and formal representation of a range of social, civil and interest groups. These forces coexist in a rich set of checks and balances.
Such plural institutions often get a cool reception from broad liberal (in the general philosophical sense) and old left philosophical and political circles I often travel in. There is little respect for the Senate or Electoral College and an almost automatic view that it would be better to replace them with some form of directly proportional one-person-one-vote election. Consociational regimes are seen as oppressive historical relics and corporatism is generally a term of abuse, only a half step from Fascism itself. Many dream, in their heart of hearts, of replacing federal and confederal structures with some sort of direct international democracy. And there is a general sense of shame and disappointment among those who are drawn to systems thinking about the creeping of plural elements (such as interest group politics and even political parties) into “clean” democratic systems.
Of course, there are critical issues driving this skepticism. It is far from clear that the minority interests protected by American institutions (primarily small states) are among those most deserving attention in an era when socio-political cleavages run more along religious, racial, and intra-state (rural v. urban) geographic lines. Yet if you scratch just a bit deeper, you’ll quickly see that advocates of such monist reasoning almost all lack deep conviction in it: most will acknowledge that one-person-one-vote can, and often does, lead to disastrous outcomes if not tempered by a range of minority protections. Even the European parliamentary democracies such advocates typically most admire nearly all have substantial pluralist elements, including internal federalism (e.g. United Kingdom, Germany, Italy, Spain) and both implicit and explicit rules for power sharing and consensus across political parties, ranging from proportional representation and coalition formation to the stricter, informal Polder system in the Netherlands. Most also have at least some sympathy for principles of subsidiarity in some form, for explicit non-individualist protection of the rights of certain groups, etc.
The natural question is then: Why resolve this tension by eliminating plural institutions and reverting to monism? Why not address it by leaning into pluralism and proliferating the set of recognized cleavages? While this might seem tremendously complex, it is precisely this sort of complexity that modern social network theory and the algorithms derived from it are intended to cope with.
Plural Thinking
Yet before illustrating how this might work, it is important to turn to the other side of pluralism, namely epistemics. As above, I want to highlight the prevalence of pluralism in practice, the resistance to it, and the reasons why I believe this resistance can and should be overcome with a more developed and thoughtful understanding of pluralism.
The reality of plural thinking is perhaps even more apparent than that of plural institutions. Again, consider a few examples:
- Religions: Most religions, at least in their pure form, claim to be exclusive accounts of fundamental metaphysics. Yet given the diversity of firmly held religious convictions among reasonable civilizations, most successful religions have come to a plural understanding with other faith communities that respects their convictions and seeks points of cooperation and learning — especially in the ethical and political domains — while maintaining their communal commitments. Although the history of religion boasts endless examples of new “syncretic” faiths attempting to unify and eclipse this diversity (e.g., Hinduism, Christianity, Islam, Unitarian Universalism, Bahá’í), the most successful of these have at best added to the diversity of faiths; there does not appear to be any steady peaceful flow of existing faiths into a universal attractor. On the other hand, political ideologies and ethical views that have managed to justify themselves in terms native to a variety of faiths have been critical to the development of contemporary nations and international law.
- Languages: 91 languages grouped into more than a dozen language families are spoken by more than 10 million people each, and vastly more are spoken by smaller numbers of people. Whether one subscribes to versions of the Sapir-Whorf Hypothesis, according to which some thoughts are unthinkable in some languages, or to softer accounts, clearly some languages are better adapted to expressing certain thoughts than others, so these linguistic differences matter in some ways and reflect the conditions of the societies that host them. Attempts at linguistic unification have largely proceeded by colonization or other forms of coercion, rather than by pursuits of rational transcendence; there is little if any case to be made for the “rationality” of the currently most dominant languages such as Chinese and English. At the same time, translation (often aided by automation) becomes more accessible every day, and the resonance of a text in well-executed translation in many languages has become both a marker of its profundity and increasingly critical to its success. Furthermore, even as some linguistic unification has proceeded, many dominant languages continue to evolve and fragment sub-culturally (e.g. Standard White v. Standard Black English). Even in new and nerdy areas, such as coding languages, competing frameworks continue to proliferate and feature trade-offs.
- Academic fields: Academic fields, at least within the very broadly defined sciences, are widely imagined as pursuing some absolute notion of truth using some universal “scientific method”. Yet in practice, fields and often subfields (and subsubfields!) are usually founded upon premises that are mutually inconsistent. Let me focus on examples from fields I have worked in or closely adjacent to. Economics is founded on scarcity of private goods, rationality and methodological/institutional individualism; sociology is founded on collective goods, semiotics and methodological/institutional collectivism. Within economics, the field of public finance essentially rules out market power, while the field of industrial organization rules out inequality and general equilibrium. While attempts to bridge fields and subfields do occur, they are a small fraction of total activity at any time, underfunded and, most importantly, typically result not in the supersession of the connected fields but rather in the birth of a new field or subfield (e.g. data science from statistics and computer science, behavioral economics from economics and psychology). At the same time, most significant impacts of academia on practice (or at least those that have received a charitable historical reception) have required collaboration and assent across fields (e.g. the birth of the internet and personal computing, the most effective responses to the pandemic).
While such pluralism is a core and persistent feature of these domains and many others (e.g. sports, technology, industries), discontent with such heterogeneity is persistent and attempts to overcome it are common. With regard to religion, many atheists view (ostensibly “inconsistent”) religious diversity as proof of their own faith in the power of singular “reason” (itself highly elusive, as noted above) to explain all. Regarding the plurality of languages, many look back with mourning at the story of Babel and imagine a day when all will speak a common, universal and “rational” language. And regarding academic plurality, interdisciplinary agendas often veer into aspirations to “theories of everything” that will e.g. reduce economics to psychology or replace general relativity and quantum mechanics with a greater string theory.
Yet in all these cases, as with aspirations to methodological monism, attempts at grand unity have not simply failed historically: it is hard to imagine what a coherent attempt at success would even look like. Unless one simply wishes, as (especially neo-) atheists do, to discard and disrespect all metaphysical aspects of religion, what could it mean exactly to have a religion that encompasses all past religions and is not an alternative to them?
Attempts to rationally formulate the problem of language as an optimization (e.g. information-theoretic analyses of verbal parsimony over the distribution of concepts typically discussed) can yield some interesting observations, but are almost laughable as potential foundations for designing actual languages, at least in the near future. First, attempts thus far are obviously absurd oversimplifications of the underlying problem, which involves dozens of other factors from ease of articulation and memory to adaptation to the vast diversity of contexts. Second, even if the problem could be formally defined, finding even an approximately optimal solution over the set of all possible languages is almost certainly computationally intractable in any reasonable period and almost certainly involves both a range of non-convexities and flat regions that mean it would itself output a multiplicity of languages. Third and most fundamentally, there is almost certainly no optimal solution independent of context, and thus any even approximately optimal solution would involve at least some degree of plurality of languages, though perhaps not as mutually unintelligible as languages currently are. All this means that a “rational” or approximately optimal language is almost impossible to imagine, and the few attempts at achieving this (such as Esperanto, which I used to speak and be quite an enthusiast of) have achieved limited success both in practical adoption and in demonstrating anything like widely accepted optimality or even preferability relative to existing languages.
While the scientific tradition of rationalism makes the unification of fields of scientific knowledge seem less obviously absurd than these previous two examples, I believe this is largely a mirage and that all the factors militating against a unification of languages or religions are all the stronger in fields of academic study. Imagining a coherent field of analysis in which people can be meaningfully trained that subsumes, for example, computer science, economics, sociology, anthropology, and analytic and continental philosophy into a single set of standards and foci is at least as preposterous on its face as doing this for languages. And taking seriously the values embedded in each of these fields without erasing the commitments of the others involves many of the challenges a unified religion would.
Thus, again, a natural epistemic project presents itself, parallel to institutional pluralism. Rather than aspire to the consolidation of all fields into a single theory, or at least progress towards such a universal truth, why not aspire to the increasing speciation and differentiation of knowledge as well as to active investment in the bridging across such specialties to develop specific applications and technologies? That is, why not imagine knowledge not as a hunt for a single, ultimate and universal truth, but in the spirit of ecology: a gradual evolution of stably coexisting diversity that speciates and complexifies as it develops?
Principles for Plurality
Hopefully the parallels between plural institutions and epistemics are now coming into view. To sharpen the parallel, I’ll distill it into a collection of principles applicable in both cases:
1. Recognition: Recognize (formally or intellectually) social groups/schools of thought and accord them epistemic or institutional status beyond the sum of members/individual ideas that make them up.
2. Subsidiarity: Activities and topics that primarily concern a particular social group, its members or the ideas it focuses on are delegated to the collective control or intellectual authority of that group, rather than directly structured by a broader level of organization or analysis. Meanwhile, broader levels of organization/analysis concern themselves to a significant degree with interactions among narrower ones, rather than directly with the behavior of individuals or with specific analyses.
3. Neutrality: A set of widely acceptable formalisms and analytic practices should govern the interaction across social groups/analytic fields (and individuals), such that all have rough consensus that the rules of the game under which they operate do not inherently favor one group/field over another.
4. Cooperation across diversity: Actions and views that receive assent across a range of social groups deserve greater credence, even when holding fixed any profile of individual degrees of assent.
5. Adaptation: The set of recognized groups and fields should adapt to natural changes in the set of groups that are broadly understood to be important to social and intellectual intercourse, and encouraging the dynamic formation of new patterns of social affinity and intellectual investigation should be a goal of the social system.
Most of these features are self-explanatory and appear in most pluralist traditions, whatever their differences. The more controversial elements would likely be 3 and 5. There is a significant minority of pluralist thought/applications in which neutrality is not a criterion and the attitude towards some social groups is explicitly one of toleration rather than equality. I will not focus on incorporating such biases, since excluding some groups from a pluralist conception is straightforward, once one has a neutral formalism in hand. It thus seems to me that in some sense the central task is determining what a neutral pluralist formalism might look like; those who apply it may decide some forms of bias are appropriate.
Perhaps a majority of pluralist philosophies and institutions (especially those with a conservative bent) would tend to reject adaptation and focus instead on some fixed set of social groups. While this would seem to place me at odds with most other pluralists, it seems to me this is less a design desideratum and more of a necessity if pluralism is to answer its critics discussed above and survive in the face of other more dynamic social philosophies. As such I view this criterion, despite its contrast with other pluralist philosophies, as necessary to answer the existential questions posed to pluralism above.
A Plural Case for Plural Institutions
The question then naturally arises of whether plural thinking can support plural institutions. Is plural thinking too broad and divergent to support plural institutional designs? I will try to illustrate why not by showing how a range of perspectives converge on plural institutional design. I do this by picking out one idea from the principles above, cooperation across diversity, and illustrating how plural thinking can support it. Because this essay is primarily intended to inspire and persuade formal technologists, I will focus on formal models as ways of structuring the justification/derivation of institutions, though a full formal analysis is outside the scope of this piece. I will show how relaxing the highly simplistic and implausible assumptions about social structure in typical monist models (assumptions that even the strongest advocates of those models and their conclusions would agree are only simplifications) tends to support plural institutions in a reasonably convergent form, as discussed above. To that end, I’ll focus on seven examples that are far from exhaustive but hopefully representative:
1. Statistical epistemics: The “wisdom of crowds” argument is commonly used to justify both market institutions (like prediction markets) and democratic institutions. This dates at least to the celebrated Jury Theorem of the Marquis de Condorcet (1785): the idea is that if many individuals have independently and identically distributed signals of the truth, equally weighted voting by all is more likely to deliver the truth than voting by any subset. The strength of this result as a justification for democracy has always been overstated; the same logic implies that those with more precise signals should be given greater weight, unlike the standard one-person-one-vote rule. Yet a much richer and more generic variation that does not require any inegalitarian hypotheses is that signals will not generally be independently distributed. One of the few things that almost everyone in contemporary politics would accept is that there are different “tribes” with clusters of biases that correlate their views (and likely their errors).
While the precisely optimal estimator will depend on the precise structure of statistical dependence, there is a strong and straightforward statistical intuition that greater weight should be given to the accumulation of relatively independent signals than to those with correlated errors. This principle is often referred to as “consilience” in the philosophy of science and is quite broadly accepted. It suggests that a course of action supported by socially disparate groups that are unlikely to be correlated deserves greater relative credence than any symmetric/exchangeable function of the credence of individuals would indicate. This tends to support the principle of cooperation across difference. It is also broadly consistent with neutrality, as it seems likely there can be reasonably broad social agreement about correlation structures even if there is little about precisions. Finally, it seems easily adaptable as it is based on general statistical properties rather than specific historical circumstances treated as immutable (the first sketch following this list simulates the effect of correlated errors).
2. Statistical power: Penrose (1946) analyzed the voting power of an entity with many votes relative to one with a single vote. In the case where individuals have preferences drawn independently from identical distributions (iid), one individual with M votes has the same chance of changing the outcome as do M² individuals together, each with a single vote, because, on average, iid voters tend to cancel each other out. Penrose thus argued that delegations should have voting power equal to the square root of the population they represent, a rule approximately implemented in the Nice Treaty that forms the basis of voting in the European Union.
As with the previous example, the assumption that every vote is either drawn iid or part of a perfect coalition rarely holds. Instead, political science teaches us that a variety of social memberships tend to correlate voting preferences, but rarely perfectly so. In such a setting, equalizing effective voting power across various parties and groupings would be quite a bit more complicated than the simple square root rule suggests, but would likely follow similar qualitative patterns. Thus, votes from tightly correlated groups should be down-weighted relative to their population size when compared to the votes of socially uncorrelated groups, giving relatively greater strength to choices whose support is socially broad relative to its sheer size.
3. Game and incentive theory: My background is primarily in mechanism design, founded on game theory. Most mechanisms are designed from the assumption of individual self-interest, denominated in units of money-equivalent value. Quadratic Voting (QV) is an approximately optimal collective decision procedure for large groups based on this formulation. In it, individuals use some currency (often equally allocated voice credits) to express assent for or opposition to various propositions or candidates. However, the total support received is not the linear sum of the individual contributions; it also adds “interaction” components so that limited contributions from many people are more powerful than a large contribution from a single individual. This effect is quite dominant: one token from each of 1000 individuals counts for as much as a million tokens from one individual. This solves the free-rider problem, whereby individuals who are small parts of large projects under-contribute.
However, as before, the assumption of atomized individual self-interest is widely understood as narrow. Socially close individuals tend to have altruistic concern for one another’s interests, concern that can directly solve the free-rider problem. Failing to account for this can lead QV to “oversolve” the problem and thus significantly skew outcomes in favor of this affiliated group.
Buterin (2019) shows that this problem can be overcome if the quadratic terms related to the interaction between collections of individuals with altruistic feelings (possibly revealed by their actions in the mechanism itself) are scaled down. This suggests that approximately optimal voting may be possible with a procedure that combines linear and quadratic elements, depending on the social closeness of the relevant individuals. This will tend to favor diversity, as the interaction terms between socially distant individuals are preserved while those between socially close individuals are negated, allowing causes with socially diverse support to triumph over those with more homogeneous support (the second sketch following this list illustrates such a closeness-based discount).
4. General equilibrium economics: The canonical framework for understanding economies holistically is “general equilibrium”. The theory of general equilibrium shows that under certain conditions, capitalist economies with appropriate initial endowments generate all desirable outcomes. However, two critical conditions are that all goods are private (consumed by a single individual) and that production of these goods exhibits “decreasing returns” (the amount produced by a collection of resources is at most the sum of what those resources could produce acting separately). These conditions rule out most phenomena of relevance to modern society, such as technological innovation, network effects, infrastructure, cities, etc. Groves and Ledyard (1977) show that a simple, global public good can be added to this setting and optimality preserved using a quadratic mechanism like QV.
More generally, individuals must participate in funding the costs of infrastructure and public goods that are not covered by marginal cost pricing and voluntary contributions. Because the support of these shared goods benefits all those who share in their value and many contribute to supporting them, even selfish individuals will typically care about the material position of others who are close to them in the network of co-consumption of shared goods. This generates effectively altruistic concern as part of “enlightened self-interest”. This in turn should, as in 3) above, make mechanisms that mix quadratic and linear elements (or are quadratic in some rotated space) necessary to achieve efficiency, an effect Immorlica et al. (2019) explore. For similar reasons, these mechanisms should effectively subsidize collaboration across “social” (co-participation in common goods) distance.
5. Security and trust: In a canonical model of network security, every link in a graph has a weight representing the value of a relationship that can flow across that link (“trust”) without leading to a failure. Relationships can also flow between nodes that are not directly connected, through “friends of friends” paths, with the maximum trust such connections can facilitate determined by the weakest link along the path; the total strength of relationships can be calculated by “max flow” through the network.
If we consider the value that can be attributed to a collaboration across more than two individuals, the max flow problem must be solved jointly. If the paths of trust to a collaborator are independent across members of the coalition, the total trust that can be achieved by the coalition will be close to the sum of trust across members. However, if these paths intersect, then the total trust will be sub-additive, as trust routed to one member will congest the pathways of trust to another. If, for example, all coalition members are ultimately connected through just a single link, no more trust can be routed to the coalition than that one link carries. Thus, a coalition will have greater trust to the extent its paths to the subsidizing collaborator are socially diverse (the third sketch following this list works through a minimal example).
6. Security against attack: Another security model, closely connected to those used to study/justify blockchains, considers a case in which validator nodes may be compromised/attacked and classifies systems in terms of the fraction of nodes that need to be compromised for an attack to succeed. Yet these statistics implicitly assume that the chance of a validator being compromised is independent across validators, which seems a very special case given the way viruses and other attack vectors usually work. Instead, consider a model in which there is a network among validators, where every weighted edge represents the probability that an attack infecting one node spreads along that edge to an adjacent node. In this case, the compromise of a cluster of connected nodes may be quite common, while simultaneous compromise of many distant nodes will be very uncommon. Systems that are resilient unless many independent clusters are compromised will therefore be much less likely to fail than those with guarantees based purely on the number of nodes (or amount of computational power) compromised. Put differently, such guarantees will translate into requiring many more independent points of attack.
This in turn suggests that if a system has visibility onto the network structure but not onto which nodes have been attacked, requiring consensus across diversity will be much more effective in deterring attacks than thresholds of computational resources or nodes (the fourth sketch following this list compares the two kinds of guarantee). A similar analysis would be relevant in supply chain planning anticipating war or disease that may spread across a social network.
7. Checks, balances and cancers: One of the most common concerns in the design/analysis of complex systems is stability/robustness. Biological systems that do not successfully eject cancers die and, as James Madison wrote in Federalist No. 10, a similar argument applies to the avoidance of domination by political factions. The design of biological, political or other institutional checks-and-balances is typically based on the avoidance of such an outcome. For example, Madison argues that the differences in the selection principles of the various bodies in the American Constitution make it unlikely that all would be simultaneously captured.
To formalize this logic, consider a network representing the probability or intensity of political coordination (or cancerous spread) between two agents (or physical loci). Avoidance of domination by faction (or cancer) will require checks and balances to be related to network distance, so that network-close clusters cannot simultaneously capture many institutions. In such systems, major changes will require cooperation across diversity.
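Before turning from theory to evidence, a few minimal computational sketches may help make the flavor of these arguments concrete. The first, corresponding to examples 1 and 2, assumes a deliberately stylized model in which each social group shares a common bias on top of individual noise; the group sizes, variances and the two estimators compared (a raw average of individuals versus an average of group means) are illustrative choices, not the optimal estimator alluded to above.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUTH = 0.0
GROUP_SIZES = [80, 15, 5]      # one dominant bloc and two smaller ones (illustrative)
SHARED_BIAS_SD = 1.0           # errors are correlated within a group via a shared bias
IDIOSYNCRATIC_SD = 1.0
TRIALS = 20_000

def estimate_once():
    """Return (raw mean of all signals, mean of group means) for one draw of the model."""
    all_signals, group_means = [], []
    for n in GROUP_SIZES:
        bias = rng.normal(0.0, SHARED_BIAS_SD)                 # bias shared by the whole group
        signals = TRUTH + bias + rng.normal(0.0, IDIOSYNCRATIC_SD, size=n)
        all_signals.append(signals)
        group_means.append(signals.mean())
    pooled = np.concatenate(all_signals).mean()                # weights groups by their size
    across_groups = np.mean(group_means)                       # weights each group equally
    return pooled, across_groups

results = np.array([estimate_once() for _ in range(TRIALS)])
rmse_pooled, rmse_groups = np.sqrt(((results - TRUTH) ** 2).mean(axis=0))
print(f"RMSE, raw average of all individuals: {rmse_pooled:.3f}")
print(f"RMSE, average of group means        : {rmse_groups:.3f}")
```

Because the large bloc’s shared bias dominates the raw average, the estimate that treats each relatively independent group as roughly one signal is more accurate; agreement across socially distant groups carries more evidential weight than the same number of endorsements drawn from a single bloc.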
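The second sketch corresponds to example 3. In the spirit of the correction attributed to Buterin (2019) above, it scales down the quadratic “interaction” term between each pair of contributors by their social closeness; the discount form and the closeness values are illustrative assumptions, not the specific functional form analyzed in that work.

```python
from itertools import combinations
from math import sqrt

def discounted_quadratic_support(contributions, closeness):
    """Linear sum of contributions plus cross terms discounted by pairwise social closeness.

    contributions: dict mapping contributor -> voice credits spent
    closeness: dict mapping frozenset({a, b}) -> closeness in [0, 1];
      a missing pair counts as 0 (socially distant), so its interaction term is kept in full.
    """
    linear = sum(contributions.values())
    interaction = 0.0
    for a, b in combinations(contributions, 2):
        c = closeness.get(frozenset({a, b}), 0.0)
        # each unordered pair appears twice in the expansion of (sum of square roots)^2
        interaction += 2 * (1 - c) * sqrt(contributions[a] * contributions[b])
    return linear + interaction

# Two proposals, each backed by ten one-credit contributions.
diverse = {f"d{i}": 1.0 for i in range(10)}   # contributors with no social ties to one another
clique = {f"c{i}": 1.0 for i in range(10)}    # a tight-knit cluster
clique_ties = {frozenset(pair): 0.9 for pair in combinations(clique, 2)}

print("support, socially diverse backers:", discounted_quadratic_support(diverse, {}))          # 100.0
print("support, tight-knit backers      :", discounted_quadratic_support(clique, clique_ties))  # 19.0
```

Identical individual contributions thus translate into far more collective weight when the contributors are socially distant, which is the cooperation-across-diversity principle in mechanical form.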
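The third sketch corresponds to example 5 and uses a standard max-flow computation (via networkx). The graph, names and capacities are invented for illustration: a counterparty v evaluates a two-person coalition {a, b}, and joint trust is the maximum flow from v to an artificial sink attached to the coalition.

```python
import networkx as nx

def coalition_trust(edges, source, coalition):
    """Maximum trust the source can route to the coalition as a whole."""
    g = nx.DiGraph()
    for u, v, cap in edges:
        # trust relationships are treated as symmetric, so add capacity in both directions
        g.add_edge(u, v, capacity=cap)
        g.add_edge(v, u, capacity=cap)
    sink = "__coalition__"
    for member in coalition:
        g.add_edge(member, sink)   # no capacity attribute means unbounded in networkx
    value, _ = nx.maximum_flow(g, source, sink)
    return value

# Case 1: a and b are reached through disjoint paths of strength 1 each.
disjoint = [("v", "x", 1.0), ("x", "a", 1.0), ("v", "y", 1.0), ("y", "b", 1.0)]
# Case 2: both are reached through the same single link v-x of strength 1.
shared = [("v", "x", 1.0), ("x", "a", 1.0), ("x", "b", 1.0)]

print("joint trust, disjoint paths:", coalition_trust(disjoint, "v", ["a", "b"]))  # 2.0
print("joint trust, shared link   :", coalition_trust(shared, "v", ["a", "b"]))    # 1.0
```

Trust adds when the coalition is reached through socially diverse paths and collapses to the shared bottleneck otherwise, exactly the sub-additivity described above.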
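The fourth sketch corresponds to example 6. Rather than modeling contagion explicitly, it makes the coarse illustrative assumption that an attack reaching a cluster compromises it wholesale (perfect within-cluster correlation) while clusters fall independently of one another, and then compares a node-count guarantee with one that requires compromise of many independent clusters.

```python
from math import comb

N_CLUSTERS = 5   # equal-sized clusters of validators (illustrative)
Q = 0.2          # chance any given cluster is compromised wholesale in some period

def prob_at_least(k, n, p):
    """Probability that at least k of n independently compromised clusters fall."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# A guarantee stated purely in node counts: with equal clusters, a third of the nodes
# is already exceeded once 2 of the 5 clusters fall.
node_count_failure = prob_at_least(2, N_CLUSTERS, Q)
# A guarantee requiring compromise across many independent clusters (here 4 of 5).
cluster_diversity_failure = prob_at_least(4, N_CLUSTERS, Q)

print(f"failure probability, 1/3-of-nodes guarantee   : {node_count_failure:.3f}")        # ~0.263
print(f"failure probability, 4-of-5-clusters guarantee: {cluster_diversity_failure:.4f}")  # ~0.0067
```

Under correlated compromise, the same per-cluster risk breaches the node-count threshold far more often than a threshold demanding compromise across diverse clusters.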
While I have focused primarily on theoretical frameworks that can be potentially used to derive institutional designs, several loosely related empirical frameworks from a range of fields tend to point in the same general direction. Examples include:
- A large part of work in science and technology studies, including the quantitative analyses of James Evans and his many collaborators, which shows that scientific innovation generally emanates from the bridging of previously non-communicating scientific fields. An excellent recent review of this work was published in Science.
- Literature in organizational and business studies has proliferated in recent years showing the benefits of diversity along a variety of dimensions from cultural to cognitive in teams and organizations. Some excellent recent reviews are by Scott Page and McKinsey.
- David Graeber and David Wengrow’s The Dawn of Everything highlights the paramount importance over the long sweep of human history of human cultural, organizational and political diversity in producing modernity and progress, to the extent such things are even meaningful ideas to them.
Challenges for Institutional Pluralism
Merely demonstrating the internal coherence of pluralism, even if the sketch above could be completed, is only a first step. There are many natural critiques that monists might level against pluralism. As above, I will only briefly respond to them; a full and persuasive response would not just require too much space but more importantly, as I’ll turn to next, would require much more research and development than exists today. Nonetheless, I believe there are good reasons to be hopeful these problems can be surmounted. These are critiques I take very seriously, as they are ones I have leveled myself against other social philosophies, as I will highlight below.
- Is Plurality underspecified? While lots of thinking seems to support plural institutions of some form, these are far from converging on some single optimal graph protocol, and in fact this whole line of argument gives grounds for pessimism that such convergence will ever be possible. This raises some basic problems for Plurality. First, do pluralists really stand for anything? Is there enough they can agree on and hope to commonly defend? Second, doesn’t this flexibility give a great deal of room for elites with technological knowledge to design apparently neutral pluralist protocols that in fact privilege their narrow perspective and/or interests? This second concern has been particularly prominent in my own writing in the past.
I am cautiously optimistic that many provably optimal or approximately optimal protocols for the specific models above will turn out to be robustly approximately optimal in several of the models, or at least to greatly outperform the monist alternatives. If this hope is validated, it should be possible to maintain epistemic pluralism without undermining the success of Plurality. If pluralists can agree that there is at least a class of plural institutions that all of them prefer over monist alternatives, reasonable diversity in which of these are applied can coexist with a movement supporting the broad (and epistemically pluralist) deployment of plural institutions. And if this class consists of a relatively small number of reasonably robust protocols, the space for parametric manipulation to narrow interests can be contained.
- Is Plurality too risky? Identifying practically useful and not merely theoretically supported institutions will require significant social experimentation with their use. Many of these institutions will be novel and require significant social digestion to work to their potential. Performing this at the level of the sort of society that I used to motivate the problem seems incredibly risky, especially at a time when various technology-mediated social experiments are already wreaking havoc. Is there a reasonably safe pathway to the adoption of plural institutions?
Luckily, we have seen the emergence in the last decade of a rich and growing space for experimentation with new social technologies. Two leading examples are the “Web 3” community and the new cluster of “digital democracies” led by Taiwan but also including the extended Baltic countries and significant pockets of East Asia and Australasia. A wide range of new modes of social organization are being explored in these spaces with significant success, and both the Taiwanese case and significant pockets of the Web 3 community share an orientation to pluralist values. This suggests natural laboratories for relatively safely — but with real social engagement and stakes — experimenting with these new social technologies. Furthermore, the very multilevel conception of social organization pluralism embraces naturally opens space for experimentation, as pockets of a range of organizations (churches, corporations, city governments, labor unions, etc.) are natural places for experimentation to both proceed and eventually scale.
Cities exemplify a scale of political organization that seems maximally ripe for such experimentation, as there are roughly as many cities in the world as there are people in a typical city, but all significant organizations have some similar level where experimentation is natural and where there are problems pluralism can naturally solve. Consider corporations, where divisional differentiation is widely seen both as necessary and as a fundamental source of some of the worst problems facing businesses; plural institutions seem a natural solution space.
- Is Plurality too centralized? Another concern is that the algorithms above seem to rely on a panoptical perspective on social relationships, performing intricate calculations on the entirety of a social graph. Such a panoptical view is unrealistic and, even if it could be achieved, would be undesirable for any entity to have because of its intrusive nature.
While I again strongly agree with the motivations behind this concern, I am skeptical that it need apply in this case. Certainly some implementations of the protocols above would depend on a panoptical perspective, but a wide range of protocols on graphs can be run in a highly decentralized (or at the very least polycentric) and low-to-zero-knowledge fashion, with limited harms to performance. Given the extensive literature on and development of protocols in this vein, it seems to me that concerns about centralization can and should be a (soft and flexible) constraint as we seek to design plausible pluralist social technologies, but it seems unlikely that this constraint will eliminate the most attractive candidate solutions.
- Is Plurality manipulable? A closely related concern is that the pluralist mechanisms above seek to “subsidize” or otherwise support bridging ties and unlikely consensus across previously uncooperative social groups. In seeking to do so, they appear to relatively “tax” strong social relations, providing cause for these relations to be suppressed or hidden from the eye of the system.5
At the same time, there are important principles of incentive design that Plurality embraces and extends. The first, often called “monotonicity” among economists, is the idea that systems should, when their goals align with those of agents, defer to the preferences of those agents. Subsidiarity may be seen as the extension of this principle to groups, which pluralism recognizes in the way standard economic theory recognizes agents.
A second principle is “countervailing incentives”: if a system creates an incentive for an agent to exaggerate some variable, there should be an offsetting incentive to understate this variable. A classic example is the Harberger Tax, where the usual incentive of a property owner to overstate the price they’d require to sell their property is offset by making a tax dependent on the price they commit to sell at (a small numeric sketch at the end of this list of challenges works through a toy example). While again the details may vary by specific algorithm and it is reasonable to focus on the value of this principle as a constraint on designs, it seems natural that well-executed Plurality would tend to involve precisely such countervailing incentives.
In an ambitious implementation of Plurality, one would expect money owned by individuals (a single variable connoting universal social value/esteem) to be replaced with some higher dimensional social currency connoting esteem held by individuals within a variety of social communities or relationships. Agents would tend to have incentives to overstate these in the same manner they would like to have greater wealth in a standard capitalist economy. But this incentive would tend to offset the incentives in Plurality for understating social relationships to receive “subsidies” for forging them. Similar logic would apply to community incentives for overstating the strength of interest in some topic, the degree of solidarity and the strength of internal institutions in order to maximize subsidiary delegation of topics to that community: incentives to maximize delegation offset against those to maximize subsidies for collaboration.
While this may all seem quite abstract, it has counterparts in ordinary affairs that give me some real hope that it would work out in formal systems. We rarely see, for example, white men choosing to try to pass as, say, black women despite the alleged incentives that “affirmative action” would tend to create for doing so for the simple reason that being a white man is a source of social power that few would want to give up for whatever the limited benefits of “affirmative action” might be. In fact, we even see limits to attempts to “pass” in the other direction, given the social costs even to marginalized communities of undermining the social bonds they have forged. This is not to say there are no social groups that try to hide their solidarities, especially when they are heavily persecuted, but in a world where social capital is king (our world clearly, and even more so in a world governed by plural institutions), it seems unlikely that the primary problem would be people trying to undermine their holdings thereof.
- Is Plurality legitimate? Perhaps the most serious challenge to Plurality is that its mechanisms may be too rich/complicated in their design to be socially digested and legitimated. This seems like a real challenge to me and one that inevitably involves some degree of trade-off between optimality and legitimatability. No doubt the adoption of mechanisms of such richness will have to proceed through a range of experimentation, artistic exploration, exposition, education and perhaps even through new technologies of understanding, such as virtual reality simulations.
While I have little doubt that this will be a genuine problem and will put real and meaningful limits on how much can be achieved, there seems little argument for avoiding ventures into Plurality entirely on these grounds. Quite the contrary: given the wide range of possibilities, it seems quite likely that some plural institutions, likely significantly suboptimal ones, will be possible that have excellent user interfaces, are easy for people to get a sense for, etc. There is much precedent for this in the history of computer technology. Such “human factors” deserve the paramount attention they received in the best phases of the development of personal computing and the internet, not the afterthought status they often have within the Artificial Intelligence community. But at the same time they don’t strike me as necessarily insurmountable barriers to getting started. Furthermore, there are many protocols that seem just as byzantine and hard to legitimate (e.g. Proof of Stake or many black box statistical methods) that aren’t very plural and have nonetheless achieved significant success and legitimation recently. In short, I hope the critical question of legitimation will constantly be posed about plural mechanisms as a scalpel to shape them, but not as a cudgel to prevent their development in the first place.
In fact, there is a sense in which I think Plurality may not only be legitimatable but may represent one of the only reasonable directions for broadly legitimate and ambitious reform of our foundational social institutions. After all, many of the most important and transformative moments in defining the contemporary political landscape have relied on plural institutions that must have seemed fairly opaque at the time and arose from compromises between competing groups (e.g. the US constitution). Given the commitment of large chunks of the public in liberal democracies to some form of pluralism (especially among conservatives and the new left) and the commitment of other segments to formalism and technology, Plurality may be the most plausible ground of convergence, our best hope for significant fundamental social change.
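To make the countervailing-incentive logic from the manipulability discussion above concrete, here is a deliberately toy one-period sketch of a Harberger-style self-assessment; the uniform buyer value, the assumption that the tax is paid on the declaration whether or not a sale occurs, and all parameter values are invented for illustration.

```python
import numpy as np

V = 0.5                                   # owner's true value for the asset (illustrative)
prices = np.linspace(0.0, 1.0, 1001)      # candidate declared prices

def expected_payoff(p, tax_rate, v=V):
    """Buyer's value ~ Uniform[0, 1]; a sale occurs at price p whenever it exceeds p.
    The owner pays tax_rate * p regardless of whether the asset sells."""
    prob_sale = 1.0 - p
    return prob_sale * p + (1.0 - prob_sale) * v - tax_rate * p

for tax_rate in (0.0, 0.25, 0.5, 0.75):
    best = prices[np.argmax(expected_payoff(prices, tax_rate))]
    if abs(best - V) < 1e-6:
        verdict = "roughly truthful"
    elif best > V:
        verdict = "overstates"
    else:
        verdict = "understates"
    print(f"tax rate {tax_rate:.2f}: optimal declared price {best:.3f} ({verdict})")
```

With no tax the owner declares a monopoly markup above the true value; with a heavy tax the owner understates to cut the bill; in between the two incentives offset, which is the sense in which a well-chosen countervailing incentive can elicit roughly honest reports.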
Open Questions
While I believe these grounds give good hope that the challenges for Plurality can be surmounted, doing so will require a variety of developments. Some of the most important direct questions raised by the above discussion include:
- What are optimal mechanisms in various models and their parameter settings? How can they even be parameterized in the first place and how much overlap is there across these?
- How well do optimal or provably approximately optimal mechanisms for one setting or model perform in other settings or models? Are there designs that can perform tolerably well across several contexts?
- What computational structures and substrates are needed to implement these mechanisms? How can they be decentralized? How can they be conveyed to and understood by their users?
- What are the best minimal viable experiments with Plurality? Where could they add the most immediate recognizable value?
Because of the plural thinking underpinning these questions, answering them will require participation across a range of fields of study (e.g. sociology, economics, computer science, statistics) and practice (e.g. community organizing, business, blockchain communities). Yet even great progress in these areas would be only the beginning. Many other questions in the development of Plurality would clearly follow, and likely many lie just beyond the horizons of these:
- How can flexible, dynamic and adaptive subsidiarity work and how can it combine with the mechanisms of plural cooperation I highlight above?
- How can Plurality be extended beyond allocations of presently quantified power like those I highlight above, to semiotic domains, where speech, meaning-making, cultural capital and representation are the relevant concerns?
- How can new technologies of meaning-making, such as virtual reality, sustain and enhance pluralism, both cultural and institutional?
Onward Plurality
Throughout this essay I have used the terms “pluralism” and “Plurality” almost interchangeably. I want to now emphasize why I see the need for a separate term Plurality. I believe that what I have outlined above is not simply a political or social philosophy; instead it constitutes a coherent direction for the future of digital technology, explicitly tied to a political philosophy. I believe this is critical because other primary directions of technology today have underlying but often unstated political values that tend to undermine pluralism.
As I’ve argued elsewhere, the most prominent discourse in the field of AI is an aspiration towards a singularity of “general” intelligence that aims to transcend the diversity of intelligences and replace the messiness of plural cooperation with optimization towards ends “aligned” to human desires, sometimes called “coherent extrapolated volition”. Even beyond these most ambitious visions, AI focuses on maximizing the autonomy of systems and their capacity to replicate human abilities rather than on systems that facilitate human communication and collaboration, in contrast to plural institutions; and it tends to cast such AI development as the primary/singular direction of technology, in contrast to plural thinking. In this sense, AI is a profoundly centralizing and monistic vision of the future of technology.
As I’ve argued elsewhere, the most prominent single narrative around Web 3 focuses on the liberation of “sovereign individuals” from the constraints of social institutions, leveraging “decentralized” protocols, particularly distributed ledgers, to act as the primary neutral substrate for computation and transactions. These ledgers, and the specific, formalist and quite narrow substrates for culture, sociality and coordination built on them, are seen as sufficient for the primarily capitalist, market-based interactions sovereign individuals (or tightly homogeneous groups in some cases) need to have with each other. Other forms of social organization are generally viewed as oppressive “legacy institutions” to be at best ignored or often actively undermined. This contrasts with plural institutions because it locates the individual (or at best a highly homogeneous community), rather than a subsidiary network of gradually more distant communities, as the ultimate “customer”, and with epistemic pluralism because it focuses on the sufficiency of a single immutable ledger rather than on facilitating plural technical modes.
In contrast to both, the development of Plurality, technology supporting pluralism, will require an architecture fundamentally different from these, though it can leverage tools developed along both trajectories. It will need to build closely on the subsidiary tradition that was core to the imagination of the internet as a “network of networks”, with diverse networks local to physical and organizational (academic, government, etc.) communities agreeing to shared protocols that enable their heterogeneous networks to interoperate. The thin protocols developed in this first wave will need to be extended to empower a broader range of fundamental rights, such as association, property and personhood, in the digital realm through decentralized but social systems of identity, data sharing, payment and so forth. Many of these protocols may harness blockchains for some purposes, but the emphasis of their decentralization will be localization and subsidiarity, as well as networking and interdependence, rather than pure redundancy and prevention of attacks. They will need to explicitly capture, represent, and incorporate social structure, rather than merely guard against “collusion” or “attack”.
They will need to harness the statistical techniques that underpin machine learning less to map and replicate individual human intelligence (e.g. through neural networks) than to make sense of and facilitate collective intelligence, allowing groups to reach rough consensus while honoring, empowering, and even proliferating persistent difference (though tools like neural nets might well be applied in that process).
They will need to harness virtual reality technologies to forge stronger and deeper human connections and creative imagination that can strengthen the real world, rather than to escape or establish an alternate reality.
To summarize, I turn to the poetic job description of Taiwan’s Digital Minister Audrey Tang:
- When we see “internet of things”, let’s make it an internet of beings.
- When we see “virtual reality”, let’s make it a shared reality.
- When we see “machine learning”, let’s make it collaborative learning.
- When we see “user experience”, let’s make it about human experience.
- When we hear “the singularity is near”, let us remember: the plurality is here.
Such an agenda is no doubt ambitious, but it is more closely aligned with both the political traditions of a large part of the world and the origins of today’s most celebrated technologies, such as personal computing and the internet, than are the primary directions of political and technological travel today. Its political and technological aspects are also aligned with each other. Perhaps it is unworkable. Perhaps it is undesirable. Or perhaps it simply requires more social forces to collaborate than other directions and thus has been slower to be realized. Let me close by imagining, and hoping, that this last is the case and that Plurality might be feasible.
If so, we have a far brighter future ahead of us than the past few decades have given us reason to imagine. We can harness digital technologies to transform, improve and proliferate our social institutions even more than those technologies have corroded and undermined them to date. With three orders of magnitude less investment than has been poured into Web 3 and AI, Taiwan has in the space of a few short years harnessed its initial foray in this direction to develop arguably the world’s most technologically sophisticated and vibrantly democratic society, with widely shared growth even through a pandemic that cost its population of 22 million less than a thousand souls.
Scaled up, such investment could mean the 21st century becomes a golden age of democratic citizen engagement empowering cooperation to tackle the many existential risks that face us while expanding meaningful freedom to choose and shape the communities that give our lives meaning. In such a future, dystopias of social collapse or subjugation to centralized artificial superintelligences could give way to an embrace of the power of diverse cooperation. A flowering of interdependent, intersecting, cooperative and diverse complexity, a lush rainforest of many cultures, beckons.
1. Other terms that are accurate but less accessible include “polycentrism” and “polycentric collectivism”.
2. See “Why I am not a Market Radical”, “Why I am not a Statist”, “Why I am not a Capitalist”, “Why I am not a Nationalist”, “Why I am not a Technocrat”.
3. Other influences include Hannah Arendt, W. E. B. Du Bois, Liam Kofi Bright, Edmund Burke, John Dewey, James Allen Evans, Henry Farrell, Marion Fourcade, Pope Francis, Corrado Gini, Jaron Lanier, Pope Leo XIII, J. C. R. Licklider, Georg Simmel, Anne-Marie Slaughter, Alexis de Tocqueville, Aníbal Quijano and Deva Woodly.
4. My thinking is in many ways closest to that growing out of the line of progressive and pragmatist thinking associated with scholars like Georg Simmel, John Dewey, W. E. B. Du Bois, Iris Marion Young and Deva Woodly. However, even there, I place less exclusive focus than many of these thinkers do on the specific cleavages and forms of pluralism associated with historical injustice emphasized by the contemporary American left, and a greater emphasis on seeking broadly consensual formalisms capable of capturing and adjudicating which cleavages should receive greatest emphasis, including both those historically marginalized and those historically protected.
5. This concern is closely tied to the centralization of the system, as centralization of information and the concealment it tends to encourage are mirrors of the “incentive misalignments” with which economists are often concerned.