authors: Demerath, Loren; Reid, James; Dante Suarez, E.
title: Teaching About the Social Construction of Reality Using a Model of Information Processing
date: 2020-05-25
journal: Computational Science - ICCS 2020
DOI: 10.1007/978-3-030-50436-6_48

This paper proposes leveraging the recent fusing of Big Data, computer modeling, and complexity science for teaching students about complexity. The emergence and evolution of information orders, including social structural orders, is a product of social interactions. A simple computational model of how social interaction leads to emergent cultures is proposed for teaching students the mechanisms of complexity's development. A large-scale poker game is also described where students learn the theory and principles behind the model in the course of play. Using the model and game can help instructors achieve learning objectives that include understanding the importance of diversity, how multiple realities can exist simultaneously, and why majority group members are usually unaware of their privileges.

The computational capabilities of the twenty-first century have upended the way we do research. Whether it is in the modeling of the rings of Saturn, or in the use of Big Data to predict our next online purchase, computer simulations now represent an integral part of most state-of-the-art research in a broad range of disciplines. The premise of this paper is that while this scientific revolution has changed how we think, conceptualize and analyze social phenomena at the highest levels of scholarship, these methodologies have not trickled down to the undergraduate classroom. We argue here that computational modeling technologies offer both new teaching techniques and new content. We attempt to marry some of the insights of traditional social science with those of complexity studies and systems thinking, proposing an agent-based model of computational sociology to serve as the cornerstone of study in an undergraduate course (or module), and a game that illustrates the model's principles.

The wave of Big Data that we reference concerns our continuous interaction with computers and the internet to generate increasingly large amounts of information on our behavior, providing researchers with data sets larger than anything seen before. Big Data allows us to precisely measure human interactions and preferences, whether in the "real world" or online. This huge amount of heterogeneous data, integrated with an appropriate modeling methodology, forces us to reconceptualize the possibilities of the scientific endeavor. Rather than recommend the unfiltered use of traditional Big Data techniques uncoupled from solid theoretical foundations, we argue that the multidimensional nature of reality requires us to teach these ideas and modeling techniques in a way that is both rooted in the established theoretical understanding of disciplines, and also in the holistic paradigm offered by complexity science. Tolk et al. (2018) propose human simulation as a lingua franca for the computational social sciences and humanities, and we heed their call to bring these techniques into the undergraduate classroom, in an intuitive format. Much of the development in this field has centered on agent-based models, where agents are explicitly modeled as independent computer programs that interact with each other to produce aggregate behavior that may include unexpected results.
These approaches claim that the emergent upper-level phenomena can be understood as a direct reflection of the interactions of the lower-level agents. Big Data, computer modeling, and complexity science are now fusing into a paradigm in which a new type of social science can be built. These ideas, however, are not yet universally accepted. In this regard, the completion of this scientific revolution may currently be stymied by a lack of imagination among scholarly leaders and those enforcing the status quo in academia. Our hopes would then lie with a new generation of researchers (no doubt including current undergraduates) who may conceive of a radically new paradigm for the social sciences, rooted in part in a new kind of experimental methodology where every online human movement is measured and catalogued. Traditional social sciences were not developed with these types of capabilities for experimentation in mind. Experimenting with humans was, until relatively recently, simply not a possibility in macroeconomics or sociology. In contrast, we now face a reality in which a company such as the ride-sharing Uber can let a computer algorithm loose on the task of making its non-centralized drivers most efficient.

As Gelfert (2016) points out, models are not only neutral abstractions of our world; they are the guiding light with which we conceive, mediate, contribute to and enable our scientific knowledge. Just as language shapes the world we live in, scientific models both constrain and contextualize our perception of reality. The premise on which we base our proposal is that a common factor in some of the Big Data applications that abound today is a lack of sufficient attention to the underlying theory relating the model to the observed behavior, or to the versatility of the model and its range of possible applications. A simulation produced by a data-mining process may contain multiple topological aspects that are particular to the way the model is built, and therefore not a true reflection of the real-world phenomenon at hand. With these common missteps in mind, we propose the use of a simple but interesting agent-based model of social interaction to serve as a connecting tool of instruction, in a course that teaches the benefits of model-oriented thinking, complex adaptive systems, and a holistic and interdisciplinary understanding of reality, particularly as it refers to the social sphere. As we describe in further detail, this can be a course for mathematics and computer science majors, but it can also be designed for students rooted in the social sciences. The course may explicitly teach concepts of complexity science, or simply offer the view afforded by any agent-based model and contrast it with the linear methodologies more often discussed in an undergraduate setting. The course can be taught by a specialist in any of the functional areas considered, or co-taught by more than one professor, each possessing a limited understanding of the interdisciplinary issues addressed in the course. Finally, the course has enough material to cover a regular semester curriculum, but it can also represent a module within a larger class.

Computational Sociology in an Undergraduate Classroom

One of the most important concepts in complexity studies is that of emergence, often referencing how the aggregate is more than the sum of its parts, and allowing us to understand a human mind as more than a collection of neurons, or a society as more than a collection of people.
If we accept the concept of emergence, we must conceive of a world where multiple realities coexist simultaneously (Suarez and Demerath 2019). Pessa (2002) and Tolk (2019) demystify the concept of emergence and separate it into two categories: epistemological and ontological. Epistemological emergence refers to situations where the system follows generalizable rules, but is irreducible to a single level of description for conceptual reasons. A system may therefore be epistemologically emergent if it follows rules that are in principle knowable; the unpredictability is a product of humans not being able to fully grasp these laws in their entirety. Ontological emergence, on the other hand, is where emergent properties are truly novel and cannot be explained solely by the components and their interactions. This is similar to Crutchfield's (1994) conception of intrinsic emergence as that which is independent of the external observer.

Until the second half of the twentieth century, science had no choice but to be linear. To continue advancing, most disciplines opted for the simplification of interacting agents, commonly describing them as homogeneous, independent and altogether exogenous. Linear science had to conceive of agents unrealistically in order to define and describe them independently. In doing so, one of the most important aspects of the system, the context in which the agents do interact, is erased. A linear model of reality that does not consider interaction between the forming parts will fail to adequately describe the structure of the system, and how that structure can change. But how a structure forms and changes is often what needs studying. It underlies what we do when we analyze the cohesion of an army unit, the coordination of a football team, the culture of an enterprise, or the checks and balances of a government. Repeated interaction gives rise to certain structures that deserve more attention than other pre-existing ones. The cells or atoms that make up a government are far less meaningful for one's analysis of it than social factions and patterns of events. In this way (though not necessarily in all ways), a complexity perspective can be seen as anti-reductionist. Aggregates are more than the sum of their parts, and we refer to such aggregates as emergent. To understand them, we cannot simply reduce the system's aggregate behavior to its minimal components, study them in isolation, and then rebuild the system by multiplying the behavior of the average piece by the number of pieces in it. In the emergent, nonlinear, irreducible world of complexity, nature exists on many non-orthogonal levels, with each level potentially being governed by different laws, granularities and structures. If we believe in that proposition, then we should expect to find a world full of emergent phenomena, with distinctive levels of interaction that have unique kinds of agents, laws and granularities.

The current state of affairs in this scientific revolution rooted in complexity science and Big Data offers us the possibility to combine the strengths of both to create a truly holistic vision of reality, a vision backed by the data and models of computer scientists and supported by the bolder, more comprehensive theories offered by social scientists. The social actors that sociology has considered for decades are now within the realm of possibilities of a simple agent-based model; a model that is now freely available to curious undergraduate students.
It is in this context that we offer the simplest possible model that the authors could construct that reflects this type of emergent behavior. The goal of a simulation, particularly one presented to novice students, must be to capture the aspects of reality that are computable, whether these are linear or emergent, and to minimize aspects of the model which are not compatible with reality. Naturally, no model can ever be perfect, and thus it rests on the researcher to be upfront and straightforward about a model's limitations and applicability. In light of these precautions, our recommendation is to provide an introduction to modeling to undergraduates that stresses the aspects of reality that can be captured in the simplest possible model, proceeding to more complicated models only when all aspects of a simple model have been fully explored and relatively well understood.

The model we propose for introducing computational sociology to undergraduates illustrates a theory of social emergence, as represented by simple agents interacting to create emergent social structures. Those structures are increasingly distinguishable from each other by the shared information that agents use within those structures. The model centers on two variables that affect the rates at which agents self-organize from the "bottom up" into social structures. Those variables are agent inequality in quantities of social information (which creates stability), and agent diversity in qualities of subjective information (which allows for connectivity). The model also includes variables that provide different representations of the information decay that varies with any environment, and which determine the rates at which structures emerge and then organize interactions "top down." The model is able to show how agents process information by compressing that which is shared in the course of interaction, through the mechanisms of quantitative social reinforcement (which could be called "echo effects") and qualitative meaning orientation ("diversity dividends," perhaps). Overall, the model can produce characteristic behaviors of complex systems, including the emergence and evolution of both meaning and structure, reproduction, and the suboptimization of an environment by overly dominant agents (Demerath, Reid, and Suarez, in progress).

The theoretical foundations of the model stress that novel information is found by agents interacting with some degree of mutual information and some degree of novelty, providing the probability that the interaction will be "meaningful" (Demerath 2012). The amount of mutual information determines the probability that the exchange will be comprehensible, while the relative total information of an agent determines its reliability. Together, the amounts of shared and total information involved in the interaction help determine the sensitivity of the agent to novelty. This principle explains how those who know a lot about something are sensitive to nuances that are invisible to others, or how calibrating a system to be highly sensitive to input and precise in its output requires much information. The interaction topology and mechanisms mean that if information decays too rapidly, agents eventually have no shared information, and interaction becomes meaningless. Alternatively, if information does not decay rapidly enough, agents eventually consume all novel information in the system, leaving them with no prospects for meaningful interaction. At this point, all activity stalls.
But inequality keeps things going. Agents of information are oriented and motivated by the information they have relative to the overall field of information as embodied by other agents (Bourdieu 1984). While that accounts for the "social gravity" of larger agents, that preference is balanced by the fact that less energy is needed to interact with "smaller" agents (those with less information). This is based on a premise of the theory that processing information requires energy, explaining how homophily is balanced by the need for novelty. Such is the situation we simulate in our model, where agents choose to interact based on the probabilities they calculate for meaningful information. As certain pieces of information become compressed over the course of interactions, "cliques" of agents emerge around their use of that information. Agents in such cliques have more energy to spend on acquiring new information, and more space for holding it. As such, agents that are part of homogeneous communities can, paradoxically, better explore for novelty, and such communities emerge and evolve accordingly.

The model offered here is an agent-based model that successfully produces simulations of socially distributed information processing, endogenous agency, and steadily more complex social and meaning structures. We intend this model to meet the requisites of generative emergence described by Cederman (2005) in the simplest possible topological construction. The model also illustrates specific causal mechanisms of emergence. These are not, though, mechanisms that exist at the level of the individual, as theorized by Hedstrom (2005) and modeled by Lane (2018), or at an infra-individual level as advocated by Sperber (2011). Instead, these mechanisms are interactional, as predicted by Sawyer (2011) and Demeulenaere (2011). However, our model depends on two "fields" of information: one is the social distribution of agents, and the other is the distribution of meanings in their environment. We interpret the way our model functions as supporting the view that systems function by achieving a degree of operational closure (Luhmann et al. 2013) that allows them to be nested, interpenetrating, and interdependent (Bryant 2011, Cilliers 2001, Byrne and Callaghan 2013). Our model blends the advantages of Bourdieu's (1990) conception of the reproduction of social structure through habitus with Archer's (2003) account of reflexivity called for by others (e.g., Elder-Vass 2010).

Details of the model are as follows:

1. Each simulation starts with at least one ambient phase space or "information space" for that simulation. This makes the knowledge of agents operating within the space comparable, and interaction possible.

2. A population of agents is distributed at the beginning of a simulation, each agent inheriting certain knowledge of the information space:
a. bottom-level "facts," defined by their relations with each other, and
b. higher-level "issues," defined by their relations to both facts and other issues.

3. Agents are programmed to interact with each other to reduce the entropy of their knowledge of the information space, by increasing its comprehensiveness, which they do by attempting to exchange with other agents for increasingly abstract and powerful issues in place of lower-level issues and facts.
a. the carrying capacity of all agents is limited, and is operationalized by limiting the number of facts or issues agents can hold.
b. the information decay rate is the same for all agents, and is operationalized by having agents randomly "forget" some element of their knowledge.

4. As simulations proceed, over the course of agent interactions, we are able to observe:
a. the development of "culture" as shared knowledge, and
b. the development of "social structure" as agent interaction patterns.

To show how interactions become increasingly structured over a simulation, we can plot the population of agents interacting over time through the information space. Such plots illustrate long-established theoretical claims and recent empirical findings. Namely, that agents organize according to certain shared meanings, thus illustrating a duality of culture and structure, as argued by Archer (2003); the use of probability judgements to negotiate shared understandings that order interaction, reflecting the pragmatist philosophy of Peirce, James, and Dewey, and the sociological symbolic interactionism of Mead and Blumer (Snow 2001); and finally, the recent approach in neuroscience (e.g., Friston et al. 2017) that finds empirical support for the view that uncertainty reduction is our motivation to think and interact with the world. This model illustrates those claims by producing evolving cognitive and sociological orders.

The model is set up in a multi-agent systems framework where agents are able to share information with each other, with the following features:
• each agent can store information
• there is a maximum amount of information that an individual agent can store
• agents can interact and share information with each other
• sharing means that the information is copied from one agent to another
• related pieces of information can be combined to form a new piece of information
• combining information decreases the amount of space the agent needs to store the information (referred to as "compressing the information")
• an agent does not have access to any information other than what it currently stores

Information is conceptualized abstractly. Each piece of information is a discrete object that takes up exactly one unit of storage, and is copied from one agent to another through interaction. We also assign each piece of information a point value (i.e. a positive integer) to reflect the degree to which it can compress information. Meanwhile, to help visualize the diversity of information, two distinct information structures can be used simultaneously with varying degrees of overlap for any agent. To observe those differences, and how interactions can end up integrating the structures, information from each structure is colored blue and red respectively.

Any information structure used by the model is variable in several ways. While each piece of information is a discrete object that takes up one unit of storage, the "capacity of agents" to hold information is variable. For example, if the agent's capacity is 6, then the agent can only store 6 pieces of information at any given time. In the NetLogo implementation we offer of this model, one can use the capacity slider to adjust the capacity of the agents (every agent has the same capacity). To represent the degree to which agents can compress information, we use a tree data structure. A "tree" consists of nodes (circles) and edges (lines connecting the circles), and has one root node (at the top of the tree). For our purposes, each node is one piece of information. See the illustration below as an example.
Notice that each node (except those at the bottom level) has exactly three children. We call this the branching factor of the tree. If you start at a node at the bottom level and follow the edges (i.e. the arrows) to the root node, it takes two moves to get there. We call this the height of the tree. The number in the node is the quality point value of that information. When the type is set to "height-based" (vs. the options of "random" or "file"), the point values increase as we move towards the root node (higher levels give higher points). During the model's setup phase, each agent is randomly assigned pieces of information (up to their capacity) from the bottom level. As noted above, the NetLogo implementation of the model allows two trees to be used, colored red and blue. Together, these trees form the information space of any simulation, and the basis for compressing information through information sharing.

We can now discuss how agents go about sharing information with each other. Each agent performs the following three-step process: 1. choose another agent, 2. select a topic, 3. share, learn, and forget.

In every turn, an agent needs to first choose another agent, so that she can interact with it. Agents are attracted to each other through two values: similarity and novelty. This means that for every other agent, the agent must calculate a similarity and a novelty value. Both of these values are real numbers between 0.0 and 1.0. Let us consider the interaction between two agents, which we can refer to as Alice and Bob. Assume Alice is thinking about interacting with Bob. Similarity represents how similar Bob is to Alice from Alice's point of view. Alice does not know what information Bob is storing. So, we define similarity to be: (score of information both Alice and Bob know) / (Alice's total score). Notice that this value is relative to Alice's score. Bob's similarity score for Alice might be different, because we would divide by Bob's total score instead. In particular, if Bob knows everything Alice knows, plus some additional information, then Alice's similarity score for Bob will be 1.0 (the highest possible). However, in this case, Bob's score for Alice will be less than 1.0. Novelty represents how much new information Alice could gain from interacting with Bob. We define novelty to be: (score of information Bob knows that Alice doesn't know) / (Bob's total score). For example, if Alice and Bob have no information in common, then Bob's similarity would be 0.0 and novelty would be 1.0. On the other hand, if Bob knows everything Alice knows plus some additional information, then similarity would be 1.0, but novelty would be greater than 0.0.

Finally, we need to combine these values to assign Bob a sim-nov value. In our model, Alice prefers to interact with agents with higher sim-nov values. The model is set up to do this in two different ways: "min" and "linear". In min mode, we define the sim-nov value for Bob to be whichever value is smaller (similarity or novelty): sim-nov = min(similarity, novelty). In min mode, then, Alice prefers agents with similarity and novelty values that are both as high as possible. She is less attracted to agents with a high similarity value but a low novelty value (and vice versa).
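For students who want to see these definitions in working form, a minimal sketch in Python of the similarity, novelty, and min-mode sim-nov calculations might look like the following. It is only an illustration of the formulas just described, not the authors' NetLogo implementation; the set-based agent representation and the function names are our own choices.

```python
# Illustrative sketch only (plain Python, not the authors' NetLogo code).
# An agent is represented as a set of information ids; `quality` maps each
# id to its point value in the information space.

def total_score(agent, quality):
    """Sum of quality points over all information the agent currently holds."""
    return sum(quality[i] for i in agent)

def similarity(alice, bob, quality):
    """(Score of information both Alice and Bob know) / (Alice's total score)."""
    shared = alice & bob
    return sum(quality[i] for i in shared) / total_score(alice, quality)

def novelty(alice, bob, quality):
    """(Score of information Bob knows that Alice doesn't) / (Bob's total score)."""
    new_to_alice = bob - alice
    return sum(quality[i] for i in new_to_alice) / total_score(bob, quality)

def sim_nov_min(alice, bob, quality):
    """'min' mode: both similarity and novelty must be high for Bob to attract Alice."""
    return min(similarity(alice, bob, quality), novelty(alice, bob, quality))

# Small worked example with four bottom-level facts, each worth 1 quality point.
quality = {"f1": 1, "f2": 1, "f3": 1, "f4": 1}
alice = {"f1", "f2"}
bob = {"f2", "f3", "f4"}
print(similarity(alice, bob, quality))   # 0.5    (f2 is shared; Alice's total is 2)
print(novelty(alice, bob, quality))      # 0.666… (f3 and f4 are new; Bob's total is 3)
print(sim_nov_min(alice, bob, quality))  # 0.5
```

The worked example reproduces by hand the asymmetry noted above: the same pair of agents can yield different similarity scores depending on whose total score is used as the denominator.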
Alternatively, in linear mode, we can set an alpha value (a number between 0.0 and 1.0) that represents how much weight Alice puts on the similarity value. A value of 0.0 means she only cares about novelty, 1.0 means she only cares about similarity, and 0.5 means she treats each value equally. The sim-nov value for Bob is defined to be a linear combination of similarity and novelty: sim-nov = alpha × similarity + (1 − alpha) × novelty.

Once Alice has a sim-nov value for each agent, she puts them in order from highest to lowest sim-nov. Then she uses p = agent-choice-probability to determine the probability that she takes the agent at the top of the list. If she does not get an agent that is high enough on the list, she repeats this process until she chooses an agent with which to interact. In the agent-network visualization, an arrow is drawn from Alice to all agents above the one chosen in the list (including her choice). This represents her preferences in the network. Notice that if agent-choice-probability = 1.0, then she will always choose an agent with the highest sim-nov score.

The model only allows agents to share information if they share the same parent node in the information space. Once Alice has chosen Bob, she may choose any information node that is the parent of an information node she knows; this choice serves as the "topic" about which to exchange information. Topics that are not yet known to an agent are referred to as topics on the frontier, since they represent the nodes she can potentially get to via compression. Because agents are motivated to process information, they prefer topics on the frontier. In our example, the topic selected for Alice and Bob is chosen uniformly at random from the intersection of Alice's frontier topics and Bob's frontier topics. If they have no frontier topics in common, the agents look at the following two sets: 1. the intersection of Alice's topics with Bob's information space, and 2. the intersection of Bob's topics with Alice's information space. Agents then choose a topic uniformly at random from one of these sets. If these sets are both empty, then no interaction can take place. Once Alice has chosen a topic, she must choose a piece of information to share with Bob. As explained in Step 2, she can choose any information that is directly below the topic in her information space.

In addition to the information sharing process, information can change by agents randomly forgetting or finding pieces of information. The forgetfulness slider represents the chance that an agent will randomly forget a piece of information. A value of 0.0 means that agents will never forget, and 1.0 means that they will randomly lose a piece of information after each iteration. The fact-find-chance slider represents the chance that an agent will randomly discover a piece of information at the bottom level of the information space. The agent will only learn this information if they are not currently at capacity. This is the only way an agent can learn new information without sharing with other agents. A value of 0.0 means the agent will never randomly learn information.
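Continuing the hedged Python sketch from above, the linear combination, the ranked partner choice driven by agent-choice-probability, and the forgetfulness and fact-find-chance sliders could be expressed as follows. The function names and the retry logic in choose_partner are our own reading of the description, not the authors' NetLogo code.

```python
import random

def sim_nov_linear(similarity, novelty, alpha):
    """'linear' mode: sim-nov = alpha * similarity + (1 - alpha) * novelty."""
    return alpha * similarity + (1 - alpha) * novelty

def choose_partner(ranked_others, agent_choice_probability, rng=random):
    """One plausible reading of the selection rule: walk down the list of
    candidates (already sorted from highest to lowest sim-nov), taking each
    one with probability agent-choice-probability, and repeat the pass until
    someone is chosen. With probability 1.0 the top agent is always taken."""
    if not ranked_others:
        return None
    if agent_choice_probability <= 0.0:
        return ranked_others[-1]          # degenerate case: settle for the bottom of the list
    while True:
        for other in ranked_others:
            if rng.random() < agent_choice_probability:
                return other

def forget_and_find(agent, bottom_level_facts, capacity,
                    forgetfulness, fact_find_chance, rng=random):
    """Random forgetting and random discovery of bottom-level facts,
    mirroring the forgetfulness and fact-find-chance sliders."""
    if agent and rng.random() < forgetfulness:
        agent.discard(rng.choice(sorted(agent)))            # lose one piece at random
    if len(agent) < capacity and rng.random() < fact_find_chance:
        agent.add(rng.choice(sorted(bottom_level_facts)))   # discover a bottom-level fact
```

Students can wire these pieces, together with the similarity and novelty functions sketched earlier, into a per-tick loop over agents to approximate, in spirit, one iteration of the model.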
In terms of the information space parameters of the proposed model, there are three types of trees that can be selected: "random," "height-based," and "file." In addition, the blue tree can be turned off by selecting "none."
1. random: a tree is generated with random quality values for each node. The values are integers between min and max inclusive (i.e. red-min and red-max for the red tree).
2. height-based: a tree is generated with quality levels based on how far each node is from the bottom level. Bottom-level nodes are quality 1, the next level up is quality 2, and so on.
3. file: a tree is read in from a file.

Shown below are some of the patterns the model has produced thus far, with the first pattern showing the control settings and accompanying graphs as well. These snapshots of simulations show social structural patterns in both classes and cliques of agents. They also show how cliques vary in color, which could be considered "cultural" in amounting to differences in shared meaning, where they reflect unique aspects of the information structure, apart from their social structure. A question that can be taken up with this model, then, is under what conditions cultural segregation will take place, and how it can hurt, or help, a society's health, or, in terms of this theoretical perspective, its capacity to process information.

Following Gredler (2004), and Davison and Davison (2013), we see games as potentially useful ways of illustrating computational models in the classroom. We have designed a card game, a sort of hybrid of "Go Fish" and poker, for students to play as a means of becoming familiar with the basic premises of this model. It has an interactive dimension, where mutual and novel information matter, and a betting component, where the social distribution of information matters as well. The game is best played with four to five players, and the rules are as follows: each player is dealt ten cards; five of the cards are face-up, for all to see, representing potentially mutual information for all players, since they will be able to get the cards if they have similar cards of their own. The other five cards are dealt face down, seen only by that player, and representing potentially novel information for the other players. Players arrange their cards to display potential information "trees," placing their face-up cards vertically in front of them. Behind the face-up cards they place any face-down cards if they match by number (then vertically), or by suit (horizontally). Face-down cards with no match are put aside as "unavailable for trade," until the situation changes. At each turn a player can ask for a card from another player if they share a number among their face-up cards, and for a matching card by suit if two of their face-up cards share that suit. If they have no matches, they take a card from the remaining deck. They then bet to end their turn. As players accumulate cards they order them into poker books and place them on the table to be counted. The game is won by a player being able to put all of their cards into books, or, when the deck is used up, by the player with the most points in books. The winner takes the winnings that have been bet. Players can fold at any time during the game to save their money.

As the game unfolds, there are two fields of meaning that evolve. One field is the comparative success of players over the course of hands, where successful bidding in previous hands means they have more resources for bluffing, or calling the bluffs of others. This is akin to the social network dimension of meaning that evolves in the course of interaction. How the good fortune of being dealt good hands leads to resource advantages over time can itself be a lesson for students in how unearned advantages in resources can be reinforced, leading to increased inequality. The other field of information that evolves in the course of play is the comparative order of players' hands.
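For instructors who want to script the game, for instance to check rule interactions or generate classroom examples, the deal and the two request rules described above can be sketched as below. The card representation and helper names are ours, and the legality checks encode only one reading of the rules; they are not part of the published game.

```python
import itertools
import random
from collections import Counter

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
SUITS = ["clubs", "diamonds", "hearts", "spades"]

def deal(players, rng=random):
    """Deal each player ten cards: five face-up (public) and five face-down (private)."""
    deck = list(itertools.product(RANKS, SUITS))
    rng.shuffle(deck)
    hands = {}
    for player in players:
        cards = [deck.pop() for _ in range(10)]
        hands[player] = {"face_up": cards[:5], "face_down": cards[5:]}
    return hands, deck  # remaining deck serves as the draw pile

def may_ask_by_rank(asker_face_up, target_face_up, rank):
    """Our reading of the rule: a rank may be requested when it appears among
    the face-up cards of both the asker and the player being asked."""
    return (any(r == rank for r, _ in asker_face_up)
            and any(r == rank for r, _ in target_face_up))

def may_ask_by_suit(asker_face_up, suit):
    """A suit may be requested when at least two of the asker's face-up cards share it."""
    return Counter(s for _, s in asker_face_up)[suit] >= 2

# Example: deal to four players and test a request.
hands, stock = deal(["P1", "P2", "P3", "P4"], rng=random.Random(1))
print(may_ask_by_rank(hands["P1"]["face_up"], hands["P2"]["face_up"], "Q"))
```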
The cards are meant to represent each agent's "facts," and hands of cards are variable in their "fitness," as the degree to which they are valuable in terms of the range of possible poker hands. As in many card games, such as poker and gin rummy, players increase the order of their hands by creating sets of cards by rank or sequence. Over multiple hands, students learn the importance of mutual information in determining how some hands are better than others in their possibilities for order. And, in the process of building such order, students learn a more intimate lesson: the pleasure they experience in ordering and reducing the entropy of their hands reveals their own nature as information-processing agents.

And part of the processing is social. It is true that social interactions are motivated by the "original" way information is valued in our game, as the fitness of hands. However, information is also valued in a second way: the unpredictable outcomes of those interactions produce social structural resources that provide unique contexts for further interactions. The social distribution of cards, as perceived by the players, is reflected in their interactions (strategies for asking for cards, betting patterns, etc.). As anyone familiar with poker knows, players make inferences about the hands of other players by observing how they play. They then act on those inferences through their own betting. As such, betting patterns reflect players' estimates of the social distribution of information, and those patterns manifest a social structure of classes and cliques of actors that emerge and evolve.

The result of our go-fish poker game is that students can get a picture of how information is ordered along two different dimensions. One is the dimension of meaning in an objective reality of the card game and its rules that determines a hand's "general fitness." The other is the social network of agents of information, the distribution of which determines a hand's "social fitness," one might say. Moreover, students are able to see how the fitness of the information they develop is dependent on social contexts. Many a poker player has bemoaned the waste of a good hand when it coincides with the poor hands of others, who then fold too early to bet. Students learn that when interactions depend on shared information, the social distribution of information is a factor independent of its "objective fitness."

This essay has proposed a simple agent-based model and a card game as means of teaching the complexity mindset to undergraduates. The model can be used to describe the usefulness of modeling, and to introduce students to computational sociology. In the context of teaching complexity, referencing computational science is a necessity, as the development of its ideas has depended on computational and modeling techniques. Those have led to research avenues that were impossible a few years ago, and forced researchers to reconsider basic premises of the reductionist paradigm on which traditional linear sciences, such as neoclassical economics and individual selection theory in evolutionary biology, are built. The growing paradigm of complexity science and computer modeling has offered a new way to explore the emergent properties so central to the social sciences, and in particular to a discipline such as computational sociology. The pedagogical methodology we are proposing can be used in team-taught courses, or for more disciplinary-based modules of instruction.
For example, the authors of this article come from different disciplines, and this project has required us to find common ground and more precisely defined terminologies. It is precisely because of the rich diversity of intertwined disciplinary bodies of knowledge that this computational sociology model benefits interdisciplinary work and shows how modeling is a valid methodology for understanding diverse social realities. Both the model and the game show students how aggregate structures emerge and evolve through social interaction. To what extent is the model a metaphor of reality, and to what degree is it describing an intrinsic aspect that the model and reality share? One of the subtle points this paper makes is that, in teaching students about the usefulness of modeling, it has to be introduced with simple, rather than complicated, models. The simplicity lets students see the inner workings of the model, illuminating how interactions can bring about emergent behavior and structures, and allowing a deeper discussion of the fabric of social reality. A discussion built around a complicated model is almost invariably mired in the ad hoc aspects of that model, thus missing the essential point that reality, however complex, can be better understood as the interplay of simpler components. We do believe that this model shares intrinsic aspects that reflect the way our social reality is constructed, and it is because of this belief that we chose a bold title for this paper. Obviously, whether or not social reality is actually created in this emergent fashion is debatable; it represents an academic issue that may not be settled in the literature of computational sociology for years to come. The authors have their own scholarly battles to wage with peers who may agree or disagree with this view in the computational and social sciences' literatures. The benefits to students of seeing the creation and value of interdisciplinary research in action, however, are undeniable. Participating in a discussion about the way in which models can help researchers better understand reality is an experience that no student should graduate without.
References

Structure, Agency and the Internal Conversation
Distinction: A Social Critique of the Judgement of Taste
The Logic of Practice
The Democracy of Objects
Complexity Theory and the Social Sciences: The State of the Art
Computational models of social forms: advancing generative process theory
Boundaries, hierarchies and networks in complex systems
The calculi of emergence
Games and Simulations in Action
Explaining Culture: the Social Pursuit of Subjective Order
A model of emergence featuring social mechanisms of information compression
Causal regularities, action and explanation
The Causal Power of Social Structures: Emergence, Structure and Agency
Active inference: a process theory
How to Do Science with Models: A Philosophical Primer
Games and simulations and their relationships to learning
Politics, Sociology and Social Theory: Encounters with Classical and Contemporary Social Thought
Dissecting the Social: On the Principles of Analytical Sociology
Hidden Order: How Adaptation Builds Complexity
Emergence: The Connected Lives of Ants, Brains, Cities
The emergence of social schemas and lossy conceptual information networks: how information transmission can lead to the apparent "emergence" of culture
Introduction to Systems Theory
From factors to actors: computational sociology and agent-based modeling
Methodology of transdisciplinarity
What is emergence
Conversation as mechanism: emergence in creative groups
Collective identity and expressive forms
A satisficing, negotiated, and learning coalition formation architecture
A naturalistic ontology for mechanistic explanations in the social sciences
Distributed agency
The Cyber Creation of Social Structures
Semiotics, entropy, and interoperability of simulation systems: mathematical foundations of M&S standardization
Human simulation as the Lingua Franca for computational social sciences and humanities: potential and pitfalls
Summer of Simulation - 50 Years of Seminal Computing Research. Simulation Foundations, Methods and Applications