Conceptualising technology, its development and future: The six genres of technology
Stephen Harwood and Sally Eaves
Technological Forecasting and Social Change, 30 August 2020. DOI: 10.1016/j.techfore.2020.120174

One approach to developing futuristic views of technology is to draw upon experience and expertise. However, this becomes increasingly speculative as one moves to more distant timelines and visionary technological forms. This raises the question of whether it is possible to rationally predict how a technology development trajectory might unfold into the future, perhaps to some 'ultimate form', in a manner that is accessible, surfaces the necessary technological features for development and considers the implications for human–artefact relationships. The proposed approach is conceptually grounded in a parsimonious framework that examines different configurations of human–artefact relationships, revealing 'Six Genres of Technology'. This suggests how the shift from human-human to artefact-artefact relationships, and the increasing autonomy of artefacts (technological beings), introduce specific features into each of the six Genres. Four features are identified in the later Genres that, in combination, could be construed as, or indeed pose, a threat: autonomy, intelligence, language and autopoiesis. This paper advances the debate about future technological developments by using the proposed framework to structure an argument about the key issues that should be discussed today, so that the developments of tomorrow can be more reflectively considered, appropriately debated and knowingly pursued.

Rapid acceleration of technological development is leading to technological forms that were unimaginable just twenty years ago (e.g. autonomous vehicles, deep learning, intelligent robots). Technologies that once were in the realms of science fiction have become an everyday reality and, along with benefits, are creating predicaments and raising concerns about implications for the future of work (Dellot and Wallace-Stephens, 2017) alongside potential malicious use in areas such as warfare (FLI, 2015). Beyond this, new technologies are provoking anxiety for the very future of humanity itself. As Stephen Hawking stated: "The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded" (Cellan-Jones, 2014, n.p.). However, this alarmist view is just one possible perspective about how technology might unfold, with dreamers anticipating a world of leisure, incrementalists foreseeing gradual change and sceptics being dismissive of newer innovations (Dellot and Wallace-Stephens, 2017). Indeed, Facebook's Mark Zuckerberg critiques 'naysayers', arguing that they are irresponsible in drumming up doomsday scenarios (Zuckerberg, 2017). Thus, we are faced with the challenge of how to view the unfolding future of technology. Thinking about the future allows us to anticipate possibilities and to consider their implications in terms of preparing for and mitigating their impact. How can we move beyond speculation, hype, hope and fear to informed discussion and decision making, enhanced forecasting and, thereby, the shaping of this technological future for the benefit of humanity?
To answer this question, there is firstly the challenge of anticipating the manner in which technological developments might unfold long-term. This elicits approaches from the technology futures domain, though this domain is characterised by different views about approaches (Lu et al., 2016). One challenge, evident across many of these approaches, is the pervasive nature of expert biases (Bonaccorsi et al., 2020), which can be found in such approaches as Delphi (Flostrand et al., 2020). Moreover, prognostications are critiqued for being either too short-term (i.e. unworthy of being viewed as futures) or projecting into the distant future and hence being highly speculative (Helmer, 1999). Further, if a technology appears that was not forecast, especially if it is disruptive, then this can have serious implications (Kott and Perconti, 2018). Nevertheless, Fye et al. (2013) reveal that expert views are more likely to anticipate whether an event will occur, whilst short-term quantitative methods are the most likely to be accurate in predicting when an event will occur. One novel approach to thinking about how the complexity of emergent technologies has reached new levels over time is presented by Aunger (2010). This explores the relationship between the artefact and the creature (e.g. animal, bird, insect) that is using it, extending the notion of artefact use beyond humans. Whilst its focus upon the artefact helps explain the growing complexity of technology, it is also suggestive of a potential predictive capability. To complement this, Gibson's (1966) notion of affordance draws attention to the artefact and how it affords possibilities for a creature's action, this being subsequently developed by Norman (1988), who utilises the concept in a design context, invoking a relationship between artefact and human. Since possibilities for what happens emerge from the relationship between the artefact and the human, this raises the question of whether the notion of the artefact-creature relationship can be developed to forecast a technology development trajectory leading to some ultimate end-point such as that envisaged by Stephen Hawking. One option is offered here, providing a holistic exploration of different configurations of the human-artefact relationship. This presents an argument that traces a technological development trajectory transitioning from the past to the future and unfolding as a sequence of six functional states, each state offering a frame ('Genre') with its own characteristic attributes. It is assumed that there are no technical or time constraints, thus allowing some eventual endpoint to be envisaged. Consequently, the six Genres explain how technology might unfold, with growing autonomy and connectivity, into some ultimate form - 'an autopoietic technological being' existing in communities of similar beings - and the consequent implications for the relationship between humans and artefacts. Each Genre is characterised by affordances specific to that Genre (e.g. homeostasis, autonomy, reproduction), enabling its characteristics to be explored. This argument is grounded in the language of cybernetics and technology studies. Positioned within the TFAMWG (2004) taxonomy, this approach is soft, exploratory and perhaps fits within the Scenarios family.
This paper contributes to the debate about future technological developments by offering a conceptually grounded framework to structure an argument about one possible technological development trajectory, one that focuses upon the increasing autonomy of technological beings and the increasing redundancy of humans, thus not only validating the claim of Stephen Hawking but illuminating how his prediction might be realised. The value of this approach is that it focuses attention upon the impact of the development and assimilation of different functional capabilities, placing 'what if' debates into perspective and inviting responses about what needs to be done to achieve, shape or, indeed, prevent the development of these Genres. Thus, it provides a unique, informed conceptual argument to stimulate thinking (Coates, 2000) and challenge mind-sets (Schoemaker, 1995) about the salient issues, and hence highlights why it is important to question contemporary technical developments and ask what their consequences might be. This contributes to a toolset of arguments that provides an informed perspective about future developments (Avin, 2019). The argument commences with a reflection about what technology is and how it can be conceptualised, thus clarifying the focus of this paper. It is followed by a synopsis of how technological development can be conceptualised, to understand the mechanism by which development might take place. The section that follows is methodological, appraising approaches to how the future of technology is conceptualised, and then leads into the Six Genres, which comprise the main portion of the discussion. The paper concludes with a reflection upon the underlying message from this exposition and its implications. The entity of interest in this paper is that which humans use to enable or augment what they do. The manifestation is typically a physical artefact, but a general term to label this is technology. This raises the question: what is technology? There are many definitions, as illustrated by the variants offered in the Oxford English Dictionary. These reveal that not only has the term technology evolved in meaning over time, but that contemporary definitions vary considerably in content (Fleck and Howells, 2001). For example, Rip and Kemp (1998: 329) draw attention to the everyday use of the term, that it is the artefact: "the idea of technology as artifacts (gadgets and gizmos) is still widespread in our culture", a view reinforced by Kline (2003). In contrast, Schon (1967: 1) defines technology as "any tool or technique, any product or process, any physical equipment or method of doing or making by which human capability is extended". This draws attention to both the artefactual and processual nature of technology. Alternatively, Dosi's (1982) definition of technology distinguishes between the embodied and disembodied, encompassing a problem context, knowledge, perception and process as well as the artefact itself; it is: "a set of pieces of knowledge, both directly "practical" (related to concrete problems and devices) and "theoretical" (but practically applicable although not necessarily already applied), know-how, methods, procedures, experience of successes and failures and also, of course, physical devices and equipment. Existing physical devices embody - so to speak - the achievements in the development of a technology, in a defined problem-solving activity.
At the same time, a "disembodied" part of the technology consists of particular expertise, experience of past attempts and past technological solutions, together with the knowledge and the achievements of the 'state of the art'. Technology, in this view, includes the "perception" of a limited set of possible technological alternatives and of notional future developments." (Dosi, 1982: 147-148) This latter definition, which is adopted in this article, starts to capture the essence and multi-dimensional nature of technology, which is exclusively a human phenomenon. McGinn (1990) identifies eight dimensions, these being: material form, fabricative process of bringing into being, purpose, resource requirements, technical knowledge, method of bringing into being, the sociocultural-environmental context in which the preceding takes place and, lastly, the practitioner's mental set (e.g. attitude, views, values). This invites a more extensive framework, such as the Technology Complex proposed by Fleck (2000) (Fig. 1), to surface not only the artefactual and technical aspects of any specific technology, but also the manner in which technology is used, alongside contextual issues such as economic and social-cultural dimensions. Underpinning this view is the relational nature of technological artefacts (both material and non-material) with people, captured by such concepts as 'assemblages' (Larkin, 1969; Landstrom, 2000), 'sociotechnical systems' (Trist, 1981), 'seamless web' (Hughes, 1986), 'sociotechnical constituencies' (Molina, 1990, 1997), 'mangle' (Pickering, 1993), 'ensembles' (Bijker, 1995), 'entanglement' (Orlikowski, 2005), 'sociomaterial assemblage' (Suchman, 2007; Orlikowski and Scott, 2008) and 'imbrication' (Leonardi, 2011). These concepts imply a conjoined relationship between the artefact and its user, though the nature of the relationship dynamics is unclear. Nevertheless, it can be argued that once an artefact is brought into being, it has a distinct existence or 'biography' (Kopytoff, 1986: 66) of its own. In other words, the artefact can be modified over time and it need not be attached to a single user, but can have multiple users and uses until its end-of-life. This shifts attention to two aspects of artefacts from the perspectives of both design and use. One, reflecting the essentialist view, is that the artefact's properties, underpinned by the laws of nature, determine what is possible (Fleck and Howells, 2001). In this sense, the artefact is deterministic in terms of what can be done (Winner, 1980). In contrast is the anti-essentialist view, which is concerned with how meaning is ascribed to the artefact and how embedded inscriptions are read (Grint and Woolgar, 1997). A third aspect is offered by Actor Network Theory (e.g. Callon, 1986; Latour, 1990, 1996). This argues for agnosticism (i.e. impartiality), symmetry (i.e. all viewpoints) and free association (i.e. no a priori views) in how the relationship between humans and non-humans is viewed (Callon, 1986). They are all actants with agency (Latour, 1990), although it is unclear how this agency functions. In contrast is the notion of 'affordance', introduced by Gibson (1966, 1979). Technological artefacts afford possibilities for use, which can arise through design (Norman, 1988). This focuses attention upon the dynamics of the relationship, which is absent in the previous terms, and draws upon the notion of the properties of artefacts and how these are perceived.
However, this concept is itself problematic as there are different views about what constitutes an affordance: for example, does it pertain merely to perceptions or is there a cognitive component (Harwood and Hafezieh, 2017)? Underpinning this human-artefact relationship is the view that there is a homeostatic balance among all relations that maintains stability. First presented by Cannon (1926, 1929) to explain how the body maintains a steady state, the notion of 'social homeostasis' has been developed (Cannon, 1932; Wiener, 1948; Bateson, 1972) and extended to the relationship between human and machine (Ashby, 1952). Indeed, homeostasis underpins the notion of the cyborg presented by Clynes and Kline (1960). In summation, technology can be understood as a human-artefact homeostatic complex, with each party offering some form of agency. It is relational, multi-dimensional, both embodied and disembodied, as well as intertwined and dynamic, yet is essentially stable. It affords possibilities about what can be done, but is underpinned by knowledge about how to make the human-artefact relationship work. To talk about technology, therefore, is to shift beyond the artefact and embrace the complexity invoked. Since technology is characterised by complexity, this invites questions about how it has developed over time, how we are informed about technological developments and how we can conceptualise the process of technological development. Whilst humans, as biological entities, have evolved slowly over time, technology appears to have developed at an accelerating rate. Indeed, 2019 marked the 50th Anniversary of Technological Forecasting & Social Change and provides a fitting frame to put this transformation into context. While paper was invented in China around 2000 years ago, the Gutenberg printing press in the mid-fifteenth century and the first digital computers in the early 1940s, in the last 50 years alone man has landed on the moon (1969), a sheep has been cloned (1996), computers have become commonplace in the home (from the 1980s), social media has become omnipresent (2000s) and 5G global rollouts have commenced (2019). During the 2020 COVID-19 pandemic, advances in high performance computing power and capacity have come to the fore, harnessed to accelerate the research curve to better understand the virus and, at a pace never seen before, to develop treatment regimes, testing and ultimately a vaccine (HPC Consortium, 2020). With ways of working, learning, collaborating and indeed living changed due to widespread global lockdowns, technology has quite literally needed to take the load: on 11th March 2020, internet exchange operator Deutsche Commercial Internet Exchange set a new world record in Frankfurt, achieving data throughput of over 9.1 Terabits per second (DE-CIX, 2020). Putting this into a wider context, newer forms of technology have now emerged that were either not conceived, or not anticipated to be actualisable, twenty years ago, including hybrid and multi-cloud computing, deep learning-based predictive analytics, brain-computing interfaces, distributed ledger technology including blockchain and, at a nascent stage, quantum computing. The scale of the impact and iteration of these newer technologies has perhaps not been fully anticipated, particularly in terms of the emergent opportunities and threats arising from their integration.
This is well illustrated by the rapid evolution of the smartphone, which has had profound implications for people's behaviour, from shaping the nature of experiences in sectors such as tourism (Wang et al., 2014) to enabling meaningful personalised services at scale across health, home, retail and mobility, underpinned by location-based tracking, artificial intelligence (AI) and related sensor technologies. It has also contributed to a move away from the duality of being either online or offline with an increasing transition towards hybridity (Eaves, 2020), whereby we frequently flip unthinkingly between our material and digital selves (Simunkova, 2019). But there are also implications such as health concerns arising from problematic smartphone use, including the 'fear of missing out', the need for touch arising from the sensation experienced from holding a smartphone and the impact of continued use upon anxiety and depression (Elhai et al., 2016). With 5G-enabled devices becoming mainstream during late 2019 / early 2020 and bringing enhanced download speeds, reduced congestion, lower latency, less interference and more efficient capacity, the development of the smartphone is about to start another transformation in a global ecosystem of connected, sensor-rich devices - the realisation of what is referred to as the Internet of Everything (O'Leary, 2013; Eaves, 2020). But we have to ensure that this benefit is achieved securely and is inclusive and available to everyone - as an example, women in low and middle-income countries are still 8% less likely than men to own a mobile phone (GSMA, 2020). Being offline has significant social and economic impacts, from social exclusion to limited career mobility (Capgemini, 2020). The scale and depth of transformation can have unintended consequences, with the gift of opportunity not equally distributed to all. Focusing on a specific development area, the impact of cloud computing (public, private and increasingly hybrid multi-cloud) in transforming the IT industry (Nieuwenhuis et al., 2018) provides a good example, enabling enhanced, agile and integrated optimisation of resources on demand and becoming a contributory factor in the rise of home / mobile working and learning even before COVID-19. The adoption of cloud solutions continues to accelerate but their consumption is evolving - it is not a linear path or a one-route-fits-all panacea (Bailey and Eastwood, 2018). As an example, we are likely to see some repatriation of workloads to on-premise infrastructure given the rise of Edge Computing, namely the relocation of computing resources closer to the point of use (the network periphery). In this way, data aggregation and processing occur at the edge so that only the results of processing, not the raw data, are sent to the cloud, reducing bandwidth costs and providing a strong partnership for Internet of Everything solutions (Eaves, 2020). This highlights the speed of change within even the most recent technological developments, alongside the impact of technology integration as a catalyst for innovation at scale. Additionally, the availability and analysis of Big Data is creating a tension between opportunity and risk. On the one hand, it provides the capacity to gain nuanced insights to inform decision making and to offer personalised services, products or care; on the other, it raises many social and ethical issues around usage, privacy and ownership (Newell and Marabelli, 2015; EESC, 2017).
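Returning to the edge computing pattern described above, the bandwidth argument can be made concrete with a minimal sketch. The Python fragment below is purely illustrative and rests on invented names (send_to_cloud, the simulated temperature stream); it simply shows an edge node reducing a window of raw sensor samples to a small summary so that only that summary crosses the network.

```python
# Illustrative sketch only: a hypothetical edge node that aggregates raw
# sensor samples locally and sends just a compact summary upstream.
from statistics import mean
from typing import Dict, List


def summarise_window(samples: List[float]) -> Dict[str, float]:
    """Reduce a window of raw readings to a few aggregate values."""
    return {
        "count": float(len(samples)),
        "mean": mean(samples),
        "min": min(samples),
        "max": max(samples),
    }


def send_to_cloud(payload: Dict[str, float]) -> None:
    """Placeholder uplink; a real deployment would POST to a cloud endpoint."""
    print(f"uplink -> {payload}")


def edge_loop(raw_stream: List[float], window_size: int = 100) -> None:
    """Aggregate at the edge; only compact summaries cross the network."""
    window: List[float] = []
    for reading in raw_stream:
        window.append(reading)
        if len(window) == window_size:
            send_to_cloud(summarise_window(window))  # a few numbers, not 100 readings
            window.clear()


if __name__ == "__main__":
    # 1,000 simulated temperature readings become just 10 small summary messages.
    simulated = [20.0 + (i % 7) * 0.1 for i in range(1000)]
    edge_loop(simulated)
```

In this toy run, 1,000 raw readings become ten short uplink messages, which is the essence of the bandwidth saving claimed for edge deployments.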
This is especially the case given the erosion of trust that has accelerated globally over the last decade. Trust inequality between the informed public and the more sceptical mass population, and also trust inequality by gender, have created divergent levels of confidence and, most recently, a shift towards localised trust centred on relationships perceived to be within people's control (Edelman Group, 2019). These examples raise the question of how quickly future developments will emerge and what issues will arise. The narrative about the development of technology also matters. During the last 50 years, public attitudes towards computers and technology have been largely shaped by the printed press and more recently by online and social media. Tuchman (1978) has described how different forms of mass media are a significant contributor to the social construction of reality. Early media reports related to technology are explored by Martin (1993, 1995), who reveals how the use of headlines, exciting imagery and metaphors suggests that developments are dramatic rather than incremental. However, when our expectations are not met, our early enthusiasm is displaced by disillusionment and perhaps distrust. In more recent years, there has been a growth in multi-media forms, especially social media, with increased accessibility to different audiences via a range of mediums and types of devices, but the use and impact of headlines, imagery and metaphor continues. There is a strong narrative around the 'dark side' of technology, focusing on what it may take away rather than enable, most notably around the impact of robots and AI on employment and the future of work (Eaves, 2018). Headlines can impact the way we think (Surber and Schroeder, 2007) and even subtle misinformation can have significant impacts on a reader's memory, inferential reasoning and behavioural intentions (Ecker, Lewandowsky, Chang and Pillai, 2014). It is also argued that deterministic headlines about technology are unhelpful as they can narrow dialogue and debate rather than inviting different perspectives to come together. In their recent paper, Bhullar et al. (2019) discuss specific opportunities that can be attained through academia-industry collaboration. Such collaboration could also be relevant to better informing the media narrative. Whilst the metaphors used to shape the way that we perceive, think and act can be powerful (Martin, 1995/6), metaphors can also be employed to represent, explore, clarify, structure and provide a bridge to understanding in developing fields (Bazeley and Kemp, 2012). For example, one of the metaphors currently in wide use for emergent newer forms of technology is the notion that they are 'disruptive'. These technologies may be labelled disruptive since they 'break asunder' (OED, 2018) and thus transform what preceded them, especially in terms of associated social arrangements, as revealed by Fleck (2000). This contrasts with Bower and Christensen's (1995) view of disruptive technologies as being those that are oriented to new applications or markets and tend to be initially of inferior quality / performance. Irrespective of the view taken, they are the outcome of both incremental and radical developments of their discrete elements (Bower and Christensen, 1995). Indeed, it is perhaps not these individual developments that have given rise to disruptive technologies, but novel configurations. The importance of configurations is that they draw attention to what emerges but cannot be anticipated.
A specific technology can have a developmental trajectory that maintains a rate of performance improvement through both incremental and radical developments (Bower and Christensen, 1995). This trajectory, constituting the history or ontogeny of structural changes, preserves the identity of that technology (Maturana and Varela, 1975). However, a configurational technology offers different possibilities arising from the interpretive flexibility (Bijker, 1995) of what is possible. Configurations can also be viewed as stratified; for example, components are configured into modules, which are in turn configured into devices and then working systems (Rip and Kemp, 1998). From these possibilities, recurring configurations may emerge and stabilise or 'crystallise', perhaps leading to contingency-specific generic systems (Fleck, 1992: 8). These generic systems are then more likely to have identifiable technology development trajectories (Fleck, 1992). One established metaphor that provides insight into the process of how technology unfolds over time is that of evolution (e.g. Rip and Kemp, 1998; Bowonder et al., 1999; Fleck, 2000; TFAMWG, 2004; Devezas, 2005; Lipsey et al., 2005). Ziman (2000) explores this metaphorical usage, comparing models of biological evolution with technological evolution. Underpinning the biological model are genomes (i.e. DNA), genotypes (i.e. gene sets) and phenotypes (i.e. traits), the former constituting heritable genetic code. Central is the debate about whether a phenotype's attributes, or the changes that result from its engagement with the environmental context that co-exists with the phenotype, are transmitted into the next generation. Jablonka (2000) makes the distinction between the Weismannian doctrine, which denies this transmission, and the Lamarckian doctrine, which provides for it. The Darwinian mechanism of natural selection of heritable differences arises from mutations of the genotype, though it is unclear how these arise. Assuming the relevance of the metaphor, this introduces two issues. First, do artefacts have a genetic code? Second, how do artefacts develop: is the Weismannian or the Lamarckian model of development appropriate for artefact development? Fleck (2000) argues that to assume a comparison between biological and technological evolution is misleading. The biological model invokes a replication mechanism (underpinned by the DNA and its stability) and interaction with the outside (the environment). This becomes problematic when applied to technology. Fleck (2000) asks whether there is a technological equivalent to DNA. The ability of biological forms to maintain (adapt) and produce (replicate) themselves highlights the uniquely autopoietic nature of biological forms (Maturana and Varela, 1975). Indeed, Fleck (2000) argues that artefacts are different from biological forms in that, whilst genetic code is embedded within the biological form, it is external to the artefact and is passed on through humans. Thus, the ability to replicate or reproduce involves knowledge about, for example, design techniques and production methods, with effective replication requiring a composite of factors, as identified within the 'technology complex' (Fig. 1). Fleck (2000) proposes that the basic unit (cf.
gene) of technological development is the 'artefact-activity couple', an amalgam comprising artefact, knowledge and organisation, which constitutes the necessary and sufficient elements to permit 'self-contained replicating capability' and comprises those most immediate human activities (i.e. production, operation and maintenance) required to develop and use a specific technology. Generational differences arise from technical developments or adaptations to operational contexts. A Lamarckian model is invoked. These may take the form of incremental or radical changes, perhaps analogous to mutations. However, this insight into what technology is and how it develops is inadequate for foretelling how technology might unfold in the future. In conclusion, we can perceive not only the accelerated rate of technological development but also scaled integration, particularly with newer forms of digital technologies. However, our individual views about what is happening appear to be strongly influenced by how we are made aware of developments, with hype preceding disillusionment. If we wish to think about how technology is developing, the metaphor of 'evolution' remains useful, despite the challenges of making it fit. The challenge of anticipating the manner in which technological developments might unfold long-term invites approaches from the technology futures domain (Lu et al., 2016). Indeed, there is a proliferation of approaches, as revealed in TFAMWG's (2004) comprehensive review of technology futures analysis (TFA). They identify 51 different methods, ranging from statistical to creative, which they categorise as hard (quantitative), soft (qualitative), normative (endpoint orientated) and exploratory (extrapolative). Techniques include statistical analysis, science fiction analysis, focus groups, road-mapping and scenarios. Their diversity draws attention to the contrasting roles of expertise and creativity, as well as analysis and description, indicating that they each have different emphases and strengths. As such, TFAMWG (2004: 291) proposes the complementary use of multiple and mixed methods. As an example, Hussain et al. (2017) combine expert opinion with creative thinking in their scenario-driven road mapping approach. Moreover, even within a specific approach there is diversity, as revealed in the review of scenarios by Lacroix et al. (2019). However, forecasting how technology will develop and which technologies will prevail is a challenging and contestable domain (Albright, 2002). This is a view shared by Phillips and Linstone (2016), who foreground its magnitude with the word 'struggles'. Concerns include the related issues of the time horizon of studies and the accuracy of the approaches employed. Indeed, one of the challenges is the pervasive nature of expert biases (Bonaccorsi et al., 2020), which can be found in such approaches as Delphi (Flostrand et al., 2020). As such, Ayres (1999: 12) appears dismissive of Delphi and related approaches: "I do not have a high regard for expert judgments, taken as a class. Let me leave it at that". Further, prognostications have been critiqued for being either too short-term (i.e. unworthy of being viewed as futures) or projecting into the distant future and hence being highly speculative (Helmer, 1999).
In contrast are the inaccuracy and serious consequences that can arise from failing to forecast (a false negative) a technology that does appear, especially when it has a disruptive impact (Kott and Perconti, 2018). Nonetheless, short-term quantitative methods are the most likely to be accurate about when an event will occur, whilst expert views are more likely to anticipate whether the event will occur (Fye et al., 2013). It is also evident that forecasting how technology develops into the future invites novel approaches. As an example, Aunger (2010) demonstrates how the complexity of emergent new technologies reaches new levels over time. This work reveals how creatures (e.g. beavers, spiders, bees) have engaged with artefacts, progressing to how the 'human' creature has developed the artefact to increasingly complex levels of sophistication (e.g. modern production). Whilst this focus upon the artefact helps explain the growing complexity of technology, it is also suggestive of a potential predictive capability. Indeed, the notion of the artefact-creature (human) relationship can be developed to explore how technology might develop into the future, culminating in some end-point. By considering possibilities in the human-artefact relationship, this offers a mechanism for establishing how one might consider the unfolding of technology into a scenario like the one envisaged by Stephen Hawking. Stephen Hawking's prediction about how AI would take off on its own invites the question of how this might happen. Indeed, should we not assume that it is an embodied AI, since we might also assume that any AI would want to take responsibility for the physical domain which it inhabits? Thus, it is a technology that comprises both tangible (e.g. physical) and intangible (e.g. knowledge) elements and is shrouded in complexity (Section 2). It will have a developmental trajectory which is rooted in human history, is emergent and so is unpredictable. Its speed of evolution is faster than that of the biological evolution of humans, with the possibility of many generations emerging within one human generation (Section 3). This might be the stuff of science fiction, but it is possible to map out a trajectory which, on the further assumption that all technical constraints can be overcome, reveals how such a technological entity or community of autonomous autopoietic technological beings might emerge. The argument is grounded in a simple framework based upon the notion of the human-artefact relationship, which has also found form in the Aunger (2010) creature-artefact evaluation (Section 4). The transition from human-human to human-artefact-artefact-human to artefact-artefact reveals a developmental trajectory that comprises six frames (Genres) (Fig. 2). Each Genre has unique functional mechanisms that define the nature of the metamorphic impact it has upon the developmental trajectory. For example, Genre 3 draws attention to the recursive nature of artefacts produced by humans, which are used to produce artefacts for human use (e.g. the mass production line assembling subassemblies, themselves assembled in a production line using components produced in a production setting). Likewise, Genre 4 focuses upon the growing autonomy of artefacts (e.g. the shift from mechanical cars to self-driving vehicles), Genre 5 upon autonomous interaction between artefacts (e.g. smart technologies) and Genre 6 upon the ability of artefacts to produce themselves autonomously in technological collectives (e.g.
the narrative of science fiction). The systematic unfolding of functional mechanisms reveals the process underpinning the developmental trajectory. By focusing upon functional mechanisms, the argument escapes the constraints imposed by focusing on specific technologies. Instead, specific technologies are used to illustrate the existing embryonic nature of what is proposed, suggesting that the narrative of each scenario is plausible, assuming that technical challenges can be overcome. Moreover, in view of the accelerating pace of technological development proposed in Section 3.1, whilst no attempt is made to predict when our scenario will unfold, it can be assumed that significant technological progress will be achieved even within the next 20 years. Indeed, these issues provide topics for possible debates about what needs to be done to achieve, shape or, indeed, prevent the development of these Genres. A more detailed account of each of the six Genres follows. The notion of a technology-free world consisting of humans or their forebears, at whatever stage of evolution, is one that is hard to conceive. However, it draws attention to the social dimension which forms the essence of what it is to be human. As social beings, humans have given rise to such phenomena as norms, culture, values and politics. Aunger (2010) suggests that the technologies of humans are rooted in those of their forebears. However, if artefacts are to be introduced into the relationship between humans or their forebears, then this is initially likely to be of an opportunistic nature (e.g. the use of a fallen tree to bridge a river). These artefacts can range from those whose form offers the convenience of usage (e.g. the fallen tree) to those that are purposefully shaped (e.g. stripping a stick of foliage). Nevertheless, this is not confined to humans, with other biological forms recognising the intermediary possibilities of artefacts (e.g. chimps using a stick to penetrate an ant nest). Aunger (2010) suggests this can involve social learning, as observed amongst primates. However, the emphasis is upon the affordances offered by the artefact to serve a specific purpose. The recognition that artefacts can be used to make other artefacts (e.g. flint arrowheads) was perhaps the first explicit recognition of the tool-product relationship. Whereas the artefact used to do something is an instrument, when used to produce other artefacts it can be viewed as a second-order instrument (Aunger, 2010). It is within this Genre that much of human endeavour has been concentrated. Indeed, this has given rise to a variety of categorisations of the phases in human development, such as stone, bronze and iron; classical, medieval and modern; jet, atomic and space. Such categorisations suggest that specific periods in time can be identified and characterised, for example, the 'Age of Steam and Railways' (Freeman and Perez, 1988). Toffler (1980) distinguishes between agricultural, industrial and a 'Third Wave', this embracing renewable energy and computer intelligence and giving rise to new industries such as electronics, aquaculture and genetic engineering. Likewise, Grinin et al. (2017) propose three production revolutions: agrarian, industrial and cybernetic, the latter being characterised by self-regulating systems. More recently, the concept of 'Industrie 4.0' has been proposed to provide a vision for the development of 'smart' domains, these constituting 'cyber-physical' systems (Kagermann et al., 2013; Reischauer, 2018).
These more recent developments are evocative of the subsequent Genres, especially as they become intelligent (Genre 4), which marks the latest transition in the shift from mechanical to digital devices that characterises Genre 3. Irrespective of these categorisations, the development and use of technology draws attention to four interconnected issues. First is the manner in which a technology is taken up and embedded into the everyday experience of use. Second are the broader organisational, economic, social, cultural and governance implications, as revealed by Fleck (2000). Third are the requisite conditions for the ongoing functioning of technologies - the infrastructural issues which tend to be invisible until a problem arises, highlighting 'infrastructure inversion' (Bowker, 1994). Fourth is the special case of the more intimate nature of the human-artefact relationship, manifesting in the cyborgisation of the human being. At the immediate level of the human-artefact relationship is the way in which the product is taken up and used. Implicit is the embedding of the artefact into the organisation of the everyday - its domestication (Harwood, 2011). This is not a passive activity but may result in changes to both the ways things are organised and relations with others. Further, this requires knowledge about both the technicalities of the artefact and the situation and, thus, how the artefact can be introduced into context-specific situations (Fleck, 1992). The emphasis is upon the situatedness of practice (Suchman, 1987). The focus is local. More generally, technology is a phenomenon of society and its development. It has evolved from informal tinkering and disciplined experimentation into organised industrial functions (e.g. research and development, design for manufacture and market research). This has been conceptually explained with such terms as innovation (Schumpeter, 1939), diffusion (Rogers, 1962, 2003) and innofusion (Fleck, 1988). The scale of adoption has also given rise to consumerism and commodification. This scaling-up of demand has transformed local craftsmanship into geographically distributed complex mass production systems comprising such features as procedures, skills, work organisation, factories, supply chains, industries and international trade. This has involved the development of automated artefacts (e.g. CNC machines, robotics) that perform tasks for humans (e.g. mass-scale replication of products), serving human needs (e.g. the activities of the everyday), but requiring human intervention (e.g. maintenance). These technologies can also produce information, creating transparency - they 'informate' (Zuboff, 1988). Coinciding with this is the development of computational devices (e.g. mainframes, personal computers, laptops), enabling more efficient process management (e.g. stock control, management accounting). Michael (1962: 6) labels this computer-automation configuration 'cybernation'. More recently, the Internet has given rise to new forms of social engagement, from the physical to the virtual, changing social practices and behaviours such as purchasing. With each newer generation that grows up alongside associated newer forms of technology, especially as physical reality transfers into the digital domain and hybridity emerges in their fusion, questions are being asked about the implications.
As an example, 'Generation Z', the digital natives born circa 1995-2010 who have grown up with online and mobile technologies, multiple realities and social networks, are argued to be different from previous generations because of this association (Turner, 2015; Schroth, 2019). This has created debate (e.g. Schroth, 2019) as to the workplace, commerce, education, social and cultural implications for this newer generation, but it also raises the question of what can be inferred for subsequent generations as we move into an age of smart interconnected technologies. The accelerated adoption of the digital across everyday living, working and learning, catalysed by COVID-19, and the resulting changes in approaches to engagement, experience, patterns of consumption and buying behaviours, offer insight into the future, especially as this has impacted cross-generationally at a scale never seen before. The OED (2018, n.p.) definition of 'infrastructure' is "A collective term for the subordinate parts of an undertaking; substructure, foundation; spec. the permanent installations forming a basis for military operations, as airfields, naval bases, training establishments, etc." Infrastructure is in the background, invisible and often taken for granted, but essential in its enablement of everyday activity. It only becomes visible when there is a problem, this being the phenomenon of 'infrastructure inversion' (Bowker, 1994). There are two aspects to infrastructure. The first (the material dimension) is concerned with habitation (e.g. facilities for home, education, health, work and leisure). The second (the spatial dimension) focuses upon supply-delivery (e.g. of people, data, energy, materials, services) and draws attention to modes of transportation / communication (e.g. ships, aircraft, auto-vehicles, radio and Bluetooth). They are not mutually exclusive. Within Genre 3, the technologies have shifted from mechanical to automated, with attendant improvements in efficiencies and increasing convenience in the home and workplace. However, what distinguishes these developments from newer developments is the absence of AI, this denoting Genre 4. There is a special form of human-artefact relationship, which is more intimate, involving the cyborgisation of humans. The cyborg is a concept first conceived by Clynes and Kline (1960) to capture the notion of a 'cybernetic organism' which could exist in unnatural environments (e.g. space travel). It is defined as a: "self-regulating man-machine system… [an] exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously" (Clynes and Kline, 1960: 27). This is a fusion of the organic with the inorganic, underpinned by homeostasis as the mechanism to regulate stability in the relationship. The cyborg has gained prominence through science fiction, notably Dr Who's Daleks (Nation, 1963) and Cybermen (Pedler, 1966), in films such as RoboCop (Neumeier and Miner, 1987) and in characters such as Seven of Nine (Star Trek, 1997). Indeed, the mass media narrative around cyborg technological developments is still often conveyed with all the attributes, and often the risk, drama or fear factors, associated with science fiction (Eaves, 2020). However, the actuality manifests in technologies that are attached to (e.g. prosthetics, glasses, self-tracking wearables, exoskeletons), penetrated into (e.g. dental treatments) and embedded within the body (e.g. chips, pacemakers) (Haddow et al., 2016), with a degree of permanence.
Additional variants include ingestion (e.g. medicines, self-acting or biomarker devices), involving absorption, and immersion (e.g. VR, AR, MR), whereby the body's senses are immersed within the blended worlds of virtual, augmented and actual reality. As an example, the world's first large-scale NHS trial to test immersive VR therapy for psychosis and other mental health conditions commenced in June 2019 (Oxford VR, 2019). The different types of integration afforded by the development of the cyborg therefore also reflect an opening-up of new choices and tangible opportunities for technology to be harnessed for good. Following from the above, these new possibilities are further illustrated by Klugman (2001), who differentiates between enhancement of the body and replacement of body parts, as well as the distinction between an embodied but networked brain and an independent brain that can be transferred into other receptacles. Klugman suggests that one should look again to science fiction for the unimaginable possibilities that could one day become a reality. Indeed, advanced forms of nanotech and biotech developments pave the way to such speculations being actualised. Beyond enhancing physical capability past levels of natural performance, the possibility of brain-computer connectivity (Lotte et al., 2007; Martins et al., 2019) to enhance brain capability arises. Further, and controversially, there is the establishment, ethics and acceptability of human companionship and intimacy with android partners (Song, 2018). This is especially relevant given global issues such as an ageing population and isolation/loneliness, but it also raises questions as to what constitutes 'friendship', especially if the companion has been present since birth (Dautenhahn, 2013). Following from the above is the imaginative view of future cyborgs being wirelessly interconnected (Martins et al., 2019), raising questions about whether their individualism is suppressed for the benefit of the 'collective' (cf. Star Trek's Borg). Cyborgisation offers a significant domain for discussion about how it might develop as an alternative to the proposed fourth to sixth Genres (e.g. Barfield, 2015). For example, one controversial view is the transference, upon biological death, of human brain content into digital content (Chalmers, 2010). One elaboration of this is that it becomes absorbed into the digital cloud, with identity being both preserved and authenticated by an advanced form of blockchain or other form of Distributed Ledger Technology and, with the right permission key, perhaps downloaded into an autonomous technological being, giving it 'life' (Genre 4). A recent metamorphosis is the transition from automated technological artefact to autonomous technological being, with some form of intelligence. This, in its definitive form, is where an artefact is embedded into a context with no further engagement required - it is forgotten about and will serve out its purpose, maintaining and adapting itself until its end-of-life. This can be viewed in terms of the convergence of existing technologies, for example, for perception (sensors), data storage (cloud), processing (high performance computing), calculation and sense-making (AI, Machine Learning), authentication (blockchain) and action (autonomous vehicles, robotics, drones). This is perhaps the era denoted as the Cybernetic Revolution (Grinin et al., 2017).
Nevertheless, this notion of autonomy is open to debate, raising the questions of what an autonomous technology is and what issues may be implied or foreseen. Context is first provided by the much-debated notion of weapons that are autonomous. The US Department of Defense defines an 'autonomous weapon system' as: "A weapon system that, once activated, can select and engage targets without further intervention by a human operator" (DoDD, 2012: 13-14). However, this allows human intervention to override its operation. A more insightful view of autonomy is provided by DoD (2011), which reveals four levels of autonomy:

1. Human Operated: no decision-making capability, but able to provide feedback on sensed data; these might relate to inanimate and animated technologies.
2. Human Delegated: automation (de-)activated by humans; these follow prescribed routines.
3. Human Supervised: multi-tasking automation based on directives but contained within a permitted scope.
4. Fully Autonomous: acts on goals without human intervention, but open to human intervention if necessary.

Whilst the former appears to relate to manually manipulated technologies, the latter invokes a process that permits both the recognition and categorisation of an object, as well as the ability to be alert to and accommodate the situation when the object changes status (e.g. from enemy to friend). This multi-level view has been designated as Human-in-the-Loop, Human-on-the-Loop and Human-out-of-the-Loop (HRW, 2012). A similar framework has been developed for autonomous vehicles, with the fifth level not requiring a driver (Skeete, 2018). This draws attention to the distinction between 'automation' and 'autonomy' (Vagia et al., 2016). Automation invokes a self-acting process whilst autonomy invokes a self-regulating one. The distinction is perhaps marked by the manner in which a disturbance is handled. Is the process able to adapt and introduce a change that was not part of its initial programming? A CNC machine, as long as it is fed material, can keep on producing parts despite the drill bit becoming worn; however, these parts will be defective. Alternatively, a sensor may detect this wear and initiate an action that replaces the drill bit, preventing defective parts from being produced. However, the CNC machine will be unable to rectify the situation if its servo-motor fails. This requires human intervention (see the sketch below). In contrast is the notion of the autonomous vehicle that has been designed to function in a smart transport infrastructure system with a minimum of human intervention once it has been brought into being (Bissell et al., 2020). Further, such vehicles are not confined to the ground, as illustrated by experiments in Dubai with autonomous aerial taxis (Mohamed et al., 2018). An autonomous vehicle can re-energise itself, for example using cryptocurrency to cover the transaction, as well as maintain itself at specialised docking points. One purpose is transport, particularly of people, between any two points. On land, the vehicle can sense road conditions and adapt to scenarios such as manoeuvring in crowded traffic. In this sense, it is self-regulating. However, as revealed by the Moral Machine platform, which gathers human perspectives on moral decisions (MIT, 2019), if the autonomous vehicle were placed in a situation where it had to choose the lesser of two evils, for example killing five teenage pedestrians crossing on a red light or two elderly passengers in the vehicle, what should it do?
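To make the automation-autonomy boundary discussed above more tangible, the sketch below encodes the four DoD (2011) levels as a simple enumeration and caricatures the CNC example: a hypothetical self-regulating cell swaps a worn drill bit on its own but escalates a servo failure to a human. It is an illustration of the conceptual distinction only, with invented names and thresholds, not a description of any real machine controller.

```python
# Illustrative sketch only: the level names follow the DoD (2011) taxonomy
# cited in the text; the CNC logic is a hypothetical caricature of the
# automation-versus-autonomy boundary, not real controller code.
from enum import Enum


class AutonomyLevel(Enum):
    HUMAN_OPERATED = 1    # no decision making; feeds back sensed data
    HUMAN_DELEGATED = 2   # automation (de-)activated by humans
    HUMAN_SUPERVISED = 3  # multi-tasking within a permitted scope
    FULLY_AUTONOMOUS = 4  # acts on goals; human override remains possible


def cnc_cycle(drill_wear: float, servo_ok: bool) -> str:
    """Decide what a hypothetical self-regulating CNC cell does next."""
    if not servo_ok:
        # Outside the cell's repertoire: self-regulation ends and a human must intervene.
        return "halt: request human maintenance (servo failure)"
    if drill_wear > 0.8:
        # Within its repertoire: the cell adapts without being told to.
        return "swap drill bit from tool magazine, then continue"
    return "continue producing parts"


if __name__ == "__main__":
    print(AutonomyLevel.HUMAN_SUPERVISED.name, "->", cnc_cycle(drill_wear=0.9, servo_ok=True))
    print(AutonomyLevel.HUMAN_SUPERVISED.name, "->", cnc_cycle(drill_wear=0.2, servo_ok=False))
```

The point of the sketch is the boundary itself: behaviour that adapts within a pre-defined repertoire remains automation, whereas autonomy implies an entity able to regulate itself within the constraints of the system in which it functions.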
As seen with the first actual pedestrian fatality involving a 'self-driving' car (Gibbs, 2018), this has raised significant discussion around autonomy and who has control, especially concerning trial situations with human 'pilots', alongside unforeseen consequences (Stewart, 2018). This also brings to the fore the importance of 'responsible innovation' as described by Ribeiro et al. (2017). To conclude, the distinction between automation and autonomy is blurred and sometimes complex. Indeed, Vagia et al. (2016) highlight the multifarious and interchangeable use of the two terms and draw attention to taxonomies of 'automation' which recognise that there are multiple levels to how automation can be viewed. The disruption to both human thinking and doing requires a combination of augmentation through responsible and explainable AI and automation through tools such as Robotic Process Automation (RPA). If the distinction is to be maintained between automation and autonomy, then it is perhaps based upon a view of autonomy in terms of the level of self-regulation and adaptation that is permitted to an entity, within the constraints of the system in which it functions. Higher levels of autonomy therefore indicate independent entities that are resilient and have long-term viability. This invokes a context within which there is ongoing interaction. This, in turn, calls upon the entity's awareness of what is going on and the ability to make sense of this, anticipate what is likely to happen next, then generate an 'appropriate' response that serves the being's need to survive within this context. Data is processed, which results in something happening. This faculty for humans to apprehend and understand foregrounds the concepts of 'intelligence' and 'knowledge'. In a machine, this introduces the interrelated concepts and applications of 'artificial intelligence', 'machine learning' and 'deep learning'. Aside from debates about what is 'intelligence' (e.g. Maturana and Guiloff, 1980) and 'artificial intelligence' (e.g. Galanos, 2018), this notion that machines can have intelligence is not new. Turing proposed in 1951 that machines, through learning, could simulate human intelligence (Turing, 1996). More significantly, he proposed that at some point "machine thinking… would not take long to outstrip our feeble powers" (Turing, 1996: n.p.). This point of 'Singularity', a concept attributed to von Neumann (Ulam, 1958), has prompted much debate (e.g. Vinge, 1993; Chalmers, 2010; Magee and Devezas, 2011; Eden et al., 2012). This is aside from its mathematical interpretation (Magee and Devezas, 2011). Good (1966) speculates about the creation of such an 'ultra-intelligent machine': that it would have an artificial neural network, learn from experience and be man's last invention, "provided that the machine is docile enough to tell us how to keep it under control" (Good, 1966: 33). If this is the case, can control of AI be attained / maintained, or, drawing upon the science fiction metaphor, will ultra-intelligence outwit even enhanced human capability, as illustrated in the film 'Ex Machina' (Garland, 2014), when the intelligent android Ava escapes containment? Simon (2019) presents four scenarios that explore such possibilities, suggesting an inevitability in the dominance of AI. Irrespective of the 'science fiction' connotations, in 1955 McCarthy et al. (2006) proposed a study to investigate the possibility of AI.
Perhaps the first articulation of the features required by an intelligent system was by McCarthy (1958) about 'Advice Taker', which included the underlying principle, "In order for a program to be capable of learning something it must first be capable of being told it" (McCarthy, 1958: 78). Indeed, 'being told' underpins the Virtual Personal Assistant (VPA), highlighting the association between AI and some form of sensing. In this case, the ability to recognise human speech, which involves natural language processing, is a form of AI. The artefact's voice interface (e.g. Apple's Siri) or text alternative (e.g. Google Assistant), alongside recognition capability, understands what the human user says or inputs and interprets this to provide a response. Although relatively straightforward for simple queries such as 'what is the temperature?', it becomes much more demanding for queries that rely on something that was expressed previously and/or need contextual awareness. This raises the technical challenge of how to develop the artefact's capability to handle the context, nuances, complexity and variability of the spoken language (Collins, 2018; Eaves, 2020). It is this 'making of meaning' in a more complex setting that remains challenging. Taking this further, findings from UNESCO (2019) and Loideain and Adams (2019) around the almost exclusive and default use of female voices in VPAs suggest this may be reinforcing negative stereotypes about women or even contributing to a gender gap in digital skills. Collectively, examples such as this build in impact and contribute to a lack of diversity of experience in the tech sector (Eaves, 2020). A particular issue has been the 'submissiveness' and discriminatory nature of responses when virtual assistants have been required to react to questions and suggestions that are inappropriate or insulting, reflecting use cases beyond those envisaged for the systems during design (something now being addressed by leading providers). This raises questions about how bias in AI could be neutralised. Beyond cultural implications, it is also important to consider that VPAs have access to both more personal and a broader range of data than even a typical search engine. Indeed, the accumulation of data gathered from sensors, as well as the 'digital traces' (Hedman et al., 2013; Hand and Gorea, 2018) we leave behind, enables the digital cloning of everyone, in terms of all aspects of our everyday, resulting in every person having their own 'data body' (Critical Arts Ensemble, 1998), with its own memory of the past. Integrated with, for example, facial recognition in the appropriate infrastructure, this can have more sinister implications, including questions of who owns the data and what the implications are for privacy and/or discrimination (see Section 5.4.3). Whilst the issues of meaning, bias and unwelcome use taint the view of AI serving the well-being of humans, another more encouraging view of AI is in terms of the possibilities arising from the notion of the 'Digital Twin' (DT). Although definitions vary with context, a DT can broadly be described as the "assembled aggregation of data captured by other tools" (Roske, 2019: n.p.), which can enable the creation of a digital functional model of a physical asset, from a component or product to a process, system or even city.
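To make the digital twin notion just defined more concrete, the following sketch mirrors a hypothetical physical pump in a small digital record that is updated as sensor readings arrive and from which a coarse health indicator is derived. All identifiers and thresholds are invented for illustration; a production twin would sit on an IoT platform and feed far richer analytics, as the discussion that follows describes.

```python
# Illustrative sketch only: a toy digital twin that mirrors a hypothetical
# physical asset (a pump) and updates a simple health estimate as sensor
# readings arrive. Field names and thresholds are invented for illustration.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PumpTwin:
    """A toy digital twin mirroring a hypothetical physical pump."""
    asset_id: str
    state: Dict[str, float] = field(default_factory=dict)      # latest mirrored readings
    history: List[Dict[str, float]] = field(default_factory=list)

    def ingest(self, reading: Dict[str, float]) -> None:
        """Mirror a new sensor reading into the twin's state and history."""
        self.state.update(reading)
        self.history.append(dict(reading))

    def health(self) -> str:
        """Derive a coarse health indicator from the mirrored state (invented thresholds)."""
        vibration = self.state.get("vibration_mm_s", 0.0)
        temperature = self.state.get("temp_c", 0.0)
        if vibration > 7.0 or temperature > 90.0:
            return "schedule maintenance"
        return "normal"


if __name__ == "__main__":
    twin = PumpTwin(asset_id="pump-17")
    twin.ingest({"vibration_mm_s": 2.1, "temp_c": 61.0})
    twin.ingest({"vibration_mm_s": 7.8, "temp_c": 88.0})
    print(twin.asset_id, "->", twin.health())  # pump-17 -> schedule maintenance
```

The essential design idea is that the digital record, not the physical asset, is queried for state and prognosis, which is what allows scenarios to be explored in advance of acting on the physical counterpart.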
This increasing 'hybridity' (Simunkova, 2019), or bridging of the physical and digital domains, makes it possible to obviate the need for physical prototypes (Glaessgen and Stargel, 2012) and model the behaviour and lifecycle of assets, developing scenarios in advance, enhanced by AI, to optimise outcomes and help create repeatable and scalable experiences (Microsoft, 2019). In manufacturing, for example, this can negate issues of bottlenecks and data isolation, stagnation and fragmentation (Tao et al., 2018). Another benefit is the capacity to continually update and enhance digital twin modelling with real-world information collected via drones, sensors or other IoT tools and to apply advanced big data analytics, artificial intelligence and machine learning to acquire real-time insights about the physical artefact's operation, performance or profitability. Applications are numerous, from smart manufacturing to smart city development; as an example, Rennes in France is building a complete DT of its metropolitan area to improve urban planning (Doyle, 2019). This shift into the virtual domain clearly provides a supplementary dimension to the physical and can be a mutually enhancing relationship, as illustrated by the reciprocal continual improvement of the digital twin and its physical artefact. Increasing blurring of the digital and physical domains was already anticipated (Eaves, 2020) and indeed this transformation has accelerated due to COVID-19. This has brought to the fore the need for interoperability of any business asset, from people to technology, alongside having real-time visibility across both organisational boundaries and supply/demand chains. Taking this acceleration to an extreme, will a digital twin enable an intelligent autonomous technological entity to continuously improve itself without humans being aware of what is happening?
In sum, autonomy grounded in AI and integrated with other newer forms of technology is evolving well beyond informating and computation to embrace an immersive, interactive, insight-rich, real-time hybrid world environment. This highlights the potential of AI, in terms of diagnostics, insights and informed action taking, but also concerns associated with how bias (e.g. gender) is inscribed, as well as issues of privacy, discrimination, data ownership and dependency and their implications for changes in accepted norms of behaviour. Nevertheless, AI developments are deemed to be of a narrow intelligence nature, with singularity and the emergence of artificial general intelligence or superintelligence possibly being many decades away (Bostrom, 2014).
The integrated nature of developments and applications of, for example, sensors, wireless connectivity, cloud storage and the processing and analytical capability of AI reveals the growing fusion or hybridity between the material and digital technologies. One benefit is that existing infrastructures such as electricity grids, gas pipelines and water mains can be made more efficient with these smart technologies (Kumar et al., 2018). Indeed, by viewing AI from an integrated perspective, this integrated AI complex itself constitutes an invisible infrastructure. The invisibility of physical structures is complemented by the lack of transparency of the digital structures associated with AI and the data gathered. The emergent integration of sensors with AI via the Internet of Everything increasingly facilitates detection and consequent activation of automated processes, extending beyond the mundane (e.g.
doors, lighting, air-conditioning) to the personalised (e.g. re-ordering replacement items and tailored recommendations to predicted future needs). Indeed, facial recognition, voice activation and mood detection are altering the relationship between human and artefact from active-passive to passive-active. Moreover, the integration of different technological forms allows new possibilities for engagement. The 'cloud' provides continuous and increasingly real-time data access and backup. 5G connectivity is giving rise to complex real-time 'smart' networks of engagement with global reach (Chettri and Bera, 2019). For example, a surgeon using mixed reality can, at a distance, interact with a robot to perform surgery (Vávra et al., 2017). One potentially disruptive application is that of autonomous vehicles, not isolated from each other, but connected with each other as well as with roadside sensors and traffic management centres through a geographically distributed integrated infrastructure. This synchronises and reroutes vehicle journeys and hence reduces collisions and congestion (Elliott et al., 2019). The need for human intervention diminishes with the increased capability of this digital complex.
However, one cannot overlook the dark side, with malicious intrusion creating concerns about cybersecurity and misinformation. Further, such questions as who owns the data, how it is processed and used, and by whom (Newell and Marabelli, 2015) draw attention to the potential of data as a form of control (Marciano, 2019). Taking the example of VPAs, a significant majority of these data are effectively in the control of a small number of technology multinationals, creating dependency. To put this into context, in January 2019 Amazon reported that it had sold over 100 million of its Alexa-powered VPAs (CNET, 2019). However, inherent bias, whether intentional or otherwise, can cause individual harm through unfair or discriminatory practices, or collective social harms such as loss of opportunity, economic loss or social stigmatisation. For example, one study of facial recognition systems for gender and skin type found that darker-skinned females were the most misclassified group (Buolamwini and Gebru, 2018). In the United States, concerns around AI bias, privacy and discrimination have prompted a governance intervention with the proposed Algorithmic Accountability Act of 2019 (Barbanel, 2019). Moreover, with the pervasive use of sensors (e.g. facial recognition) and AI in smart cities, policing may become an embedded feature of the infrastructure, with automated penalties being imposed upon violators, or perhaps worse for dissenters, thus quelling freedom of speech (Joh, 2019). Most recently, this has been brought to the fore in the surveilling of protesting crowds, from Black Lives Matter to the George Floyd protests, and also in determining whether people are observing appropriate social distancing. How can health and safety concerns, rights such as freedom of speech and individual privacy, and the promotion of innovation all be balanced? Whilst human intervention may diminish, questions are raised about how these digital complexes will shape communities and, more generally, society, as well as who or what controls this. The time is now for open dialogue, voluntary governance and legislation to address not just what is possible through AI, but also what should be prohibited.
The scenario for this fourth Genre is about autonomous artefacts that have been brought into being by humans using design and production technologies.
These have the potential to become intelligent, resilient and agile technological beings with sensory and analytical capability. This is translated into dexterous, manipulative and mobility competencies. Their ability to endure draws attention to their ability, firstly, to sense what is going on, underscoring the importance of sensors with diverse capabilities (e.g. geographical positioning, object/activity recognition and property/status detection); secondly, to analyse (invoking a form of AI or Machine Learning); and, thirdly, to self-maintain (invoking the ability to manipulate themselves physically and/or digitally, using their digital twin). This is in addition to normal functions, such as scheduling, monitoring and adapting their activity, alongside co-ordination where appropriate. However, whether they develop sentience is questioned as they are fundamentally machines (Dautenhahn, 2013). Whilst these entities co-exist with and support humans, as in the case of human companionship, social mediation and service provision (Dautenhahn, 2013), their autonomy is most likely to be best demonstrated when serving in remote or hazardous environments. Such applications include underground mineral extraction, satellite detection-communication services and disaster management, a notable usage being the probing of radioactive fuel at Japan's Fukushima plant (Fackler, 2017). Marine exploration also presents many challenges, with endurance, reliability and adaptability imperative, exemplified by the recent launch of the 'Mayflower' Autonomous Ship aiming to cross the Atlantic Ocean. This demonstrates the capacity of 'Augmented Intelligence', which is continually cognisant, sensing its environment to inform decision making and operations in real time, with no reliance on human capacity to intervene if issues arise (IBM, 2020). Space exploration provides another example, with Campa et al. (2019) suggesting that, due to the hazards of space travel to humans, fully autonomous robots are the more likely contenders to colonise space.
Perhaps the most infamous futuristic vision is presented in the classic film '2001: A Space Odyssey' (Kubrick and Clarke, 1968), with the computer HAL9000 running a spaceship that is transporting humans to Jupiter. However, it turned rogue, becoming autonomous and serving its own interests rather than those of humans. Is this the science fiction narrative and its associated metaphors taken too far? On one hand, Baum (2018) urges a cautious approach to the misinformation around superintelligence, its hype and its potential consequences, whether massively beneficial or catastrophic. Campa et al. (2019), however, hypothesise that automation will eventually develop into an 'alien' superintelligent being. Today, the complexity of software is such that unanticipated or unintended consequences can happen, such as a system shut-down (Somers, 2017). Indeed, it can be asked what would stop a superintelligent entity from intentionally shutting down human life-support systems.
The next Genre emerges with connectivity between the autonomous technological beings created by humans. This extends the being's capability through collaboration with other beings. This implies the co-ordination of the activities between entities and invokes a linguistic domain that enables this co-ordination (Maturana, 1988). Thus, physical engagement between entities translates into some form of mutually recognisable means to communicate about each other's activity (e.g. making distinctions using body language).
This first order linguistic domain characterises human-animal co-ordinations. At a higher level, these distinctions are converted into a symbolic form that manifests as language, a second order linguistic domain. This ability to make distinctions within language gives rise to conversations (Maturana, 1988). The presence of linguistic domains permits communication between autonomous technological beings and hence mutual adjustment and co-ordinated activity. Since this is artefact-artefact communication, it is less likely to be dependent upon natural language processing capability and instead emphasises the development of machine language capability. However, this moves beyond human-generated interoperable communication systems enabling, for example, autonomous vehicle connectivity. Rather, machine learning suggests that digital code can be self-generated and, moreover, if this is permitted, offers the opportunity to move from a fixed lexicon to one which is creative, leading to new linguistic terms and consequent actions and revealing an adaptation mechanism. This is illustrated by the reported case of two Facebook chat bots that, in the process of negotiating, created their own language (Lewis et al., 2017).
The presence of a linguistic domain draws attention to the collective manifestation of those (technological beings) engaging in language. If Fleck's (2000) Technology Complex reveals the artefact being embedded within a multi-dimensional human social context, then would there be an equivalence for the collective of technological beings? This raises such questions as:
• how might they be organised for tasks, both in terms of 'communities' and spatially distributed?
• would they adopt specialist functions and hence appropriate forms?
• would their relationships be equitable or not?
• what would be the implications for governance, whether centralised, decentralised or distributed? and
• if there is inequity, would this create tensions between competing entities / groups of entities?
Whilst learning characterises autonomous technological beings, interconnection permits the sharing of this learning and the growth in community knowledge. This, in turn, creates the opportunity for adaptation of all entities within this community and hence adaptation of the community. Unlike humans, who may resist change for whatever reason, there may be reconciliation to ensure that all requirements are met. This invokes a collective form of AI and a collective form of behaviour. Critical aspects that are missing include purpose and values, however defined. Purposeful behaviour characterises human behaviour, including deviance from societally accepted norms and values about what is acceptable. However, whether autonomous technological beings can have values, and how these might arise, invokes a significant debate. The notion of cybersecurity suggests that the technological form can be intruded upon in a malicious manner (e.g. infected or spied upon). Focusing on the collective, an 'infected' (rogue) technological being might corrupt the pure 'thinking' of the collective, which can result in a schism that introduces conflict. However, if this takes place in a world in which humans still have engagement, it might be assumed that humans can stop further production of these beings, as well as terminate existing beings or negate their influence. Nevertheless, since these technological beings can be in conflict with each other, the potential also arises for conflict with humans.
Moreover, if these technological beings are superior to humans in terms of capability, then this may lead to the eradication of humans. It may also lead to the emergence of the sixth Genre.
The sixth Genre is the realm of science fiction and is potentially devoid of a need for humans (cf. Stross, 2008). The critical metamorphosis into this Genre is the ability of autonomous technological beings to produce other technological beings, whether as clones or distinctively different: they are autopoietic (Maturana and Varela, 1975). The notion of autopoiesis introduces the capabilities of self-maintenance and the ability to (re)produce itself. Whilst an autopoietic technological being is perhaps still to be realised, this concept has been given credibility by John von Neumann in his writings about automata that are self-reproducing (von Neumann, 1966; Burks et al., 1946). Further, the development of these beings can be envisaged in terms of the convergence of existing technologies. For example, production requires a specification of what is to be produced (e.g. DNA in the form of a 'digital twin') and a mechanism for reproduction (e.g. robotics, additive manufacturing). Possible design improvements might be achieved with AI (e.g. generative design). Whilst autopoiesis may still be in the realms of science fiction, Rabani and Perg (2019) discuss current developments in molecular nanotechnology as representative of the most basic functional requirements for autopoiesis to occur.
This is a space in which technological beings have expanded beyond earth to colonise other planets. They live in interconnected communities and it is also proposed that, rather than being homogenous beings, they each have a specialism. Communities therefore have a purpose related to the locality (e.g. mining, exploring, producing, transporting) and individuals within each collective have a specialism. They could be organised in the form of a collective, communicating with others through the collective network, self-adjusting to the collective will, perhaps in a manner illustrated in the science fiction of Star Trek's Borg Collective. An aligned vision has been proposed by Schmidhuber (2017), who envisages 'self-replicating robot factories' distributed in space producing AI entities of wide diversity, but with each form surviving through adaptation, collaboration and competition with each other. Their ability to reproduce is grounded in each being having its 'digital twin' embedded into its memory. Reproduction is Lamarckian in the sense that each successive generation has the genotype of its predecessors with enhancement based on this generation's community experiences. Thus, each successive generation is a technologically superior being to its predecessor. Once a being has served its purpose of being a functional member of the community and has transferred its learning to its offspring (limited to two unless circumstances require otherwise), the being either terminates itself, to be recycled, or transports itself to communities which have a high incidence of damage infliction due to externalities. If humans exist, then they are not needed by these technological beings and are perhaps reduced to curiosities, kept in special reserves along with other biological forms. Indeed, the issue of accountability to humans becomes irrelevant. Likewise irrelevant are concerns about the sustainability of the planets, since these beings, unlike humanity, are not dependent upon environmental conditions.
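Purely as a thought experiment, the Lamarckian reproduction described above can be sketched in a few lines of Python: each offspring inherits its parent's genotype (its 'digital twin' specification) together with the lessons learnt during the parent's functional life. All names (TechnologicalBeing, learn, reproduce) are hypothetical, and the sketch makes no claim about any existing system.

```python
# Illustrative thought experiment only: Lamarckian reproduction in which each
# offspring inherits its parent's genotype (a 'digital twin' specification)
# plus the lessons learnt during the parent's lifetime. Names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TechnologicalBeing:
    generation: int
    genotype: Dict[str, float]                            # the being's 'digital twin' specification
    experience: List[str] = field(default_factory=list)   # lessons learnt in its community

    def learn(self, lesson: str, improvement: Dict[str, float]) -> None:
        # Community experience accumulated during the being's functional life,
        # written back into the genotype it will pass on.
        self.experience.append(lesson)
        for trait, delta in improvement.items():
            self.genotype[trait] = self.genotype.get(trait, 0.0) + delta

    def reproduce(self) -> "TechnologicalBeing":
        # Lamarckian step: the offspring starts from the parent's *improved*
        # genotype, so each generation is technologically superior to the last.
        return TechnologicalBeing(
            generation=self.generation + 1,
            genotype=dict(self.genotype),
            experience=list(self.experience),
        )


if __name__ == "__main__":
    parent = TechnologicalBeing(generation=1, genotype={"sensing": 1.0, "mobility": 1.0})
    parent.learn("dust storms degrade optics", {"sensing": 0.2})
    child = parent.reproduce()
    print(child.generation, child.genotype)  # generation 2, genotype {'sensing': 1.2, 'mobility': 1.0}
```

The contrast with a Darwinian/Weismann scheme is that, here, acquired experience is written directly into what is inherited, which is precisely why the technological trajectory can accelerate.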
This scenario invites many possibilities and questions that benefit from cross-sectoral open dialogue.
This exposition presents a conceptual exploration of how technology might develop in the future. It is grounded in a framework based upon the human-artefact relationship and the transition to autonomy and autopoiesis for the emergent technological being. What perhaps distinguishes this approach to thinking about technological developments is its holistic nature, viewing different forms of technology not in isolation, but as configurations of distinct functional capabilities to afford some purpose. Six Genres are identified, each with their own underlying generic features. This is important as it places into perspective 'what if' debates about development, thereby inviting responses about what needs to be done to achieve, shape or, indeed, prevent the development of these Genres.
It commences with a view of how technology can be conceived, revealing many ways to consider this and, critically, that it is not just a physical artefact but embraces purpose, practices and knowledge, with implications for organisation, governance, social relations and culture. This is succinctly captured in Fleck's (2000) Technology Complex. Implicit is the relationship between that which is the technology and the human, manifesting in such terms as 'socio-technical' (Trist, 1981; Molina, 1990) and 'mangle' (Pickering, 1993) to capture this fusion. It affords possibilities for use, can exhibit agency and exists in an essentially homeostatic relationship with humans. Thus, any discussion about technology needs to appreciate this complexity.
There follows a brief consideration of how technology develops. This draws attention to the potentially disruptive nature of technology in terms of how it 'breaks asunder' what precedes it. This arises, not necessarily from incremental developments, but from the emergence of new configurations which cannot be anticipated. The metaphorical view of development as 'evolution' introduces the comparison between biological and technological evolution. However, these are different, with the former assuming a Weismann model of inheritance, with a Darwinian mechanism, whilst the latter can be viewed as Lamarckian, incorporating lessons learnt from the previous generation. This fundamental distinction possibly explains the asymmetrical pace of evolution and the acceleration in speed and proliferation of technological developments over time.
Having recognised the complexity associated with the artefact and the asymmetrical nature of development favouring the artefact over the human, the foundations are laid to explore the six Genres and the emergence of the interconnected autopoietic 'technological being'. The first Genre is one in which there is no artefact, this being a period involving ancestors to humans, transitioning to the second Genre, whereby objects are exploited for their affordances, and then the third Genre, which perhaps characterises much of human existence including the present, with the subsequent Genres already manifesting in somewhat embryonic forms. A synthesis of Genres 3 to 6 is presented in Table 1. The third Genre is one characterised by a multi-millennia symbiotic relationship involving the mutual shaping of human and artefact. Commentators have proposed different phases, such as the 'Age of Steam and Railways' (Freeman and Perez, 1988). Each phase has its own characteristics, distinguishing it from others.
Critical aspects include, first, the manner in which the artefact becomes domesticated into the everyday (Harwood, 2011), drawing attention to the situated nature of usage (Suchman, 1987). Second are the broader contextual issues, as revealed with Fleck's (2000) Technology Complex, highlighting the mutually shaping nature of the human-artefact relationship at all levels of human organisation. Third are the invisible infrastructural aspects, which only become visible if there is a problem (Bowker, 1994). Finally, there is the special case of the cyborgisation of the human, whereby there is a fusion between human and artefact. From a governance perspective, legislation is usually after the event, inviting questions about whether it is desirable and possible to have legislation in place in anticipation of technological developments and their possible implications. This must be complemented by open dialogue and voluntary governance that is industry led. It is with more recent technological advancements that insights are gleaned into how their configurations might perceivably create new possibilities for the development of the artefact in subsequent Genres.
The fourth Genre appears to mark a transition, breaking the prominence of the third and establishing the artefact as a 'technological being' with its own agency, drawing upon the notions of autonomy and intelligence. However, it exists within an infrastructure that comprises an integration of sensors, wireless connectivity, hybrid multi-cloud storage and the processing and analytical capability of AI, which is not only invisible, but also often lacks transparency and explainability in the decision making that results. Moreover, there is growing human dependency upon this technology, raising the question of what happens when the infrastructure fails. Issues include whether a fully autonomous technological being has the same rights as humans and who or what owns the data that are assembled about any specific object, human or non-human. Further, who owns any resultant IP?
The fifth Genre, involving interconnectivity, introduces a linguistic domain, enhancing autonomous behaviour but with the possibility of collective rogue behaviour, which can lead to the demise of humans. Indeed, there is growing separation between human and technological collectives as the latter start to develop collective autonomy. This raises questions about how to preserve human well-being, identity and, ultimately, survival.
The sixth Genre is one in which technological beings have the capability of self-maintenance and reproduction. They are autopoietic. Further, this final stage renders humans redundant. Whilst the fused human-artefact in the form of the cyborg was introduced in the third Genre, it offers a potentially different trajectory to the one presented here, as explored by Klugman (2001) and Barfield (2015). However, as previously mentioned, if singularity leads to ultra-intelligent technical beings, will they eventually outwit humans, as illustrated in the film 'Ex Machina' (Garland, 2014) when Ava escapes containment, even if humans have enhanced cognitive abilities?
Invoked in the overall argument underpinning the six Genres is the question of how to control future developments for the benefit of humanity, balancing the advancement of innovation with appropriate safeguarding and dialogue. Each Genre has its own characterisation. However, the issues raised here for debate are technological developments in the domains of autonomy, AI, language and autopoiesis.
It is proposed that it is not developments in the individual domains that pose a threat, but the manner in which they combine. Moreover, it is not the possibility of their technical achievement that is the issue, but the functional, operational or social consequences if achieved. Indeed, the technical possibilities are perhaps no longer in the domain of science fiction but can be conceptually conceived in terms of existing technologies as embryonic forms of future possibilities. Furthermore, since time, as a dimension, is reducing as a factor in development, for reasons attributed to a Lamarckian model of evolution, then what might be construed as impossible today may well be possible in twenty or even ten years from now. In summation, this raises questions about what debates should be happening in the immediate future and who should be involved in them. Are the issues raised something that industry should consensually self-regulate? Alternatively, is this a matter for politicians and policy makers to regulate upon?
This paper contributes to the debate about future technological developments by offering a conceptually grounded framework to structure an argument as to one possible technological development trajectory, one that focuses upon the increasing autonomy of technological beings and the increasing redundancy of humans. From a policy and public debate perspective, the view offered invites discussion which can only benefit the trajectory of digital transformation. Many questions can be raised, notably: Is this mere science fiction or is there a real threat to humanity, as stated by Stephen Hawking? Is there another possible trajectory, continued through the cyborg, that preserves human well-being and identity? Whilst nano- and bio-technologies have not been explored, as these are perhaps part of the world of the cyborg, how would they fit in the world of technological beings? Might technological beings want to incorporate biological material into their inorganic composition? Indeed, many issues are surfaced concerning our relationship with the growing capability and autonomy of technologies that both complement what we do yet can displace us. This has ramifications, not only for the future of work, but also for how we educate people about how to live in an increasingly technology-enabled hybrid world. For example, what does it mean to trust in technology, as we might with personal assistants and robots that help in our homes? Likewise, what will it mean to manage technology?
The framework proffered has made one fundamental assumption. This is that there is a shift towards the emergence of the autopoietic technological being, with all technical constraints being overcome. In conclusion, an argument is presented which examines the relationship between humans and artefacts. It reveals a science fiction scenario whereby humans become redundant in a domain where autopoietic technological beings reign supreme, with the capability to do what they want and go anywhere, unconstrained by hostile environments. It identifies four areas for development: autonomy, intelligence, language, and autopoiesis. These are currently technically feasible and, with the speed of future developments, how soon will these scenarios be realised?
AI, Artificial Intelligence; AR, Augmented Reality.
It is difficult to establish how to differentiate our respective contributions.
The original idea was that of Stephen Harwood, but its development, embodiment and articulation was the combined outcome of much discussion arising from both authors complementary knowledge about this area and Sally Eaves's applied experience within emergent technologies. What can past technology forecasts tell us about the future? Design For a Brain Types of technology Exploring artificial intelligence futures What have we learned? Cloud Repatriation Accelerates in a Multicloud World A Look at the Proposed Algorithmic Accountability Act of 2019. International Association of Privacy Professionals Cyber-Humans: Our Future With Machines Steps to an Ecology of Mind: Collected Essays in anthropology, psychiatry, evolution, and Epistemology Mosaics, triangles, and DNA: metaphors for integrated analysis in mixed methods research Research Article: The impact of academia-industry collaboration on core academic activities: Assessing the latent dimensions Of Bicycles, Bakelites, and Bulbs: Towards a Theory of Sociotechnical Change Autonomous automobilities: the social impacts of driverless vehicles Expert biases in technology foresight. Why they are a problem and how to mitigate them Superintelligence: Paths, Dangers, Strategies Disruptive Technologies: catching the wave Science on the Run: Information Management and Industrial Geophysics At Schlumberger, 1920-1940 Gender shades: intersectional accuracy disparities in commercial gender classification Preliminary Discussion of the Logical Design of an Electronic Computing Instrument. The Institute for Advanced Study Some Elements of a Sociology of Translation: domestication of the scallops and the fishermen of St Brieuc Bay Why space colonization will be fully automated Physiological regulation of normal states: some tentative postulates concerning biological homeostatics Organization for physiological homeostasis The Wisdom of the Body. Kegan Paul, Trench The Great Digital Divide -Why Bringing the Digitally Excluded Online Should Be a Global Priority Stephen Hawking warns artificial intelligence could end mankind The Singularity: A philosophical analysis A comprehensive survey on Internet of Things (IoT) Towards 5G wireless systems Cyborgs and space Amazon has sold more than 100 million Alexa devices Artificional Intelligence: Against Humanitiy's Surrender to Computers Flesh machine: cyborgs, Designer babies, and New Eugenic Consciousness The Interaction Design Foundation DE-CIX sets a new world record: More than 9 Terabits per second data throughput at Frankfurt Internet Exchange. 11th The Age of Automation Evolutionary theory of technological change: State-of-the-art and new approaches Unmanned Systems Integrated Roadmap FY2011-2036. Office of the Undersecretary of Defense for Acquisition SUBJECT: Autonomy in Weapon Systems. 
Department of Defense, United States DIRECTIVE Technological paradigms and technological trajectories: a suggested interpretation of the determinants and directions of technical change Siblings make sense of smart cities Disruptive Technologies for Good -TEDx Talk Tech For Good The effects of subtle misinformation in news headlines 2019 Edelman Trust Barometer Reveals 'My Employer' is the Most Trusted Singularity hypotheses -An overview, introduction to: singularity hypotheses: a scientific and philosophical assessment The Ethics of Big Data: Balancing economic Benefits and Ethical Questions of Big Data in the EU Policy Context Fear of missing out, anxiety and depression are related to problematic smartphone use Recent advances in connected and automated vehicles Six Years After Fukushima, Robots Finally Find Reactors' Melted Uranium Fuel. The New York Times Innofusion Or Diffusation? The Nature of Technological Development in Robotics Configuration: Crystallising Contingency Artefact↔ activity: the coevolution of artefacts, knowledge and organization in technological innovation Technology, the technology complex and the paradox of technological determinism Autonomous Weapons: an Open Letter from AI & Robotics Researchers. The Future of Life Institute (FLI) The Delphi technique in forecasting-A 42-year bibliographic analysis Structural crises of adjustment: business cycles An examination of factors affecting accuracy in technology forecasts Artificial intelligence does not exist: lessons from shared cognition and the opposition to the nature/nurture divide Uber's Self-Driving Car Saw the Pedestrian But Didn't Swerve -Report. 8th The Senses Considered as Perceptual Systems The Ecological Approach to Visual Perception The digital twin paradigm for future NASA and U.S. air force vehicles Speculations concerning the first ultraintelligent machine The Machine at Work: technology, Work and Society The Mobile Gender Gap Report 2020. GSM Association Implantable smart technologies (IST): Defining the 'sting'in data and device Digital traces and personal analytics: iTime, self-tracking, and the temporalities of practice The domestication of online technologies by smaller businesses and the 'Busy Day'. Inf Affordance' -what does this mean? Digital traces of information systems: sociomateriality made researchable The past and future of futures research HPC Consortium (2020) The COVID-19 High Performance Computing Consortium Losing Humanity: The Case Against Killer Robots The seamless web: technology, science, etcetera, etcetera Scenario-driven roadmapping for technology foresight IBM (2020) The Mayflower Autonomous Ship amarckian inheritance systems in biology: a source of metaphors and models in technological evolution Policing the smart city Securing the Future of German Manufacturing Industry: Recommendations for Implementing the Strategic Initiative INDUSTRIE 4.0. 
Office of the Industry-Science Research Alliance Philosophy of Technology: The Technological Condition: An Anthology From cyborg fiction to medical reality The Cultural Biography of Things: commoditization as process Long-term forecasts of military technologies for a 20-30 year horizon: An empirical assessment of accuracy 2001: A Space Odyssey Moving towards smart cities: solutions that lead to the smart city transformation framework Multiple visions of the future and major environmental scenarios The ontological politics of staying true to complexity, review of: 'Actor Network Theory and After The possible shapes of things to come Technology is society made durable On actor-network theory: A few clarifications When flexible routines meet flexible technologies: affordance, constraint, and the imbrication of human and material agencies Deal or no deal? Training AI bots to negotiate Economic Transformations: General Purpose Technologies and Long Term Economic Growth From Alexa to Siri and the GDPR: the gendering of virtual personal assistants and the role of data protection impact assessments A review of classification algorithms for EEG-based brain-computer interfaces Development trajectory and research themes of foresight How many singularities are near and how will they disrupt human history? Reframing biometric surveillance: from a means of inspection to a form of control The Myth of the Awesome Thinking Machine ENIAC: the press conference that shook the world Human brain/ cloud interface Reality: The search for objectivity or the quest for a compelling argument Autopoietic Systems: a Characterisation of the Living Organization The quest for the intelligence of intelligence Programs With Common Sense. Symposium on Mechanization of Thought Processes A proposal for the Dartmouth summer research project on artificial intelligence What is Technology? Cybernation: The Silent Conquest. Center for the Study of Democratic Institutions Azure Digital Twins -A service for building advanced IoT spatial intelligence solutions Moral Machine -Human Perspectives on Machine Ethics Unmanned aerial vehicles applications in future smart cities Transputer and Transputer-based Parallel Computers: sociotechnical constituencies and the build-up of British-European capabilities in information technologies Insight into the Nature of Technological Diffusion and Implementation: the perspective of sociotechnical alignment The Daleks. BBC, London second serial, first season, Doctor Who Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of 'datification The shift to Cloud Computing: The impact of disruptive technology on the enterprise software business ecosystem The Psychology of Everyday Things. Basic Books Big Data', The 'Internet of Things' and the 'Internet of Signs Material works: exploring the situated entanglement of technological performativity and human agency Sociomateriality: Challenging the separation of technology, work and organization Start of world-first, large-scale NHS trial testing VR Therapy for serious mental health conditions is a milestone endorsement for Oxford VR's business model and scale-up capacity The Tenth Planet. 
BBC, London second serial, fourth season, Doctor Who Key ideas from a 25-year collaboration at technological forecasting & social change The mangle of practice: Agency and emergence in the sociology of science Demonstrably Safe Self-replicating Manufacturing Systems Industry 4.0 as policy-driven discourse to institutionalize innovation systems in manufacturing A mobilising concept? Unpacking academic representations of responsible research and innovation Human Choice and Climate change: an International Assessment Diffusion of Innovations, 1, 5 ed How the Digital Twin and Common Data Environment (CDE) are Changing Construction Falling walls: the past, present and future of artificial intelligence Technology and Change: The New Heraclitus Are You Ready for Gen Z in the Workplace? Business Cycles: a Theoretical, Historical and Statistical Analysis of the Capitalist Process Will computers revolt? Being hybrid: a conceptual update of consumer self and consumption due to online/offline hybridity Level 5 autonomy: The new face of disruption in road transport The Coming Software Apocalypse Art, science, and technology of human sexuality Star Trek, 1997. Scorpion. Star Trek Voyager Why People Keep Rear-Ending Self Driving Cars Saturn's Children. Ace books Plans and Situated action: the Problem of Human-Machine Communications Human-Machine Reconfigurations: Plans and Situated Actions Effect of prior domain knowledge and headings on processing of informative text Digital twin-driven product design, manufacturing and service with big data Technology futures analysis: Toward integration of the field and new methods The Third Wave The evolution of socio-technical systems Making News Intelligent machinery, a heretical theory Generation Z: technology and social interest John von Neumann 1903-1957 I'd Blush If I could -Closing Gender Divides in Digital Skills Through Education A literature review on the levels of automation during the years. What are the different taxonomies that have been proposed? Recent development of augmented reality in surgery: a review The coming technological singularity: how to survive in the post-human era Cybernetics: Or Control and Communication in the Animal and the Machine Technological Innovation as an Evolutionary Process In the Age of the Smart Machine: the Future of Work and Power Live grilling in my backyard having held roles including change agent and company director, as well as more recently, an academic, currently holding the post of Lecturer and Head of Joint Undergraduate Programmes at the University of Edinburgh Business School. His research interests span a number of areas, in particular the application of newer forms of technology drawing conceptually upon the domains of Cybernetics and the Sociology of Technology She specializes in the application of emergent technologies, notably AI, Blockchain, Cloud, Cybersecurity, IoT & 5G for both business and societal benefit at scale. An international keynote speaker and author, Sally was an inaugural recipient of the Frontier Technology and Social Impact award, presented at the United Nations and has been described as the 'torchbearer for ethical tech' -founding Aspirational Futures to enhance inclusion and diversity in the technology space and beyond