authors: Lee, Connal; Rogers, Wendy A. · title: Ethics, Pandemic Planning and Communications · journal: Monash Bioethics Review

In this article we examine the role and ethics of communications in planning for an influenza pandemic. We argue that ethical communication must not only be effective, so that pandemic plans can be successfully implemented; communications should also take specific account of the needs of the disadvantaged, so that they are not further disenfranchised. This will require particular attention to the role of the mainstream media, which may disadvantage the vulnerable through misrepresentation and exclusion.

In this article our focus is on the central role played by communication in a public health emergency such as a flu pandemic, and the ethical issues that arise from communication in this context. The two main ethical issues discussed here are the need for effective communication, in order to ensure compliance with and therefore successful implementation of flu plans, and the need for communication strategies that do not exacerbate existing inequalities in the community. We will argue that ethical communication must be both effective and just.

A flu pandemic will pose major threats to health and safety and has the potential to disrupt normal life in a variety of ways. Measures such as case isolation, household quarantine, school or workplace closure and restrictions on travel figure prominently in many flu plans. These measures will be required to reduce the risks of contagion, leading to limitations on citizens' usual liberties. Compliance from the public is required for measures such as quarantine and social distancing to be effective. Whilst there is debate about the level of compliance required for effective containment of infection, there is the risk that non-compliance with restrictive measures by even very small numbers of individuals may spread infection. It has been predicted, for example, that border restrictions and/or internal travel restrictions must be more than …% effective if they are to delay the spread of infection by more than a matter of weeks. Given the types of liberty-limiting arrangements that a pandemic will bring, it is important to recognise the potential for pandemic plans to be undermined by individuals and groups failing to comply with directives. One way in which this is likely to occur is through an ill-informed populace: if people are unaware of what is to be expected and how to respond appropriately, there is the risk that pandemic plans will be undermined by a lack of co-operation from the public. Inadequate communication during the SARS epidemic has been identified as one factor associated with the genesis of panic in the community and weakened co-operation and support from the public. Another way in which non-compliance may occur is through some people and groups viewing directives and government orders as unrepresentative or illegitimate. Measures such as quarantine may be jeopardised by a refusal to comply with directives seen to lack fairness or authority. Therefore, effective communication with the public is important for ensuring, to the extent possible, that all individuals both understand what is required of them and see restrictive measures as legitimate and worth adhering to.
International communication guidelines draw attention to the need for communicators to understand existing public beliefs, opinions and knowledge, thus contributing to the public's involvement, or sense of involvement, in the planning and implementation process. Effective communication alone, however, cannot guarantee that the public will comply with directives, as people may act in self-interested ways not consistent with pandemic plans. Effective and efficient communications should therefore be seen as necessary but not sufficient for implementing pandemic plans. Further, communications should not only be efficient and effective but also just. In the following sections, we argue for ethical pandemic communications that overcome barriers to accessing information and avoid inequalities imposed by current media arrangements. Avoiding unnecessary harms caused by a lack of information can help to prevent greater disadvantage to the worst off. Firstly, however, it is important to outline the role of communications in pandemic planning.

Communication can take many different forms. Of these, the media has been recognised as having a key role to play in the effective implementation of a plan in the event of a pandemic outbreak. The World Health Organisation, for example, specifically identifies the role of the media and draws attention to the need for public officials to utilise the press as a means of communicating with the public.

( ) Public knowledge. Communication strategies should aim at increasing public awareness about what is involved in a pandemic. The media should play a central role in informing the public of the content of pandemic plans, as well as contributing to the public's knowledge more broadly.

( ) Public rationality. In terms of creating and maintaining a social climate of rationality, it will be important that media information regarding risks and potential rationing of resources is not overplayed or sensationalised and is proportional to the actual threat at hand, thereby avoiding unnecessary public alarm. As Thomas May points out, developing a communications infrastructure designed to accurately convey information can go a long way towards mitigating crises created through fear.

( ) Equity. There will be an ethical imperative to recognise and address inequalities in communications. If inequalities in access to information are not addressed, then there is the concern that some groups and individuals will miss out on vital information. As it may take only one ill-informed individual to spread disease, reaching every available person should be a priority in the event of a pandemic. Inequalities in control over the mainstream media pose a potential threat to pandemic plans. As we discuss below, exclusion, misrepresentation and stigmatisation of groups and individuals by the press may lead to a climate of non-compliance, thereby jeopardising the welfare of the whole population.

( ) Inequalities in access to information. With regard to access to information, people vary both in their ability to receive information and in their ability to act on it. Addressing inequalities in access therefore requires making information directly accessible to the public and ensuring that information is sensitive to the varying needs and interests of different individuals and groups in society, so that it is information that people have the capacity to act on. We have identified three ethical issues that should be recognised and addressed in overcoming inequalities in access to information.
These are: barriers to accessing information; voluntary versus involuntary lack of access to information; and provision of information that is relevant to people's capacities.

(a) Barriers to accessing information. Many influenza plans are available on internet sites. This is inadequate communication from an ethical point of view, as it places the burden of responsibility on individuals to access information. In planning for a public health crisis such as a pandemic, there needs to be more than a formal capacity to access necessary information. This should necessarily involve a concerted effort by governments and authorities to ensure that information reaches people in forms that are readily accessible, including but not limited to the mainstream media. Inequalities in access to information may be due to a range of factors such as geographic isolation, disabilities related to visual or hearing impairments, or decreased access related to long or irregular working hours. Whilst these inequalities may not be in themselves unjust, they are inequalities that affect access to information and have the potential to jeopardise successful pandemic planning. There is a strong moral imperative to address and rectify inequalities in access that arise from involuntary circumstances. If some individuals are unable to comply with directives because their capacity to access information has not been considered, there is an ethical duty to ensure that people are not unnecessarily harmed when they could have been protected if given appropriate information. Overcoming all inequalities in access may not be possible, however, if we include inequalities resulting from voluntary actions, such as never watching or listening to the news or reading mail delivered to the home. Overcoming voluntary refusals to accept information may require significant, costly and overly burdensome interventions in people's lives, and may therefore not be as morally justifiable as overcoming involuntary barriers.

Addressing the issue of access must also take into account what kinds of information are most important for individuals to receive. We suggest that this must involve adequate consideration of how capable people are of understanding and acting on directives. This requires a match between the content of the information, including instructions for action, and the resources and capacities of the recipients of that information. There is an ethical imperative to ensure that the varying information requirements of the population are adequately considered. During the build-up to Hurricane Katrina, for example, the community received information advising them to leave New Orleans or seek refuge in the Superdome stadium. However, this information did not take into account the varying capacities of groups and individuals to act upon the directives given. This type of advice did not assist already vulnerable groups (such as people in poor health or with disabilities) who lacked the resources to abandon their property in the absence of insurance and assurances that they would be adequately taken care of. In this case the information available to the less well-off in New Orleans was neither relevant nor particularly useful given the realities of people's circumstances. Perhaps more importantly, the effect was to widen inequalities, as those who were well enough off to comply fared better than those who were not so able.
Communication during a pandemic must be sensitive to how capable people are of acting on information important to their health and wellbeing, and to the likely compounding effects on existing inequalities. It could be argued that it was not the nature of the information distributed in the case of Katrina that was the problem; rather, it was the poor socio-economic circumstances of much of the population together with the lack of other necessary resources. However, it is important to note that in the subsequent media reports there was stigmatisation of those who had not complied with the advice, with the implication that much of the ensuing human disaster was the fault of the victims themselves, rather than anything else such as lack of capacity to follow the advice. In situations like this, the lack of appropriate information for the disadvantaged is exacerbated by media communication that is not sensitive to the capacities of people to act on that information. We will now look at inequalities in control over media content and give a brief account of why addressing these inequalities is necessary for achieving compliance and avoiding extra injustices in the event of a pandemic.

( ) Inequalities in control over media content. In the event of a pandemic, inequalities in access to and control over the media may cause a number of problems, limiting the successful implementation of pandemic plans. This is critical when, as for example in Australia, media ownership is concentrated in the hands of a few whose interests do not overlap with the role of communication outlined above. Not everybody in society has the freedom to engage in and influence media discourse. In terms of the ability of individuals and groups to engage in the public forum, Rawls' theory of justice is helpful for making an important distinction between liberty and its value. Liberty, according to Rawls, is the complete structure of the liberties of citizenship, whilst the worth of liberty is the value a liberty has for individuals and collectives depending upon their ability to advance their ends. For example, the value of freedom of speech is worth more to a radio-based 'shock jock' with the means of advancing their point of view than to an unemployed person lacking the capacities and opportunities to advance their interests through the media. Mainstream news favours the interests and values of those with a stake in the media business ahead of any competing ethical principle such as the public interest or reducing inequities. Here we take stakeholders to include advertisers, audiences, and those who work directly for media firms. As a result, the content of news stories, particularly within the commercial press, is typically slanted towards the interests of stakeholders, with consequent disenfranchisement of non-stakeholders. There are two main ways in which the mainstream media can have a negative impact on those who lack power and influence over the press: misrepresentation and exclusion of non-stakeholders.

(a) Misrepresentation of non-stakeholders. In the event of an influenza pandemic, media misrepresentation of the interests and claims of non-stakeholders, in particular the least well-off sections of the community, may be problematic. There is a risk that individuals and groups who protest current arrangements may be presented to audiences as disruptive and unhelpful to the situation. This has the potential to weaken compliance levels during a pandemic.
The misrepresentation of some groups may lead the public at large to view these groups as troublesome, leading to further marginalisation. If this occurs, then it is unlikely that these groups or individuals will embrace the notion of 'civic duty', which is an important aspect of accepting liberty-limiting arrangements. It will be concerning from an ethical standpoint if misrepresentative media coverage facilitates discrimination against certain groups, as happened in Canada where there was public boycotting of Chinese business interests after the outbreak of SARS was linked to a Chinese national. Thus it is not difficult to imagine that in the event of a pandemic, certain groups will be treated less than fairly by the media, such that the public will also treat these groups unfairly. Overall this inequality in the representation of points of view, claims and interests is likely to impact negatively on pandemic plans.

(b) Exclusion of non-stakeholders. The interests of non-stakeholders are not well represented in the mainstream media. This means that in a pandemic, their information needs may be largely ignored, and their interests unnoticed, by the wider society. The exclusion of some groups may lead to a lack of understanding about the legitimate claims of these groups. This will be damaging for pandemic plans, particularly if certain groups have justified claims. For example, it may be that arrangements for dealing with a pandemic are actually harmful for some individuals or groups. A greater likelihood of infection in communities that lack infrastructure could lead to demands for extra resources in the event of a public health crisis. If the claims and viewpoints of these groups are excluded from the press, then wider society simply will not understand what those claims and views are and how they might contribute to more effective handling of pandemics.

It is important to note here that we do not want to develop an account of how the media ownership model could, or in fact should, be restructured in order to overcome the problems that we have identified. Rather, we see it as important to highlight the specific problems that arise with current media arrangements and that will, in the event of a flu pandemic, harm the vulnerable, despite any public perceptions that a privately owned press is a free and independent press. It is of course possible that privately owned media may act out of self-interest to promulgate effective communication, or be persuaded to act with benevolence. However, we suggest that more concrete action from pandemic planners and governments will be necessary to ensure that communications are equitable. Having outlined how mainstream media may undermine pandemic planning, we now look in more detail at the effect that media bias may have on disadvantaged groups and individuals.

Living conditions and community infrastructure both have a bearing on how susceptible to infection a given community may be, and on how well prepared and equipped a given community is to deal with infection. Situations of socio-economic disadvantage facilitate transmission of infectious diseases, as we have seen to date with the patterns of emergence and transmission of both SARS and bird flu. Given the potential for an increased burden of disease amongst the disadvantaged, it may be particularly harmful for the effective implementation of pandemic plans if less well-off sections of the community and vulnerable groups are not given a voice through the media.
This increased vulnerability to infection places a disproportionate amount of responsibility on the disadvantaged to act in ways that will not spread illness, adding to the moral imperative to support these groups through equitable communications. The increased risk of infection faced by disadvantaged groups is likely to put them in a position whereby they become subjects of news. Given the above concerns regarding the fair representation of disadvantaged communities in the media, a pandemic may create a climate of news coverage that misrepresents, stigmatises and excludes the disadvantaged or vulnerable. We already have experience of this, for example, with early news stories regarding HIV/AIDS that contributed to stereotypical and harmful perceptions of the homosexual community, as well as leading to a lack of understanding by society at large as to how the virus is contracted. The potential for a pandemic outbreak to make the worst off even worse off must be a consideration in structuring an ethical approach to communications.

As the main communicative force in our society, the media will play a central role in communicating the ethical underpinnings of arrangements and decisions; as such, the media will contribute to and influence how the public perceives the fairness of measures such as priority vaccination and distribution of resources. As well, the press will influence the public's judgement of how well state directives protect or have protected the public from harm. However, inequalities in control over media content suggest that the public may well be given a biased interpretation of the effectiveness of a given plan in safeguarding the collective interests of society, with the risk that pandemic reporting will favour the interests of wealthier sections of the community. In the event of an influenza pandemic, already vulnerable groups and communities will not only be in a position of greater risk with regard to infection; existing inequalities in media communications and infrastructure will further compound their vulnerability. By addressing these inequalities, it is possible to identify an ethical approach for communications about pandemic plans. In turn, addressing inequalities in communications means that pandemic plans are less likely to be undermined by groups and individuals not complying because their information needs have been ignored and their interests and points of view have not been fairly represented. The box below lists four features of ethical communication strategies.

Box: Features of ethical communication strategies
- Equity in access to information
- Active redress of existing media inequities
- Decrease extra burdens on the disadvantaged
- Increase information, legitimacy and trust

Taking these into account, we believe it is possible to implement pandemic plans with greater efficiency, effectiveness and compassion. We suggest that if policy makers and pandemic planners attend to inequalities in communication, this will help to avoid unnecessary disaster and spreading of disease, and also ensure that disadvantaged individuals and groups are not made more disadvantaged in the event of a public health crisis, as occurred in New Orleans.

Endnotes. Earlier versions of this paper were presented at the World Congress of Bioethics, Beijing, China, and at the Australasian Bioethics Association annual conference, Brisbane. We are grateful for comments received at these conferences, and from the anonymous reviewers.
- One cited recommendation is that arrangements be "… monitored, especially with respect to prospects for providing fair benefits to, and avoiding undue burdens on, disadvantaged groups, so that corrective adjustments can be made in a timely manner".
- For example, see the European Union public health influenza website, which offers access to flu plans detailing containment strategies, including liberty-limiting arrangements: http://ec.europa.eu/health/ph_threats/com/influenza/influenza_en.htm
- Reducing the impact of the next influenza pandemic using household-based public health interventions.
- Strategies for mitigating an influenza pandemic.
- The public's response to severe acute respiratory syndrome in Toronto and the United States.
- World Health Organisation, Outbreak Communication Guidelines.
- Medical countermeasures for pandemic influenza: ethics and the law.
- See, for example, the WHO Outbreak Communication Guidelines, op. cit., which emphasise the role of trust and transparency in the successful implementation of pandemic plans.
- Clinical decision making during public health emergencies: ethical considerations.
- Public communication, risk perception, and the viability of preventative vaccination against communicable diseases.
- Preparing for an influenza pandemic.
- Communicating the risks of bioterrorism and other emergencies in a diverse society: a case study of special populations in North Dakota.
- After the Flood, Oxford University Press.
- It will be important for people to accept liberty-limiting arrangements, and this often involves the public viewing compliance as a civic responsibility or duty. See, for example, the Toronto Joint Centre for Bioethics Pandemic Influenza Working Group report, Stand on Guard for Thee: Ethical Considerations in Preparedness Planning for Pandemic Influenza, Toronto.
- See also Schram J, 'How popular perceptions of risk from SARS are fermenting discrimination'.
- Exploring Journalism Ethics, Sydney: University of New South Wales Press.
- The Bellagio Statement of Principles highlights the need to make available accurate, up-to-date and easily understood information about avian and human pandemic influenza for disadvantaged groups; in particular, Principle V states …

authors: Liu, Ming; Kong, Jian-Qiang · title: The Enzymatic Biosynthesis of Acylated Steroidal Glycosides and Their Cytotoxic Activity · journal: Acta Pharmaceutica Sinica B

Herein we describe the discovery and functional characterization of a steroidal glycosyltransferase (SGT) from Ornithogalum saundersiae and a steroidal glycoside acyltransferase (SGA) from Escherichia coli, and their application in the biosynthesis of acylated steroidal glycosides (ASGs). Initially, an SGT gene, designated OsSGT, was isolated from O. saundersiae. OsSGT-containing cell-free extract was then used as the biocatalyst to react with structurally diverse drug-like compounds; the recombinant OsSGT was shown to be active against both β- and β-hydroxyl steroids. Unexpectedly, in the course of identifying OsSGT, we found that the bacterial lacA gene of the lac operon actually encodes an SGA, specifically catalyzing acetylation of the sugar moieties of steroid β-glucosides.
Finally, a novel enzymatic two-step synthesis of two ASGs, acetylated testosterone O-β-glucosides (AT-β-Gs) and acetylated estradiol O-β-glucosides (AE-β-Gs), from the abundantly available free steroids was developed using OsSGT and EcSGA as the biocatalysts. The two-step process is characterized by EcSGA-catalyzed regioselective acylation of all hydroxyl groups on the sugar unit of unprotected steroidal glycosides (SGs) in the late stage, thereby significantly streamlining the synthetic route towards ASGs and forming four monoacylates. Improved cytotoxic activity of one acetylated testosterone O-β-glucoside towards seven human tumor cell lines was thereby observed.

Steroidal glycosides (SGs) are characterized by a steroidal skeleton glycosidically linked to sugar moieties, which can be further acylated with aliphatic and aromatic acids, thus forming complex acylated steroidal glycosides (ASGs). The resulting steroidal glycoside esters (SGEs) exhibit a wide variety of biological activities, such as a cholesterol-lowering effect, anti-diabetic properties, anti-complementary activity, immunoregulatory functions and anti-cancer actions, which make ASGs promising compounds with pharmaceutical potential. Numerous methods, including direct extraction, chemical synthesis and biosynthesis, have been developed to obtain acylated steroidal glycosides. Direct extraction from various organisms is one of the main methods for obtaining ASGs; however, the content of ASGs in natural sources is usually low. Moreover, the extraction routes are highly time-consuming and require laborious purification procedures, resulting in poor yields and/or low purity of the final products. The production of ASGs has also been achieved by chemical synthesis, but these efforts often encounter a fundamental challenge, namely the regioselective acylation of a single hydroxyl group of unprotected SGs in the late stage of the synthesis. SGs generally possess multiple hydroxyl groups of similar reactivity, and regioselective acylation of a particular one of them generally requires multi-step protection/deprotection procedures, which make the synthetic pathways to these SGEs costly, wasteful, long and time-consuming, and result in low overall yields.

The biosynthesis of ASGs from free steroids by enzymatic catalysis is expected to reduce the number of protection/deprotection steps owing to the high selectivity of enzymes. In principle, the biosynthesis of ASGs comprises two steps. In the first reaction, the sugar moiety of a nucleotide-activated glycosyl donor is attached to the steroid, most commonly at one particular hydroxyl group, under the action of nucleotide-dependent SGTs. Glycosylation at this common position is well characterized, and a few steroidal β-glucosyltransferases have been isolated from diverse species; however, reports of SGTs specific for other positions of steroids are limited. The sugar moieties of the resultant SGs can then be acylated by SGAs to form ASGs in the second step. Compared with SGTs, surprisingly little is known about SGAs: to date, no SGA genes had been cloned, which in turn has limited the enzyme-mediated biosynthesis of ASGs. Hence, the isolation and functional characterization of SGA genes has become a prominent challenge for the bioproduction of ASGs.
Here we report the functional characterization of a plant-derived SGT with activity against both β- and β-hydroxyl steroids and of a bacterial SGA, together with their application in the biosynthesis of ASGs. Initially, the steroidal glycosyltransferase OsSGT was isolated from the medicinal plant O. saundersiae and shown to act on β- and β-hydroxyl steroids. Unexpectedly, in an effort to identify the function of OsSGT, we characterized the LacA protein (designated EcSGA) from E. coli as an SGA, converting testosterone O-β-glucoside (T-β-G) and estradiol O-β-glucoside (E-β-G) into the corresponding acylates. Further, under the synergistic action of OsSGT and EcSGA, the biosynthetic preparation of two acylated steroidal glycosides, acetylated T-β-Gs (AT-β-Gs) and acetylated E-β-Gs (AE-β-Gs), was achieved for the first time, yielding four monoacetylated steroidal glucosides, one for each free hydroxyl of the glucose unit (Scheme). The cytotoxic activities of these monoacylates were evaluated against seven human tumor cell lines (HCT, BEL, MGC, Capan, two NCI-H lines and A), and one acetylated testosterone O-β-glucoside displayed improved cytotoxic activity against these cell lines (Scheme).

The species O. saundersiae is a monocotyledonous plant rich in steroidal glycosides, suggesting that it contains SGTs responsible for the glycosylation of steroidal aglycones; it was therefore selected as the candidate plant for SGT isolation, and its transcriptome was sequenced with the aim of isolating genes encoding SGTs. The raw reads generated from transcriptome sequencing were filtered to remove dirty reads containing adapters or unknown or low-quality bases, and the retained clean reads were assembled with Trinity into longer unigenes, yielding an RNA-seq unigene database. These unigenes were then aligned to publicly available protein databases for functional annotation, and unigenes displaying the highest sequence similarity to SGTs were retrieved. One unigene was retrieved from the database on account of its high similarity to SGTs (Supplementary Information Fig. S). ORF Finder analysis showed that this unigene contained a complete open reading frame (ORF) beginning with an ATG start codon and ending with a TGA stop codon, flanked by 5′- and 3′-untranslated regions (UTRs); this unigene was therefore selected for further investigation.

To verify the identity of the unigene, a nested PCR assay was carried out to amplify the cDNA corresponding to its ORF using gene-specific primers (Supplementary Information Table S). A band of the expected size was obtained on agarose gel electrophoresis (Supplementary Information Fig. S). The amplicon was inserted into the pEASY™-Blunt plasmid (Supplementary Information Table S) to form a recombinant vector for sequencing, and the sequence of the amplified product matched that of the unigene, confirming that the unigene is a bona fide gene in the O. saundersiae genome. The ORF encodes a polypeptide whose predicted molecular mass and isoelectric point were calculated from the deduced amino acid sequence. BLAST analysis of the deduced protein revealed predominant homology with sterol β-glucosyltransferases from Elaeis guineensis, Musa acuminata subsp. malaccensis and Anthurium amnicola.
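The ORF-finding and protein-parameter predictions described above are routine sequence operations. A minimal sketch of how they can be reproduced with Biopython is given below; the unigene sequence is a hypothetical placeholder, and this illustrates the general workflow rather than the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline): find the longest forward-strand ORF in an
# assembled unigene, translate it, and compute the predicted molecular weight and pI.
from Bio.Seq import Seq
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def longest_orf(dna: str) -> str:
    """Return the longest ATG...stop reading frame on the forward strand (three frames)."""
    best = ""
    for frame in range(3):
        i = frame
        while i < len(dna) - 2:
            if dna[i:i + 3] == "ATG":
                for j in range(i, len(dna) - 2, 3):
                    if dna[j:j + 3] in ("TAA", "TAG", "TGA"):
                        if j + 3 - i > len(best):
                            best = dna[i:j + 3]
                        break
            i += 3
    return best

unigene = "ATG" + "GCT" * 200 + "TGA"        # hypothetical placeholder sequence
orf = longest_orf(unigene)
protein = str(Seq(orf).translate(to_stop=True))
params = ProteinAnalysis(protein)
print(len(protein), "aa;",
      round(params.molecular_weight() / 1000, 1), "kDa;",
      "pI", round(params.isoelectric_point(), 2))
```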
The cDNA was therefore designated OsSGT and submitted to the GenBank library under an MF-series accession number.

Sequence analyses of OsSGT were first carried out to guide its expression and functional verification. No putative transmembrane domain was predicted in OsSGT by TMHMM (http://www.cbs.dtu.dk/services/TMHMM/), suggesting that OsSGT is a cytoplasmic SGT that might be expressed heterologously in E. coli in soluble form. Multiple alignment of OsSGT with other plant SGTs indicated that the middle and C-terminal parts of these SGTs are more conserved than the N-terminal region (Supplementary Information Fig. S), consistent with previous observations. Moreover, two conserved motifs, a putative steroid-binding domain (PSBD) and a plant secondary product glycosyltransferase (PSPG) box, were identified in OsSGT (Supplementary Information Fig. S). The PSBD lies in the middle part of OsSGT and is thought to be involved in binding steroidal substrates. The PSPG box lies close to the carboxy terminus; it is a characteristic "signature sequence" of UDP-glycosyltransferases and is believed to be responsible for binding the UDP moiety of the nucleotide sugar. The presence of the PSBD and PSPG boxes suggests that OsSGT is involved in secondary metabolism, catalyzing the transfer of UDP-sugars onto steroidal substrates and thereby forming steroidal glycosides. A phylogenetic tree based on the deduced amino acid sequences of OsSGT and other SGTs was generated with MEGA.

[Scheme: An enzymatic two-step synthesis of AT-β-Gs (b–e) and AE-β-Gs (b–e) from the free steroids testosterone and estradiol. Firstly, two SGs, T-β-G (a) and E-β-G (a), are prepared from the corresponding steroidal substrates in the presence of the steroidal glycosyltransferase OsSGT from O. saundersiae. The resulting T-β-G (a) is then regioselectively acetylated by the acyltransferase EcSGA from E. coli, yielding four monoacetylated steroidal glucosides (b–e); likewise, E-β-G (a) is acetylated by EcSGA to form four monoacetylated products (b–e).]

As shown in Supplementary Information Fig. S, all selected SGTs clustered into four clades (Mon, Di, Ba and Fun), comprising SGTs from monocots, dicots, bacteria and fungi, respectively. OsSGT fell within the Mon clade, indicating that it is most similar to SGTs from monocots.

OsSGT was then inserted into a pET expression vector to yield the recombinant construct pET-OsSGT (Supplementary Information Table S), which was transformed into Transetta(DE3) (TransGen, Beijing, China) for heterologous expression. Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) indicated that most of the expressed OsSGT protein was present as insoluble inclusion bodies, which are generally devoid of bioactivity. Chaperone proteins are known to assist protein folding and thereby increase the yield of active protein; the chaperone plasmid pGro7 (Takara Biotechnology Co., Ltd., Dalian, China) was therefore co-expressed with pET-OsSGT in BL21(DE3) (TransGen, Beijing, China) to facilitate soluble expression of OsSGT. As shown in Supplementary Information Table S, pGro7 carries two genes encoding the chaperone proteins GroES and GroEL.
Under the synergistic action of GroES and GroEL, an intense band of the expected apparent molecular mass was present in the crude extract of BL21(DE3)[pET-OsSGT+pGro7], but not in the crude proteins of the control strain BL21(DE3)[pET+pGro7] (Supplementary Information Fig. S). Immunoblot analysis with an anti-polyhistidine-tag antibody showed a corresponding band, whereas the control extract did not cross-react with the antibody (Supplementary Information Fig. S). These data collectively indicated that OsSGT was successfully expressed in E. coli in soluble form, in accord with the predicted soluble expression.

To identify the activity of OsSGT, the OsSGT-containing crude protein was used as the biocatalyst for glycosylation reactions. Each member of the acceptor library (Fig. and Supplementary Information Fig. S) was first assessed as a sugar acceptor (Supplementary Information Table S). Of the substrates, only steroids were glucosylated by OsSGT, forming the corresponding monoglucosides (Figs. and Supplementary Information Figs. S). The ten reactive steroids comprised seven β-hydroxysteroids, two β-hydroxysteroids and one β,β-dihydroxysteroid, indicating that OsSGT is an SGT active towards both classes of hydroxysteroids, consistent with the bioinformatic predictions (Supplementary Information Figs. S). In fact, reports of SGTs with glycosylation activity at steroid positions other than the common one are limited, and only three SGTs from yeast had been verified to act on both classes; OsSGT is therefore, to our knowledge, the first plant SGT with selectivity towards both β- and β-hydroxysteroids (Figs. and Supplementary Information Figs. S). However, the glycosylation activity of OsSGT towards the β-hydroxyl group was lost when an additional hydroxyl group (as in the β-, β-, β- or α-hydroxytestosterones), or even an additional methyl group (methyltestosterone and its derivatives), was present on the testosterone scaffold; no glycosylated products were generated from such substrates. Moreover, OsSGT showed no activity towards the other compounds tested, including steroids lacking the reactive hydroxyl groups, flavonoids, alkaloids, triterpenoids, phenolic acids and coumarins (Supplementary Information Fig. S). Among the reactive hydroxysteroids, dehydroepiandrosterone gave the highest conversion, followed by diosgenin, with the remaining compounds converted to a lesser extent (Fig.).

To produce sufficient glucosylated products for structural characterization, the OsSGT-mediated reactions were scaled up to preparative scale. The resultant glucosides were purified by HPLC and subjected to NMR analysis for structural elucidation. To determine the glycosylation sites, 13C NMR analyses of the corresponding aglycones were also performed (Supplementary Information Figs. S and Table S).

[Fig.: HPLC chromatograms of the reaction product of estradiol incubated with OsSGT protein (a) or without OsSGT (b); UV spectra of estradiol and the enzymatic product are shown in the upper panels. HPLC conditions are given in Supplementary Information Table S.]
The 13C NMR glycosylation shifts (Δδ = δ_glucoside − δ_aglycone) of these glycosides were then examined to ascertain the glycosylation positions (Table and Supplementary Information Tables S). The steroidal glycosides showed significant glycosylation shifts at one or other of the two reactive carbon positions, identifying them as the corresponding O-glycosides. For two of the glucosides, the location of the glucose group was confirmed by HMBC correlations between the relevant ring proton and the anomeric carbon (Supplementary Information Figs. S). The β-anomeric configuration of the D-glucose unit in the ten glucosides was determined from the large coupling constant of the anomeric proton H-1′ (Table and Supplementary Information Tables S). The structures of the glucosides were thus assigned as β-glucosides of the respective steroids on the basis of 1H NMR, 13C NMR, HSQC, HMBC and DEPT spectra (Table, Supplementary Information Figs. S and Tables S). These data collectively show that OsSGT is an inverting glycosyltransferase.

During the preparation of T-β-G (a), when the concentrated reaction mixture was separated by reversed-phase high-performance liquid chromatography (RP-HPLC), we unexpectedly found that, in addition to the major peak representing T-β-G (a), a minor peak (b) was present in the HPLC profile (Supplementary Information Fig. S). This minor product was also subjected to LC-MS analysis; surprisingly, its [M + H]+ value was higher than that of monoglycosylated testosterone (Supplementary Information Fig. S), hinting that the minor product might be an acetylated testosterone glucoside. To characterize its exact structure, the minor product was prepared in bulk for NMR experiments (Supplementary Information Figs. S); details of the 1H and 13C NMR spectra are tabulated in the Table. The minor product was thus identified as a monoacetylated testosterone O-β-glucoside (AT-β-G, b).

To test whether the acetylated product b was derived from glucoside a, purified glucoside a was incubated with crude extract of E. coli expressing either the empty pET vector or pET-OsSGT. Under both conditions product b was observed (Supplementary Information Fig. S); in contrast, no acetylated product b was detected in E. coli lysate without added substrate a (Supplementary Information Fig. S). We therefore inferred that testosterone is first glycosylated at its β-hydroxyl group by OsSGT to form T-β-G (a), which is then selectively acetylated on the sugar moiety by a soluble bacterial acetyltransferase to yield the monoacetylated glucoside b (Supplementary Information Fig. S). Likewise, two metabolites, E-β-G (a) and a monoacetylated estradiol O-β-glucoside (AE-β-G, b), were detected in the concentrated OsSGT-catalyzed reaction mixture of estradiol (Supplementary Information Figs. S and Tables S). These data collectively revealed that E. coli cells contain at least one SGA specific for the acetylation of steroidal β-glycosides.
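The glycosylation-site assignments earlier in this passage rest on simple per-carbon shift arithmetic (Δδ = δ_glucoside − δ_aglycone). A minimal sketch of that comparison is given below; the chemical-shift values are made-up placeholders rather than the reported data.

```python
# Minimal sketch of the glycosylation-shift comparison: the carbon showing the largest
# downfield shift on glucosylation is taken as the likely glycosylation position.
aglycone  = {"C-3": 71.3, "C-5": 141.0, "C-17": 81.5}   # 13C shifts in ppm, hypothetical
glucoside = {"C-3": 71.5, "C-5": 140.8, "C-17": 88.9}   # hypothetical values

shifts = {c: round(glucoside[c] - aglycone[c], 1) for c in aglycone}
site = max(shifts, key=lambda c: abs(shifts[c]))
print(shifts)                                            # e.g. {'C-3': 0.2, 'C-5': -0.2, 'C-17': 7.4}
print("largest glycosylation shift at", site, "-> likely glycosylation position")
```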
Moreover, the sugar donor promiscuity of OsSGT-catalyzed glycosylation was investigated: β-sitosterol and testosterone were chosen as sugar acceptors to react with the various sugar donors listed in the supplementary experimental section. The results demonstrated that neither β-sitosterol nor testosterone reacted with any UDP-activated nucleotide other than UDP-Glc under the action of OsSGT, indicating that OsSGT is specific for UDP-Glc.

To characterize the genes encoding SGAs, the first task was to analyze the genome sequence of BL21(DE3), which is publicly available in the NCBI database (a CP-series accession). This strain contains a number of putative acetyltransferase genes, of which lacA, maa and wecH were predicted to encode O-acetyltransferases (Supplementary Information Table S). As shown in Supplementary Information Fig. S, further sequence analysis revealed that the WecH protein is membrane-bound, with multiple membrane-spanning helices, inconsistent with the observation above that the candidate acetyltransferase is a soluble protein in the bacterial lysate. LacA and Maa were predicted to have no transmembrane helices, suggesting that they occur in soluble form in the bacterium; the remaining two genes, lacA and maa, were therefore investigated further.

First, the entire ORFs of the two genes were isolated from the bacterial genome using gene-specific primers (Supplementary Information Fig. S and Table S); the lengths of the lacA and maa ORFs, the sizes of the encoded polypeptides and the predicted molecular weights of the two proteins were derived from the sequences. The two genes were then inserted into the pET vector to generate two recombinant constructs, which were introduced into BL21(DE3) for heterologous expression. After isopropyl-β-D-thiogalactoside (IPTG) induction, accumulation of proteins of the expected sizes was observed in the lysates of the strains harboring pET-lacA or pET-maa (Supplementary Information Fig. S), and the presence of the bacterially expressed His-LacA or His-Maa fusion protein in the lysate was verified by western blot with an anti-His antibody (Supplementary Information Fig. S). The expressed LacA and Maa proteins were then purified to near homogeneity by affinity chromatography (Supplementary Information Fig. S). The purified His-LacA or His-Maa fusion protein was used as the biocatalyst to react with T-β-G (a) and acetyl-CoA, and the reactions were monitored by HPLC-UV/MS using Method E (Supplementary Information Table S). As shown in the figure (upper panel), the acetylated glucoside b was detected in the LacA-catalyzed bioconversion of T-β-G (a), attesting that lacA encodes an SGA; in contrast, no new products appeared in the Maa-mediated reaction. lacA was therefore designated EcSGA (E. coli steroidal glucoside acetyltransferase) hereinafter and submitted to GenBank under an MF-series accession number.

It is generally accepted that hydrolases and acyltransferases are the two classes of enzymes responsible for acylation of SGs. The enzymatic acylations reported to date have largely been performed by hydrolases such as lipases, and no genes encoding SGAs had previously been isolated; to our knowledge, EcSGA is therefore the first steroidal glycoside acyltransferase shown to attach acyl groups to the hydroxyl groups of steroidal β-glycosides. EcSGA also converted another steroidal β-glycoside, E-β-G (a), into the corresponding acylate (b; Fig., upper panel), whereas the other glucosides listed in Supplementary Information Fig. S could not be acetylated, testifying that EcSGA is specific for steroidal β-glycosides.
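The acetyltransferase-candidate screen of the BL21(DE3) genome annotation described above can be reproduced with a few lines of Biopython. A rough sketch is shown below; the local file name is a hypothetical placeholder for the public GenBank record, and this is an illustrative workflow rather than the authors' stated method.

```python
# Minimal sketch: scan a local GenBank copy of the BL21(DE3) genome for CDS features whose
# product annotation mentions "acetyltransferase", collecting gene names and coordinates.
from Bio import SeqIO

candidates = []
for record in SeqIO.parse("bl21_de3.gb", "genbank"):      # hypothetical local file name
    for feature in record.features:
        if feature.type != "CDS":
            continue
        product = feature.qualifiers.get("product", [""])[0].lower()
        gene = feature.qualifiers.get("gene", ["?"])[0]
        if "acetyltransferase" in product:
            candidates.append((gene, product,
                               int(feature.location.start), int(feature.location.end)))

for gene, product, start, end in candidates:
    print(f"{gene}\t{product}\t{start}-{end}")
```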
Moreover, the acyl donor promiscuity of EcSGA was investigated: T-β-G (a) or E-β-G (a) was used as the acyl acceptor with different acyl donors (acetyl-CoA, succinyl-CoA, arachidonoyl-CoA, palmitoyl-CoA and acetoacetyl-CoA) under the action of purified EcSGA. Neither T-β-G (a) nor E-β-G (a) reacted with any of these acyl donors except acetyl-CoA, indicating that EcSGA has strict donor selectivity.

On careful inspection of the HPLC profile of the EcSGA-catalyzed reaction mixture, we found several further minor peaks adjacent to the major product b (Fig., upper panel). These minor peaks eluted so closely that they could not be resolved, so an improved HPLC method (Method I, Supplementary Information Table S) was developed to separate them. As shown in the figure (lower panel), besides the major product b, three additional minor peaks at later retention times were observed.

[Fig.: Lower panel, HPLC profile of the acetylated products of glucoside a separated by chromatographic Method I (Supplementary Information Table S).]

LC-MS measurement of these minor peaks showed that all of them share the same [M + Na]+ value, suggesting that they are monoacetylated testosterone glucosides (Supplementary Information Fig. S). Likewise, E-β-G (a) formed four acetylated glucosides when purified EcSGA was used as the biocatalyst (Fig.); besides the well-characterized AE-β-G (b), the other three products were determined to be monoacetylated estradiol glucosides on the basis of their MS data (Supplementary Information Fig. S). It was therefore assumed that EcSGA can introduce an acyl group onto different hydroxyl groups of steroidal β-glycosides, generating a set of monoacetylated products (Figs.).
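The LC-MS assignments above follow simple adduct-mass bookkeeping: a monoacetylated glucoside should sit one hexose unit plus one acetyl unit above the free aglycone. A minimal sketch with standard monoisotopic masses is given below; the printed values are expected masses under these assumptions only, not the reported m/z figures.

```python
# Minimal sketch of the adduct-mass arithmetic behind assigning monoacetylated glucosides:
# aglycone + hexose residue (+C6H10O5) + acetyl (+C2H2O), then protonated or sodiated.
TESTOSTERONE = 288.2089   # C19H28O2, monoisotopic
HEXOSE_UNIT  = 162.0528   # mass added on glucosylation (condensation)
ACETYL_UNIT  = 42.0106    # mass added on O-acetylation
PROTON, SODIUM = 1.0073, 22.9892

glucoside = TESTOSTERONE + HEXOSE_UNIT
acetyl_glucoside = glucoside + ACETYL_UNIT

print("T-glucoside [M+H]+         ~", round(glucoside + PROTON, 3))
print("acetyl-T-glucoside [M+H]+  ~", round(acetyl_glucoside + PROTON, 3))
print("acetyl-T-glucoside [M+Na]+ ~", round(acetyl_glucoside + SODIUM, 3))
# A peak ~42 Da above the glucoside ion is consistent with a single extra acetyl group.
```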
To obtain sufficient amounts of the monoacetylated testosterone glucosides for structural characterization and cytotoxicity assays, an enzymatic two-step process for the AT-β-Gs (b–e) was developed (Scheme). Whole-cell biotransformation was explored first because of its simple catalyst preparation. When testosterone was incubated with the engineered strain BL21(DE3)[pET-OsSGT+pGro7], no new products were detected; in contrast, when T-β-G (a) was added to the same whole-cell system, the acetylated glucoside b appeared in the reaction mixture (Supplementary Information Fig. S). These data indicated that testosterone is not taken up by the cells under these conditions, whereas glycosylation markedly improves transport of the substrate into the cell. The formation of AT-β-Gs (b–e) from testosterone with a single whole-cell biocatalyst is therefore infeasible, and a two-step process was established to address this limitation: the OsSGT-catalyzed reaction was performed in the membrane-free crude cell extract of BL21(DE3)[pET-OsSGT+pGro7], while the EcSGA-mediated acetylation was conducted in the whole-cell system of BL21(DE3)[pET-EcSGA]. The optimal pH and temperature of the OsSGT-catalyzed reaction with the cell-free extract as the biocatalyst were determined first, the optimum lying at a mildly alkaline pH (Supplementary Information Fig. S). The microlitre-scale screening reaction was then scaled up to a preparative volume, in which testosterone was glycosylated to T-β-G (a) under the optimized conditions (Scheme).

The resultant T-β-G (a) was subsequently used as the substrate in the scale-up of the whole-cell system of BL21(DE3)[pET-EcSGA] under the optimized pH and temperature, and the resulting reaction mixture was analyzed by high-performance liquid chromatography–solid-phase extraction–NMR spectroscopy (HPLC-SPE-NMR). Comparison of the 1H and 13C NMR spectra of the minor products c–e with those of b suggested that compounds c–e share the same framework as b, the structural difference being the position of the acetyl group. For compound c, the location of the acetyl group was established from the HMBC correlation between the relevant glucose proton and the acetyl carbonyl carbon (Supplementary Information Fig. S), and c was accordingly assigned as a further positional isomer of AT-β-G. The isolated glucose proton of compound d exhibited long-range correlations with the carbonyl carbon (Supplementary Information Fig. S), and the corresponding proton of compound e likewise showed long-range correlations with the acetyl carbonyl carbon, as revealed by its HMBC spectrum (Supplementary Information Fig. S). These data supported the elucidation of d and e as the two remaining positional isomers of AT-β-G. Hence, the three trace products were assigned as monoacetylated testosterone glucosides (c–e) on the basis of their respective NMR data (Tables and Supplementary Information Figs. S).

These data indicate that EcSGA can introduce an acetyl group onto the primary hydroxyl group and onto each secondary hydroxyl group of the glucose unit of T-β-G (a), yielding four monoacylates without formation of diacylates (Fig. and Scheme). Because the primary hydroxyl is the most reactive of the four hydroxyl groups in T-β-G (a), acetylation took place preferentially at that position, giving the corresponding O-acylate as the predominant product (Fig. and Scheme). EcSGA also regioselectively acetylated each secondary hydroxyl of T-β-G (a) in the presence of the primary hydroxyl group, giving the three remaining monoacylates (c–e) in lower yields, and these results define a clear order of reactivity among the four hydroxyls. Likewise, four monoacylates were formed in the EcSGA-catalyzed acylation of E-β-G (a, Fig.): in addition to the well-characterized major product b, three trace products c–e were observed. Because of their trace amounts, these monoacetylated estradiol glucosides were not further enriched for NMR analysis; however, from the catalytic behaviour of EcSGA towards T-β-G (a) it is reasonable to infer that they are the analogous positional monoacetates, with the same order of hydroxyl reactivity (Fig.).

Regioselective acylation of a single one of the multiple hydroxyl groups in SGs is the major obstacle to the synthesis of SGEs, and direct methods for site-selective acylation of unprotected SGs have rarely been documented. In this contribution, we achieved the regioselective acylation of fully unprotected SGs using EcSGA as the biocatalyst, leading to an extremely short synthesis of ASGs.

The acetylated testosterone glucosides b–e, together with the acetylated estradiol glucoside b, were tested for in vitro cytotoxicity against seven human cancer cell lines (HCT, BEL, MGC, Capan, two NCI-H lines and A).
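Cytotoxicity screens of this kind are usually summarized as IC50 values obtained from a dose–response fit. A minimal sketch using a four-parameter logistic model and SciPy's curve_fit is shown below; the concentrations and viabilities are made-up placeholders, and this is one common fitting approach rather than the authors' stated procedure.

```python
# Minimal sketch of estimating an IC50 from dose-response data with a 4-parameter logistic fit.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])        # umol/L, hypothetical
viability = np.array([98, 95, 88, 70, 45, 22, 10])    # % of untreated control, hypothetical

popt, _ = curve_fit(four_pl, conc, viability, p0=[0, 100, 5, 1], maxfev=10000)
print(f"fitted IC50 ~ {popt[2]:.1f} umol/L (Hill slope = {popt[3]:.2f})")
```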
The results indicated that one of the acetylated testosterone glucosides (d) exhibited a broad spectrum of cytotoxic activity against the tested cell lines (Table). The acetylated glucoside b displayed much weaker cytotoxicity than d, but showed mild cytotoxicity against the human non-small-cell lung carcinoma line NCI-H (Table). In contrast, the control T-β-G (a) did not display significant cytotoxicity towards these cell lines. These findings show that the acyl groups of SGEs are important for their cytotoxicity, and direct regioselective acylation of SGs is therefore a powerful tool for the discovery of drug candidates.

Acylated steroidal glycosides attracted our attention primarily because of their biological and pharmacological significance. Two enzymes, SGTs and SGAs, are responsible for the biosynthesis of ASGs. To synthesize ASGs, the first requirement is a glycosyltransferase capable of catalyzing the formation of SGs from the abundantly available free steroids. O. saundersiae, a monocotyledonous plant rich in acylated steroidal glycosides and therefore likely to contain SGTs and SGAs responsible for their biosynthesis, was selected as the candidate plant for SGT isolation, and its transcriptome was sequenced to facilitate gene discovery. OsSGT was then isolated from O. saundersiae on the basis of the RNA-seq data. Subsequently, OsSGT-containing cell-free extract was used as the biocatalyst for glycosylation of structurally diverse drug-like scaffolds. The use of cell-free extract offers several advantages: unlike laborious purification procedures, its preparation is simple and time-saving, and compared with purified enzymes, recombinant proteins in a crude-extract-based system are more stable.

Steroidal glycosides are one of the main sources of innovative drugs, and SGT-catalyzed glycodiversification of steroids can expand molecular diversity, thereby facilitating the discovery of pharmacological leads; the search for SGTs with catalytic promiscuity may thus provide potent biocatalysts for glycodiversification. A library of structurally diverse drug-like molecules was therefore used to probe the substrate flexibility of recombinant OsSGT. In vitro enzymatic analyses revealed that OsSGT is active against a range of steroids, including phytosterols, steroid hormones, a steroidal sapogenin and a cardiac aglycone (Figs. and Supplementary Information Figs. S), exhibiting a wider substrate range than previously identified plant SGTs. To investigate the regioselectivity of OsSGT, diverse steroids were selected as sugar acceptors for OsSGT-catalyzed glycosylation. As illustrated in the figures and Supplementary Information, OsSGT specifically attacked hydroxyl groups at two positions, showing no activity towards hydroxyl groups at any of the other positions examined.
When steroids having two potentially reactive hydroxyl groups, such as α-hydroxypregnenolone or androstenediol, were used as substrates for OsSGT-assisted glycosylation, only glycosides bearing a single glycosyl substituent at one of the two positions were detected in the reaction mixture, indicating that OsSGT exhibits prominent regioselectivity towards that hydroxyl in both substrates (Supplementary Information Figs. S). The stereoselectivity of OsSGT was also assessed in this study. Estradiol and α-estradiol differ in the configuration of the hydroxyl group at one position; when each of the two compounds was allowed to react with OsSGT, only β-configured glycosides were generated (Fig.). Likewise, OsSGT showed β-selective glycosylation towards the other reactive hydroxyl position. Cumulatively, these results reveal that OsSGT-catalyzed glycosylations proceed in a regio- and stereoselective fashion.

One of the most striking findings of this study is the characterization of the bacterial LacA protein as a steroidal glycoside acyltransferase. It is well known that lacA is one of the three structural genes (lacZ, lacY and lacA) of the lac operon. The functions of lacZ and lacY are well characterized: lacZ encodes a β-galactosidase, catalyzing the cleavage of lactose into glucose and galactose, and lacY encodes a lactose permease responsible for lactose uptake. The third structural protein, encoded by the lacA gene, was initially inferred to be an acetyltransferase, but its exact action has remained in doubt until now. In this investigation, in an effort to identify the function of OsSGT, we unexpectedly characterized the LacA protein from E. coli as an SGA. In vitro enzymatic analyses revealed that LacA specifically catalyzes the attachment of acyl groups to the hydroxyl groups of the sugar moieties of steroidal β-glycosides (Figs.). Although we have no evidence for the role of LacA in vivo, these findings may provide a starting point for identifying the exact activity of the LacA protein in lactose metabolism.

The bottleneck in the enzymatic synthesis of ASGs has been the lack of well-characterized SGAs. The successful characterization of LacA makes it the first SGA, and the protein was accordingly designated EcSGA. A novel enzymatic two-step synthesis of AT-β-Gs and AE-β-Gs from the abundantly available free steroids, under the sequential actions of the steroidal glycosyltransferase OsSGT from O. saundersiae and EcSGA, was thereby achieved. The two-step process is characterized by acyltransferase-catalyzed regioselective acylation of all hydroxyl groups of unprotected SGs in the late stage, significantly streamlining the synthetic route towards ASGs and forming four monoacylates.

Regioselective acylation can expand molecular diversity, thereby facilitating the discovery of pharmaceutical leads. In this investigation, EcSGA-catalyzed acetylation of two steroid β-glucosides (T-β-G and E-β-G) led to the production of eight new monoacylates. Furthermore, the cytotoxic activities of these monoacylates were tested, and one acetylated T-β-G displayed improved activity towards seven human tumor cell lines, suggesting that this compound has promising pharmacological potential. This study therefore reports for the first time a novel synthetic process for the green preparation of acylated steroidal glycosides of medicinal interest.
saundersiae was identified as the first plant sgt with selectivity towards both β- and β-hydroxyl steroids. one of the most striking findings of this study is the characterization of the bacterial laca protein as a steroidal glycoside acyltransferase, catalyzing the attachment of acyl groups onto the hydroxyl groups of steroidal β-glycosides. a novel enzymatic two-step synthesis of at- β-gs and ae- β-gs from abundantly available free steroids under the sequential actions of ossgt and ecsga was achieved. the two-step process is characterized by acyltransferase-catalyzed regioselective acylation of all hydroxyl groups of unprotected sgs in the late stage, thereby significantly streamlining the synthetic route towards asgs and forming four monoacylates. moreover, the cytotoxic activities of these monoacylates were tested, and -at- β-g was observed to display improved activities towards seven human tumor cell lines. this study therefore reports, for the first time, a novel synthetic process for the green preparation of acylated steroidal glycosides of medicinal interest. in this contribution, four compound libraries, namely sugar acceptor, sugar donor, acyl acceptor and acyl donor libraries, were provided for the enzyme-mediated reactions. the compounds listed in fig. and supplementary information fig. s , which include diverse structures such as steroids , flavonoids ( ) ( ) ( ) ( ) ( ) ( ) ( ) , alkaloids ( ) ( ) ( ) ( ) ( ) ( ) ( ) , triterpenoids ( - ), phenolic acids ( - ) and coumarins ( - ), were used as the sugar acceptors for ossgt -catalyzed glycosylation reactions ( fig. and supplementary information fig. s ). the sugar donors consist of seven udp-activated nucleotides, among which udp-d-glucose (udp-glc), udp-d-galactose (udp-gal), udp-d-glucuronic acid (udp-glca) and udp-n-acetylglucosamine (udp-glcnac) were obtained from sigma-aldrich co., llc. (st. louis, mo, usa). udp-d-xylose (udp-xyl), udp-l-arabinose (udp-ara) and udp-d-galacturonic acid (udp-gala) were synthesized by enzyme-mediated reactions in our laboratory [ ] [ ] [ ] . the acyl acceptor library is made up of the steroidal glucosides ( a- a) and other glucosides ( - ) listed in supplementary information fig. s . the acyl donor library includes acetyl-coa, succinyl-coa, arachidonoyl-coa, palmitoyl-coa and acetoacetyl-coa, all of which were purchased from sigma-aldrich co., llc. the other chemicals were of reagent or analytical grade, as available. the raw reads from the sequencing library of o. saundersiae were first filtered to obtain clean reads by discarding dirty reads containing adaptors or unknown or low-quality bases. these clean reads were subsequently assembled into longer unigenes with the assembly program trinity. the unigenes obtained by de novo assembly cannot be extended at either end. next, the unigenes were aligned with the blastx algorithm against protein databases such as nr, swiss-prot, kegg and cog (e-value < . ) for functional annotation. the unigenes displaying similarity to sgts were retrieved for further orf analysis. in short, unigenes with a complete orf and high similarity to sgts were selected as candidates for further investigation. to verify the authenticity of the candidate unigene, cdna isolation was performed using gene-specific primers in a nested pcr assay as previously described (supplementary information table s ) [ ] [ ] [ ] [ ] . the obtained amplicon was inserted into the peasy tm -blunt plasmid (transgen co., ltd., beijing, china) and then transformed into e.
coli trans -t competent cells for recombinant plasmid selection (supplementary information table s ). the resultant recombinant plasmid was isolated and subjected to nucleotide sequencing. the obtained cdna was designated ossgt for convenience. the bioinformatics analyses of ossgt , such as prediction of physicochemical properties, multiple sequence alignment and phylogenetic analysis, were performed as detailed in our previous reports [ ] [ ] [ ] [ ] . ossgt was amplified using gene-specific primers (supplementary information table s ) and the resulting pcr product was ligated into the ecori and hind iii sites of the pet- a(+) vector (novagen, madison, usa) using a seamless assembly cloning kit (clonesmarter technologies inc., houston, tx, usa) as described previously . the generated construct pet a-ossgt was transformed into e. coli strain transetta (de ) for expression as described previously . also, to improve heterologous expression of ossgt , pet a-ossgt was cotransformed into the e. coli bl (de ) strain together with the chaperone plasmid pgro (takara biotechnology co., ltd., dalian, china) as introduced by yin et al. the expression of ossgt was induced by iptg at a final concentration of . mmol/l. the expressed ossgt was checked by sds-page and western-blot analyses as described by guo et al. next, the bl (de ) [pet a-ossgt +pgro ] suspension cells were disrupted in a high-pressure homogeniser (apv- , albertslund, denmark) operated at bar. the disrupted cells were centrifuged at , rpm for min to discard the pellet. the resultant supernatant, namely the membrane-free crude extract, was used as the biocatalyst for steroidal glycosylation. after verification of the heterologous expression of ossgt , the crude extract containing the recombinant ossgt was applied as the biocatalyst in reactions with various sugar acceptors and donors ( fig. and supplementary information fig. s ). the total reaction mixture ( μl) contained mmol/l phosphate buffer (ph . ), a sugar acceptor ( mmol/l), a sugar donor ( mmol/l) and μl of crude ossgt protein. the reaction mixture was incubated at °c for h. the formation of glycosylated products was unambiguously determined by a combination of hplc-uv, hplc-ms and nmr as described previously , . (table : the cytotoxic activities of monoacylates against human tumor cell lines.) the determination conditions for hplc-uv are summarized in supplementary information table s . genomic dna was extracted from e. coli strain bl (de ) using a bacteriagen dna kit (cwbio co., ltd., beijing, china) according to the supplier's recommendations. the resulting genomic dna was then used as the template for pcr amplification to isolate the candidate sga genes using gene-specific primers (supplementary information table s ). the amplified pcr products were inserted into the peasy tm -blunt plasmid to generate recombinant vectors for sequencing verification. next, these o-acetyltransferase-encoding genes were heterologously expressed in bl (de ) as described above. sds-page and western-blot analyses of these recombinant proteins were conducted as for ossgt (see above). the recombinant o-acetyltransferase proteins were purified on ni-nta agarose columns according to the manufacturer's protocol. purified protein concentrations were determined using the bradford protein assay (bio-rad, hercules, ca, usa). the enzymatic activities of ecsgas were determined in μl of citrate buffer solution (ph . ) containing an acyl acceptor ( mmol/l) listed in supplementary information fig.
s , an acyl donor ( mmol/l) summarized in the chemicals section and the purified protein ( . μg). the reactions were incubated at °c for h, and then μl of methanol was added to terminate the reaction. the reaction mixture was monitored by hplc-uv (supplementary information table s ) and the structure of the generated product was determined by a combination of hplc-ms and nmr as reported by liu et al. . for the biotransformation of testosterone and testosterone- -o-β-glucoside using the whole-cell system, the engineered bl (de )[pet a-ossgt +pgro ] cells were harvested after iptg induction by centrifugation at , rpm for min and then resuspended in m medium to a cell density (od) of . . the substrate testosterone ( ) or testosterone- -o-β-glucoside (t- -o-β-g, a), at a final concentration of . mmol/l, was added to the m medium and incubation was continued at °c overnight. the formation of products was monitored by hplc analysis as described above. the effects of ph and temperature on the ossgt - and ecsga -catalyzed reactions were investigated. the crude extract of bl (de )[pet a-ossgt +pgro ] and the purified ecsga protein were applied as the biocatalysts in their respective reactions. the effects of ph on both reactions were determined in various buffers, including citric acid/sodium citrate buffer ( . mol/l, ph . - . ), na hpo /nah po ( . mol/l, ph . - . ) and na hpo /naoh buffer ( . mol/l, ph . - ). the influence of temperature was explored over a range of to °c at intervals of °c (ossgt -mediated glycosylation) or °c (ecsga -catalyzed acylation) in the standard reaction mixture described above. scale-up of the ossgt - and ecsga -catalyzed reactions was performed to obtain sufficient at- β-gs for structural characterization and subsequent cytotoxicity assays. initially, the μl ossgt -catalyzed reaction was scaled directly to ml, to which mg of testosterone ( ) was added and incubated with the crude cell extract at the optimal ph and temperature for h. the resultant reaction mixture was subjected to preparative hplc to isolate pure t- β-g, which was then used as the substrate in a ml ecsga -catalyzed reaction for at- β-gs production. structural characterization of at- β-gs was performed using the hplc-spe-nmr technique as described by liu et al., except for some modifications to the chromatographic conditions. hplc separation was carried out on an ymc-pack ph column ( μm, nm, mm × . mm) with isocratic elution of % water-trifluoroacetic acid (a, . %: . %, v/v) and % methanol (b) at a flow rate of ml/min. seven human cancer cell lines, hct- (human colon cancer cell line), bel (human hepatocellular carcinoma cell line), mgc (human gastric carcinoma cell line), capan (human pancreatic cancer cell line), and nci-h , nci-h and a (human lung cancer cell lines), were used in the cytotoxicity assay. the viability of the cells after treatment with various chemicals was evaluated using the mtt ( -( , -dimethylthiazol- -yl)- , -diphenyl tetrazolium bromide) assay performed as previously reported , . the inhibitory effects of the tested compounds on the proliferation of the cancer cells were reflected by their respective ic values ( % inhibitory concentration).
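as a brief aside on this last step, the ic values from an mtt readout are typically obtained by fitting a dose-response curve to the measured viabilities. the following is a minimal sketch of that calculation, assuming a four-parameter logistic model; the concentrations and viability values are hypothetical placeholders for illustration only, not data from this study:

```python
# minimal sketch of ic50 estimation from mtt viability data (hypothetical numbers)
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])          # μmol/L, hypothetical
viability = np.array([98.0, 95.0, 88.0, 70.0, 45.0, 22.0, 10.0])  # % of untreated control

params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 100.0, 5.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = params
print(f"estimated ic50 ~ {ic50:.2f} μmol/L (hill slope {hill:.2f})")
```

the fitted ic value is then compared across compounds and cell lines, as in the cytotoxicity table referenced above.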
steryl glycosides and acylated steryl glycosides in plant foods reflect unique sterol patterns rice bran extract containing acylated steryl glucoside fraction decreases elevated blood ldl cholesterol level in obese japanese men structural analysis of novel bioactive acylated steryl glucosides in pre-germinated brown rice bran a potent anticomplementary acylated sterol glucoside from orostachys japonicus therapeutic potential of cholesteryl o-acyl alpha-glucoside found in helicobacter pylori immunological functions of steryl glycosides cholestane glycosides with potent cytostatic activities on various tumor cells from ornithogalum saundersiae bulbs a new rearranged cholestane glycoside from ornithogalum saundersiae bulbs exhibiting potent cytostatic activities on leukemia hl- and molt- cells acylated cholestane glycosides from the bulbs of ornithogalum saundersiae synthesis of a cholestane glycoside osw- with potent cytostatic activity first total synthesis of an exceptionally potent antitumor saponin, osw- improved enzyme-mediated synthesis and supramolecular selfassembly of naturally occurring conjugates of β-sitosterol simple method for high purity acylated steryl glucosides synthesis regioselective diversification of a cardiac glycoside, lanatoside c, by organocatalysis sterol glycosyltransferases-the enzymes that modify sterols the functions of steryl glycosides come to those who wait: recent advances in plants, fungi, bacteria and animals sterol β-glucosyltransferase biocatalysts with a range of selectivities, including selectivity for testosterone molecular cloning and biochemical characterization of a recombinant sterol -o-glucosyltransferase from gymnema sylvestre r.br. catalyzing biosynthesis of steryl glucosides cloning and functional expression of ugt genes encoding sterol glucosyltransferases from saccharomyces cerevisiae, candida albicans, pichia pastoris, and dictyostelium discoideum glycosyltransferases in plant natural product synthesis: characterization of a supergene family chaperone coexpression plasmids: differential and synergistic roles of dnak-dnaj-grpe and groel-groes in assisting folding of an allergen of japanese cedar pollen, cryj , in escherichia coli regioselective enzymatic acylation of complex natural products: expanding molecular diversity structure, bioactivity, and chemical synthesis of osw- and other steroidal glycosides in the genus ornithogalum functional diversification of two ugt enzymes required for steryl glucoside synthesis in arabidopsis lac operon induction in escherichia coli: systematic comparison of iptg and tmg induction and influence of the transacetylase laca the lac operon galactoside acetyltransferase cdna isolation and functional characterization of udp-d-glucuronic acid -epimerase family from ornithogalum caudatum transcriptome-guided discovery and functional characterization of two udp-sugar -epimerase families involved in the biosynthesis of anti-tumor polysaccharides in ornithogalum caudatum transcriptome-guided gene isolation and functional characterization of udp-xylose synthase and udp-d-apiose/udp-dxylose synthase families from ornithogalum caudatum ait transcriptome-wide identification of sucrose synthase genes in ornithogalum caudatum transcriptomeenabled discovery and functional characterization of enzymes related to ( s)-pinocembrin biosynthesis from ornithogalum caudatum and their application for metabolic engineering functional analyses of ocrhs and ocuer involved in udp-l-rhamnose biosynthesis in ornithogalum caudatum 
interactions among sars-cov accessory proteins revealed by bimolecular fluorescence complementation assay cdna isolation and functional characterization of squalene synthase gene from ornithogalum caudatum probing steroidal substrate specificity of cytochrome p bm variants steroids hydroxylation catalyzed by the monooxygenase mutant - from bacillus megaterium bm metabolic engineering of escherichia coli for -butanol production cytotoxic cholestane glycosides from the bulbs of ornithogalum saundersiae supplementary data associated with this article can be found in the online version at doi: . /j.apsb. . . . key: cord- -a eedna authors: cohen, irun r. title: informational landscapes in art, science, and evolution date: - - journal: bull math biol doi: . /s - - - sha: doc_id: cord_uid: a eedna an informational landscape refers to an array of information related to a particular theme or function. the internet is an example of an informational landscape designed by humans for purposes of communication. once it exists, however, any informational landscape may be exploited to serve a new purpose. listening post is the name of a dynamic multimedia work of art that exploits the informational landscape of the internet to produce a visual and auditory environment. here, i use listening post as a prototypic example for considering the creative role of informational landscapes in the processes that beget evolution and science. invites us to explore it-physically or in reverie. a landscape of associations weaves a narrative. an informational landscape denotes an array of information that, like a natural landscape, invites exploration; the informational landscape too holds a narrative. here, i shall use listening post as an allegory to explore two other systems that deal with informational landscapes: biologic evolution and human understanding. waddington has used the term epigenetic landscape as a metaphor to describe the interactions between genes and environment that take place during embryonic development (waddington, ) . an informational landscape is quite another matter; this landscape represents the maze of information available for potential exploitation by a suitable system. as we shall see below, the informational landscape is a substrate for system-making. let's start by seeing how listening post exploits information to organize a work of art. listening post is formed by two components: an overt visual-auditory display designed by artist rubin and a covert process designed by mathematician hansen. the display is composed of a suspended rectangular grid of over brick-sized electronic monitors and a set of audio installations (fig. ) . the monitors pulsate with fragments of texts and compositions of light and the sound tracks pulsate with musical passages and artificial speech. the display triggers living associations: "a sense of cycles in life, day and night, the seasons. . .the information. . .lighting up as if the world is awakening from sleep and later changing to large sweeps." (eleanor rubin, personal communication; see, http//www.ellyrubin.com). the covert process that produces listening post's art is an algorithm developed by hansen, then a mathematician at bell laboratories. the algorithm randomly samples, in real time, the many thousands of chats, bulletin boards, and bits of message that flow dynamically through the cyberspace of the internet. this simultaneous mélange of signals, in the aggregate, is meaningless noise. 
the algorithm, by seeking key words and patterns of activity, artfully exploits this raw information to construct patterns of light, sound, and words that please human minds. the substrate of information flowing over the internet is in constant flux so the patterns presented by listening post are unpredictable at the fine microscopic scale; but at the macroscopic scale of sensible experience, listening post is manifestly pleasing. the patterned flow of sights, sounds, and words (seen and heard) arouses associations, memories, and feelings in the mind of the viewer-which is what we call art: art, whether visual, auditory, tactile, gustatory, or verbal, is an artifact made to arouse associations, memories, and feelings. listening post is an artful representation of the complexity of information flowing through the internet; listening post transforms the internet's massive informational landscape into a comprehensible miniature. two attributes of listening post illustrate our theme: the work feeds on information designed for other purposes and it survives by engaging our minds. above, i have used four words-information, signal, noise, and meaning-that usually need no definition; we use them every day. but they are important to this discussion, so i will define them. i use the word information in the sense defined by shannon ( ) : information is a just-so arrangement-a defined structure-as opposed to randomness (cohen, ) . information becomes a signal if you respond to the information. the meaning of the signal is how you respond to it (atlan and cohen, ) . for example, a chinese character bears information (it is arranged in a characteristic form), but the character is not a signal unless you know how to read chinese. what the chinese character means is the way the community of chinese readers use the character and respond to it. information, then, is an intrinsic property of form. in biology, form is expressed by the characteristic structures of molecules, cells, organs, organisms, populations, and societies. informational structures are not limited to material entities: organizations and processes through their defining regularities also fashion informational structures. an informational landscape encompasses a totality of information-material and organizational. information expressed in biological structures is essential to life, but structure alone is not sufficient for living. biological information is collective and reproducible-the structural information repeats itself at various scales, from molecules through societies (and beyond into human culture). the information encoded in biological structures persists and cycles; we call it development, physiology, and reproduction. but most importantly, biological structures perform functions-biological information bears meaning (neuman, ) . meaning, in contrast to information, is not an intrinsic property of an entity (a word or a molecule, for example); the meaning of an entity emerges from the interactions of the test entity (the word or molecule) with other entities (for example, words move people, and molecules interact with their receptors, ligands, enzymes, etc.). interactions mark all manifestations of biological structure-molecular through social. 
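as a brief aside, shannon's contrast between a defined arrangement and randomness can be made concrete with a toy computation; the sketch below is an added illustration, not part of the original article, comparing the empirical entropy per symbol of a repetitive string with that of a random string over the same alphabet:

```python
# toy illustration of shannon information: a structured (repetitive) string has lower
# entropy per symbol than a random string drawn from the same alphabet.
import math
import random
from collections import Counter

def entropy_per_symbol(text):
    """empirical shannon entropy, -sum(p * log2 p), over the symbol frequencies of text."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

structured = "ABAB" * 250                                    # a defined, repeating arrangement
noise = "".join(random.choice("ABCD") for _ in range(1000))  # no imposed structure

print(f"structured: {entropy_per_symbol(structured):.2f} bits/symbol")
print(f"random:     {entropy_per_symbol(noise):.2f} bits/symbol")
```

the structured string scores about one bit per symbol and the random one close to two; the point is only that information in this sense is a property of the arrangement itself, whereas meaning, as argued above, arises from how a reader or a receptor responds to it.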
meaning can thus be viewed as the impact of information-what a word means is how people use it; what a molecule means is how the organism uses it; what an organism means is what it does to others and how others respond to it; and so on over the scales life-meaning is generated by interaction (cohen, ) . in summary, we can connect the three conceptsinformation, signal, and meaning thusly: a signal is information with meaning. in other words, signalling is the ability of information to elicit a response, which is the meaning of the information. note that a signal, like information in general, is free of intentions; astrophysicists, for example, receive signals from distant galaxies, which have no interest in communicating with an astrophysicist; it is the response of the astrophysicist to the galactic radiation (meaning-making) that transforms the structure of the radiation (its information) into a signal. noise is more varied than signal. noise can refer to a number of different situations: randomness as opposed to structured information (a scribble versus a chinese character); information without meaning (a greek conversation overheard by one who does not understand greek); meaningful information in a meaningless context (across-the-room cocktail-party chatter). noise in the generic sense proposed by shannon is the unstructured; but noise can also refer to meaningless information that might be meaningful in other circumstances. for example, a dna sequence composing an immunoglobulin v gene element has little meaning until the sequence is recombined with other dna sequences to form a functioning antibody gene (cohen, ) -that recombination charges the dna sequence with meaning. whether cyberspace transmits noise or signal, as we have defined the terms, depends on who you are and what you seek. if you are the recipient of a specific bit of information-a chat or an email directed to you, for example-then you perceive signal. but if you don't understand the language, or if you do understand the language but see intruding spam, or experience a profusion of many messages at once, then you hear only noise. (noise too can have meaning if it makes you leave the room). the difference then between signal and noise, one might claim, is having the right reception. or, to put it another way, the same information can be either noise or signal, depending on how you perceive it-or transmit it. combatants may attempt to evade detection by disguising signals as noise; for example, humans encrypt secret transmissions (kahn, ) and infectious agents scramble antigens (cohen, ; cohen, ) . the transformation of noise back into signal is part of the game; counter-intelligence can learn to break the enemy's linguistic codes; an immune system can learn to decipher a pathogen's molecular codes. in informational terms, listening post is a machine that transforms noisy cyberspace information into a new narrative by selecting and recombining fragments of the flux. listening post dynamically self-organizes, similar to a living organism (atlan, ; weisbuch and solomon, ) . the internet created a new informational landscape, a new niche, that could be sampled and exploited by hansen and rubin to enhance their fitness as artists in the wilds of the manhattan art world (fig. ) . biological evolution traditionally is described in terms of fitness: species evolve because the fittest survive to pass on their genes, while the unfit die with their genes (darwin, ). 
survival-of-the-fittest thus selects the "best" genetic variants from among the phenotypes in the breeding population (plotkin, ) . the process leads, with time, to creatures that are better adapted and improved. improved fitness is the aim of evolution, so say some experts. but there are at least two problems with the concept of fitness: first, fitness is difficult to define; most definitions involve a circular argument-what survives is fit, by definition (gould and lewontin, ) . this amounts to a tautology: fit is fit. attempts have been made to define fitness in terms of reproductive success (hoffman et al., ) . but different species survive well despite vastly different numbers of surviving offspring: compare the reproductive strategy of the elephant with that of the gnat that rides on its back; compare the whale with the sardine; each to its own reproductive profligacy or dearth. second, evolution does not rest with fit creatures; evolution assembles increasingly complex creatures. accumulating complexity is manifest in the evolutionary tree. creatures higher in the evolutionary tree-more recent creatures-tend to be more complex than the creatures that preceded them: eukaryotic cells deploy more genes and house more organelles than do prokaryotic cells; mammals have more organs and express more behaviors than do the trees, round worms, or insects that preceded them. quite simply, evolution generates complexity (fig. ) . now, one might argue that the more complex species is the more fit species; if that is true, then the quest for fitness alone should generate increasing complexity. but is that true; does fitness itself drive complexity onward? what is complexity? complexity is a relative term: more complex entities compared to simpler entities incorporate more component parts and integrate more diverse interactions between their component parts. moreover, the single component parts of complex living systems usually participate in a variety of different functions (pleiotropism). complexity thus can be defined in terms of information; complex systems sense, store, and deploy more information than do simple systems. complexity presents two aspects: intrinsic and extrinsic. a complex system such as a cell, brain, organism, or society is complex intrinsically because of the way its parts interact and hold the system together. a complex system is also complex extrinsically because we who study it have difficulty understanding the properties that emerge from it (the biological perspective); we also have trouble fixing it (the medical perspective). (figure : evolution. the environment constitutes an informational landscape of considerable complexity (designated by the bundles of arrows) that in the aggregate is not readily useful (note the puzzled face, ?). however, a creature with a suitable physiology can extract useful information (a limited bundle of information) and the creature's new phenotype is subject to selection. survival can lead to a new species (new meaning) and so the complexity of the informational landscape is amplified and enriched through positive feedback. the new species too becomes an informational landscape for further exploitation by parasites and yet newer species.) fitness, unlike complexity, does not relate to the way a system is put together or the way the system is understood by those who study it. fitness relates to the success of a system in thriving in its own world. we can conclude, therefore, that fitness and complexity describe independent attributes.
so there is no necessary correlation between complexity and fitness: no dinosaurs have survived irrespective of how complex they were (and some were very complex indeed). in fact, it seems that the dinosaurs were unfit because they were too complex to survive the environmental disruptions brought about by the earth's collision with a comet (morrison, ) . primitive bacteria have survived such calamities despite their relative simplicity (probably because of their relative simplicity). indeed, the lowest and simplest bacteria will probably prove to be more fit for survival than we more complex humans if we both have to face a really severe change in the world environment. extremely complex systems, like us, are extremely fragile, and so they are less fit in certain situations. the bottom line is that the quest for fitness cannot explain the rise of complexity. but why then is complexity usually-but not always-more evident the higher one climbs up the evolutionary tree? it has been possible to demonstrate the evolution of complexity mathematically (chaitin, ; wolfram, ; lenski et al., ) . but evolution on a computer (in silico) is not evolution in the world of nature (in mundo). if complex systems tend to be fragile, why does evolution persist in devising them? the accumulation of complexity during evolution can be explained, despite fragility, by a principle of self-organization; the principle, formulated by atlan, is that existing information tends automatically to breed additional new information (atlan, ). atlan's argument goes like this: existing information first generates surplus copies of itself, which happens regularly in reproducing biological systems. the surplus copies can then safely undergo mutations, and so create modified (new), added information without destroying the untouched copies of the old information. the system thus becomes enriched; it now contains the new information along with the old information. indeed, it appears that the complexity of vertebrate evolution was preceded and made possible by a seminal duplication of the ancestral genome (dehal and boore, ). atlan's formulation implies that the more structures (the more information) a system encompasses, the greater the likelihood of variation in that system (discussed in cohen, ) . hence, the amount of existing information composing a system (previously evolved structures) breeds a commensurate amount of new information (variant structures). note that the constant variation of existing information is guaranteed by the second law of thermodynamics-order tends automatically to dissipate into variant structures; hence, any information will mutate over time (fig. ) . information, in other words, feeds back on itself in a positive way; a great amount of information, through its variation, leads to even more information. and as information varies it increases, and so does complexity. once a relatively stable, nonrandom structure (an informational entity) comes into being, it will be subject to variation. all genes, for example, mutate; proteins assume different conformations and functions; minds get new ideas. repositories of information like genes, proteins, minds, cultures, and so forth, vary to generate new genes, proteins, minds, or cultures that then get exploited for additional uses. a human is manyfold more complex than is a round worm, yet the human bears less than twice the number of genes borne by the worm (about , human genes compared to about , analogous worm genes). 
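atlan's duplicate-then-mutate argument can be caricatured in a few lines of simulation. the sketch below is an added illustration, not taken from atlan or from this article: "structures" are represented as integers, the duplication and mutation rates are invented, and the only point is that surplus copies which mutate add new information, so a system that starts with more information accumulates new variants faster.

```python
# toy sketch of atlan's principle: only surplus (duplicated) copies are free to mutate
# into new variants, so existing information breeds additional information.
import random

def evolve(genome, generations, dup_rate=0.3, mut_rate=0.5):
    """genome: a set of distinct 'informational structures' (represented as integers)."""
    next_id = max(genome) + 1
    for _ in range(generations):
        surplus = [g for g in genome if random.random() < dup_rate]  # duplicated copies
        for _ in surplus:
            if random.random() < mut_rate:   # a surplus copy mutates ...
                genome.add(next_id)          # ... and becomes new, added information
                next_id += 1
        # copies that do not mutate merely repeat existing information (no net gain)
    return genome

small = evolve(set(range(5)), generations=20)
large = evolve(set(range(50)), generations=20)
print(len(small), len(large))  # the information-rich system accumulates new variants faster
```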
the human species has accumulated its characteristic complexity by using its set of gene products in much more complicated ways than does the worm. humans exploit a more complex informational landscape than do round worms. humans are also more fragile: the columbia space shuttle that disintegrated upon re-entry into the atmosphere in carried both humans and round worms; only the round worms survived. the difference between classical darwinian evolution and the idea of evolving informational landscapes is highlighted by the difference between selection and exploitation. (figure : according to atlan ( ) , the net amount of information in a system will increase only if two conditions are satisfied: the original information duplicates to produce surplus copies and the surplus copies mutate to produce variant new information. system x, for example, has not duplicated its information (triangle); even if the triangle mutates, there will be no net increase in information, since the original information is merely replaced (n = n). system y, in contrast, has duplicated its original information (both triangle and box), and so mutation of each surplus copy generates a net increase in information. also note that information accelerates logarithmically (log ): in system y, two mutations can give rise to n × complexity. a system that starts out with more information (system y) will generate new information faster than does a system that starts with less information (system x).) darwin, familiar with the art of animal husbandry, applied to nature the wisdom of artificial selection; the wise breeder selects for propagation from among the variant household creatures those individuals most fitting market or household needs. likewise, nature has selected for propagation those creatures most fit for survival-natural selection recapitulates artificial selection (darwin, ). my point here is that the natural informational landscape, in contrast to the th century english manor, does not merely provide grounds for selecting what fits the market or the whims of the landlord; the natural informational landscape provides grounds for extravagant exploitation. any organism, simple or complex, that manages to mine the landscape for enough energy and information to create meaning (through productive interactions) might manage to survive there. exploitation, as i use the term here, refers to the modification of information for new uses. listening post arose by exploiting the new informational landscape provided by the internet, which itself arose by exploiting other informational landscapes-language, computers, society, culture. let us extend the listening post metaphor and say that biologic evolution proceeds through the inevitable exploitation of new informational landscapes by new or variant creatures. evolving and evolved creatures themselves become part of an enriched informational landscape, continuously available for further exploitation by other creatures. the dynamics of evolution are the dynamics of information. like the algorithm of listening post, an evolving species creates new meaning by exploiting information flowing through its environment-its cyberspace. that in a nutshell is the informational view of evolution. information exploits information and compounds information to generate new meanings; life is constant ferment. the escalation of complexity is evident medically. consider parasitism: there exists no creature that is not exploited by some other creature. hosts are informational landscapes for raising parasites.
indeed, the biosphere is one great food chain; each species makes a living by exploiting the information (structures and processes) of other creatures. the generative informational landscape includes even the artifacts of cultural evolution, and not only the natural products of chemical and cellular evolution. the human brain generated language; language generated culture; and culture is changing the world. the evolution of human air travel-a complex of machines, logistics, economics, and habits of mind designed for anything but parasites-contributed to the informational landscapes that generated the spread of hiv and west nile virus; poultry farming gave rise to sars and avian influenza; air conditioning systems provided a landscape for legionnaires' disease. information, because of positive feedback, breeds more information, and complexity results. note that positive feedback will accelerate any process; witness the accelerating complexity that marks our world: , years of hunting and gathering was replaced by , years of agriculture and animal husbandry that generated a thousand years of accelerated urbanization, years of industrialization, a few decades of informational civilization, and an emerging global economic culture. unfortunately, positive feedback, unless regulated by negative feedback or other kinds of control, accelerates into instability, chaos, and disaster (robertson, ; segel and bar-or, ) . information that breeds ever more complex systems is dangerous-like the dinosaurs, an overly complex system collapses of its own inherent fragility, and complexity has to again re-start its evolution (fig. ) . consider an example closer to home: your personal computer crashes when you try to run too many programs simultaneously; the more complex your program, the more often you have to push restart. (being smarter than dinosaurs, we might use our foresight to prevent global warming, over-population, collective terror, and who knows what else; let us avoid having to restart a world.) in summary, we can say that complexity is managed throughout evolution by a balance between two opposing forces: autocatalysis and fragility. on the one hand, complexity inexorably increases in time through the autocatalytic force of increasing information. on the other hand, catastrophic extinctions mark the fragility of large complex ecosystems; on a lesser scale, complexity may also be held in check or reduced by the day-to-day survival of the fittest creature that may be the less complex creature. darwin's concept of natural selection, including survival of the fittest, does play a critical role in the process of evolution, but mostly after a new or variant species has begun to exploit an informational landscape. quite simply, the species has to survive long enough to maintain itself. indeed, the informational landscape might include other species competing for the same sources of information and energy; in that case, survival of the fittest is an apt description of the conflict between the competitors. (figure : the accumulation of complexity (fig. ) over evolutionary time (t) leads to intrinsically fragile complex systems (c) that are susceptible to crash when challenged by severe environmental perturbations. complexity then has to restart its accumulation from a lower level. the scales and the form of the curve shown here are hypothetical. a species' fitness may be quantified as the measure of time occupied by that species from its origin to its extinction. this formulation avoids the difficulties of identifying factors common to all the various species with their vastly different strategies for survival (reproduction rate, lifespan, efficiency of energy use, etc.); to quantify the fitness of a species, we merely measure its survival time. in the hypothetical figure, we compare the fitness of a hypothetical bacterium that continues to survive for some × years with a hypothetical dinosaur species that survived for some years till its extinction about × years ago. the human species arose some years ago, and who knows how long it will last. the figure suggests that there is no positive correlation between complexity (c) and fitness; the opposite might be the case.)
but much of evolution occurs without competition. in fact it is clear that many products of evolution are neutral; they manifest no selective advantage over other phenotypes (kimura, ) . neutral evolution simply expresses the exploitation of an informational landscape: survival of the fittest is not an explanation for neutral evolution. fitness, then, is only another word for survival (so, as i said, survival of the fittest is a tautology). the measure of a species' fitness can be assessed most clearly by hindsight, by observing the species' past history of success. a species that manages to last only a short time in its informational landscape is manifestly less fit than is a species that lasts a longer time in its informational landscape. thus the fitness of a species is commensurate with the amount of time the species has persisted from its inception to its extinction, its crash. fitness can be measured quantitatively by the amount of time needed by the environment to eradicate the species-by the elapsed time to unfitness, the time to extinction (fig. ) . this notion of fitness will be elaborated elsewhere; the point i want to make here is that fitness is merely the temporary absence of unfitness. fitness, in other words, can be reduced to mere survival. the offspring of fitness is taught classically to be improvement. listening post shows us that the evolution of new arrangements of information (new complexity) may not necessarily lead to improvement. hansen and rubin use new information to create a new art; is listening post an improvement over rembrandt's old art? homo sapiens is certainly more artistically pleasing than is e. coli or c. elegans, but hardly better adapted (cohen, ) . indeed, in the world of art, fitness has always been measured by survival time; it's only natural. rembrandt's paintings have thrived for hundreds of years and will certainly continue to live with us. influenced by the beauty of newtonian mechanics, biologists have long embraced the hope that if we could inspect the ultimate parts of an organism, we would be able to reduce the complexity of the organism to simple principles-the aim of ultimate understanding. the organism just had to be some sort of machine, however complicated. now we have almost achieved our wish and are close to seeing all the parts of the organism (the genome, the proteome) out on the table. but to our dismay, the organism is not just a complicated clock. even solving the genome project has not graced us with ultimate understanding (cohen and atlan, ) . the organism clearly is not a collection of wheels and cogs; the organism is more akin to cyberspace.
in place of electromagnetic codes generated by computer networks, the information flowing within and through the cell-life's subunit-is encoded in molecules. but the informational structure of both networks, cell and internet, is similar: each molecule in a cell, like a chat box signal, emerges from a specific origin, bears an address, and carries a message. our problem is that the cell's molecules are not addressed to our minds, so we don't understand them. the mind exists on a different scale than does the cell; the mind and the cell live in different informational landscapes. we are unable to directly see molecular information; we have to translate the cell's molecules and processes into abstract representations: words, numbers, and pictures. the cell looks to us like a seething swarm of molecules, large and small, that appear redundant, pleiotropic, and degenerate (cohen, ) . every ligand signals more than one receptor, every receptor binds more than one ligand; every species of molecule has more than one function; and every function seems to be carried out redundantly by different agents. causes cannot be reduced to simple one-to-one relationships between molecules; biologic causality seems more like a complex pattern or web of interactions. the flowing patterns of subunit molecules in the cell, like aggregates of internet signals, make no obvious sense to us, the outside observers. we stand before the cyberspace of the cell and the maelstrom of information that confronts us becomes noise (fig. ) . the more information we gather, the more confused we become. the flow of information generated by the living cell, viewed from our scale, is turbulence. how can we understand living matter when its complexity exceeds the ability of our minds to remember and process the mass of accumulating information? intrinsic complexity (the organism) leads to extrinsic complexity (our confusion). the informational landscape of the cell-organism-species-society is like the informational landscape of the internet; viewed in the aggregate it is incomprehensible noise. true, minds can be helped by computers. but understanding a cell is not to be had merely by reproducing in silico the complexity of a real cell, even if that were possible. cataloguing the molecules and their connections to simulate the cell on a computer is a useful way to begin. but precise specification is not enough. human understanding is not mere representation-linguistic, mathematical, visual, or auditory; understanding is the exercise of proficiency (cohen, ; efroni et al., ) . we understand a thing when we know how to interact with it and use it well. thus, we understand complex information by transforming it, as does listening post, into a meaningful signal. we understand complexity by learning to respond to it in new and productive ways; information triggers new thoughts. human understanding is not a static state of knowledge; human understanding is a creative process of interaction with the world (fig. ) . the information present in the world is a fertile landscape for growing ideas. this functional view of understanding fits the definition of science proposed by the scientist and educator james b. conant, president of harvard university from to is "an interconnected series of concepts and conceptual schemes that have developed as the result of experimentation and observation and are fruitful for further experimentation and observation" (conant, ) . 
science, according to conant, is a self-perpetuating process: scientific understanding amounts to doing good experiments that lead to more ideas for better experiments (and leading, in the case of needs, to better solutions). so to understand the cell means to turn its complexity into a metaphor that productively stimulates our minds to devise better experiments, ideas, and treatments. classically, we have been taught that science is driven by the formulation of hypotheses and by experiments designed to discredit them (popper, ) . a hypothesis that has been falsified experimentally is then replaced by a modified hypothesis. the new, modified hypothesis too is tested by experimentation and its falsification leads to a third hypothesis. thus, science advances ideally toward the truth by continuously adjusting its hypotheses through experimentation. unfortunately, this description of science may be suitable for certain fundamental aspects of physics and chemistry, but the study of biology and other complex systems doesn't really progress that way. living systems are just too complex to be described adequately by simple hypotheses (pennisi, ) . we seem to learn much more by tinkering with them than we do by hypothecating about them. but biology aims for more than mere tinkering; we don't want merely to accumulate data; we want to comprehend essential principles about life. how can masses of complex data be transformed into comprehension? biologists today (and in time to come) are joining forces with information scientists to develop ways to answer that question. computers are helpful. the most primitive of computers puts our memory to shame; our conscious minds are no match for advanced parallel processing. nevertheless, we are endowed with a unique gift. our cognitive advantage over the computer is our ability to see associations, to create and use metaphors. the computer's memory is composed of precisely retrievable lists; our memory is a web of associations-our memory is not the computer's ram. but we, unlike computers, can create art and science, and we can respond to art and science. in closing, i will describe one example, most familiar to me, of the new synthesis between biology and information science-between mind and computer. my colleagues and i have begun to consider ways we might achieve two objectives: to record and catalogue complex scientific data in a precise format amenable to computer-assisted simulation and testing; and to have the data themselves construct representations that stimulate human minds productively. we have termed this two-tiered approach reactive animation (ra). ra emerged from our simulation of the complex development of a key class of cells in the adaptive immune system-t cells (efroni et al., ) . in the first tier, we recorded basic information about t-cell development culled from some research papers using the visual language of statecharts to convert the data to a precise computer format (harel, ) ; statecharts had been developed by david harel and his colleagues more than years earlier for building and analyzing complex man-made systems (actually, statecharts grew out of the need to coordinate the planning of a new fighter aircraft). 
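to give a flavor of what a statechart-style description looks like in code, here is a deliberately tiny sketch. it is an added illustration only: the states, events and transitions are simplified labels chosen for this example and are not the actual statecharts of the t-cell model discussed here.

```python
# toy, statechart-flavored sketch: a cell is described by discrete states and by
# transitions triggered by events, roughly in the spirit described in the text.
from dataclasses import dataclass

TRANSITIONS = {  # (current state, event) -> next state
    ("double_negative", "receptor_rearranged"): "double_positive",
    ("double_positive", "positively_selected"): "single_positive",
    ("double_positive", "selection_failed"): "apoptotic",
    ("single_positive", "exported_from_thymus"): "mature_naive",
}

@dataclass
class TCell:
    state: str = "double_negative"

    def signal(self, event: str) -> str:
        """apply an event; remain in the current state if no transition is defined for it."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

cell = TCell()
for event in ["receptor_rearranged", "positively_selected", "exported_from_thymus"]:
    print(event, "->", cell.signal(event))
```

a full statechart formalism adds hierarchy, concurrency and modular composition to such state-transition tables, which is what makes it practical for recording large numbers of experimental facts in an executable form.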
statecharts seems suitable to deal with biologically evolved systems and not only with systems generated by human minds; objects such as molecules, cells, and organs can be described in terms of their component parts, interactions, and transitions from state to state, which is the way we actually do our experiments and record the data. moreover, the statecharts formalism is modular; we can easily add new objects, interactions, and states as we accumulate information. and most importantly, we can run the system dynamically on a computer and see the outcomes in statecharts format of any stimulation or modification of the system we care to study. sol efroni, at the time a student jointly supervised by david harel and me, then added a second tier to the representation; he devised a way for the statecharts simulation to produce and direct an animation of the action showing the t cells and other cells moving, interacting, multiplying, differentiating, or dying in the course of development in the organ called the thymus (efroni et al., ) . ra makes it possible for us to cross scales-to zoom into individual cells and their component molecules and to zoom out to view thousands of cells forming the thymus as an organ, as we please. ra allows us to experiment with the animated system in silico. best of all, ra shows us the emergence of properties in t-cell development we never dreamed were there; the animation arm of ra reveals to our eyes aspects of the data hidden from intuition. the experiments motivated by ra in silico lead to new laboratory experiments in mundo; the results produce new data for improved statecharts, and the cycle described by conant continues (fig. ) . the details of ra and of t cells and thymus are well beyond the scope of the present article; but know that ra, like listening post provides us with a representational analogue of how we might approach seemingly incomprehensible complexity; the complexity is reduced to representations that engage the mind. ra transforms the seeming noise of a complex system into a useful informational landscape (fig. ) . listening post exemplifies how transformations of informational landscapes are at the roots of the tree of life-a tree whose arborizations include biologic evolution and human understanding. look again at figs. , , and ; they represent the same processes in essence-only the labels vary. art and science flourish when they transform old information into a narrative of new ideas that engage ever more human minds. this issue of the bulletin for mathematical biology is composed of a series of papers written in memory of our colleague lee segel, professor of applied mathematics at the weizmann institute of science, whose recent death has touched us all. lee's child-like curiosity led him to apply math as an experimental tool to probe the wonder of living systems. he was always asking questions; he always wanted to see the latest experimental results; he always had an interpretation and a new question. lee loved to teach as he loved to learn. the pleasure of his company was grounded in curiosity and wonder. for me, lee was a central influence in a landscape of information that led me to explore ideas beyond the raw data of my particular field of research-cellular immunology. for me, lee transformed experimental information into a narrative of new ideas that engaged my mind in ever evolving ways. he was my teacher as well as my friend. he lives on in all my work. fig. reactive animation. 
the translation of experimental data into the language of statecharts converts complex masses of basic facts into a format suitable for simulation in silico. reactive animation (ra) empowers the statecharts simulation to generate a realistic animation of the system and allows one to experiment with the representation in silico. the animation in silico reveals emergent properties of the system and stimulates the experimenter to undertake new experiments in mundo. the new data enter the cycle of animated representation and improved experimentation. ra represents the data in a way that engages the mind. self-creation of meaning immune information, self-organization and meaning the limits of mathematics, the unknowable, exploring randomness, conversations with a mathematician tending adam's garden: evolving the cognitive immune self regen und auferstehung: talmud und naturwissenschaft im dialog mit der welt limits to genetic explanations impose limits on the human genome project hiv. escape artist par excellence science and common sense two rounds of whole genome duplication in the ancestral vertebrate toward rigorous comprehension of biological complexity: modeling, execution, and visualization of thymic t-cell maturation a theory for complex systems: reactive animation poised between old and new. the gallery. the wall street journal the spandrels of san marco and the panglossian paradigm: a critique of the adaptationist programme statecharts: a visual formalism for complex systems exploring the relationship between parental relatedness and male reproductive success in the antarctic fur seal arctocephalus gazella the codebreakers: the comprehensive history of secret communication from ancient times to the internet, rev the neutral theory of molecular evolution the evolutionary origin of complex features impacts and evolution: future prospects meaning-making in the immune system tracing life's circuitry darwin machines and the nature of knowledge the logic of scientific discovery, th edn on the role of feedback in promoting conflicting goals of the adaptive immune system a mathematical theory of communication. bell syst. tech. j. , . waddington, c.h., . organizers and genes a new kind of science i am the mauerberger professor of immunology at the weizmann institute of science, the director of the center for the study of emerging diseases and a member of the steering committee of the center for complexity science, jerusalem, and the director of the national institute for biotechnology in the negev, ben-gurion university of the negev. this paper has emerged from discussions with henri atlan, yonatan cohen, sol efroni, david harel, uri hershberg, dan mishmar, ohad parnes, ben rubin, david rubin, eleanor rubin, eitan rubin, lee segel, and sorin solomon. key: cord- -pufcn j authors: fibikova, lenka; mueller, roland title: threats, risks and the derived information security strategy date: - - journal: isse securing electronic business processes doi: . / - - - - _ sha: doc_id: cord_uid: pufcn j this article concentrates on the development of an information security strategy. an information security strategy needs to focus on an overall objective, usually the objectives laid out in an organization’s business strategy and its derived information technology strategy, where it takes the status quo and reflects the main objectives derived and postulates how and when to close the identified gaps. 
this strategy approach for improving information security is intended for an organization which supports an automotive and captive finance enterprise but is not restricted to this. the approach is aligned to the scope of iso "code of practice for an information security management system" [iso ]. however, compliance is left out of scope. the strategy concentrates on four areas considered the relevant areas for information security: people, business processes, applications and infrastructure, and has therefore a clear focus on processes, stability, resilience and efficiency, which are the pillars of a successful enterprise. there are two main streams related to a security strategy nowadays: either it is considered an information security strategy, whose main focus is on the three common objectives of information security, namely confidentiality, integrity and availability (cia), or it is considered a cyber security strategy, and there are various discussions of why cyber security takes a broader view, addressing also objectives which go beyond the cia objectives, such as reputation and legal consequences. although a strategy should consider the latter objectives as well, we will make use of the more common term "information security" throughout this article. therefore, we concentrate on an information security strategy whose main objective is the establishment of a "process-driven organization with stable and efficient operations." today's information technology is in flux, where well-known techniques are now used in a way offering new opportunities for security and stability as well as for cost savings. areas like virtualization, cloud computing, and big data are based on technologies which were examined and discussed for decades but can now be handled by the underlying technical equipment and the technical development. also, the internet is on a threshold which is not only triggered by higher speed but also by technical standards such as internet protocol version (ipv ) and the domain name service security extensions (dnssec) [dnss11]. finally, working life is changing and the limits between work and leisure are also in motion; the trend of byod or "bring your own device", social computing and social networks, and tools like smart phones and tablet pcs have played a major role in bringing work to the people. the goal of implementing information security in an organization is to protect the organization's information by ensuring its confidentiality, integrity and availability. information is created and processed by employees, contractors and third party users (also known as information users) within business processes using applications and tools which are hosted in the it infrastructure (see also figure ). consequently, four areas need to be taken into account when implementing information security:
• information users, or how people handle information and use tools and applications properly to protect information
• business processes, or how information security is embedded within working practices
• applications, or how well they are developed to ensure the protection of information stored and processed
• infrastructure, or how well it provides sufficient capacities and adequate protection of information and applications against unauthorized access and modification
information security is ensured via the implementation of various measures.
these measures need to • cover all aspects of the four areas - information users, business processes, applications and infrastructure (completeness) • provide adequate protection for information (effectiveness) • be seamlessly integrated into the processes (integration) • be supported by efficient tools and simple templates (support) • avoid putting an unacceptable burden on the employees (simplicity) each of these properties is crucial for achieving effective protection of information. • if any of these areas is not completely or not sufficiently covered (completeness and adequacy), then the existing weaknesses may be deliberately exploited or lead to failures. • if a measure is not sufficiently supported or simple to use (support and simplicity), then users may try to circumvent the measure. • if a measure is not integrated within the existing processes (integration), then its use cannot be guaranteed. in addition to the named four areas, there must be an organization that initiates, coordinates and supports all activities with respect to information security. therefore, the organization of information security is another area that needs to be taken into account in order to achieve an effective implementation of information security, i.e. adding a fifth area: • information security organization, or how efficiently the information security organization enables effective support of the local business in protecting their information figure provides an overview of the defined information security areas and summarizes the high-level objectives: • information users: protecting information without putting a burden on the information users (security appropriately applied) • business processes: seamless integration of security within the business processes (built-in security) • applications: implementation of the integrated processes including security (transparent security) • infrastructure: high availability and an easily manageable infrastructure providing sufficient capacities and adequate protection for information and applications (seamless security) • information security organization: an efficient information security organization enabling effective support of the local business (a value-added and trusted business partner) the following section will first address common threats as reported in verizon's data breach report [veri ] as well as in symantec's internet security threat report [syma ], and will then focus on the threats directed to the four areas which the information security strategy is addressing. additionally, it will point out where new technologies may influence the situation. during the last years the threat landscape has changed dramatically. in former years, the main threats were caused by viruses, phishing and unintended data losses. nowadays the number of targeted attacks has increased to a level beyond expectations, and these attacks are not only caused by "hacktivism", which is politically motivated, but even more by data thieves who "are professional criminals deliberately trying to steal information they can turn into cash" [veri ]. despite the decline of newly disclosed vulnerabilities in commercial applications - the number of disclosed vulnerabilities has passed its peak and is slowly decreasing - the severity of the vulnerabilities has increased, as outlined in the hp top cyber security risk report [tcsri ], where almost a quarter of disclosed vulnerabilities are rated as high severity.
finally, it has to be mentioned that vulnerabilities are rising in non-traditional enterprise infrastructures such as industrial control systems (e.g., supervisory control and data acquisition (scada)), ip telephony (voice over ip (voip)) and new infrastructure such as mobile phones and cloud infrastructure [tcsri ]. stuxnet, duqu and flame are examples of targeted malware against scada infrastructures. all these aspects need to be considered in order to protect the areas an organization relies upon. the primary problems caused by information users include the incorrect handling of information, due to missing awareness and knowledge about its value and insufficient training and awareness in using applicable tools. additionally, problems arise through external attacks targeting information users (so-called advanced targeted attacks, phishing, social engineering, etc.), especially considering the increased use of social networks. employees are generally aware of internal security measures and compliance issues; however, the extent of misuse, if it occurs, increases heavily and causes greater damage [veri ]. therefore, in order to ensure that no further risks arise from information users, awareness activities have to be kept on a high level, and some of these activities have to address common weaknesses in the processes administering employees. the main threats for business processes are interruptions to critical processes, which endanger the availability of these processes, and the inability of the processes to guarantee confidentiality, integrity and correctness of the information. applications supporting the respective business processes are confronted with analogous threats. the it infrastructure provides the medium for communication between information users, applications and business processes, and builds the border between an enterprise's internal environment and the external world. consequently, it is facing threats from the outside as well as from within the organization. threats targeting the infrastructure (internal as well as external) are either passed on to the information users (e.g., trojans and viruses) and applications (e.g., hacking), or address the infrastructure components themselves (e.g., denial of service attacks, hacking of the components). therefore, the protection of the infrastructure builds the basis for protecting the other three areas: the users, processes and applications. specific threats arise from technologies where common security techniques need to be modified. as an example, virtualization may counter a certain kind of threat but impose new risks, because common security mechanisms do not work as they used to. in this section, the four areas are examined from the perspective of how an organization usually applies measures. all measures are validated on their completeness, effectiveness, integration, support and simplicity as far as applicable, taking into account an average enterprise, its security posture and its setup. this will build the basis for a later prioritization of measures. information security for information users is based on three pillars: (1) user awareness, which targets training of behavior, focusing on compliance with applicable legislation and the organization's policies and on using applicable tools; (2) hr processes, which ensure that appropriate employees are selected, that their skills are kept up-to-date, and that starter/leaver/position-change procedures enforce the need-to-know principle; and
(3) procurement, which is usually responsible for implementing measures ensuring that contractors and third party users follow the same regulations as employees and do not increase the risks. completeness: (1) induction training/orientation days usually cover information security topics; further user awareness programs are rarely established at an organization. (2) an end-to-end employee lifecycle management rarely exists. starter processes usually work well for employees. during employment, many organizations pay attention that their workforce is sufficiently trained and kept up to date with new technological developments. deficiencies exist in the leaver process, with the exception of dismissal. position change is frequently implemented only rudimentarily. (3) contractors and third parties are usually not considered at all, because they are frequently not administered by hr and its processes. procurement departments are not aware of their responsibility and of the requirements for proper termination of contracts with third parties. effectiveness: (1) since induction training is a one-time activity at the start of employment, it does not have any long-term effects. (2) and (3) the implementation of the need-to-know principle relies on the respective line managers and the people who manage and direct third party users in an organization. integration: within the internal hr and procurement processes. support: (1) a sample induction training as well as on-going awareness programs are usually centrally provided. however, these do not cover all aspects of information security; especially information handling is dealt with rudimentarily. there also exist several tools enabling appropriate handling of information within an organization (e.g., tools for information classification [lfrm ] or for the encryption of files, and ms office document templates for labeling). however, these do not cover all aspects, and users are neither sufficiently aware of the tools available nor do they know how to handle them properly. (2) and (3) there are only few tools on the market, and seldom in use, which support user provisioning in the hr and procurement processes. simplicity: varies, depending on the tools in use. a simple conclusion follows from this evaluation: the processes for securing information by the respective owners are usually not sufficient to reduce risks with respect to confidentiality, integrity and availability. when it comes to business processes, two aspects are of relevance with respect to information security: (1) the information flows, and (2) the importance of the respective process for the enterprise. information flows need to be evaluated to ensure proper handling of information, protecting its confidentiality and integrity. the importance of the business process indicates how important its information is for conducting business; this determines the priorities for continuity activities and recovery. completeness: is usually not given. handling of information within the business processes relies on the attentiveness of the employees (see also section . ) and of the individual business owners dealing with the information. as a consequence, segregation-of-duties violations and missing business continuity are the most frequent deficits in many organizations. business continuity management has so far played an important role only in the asian market, due to past experiences (sars, bird flu, and tsunami); usually, no central activities exist to establish a proper business continuity management process.
effectiveness for the respective measures cannot be validated, since only limited activities have been initiated in most organizations. however, if business continuity is properly set up and tested regularly, then an appropriate level of effectiveness can be achieved. integration, support and simplicity have to be considered when other areas are reviewed. here the verdict is almost identical to the one related to information users: business processes do not sufficiently implement information security requirements but focus on topics triggered by compliance (e.g., segregation of duties and dual control). business applications need to be developed in such a way that they enforce proper protection of information by implementing the need-to-know principle. this includes input and output controls (especially for web applications), correct data processing, authentication and authorization of users and processes, protection of information during transfer and storage, and appropriate deletion of information. the development and the change of applications need to follow a stringent approval process. furthermore, application recovery activities ensure that the business processes can return to normal work after a disruption of the application operations as fast as possible. note: applications currently include the respective server operating systems and middleware (e.g., databases); with the rise of cloud computing and its approach of software as a service (saas) this will change. completeness: all major organizations have set up an approach which is similar to microsoft's secure development lifecycle (sdl), where information security is an integral part of software development. however, most organizations usually rely on a majority of legacy applications, and only some modules have been modified or expanded in order to better use the internet. the improvement of existing applications is a crucial issue, especially when it comes to applications that are used by business entities but not developed and hosted within the it organization, which usually is the driver for the sdl. this way, applications are brought into operation whose information security features are not sufficiently known or used. effectiveness: the sdl methodology and similar approaches effectively enforce the implementation of information security within the development process. integration: for the in-house development of major applications, the use of an sdl is enforced. however, there is no integration of information security for other development projects or for approaches like saas, due to non-existent formal requirements. usually, project management tools support the project management process for developing major applications. this encompasses the enforcement of the quality gates, including information security. they cover all phases of the application development lifecycle; however, additional information security tools and guidelines (e.g., information classification tools and tools for ensuring compliance with existing policies) are not integrated (separate tools and guidelines need to be used to fulfill individual requirements). also, some aspects of information security within common quality gates are often not supported by tools (e.g., code inspection). simplicity: currently, sdl processes are quite complex and need to be simplified in order to improve acceptance. the verdict is now more complex: there are areas where information security is integrated and there are areas where there is a gap.
the risks rely heavily on the implementation and integration of a secure development lifecycle for applications and its support by tools. effectiveness: the majority of the measures are implemented correctly. problems arise with local network architecture, operating system hardening, vulnerability management (especially proper patching) and inadequate provisioning of local administrative privileges. shared service centers specializing in specific infrastructure services show an increase in effectiveness and efficiency in the implementation of information security when services are provided centrally by specialized experts. integration: some information security measures are commonly included in the operating procedures (e.g., backup, capacity management); some measures are rarely integrated (e.g., vulnerability management). procedures are rarely documented within an organization. support: some areas are covered by tools, however not continuously throughout the processes, i.e., some tools support only a part of a process (e.g., vulnerability management tools like qualys support the vulnerability identification phase within vulnerability management, but not the remediation). tools are often wrongly configured. simplicity: varies, depending on the respective tool. an information security organization for a global enterprise has to be set up in a way that it covers global aspects as well as local ones. ideally, the global aspects are the methodologies, policies and tools for all entities, and the local aspects concentrate on the different markets and their specialties. global aspects are then managed by a department providing methodologies, policies and tools globally. local aspects are dealt with by people in the markets. depending on the portfolio of an organization, there might also be a regional or a divisional intermediate level between the global and the local areas; we will describe them as divisional coordinators. commonly, information security is part of an it organization. positive aspects: • direct responsibility for information security at the local entities is assigned locally. in the case of divisional coordination, it can be ensured that subject matter expertise is available for supporting the local entities, which do not necessarily have this expertise locally. • the divisional coordinator serves as a validation/distribution platform for problems: escalating general problems to the global department, developing divisional solutions for division-internal problems, and providing solutions gained from local entities to the other entities within the division. • a self-assessment and the regular on-site assessments conducted at the local entities by the divisional coordinator enable measuring the quality of information security, and help to identify common problems as well as to enforce ongoing improvement of the information security status at the local entities. • on-site assessments also serve to provide consultation for the local entities on their specific problems and to create awareness about the importance of information security among the local management. negative aspects: • an information security organization which is integrated within an it organization is usually not sufficiently involved in the business processes, which is crucial for the success of all initiatives. therefore, its possibilities are limited at all levels (local, regional as well as central level).
• global information security departments tend to concentrate on information security problems related to those entities which are considered the heavyweights of an enterprise's portfolio, and other portfolio items are left without support. by mapping the high-level objectives defined above onto the deficiencies identified in the section on risk considerations, the following strategic goals should be set up in order to generate an integrated information security strategy within an organization: 1. set up a role concept for all business processes (target areas: information users, business processes). 2. set up business continuity management at all entities (target areas: information users, business processes, applications, infrastructure). 3. provide and embed information security in the software development lifecycle seamlessly, including the change process (target area: applications). 4. establish an ongoing improvement process for existing applications (target area: applications). 5. integrate information security into the it operating procedures and services (target area: infrastructure). 6. optimize resources by establishing an exchange platform within the information security community (target area: information security organization). some of these goals sound simple, some of them are quite complex. if we map in the new technology trends touched on at the very beginning, then one can see that some goals can be achieved by simple directives of the management together with executive management support, while some need further elaboration. for example, the role concept is a task which requires an organization to document its business processes and the roles that are required in executing the business processes. also, if an enterprise defines which of its business processes are the most critical ones from a global perspective, then business continuity activities can also be streamlined. finally, if a mapping between business processes and applications is achieved by the first two goals, then standardization on the application level could be an outcome as well. however, there are areas which need special attention: if we look at a common infrastructure concept today, then we must be aware that a switch to ipv6 will on the one hand solve old problems, like the seamless integration of virtual private networks into the infrastructure, but will bring up new ones which used to be solved by techniques that cannot be used any more. for example, ipv6 does not allow hiding a network layout via network address translation and private ip addressing, but requires more stringent methods. in addition, some upcoming or newly introduced technologies have not been sufficiently evaluated: if we look at virtualization, then we can simplify backup for many virtual servers at once but are confronted with the risk that a recovery of data for a specific virtual server is substantially more complex. and similar problems arise with virus protection; it needs to be understood that if different operating systems are hosted as guest operating systems within a virtual server, then virus protection cannot be established on the hosting virtual machine. and there are technologies waiting whose potential is not yet sufficiently examined: if we look at the domain name service security extensions (dnssec), then we easily notice that this approach helps fight hackers who misuse the dns for attacking networks, but it also helps prevent phishing attacks which might damage the reputation of an organization.
and there are more ways in which this dnssec infrastructure can be used to establish trust for an organization and within an organization. we have outlined in this article which pillars of an organization should be looked at when developing an information security strategy. we have also touched on areas of new technologies which may influence such a strategy. however, we have not sufficiently discussed the critical success factors in developing and implementing an information security strategy. we strongly believe that a well-accepted information security strategy is tightly aligned with the business strategy and has the full support of executive management. we have noticed that information security can play an important role in modernizing it if the influence of information security and its potential drawbacks are clearly pointed out and discussed in depth. there is a common understanding among many information security experts who say that "with an up-to-date information security strategy we can do business which was either impossible or prohibited in the past". dns security extensions technical overview international organization for standardization - iso : code of practice for information security management a simplified approach for classifying applications at isse data breach investigations report key: cord- - s lghj authors: buonomo, bruno title: effects of information-dependent vaccination behavior on coronavirus outbreak: insights from a siri model date: - - journal: nan doi: . /s - - - sha: doc_id: cord_uid: s lghj a mathematical model is proposed to assess the effects of a vaccine on the time evolution of a coronavirus outbreak. the model has the basic structure of siri compartments (susceptible–infectious–recovered–infectious) and is implemented by taking into account the behavioral changes of individuals in response to the available information on the status of the disease in the community. we found that the cumulative incidence may be significantly reduced when the information coverage is high enough and/or the information delay is short, especially when the reinfection rate is high enough to sustain the presence of the disease in the community. this analysis is inspired by the ongoing outbreak of a respiratory illness caused by the novel coronavirus covid- .
the duration of immunity for sars, for example, was estimated to be greater than years [ ] . moreover, investigations on human coronavirus with infected volunteers has shown that even though the immune system react after the infection (serum-specific immunoglobulin and igc antibody levels peak - days after infection) at one year following experimental infection there is only partial protection against re-infection with the homologous strain [ ] . predictions or insight concerning the time-evolution of epidemics, especially when a new emerging infectious disease is under investigation, can be obtained by using mathematical models. in mathematical epidemiology, a large amount of literature is devoted to the use of the so called compartmental epidemic models, where the individuals of the community affected by the infectious disease are divided in mutually exclusive groups (the compartments) according to their status with respect to the disease [ , , , , ] . compartmental epidemic models are providing to be the first mathematical approach for estimating the epidemiological parameter values of covid- in its early stage and for anticipating future trends [ , , ] . when the disease under interest confer permanent immunity from reinfection after being recovered, the celebrated sir model (susceptible-infectious-recovered) and its many variants are most often adopted. however, where reinfection cannot be neglected the sirs model (susceptible-infectious-recovered, and again susceptible) and its variants may be used, under the assumption that infection does not change the host susceptibility [ , , , , ] . since the disease of our interest has both reinfection and partial immunity after infection, we consider as starting point the so-called siri model (susceptibleinfectious-recovered-infectious) which takes into account of both these features (see [ ] and the references contained therein for further information on siri model). when the epidemic process may be decoupled from the longer time-scale demographic dynamics, i. e. when birth and natural death terms may be neglected, one gets a simpler model with an interesting property. in fact, according to the values of three relevant parameters (the transmission rate, the recovery rate and the reinfection rate), the model exhibits three different dynamics [ , ] : (i) no epidemic will occur, in the sense that the fraction of infectious will decrease from the initial value to zero; (ii) an epidemic outbreak occurs, in the sense that the fraction of infectious will initially increase till a maximum value is reached and then it decreases to zero; (iii) an epidemic outbreak occurs and the disease will permanently remain within the population. at time of writing this paper, scholars are racing to make a vaccine for the novel covid- coronavirus available. as of february , , it was announced that 'the first vaccine could be ready in months' [ ] . therefore, it becomes an intriguing problem to qualitatively assess how the administration of a vaccine could affect the outbreak, taking into account of the behavioral changes of individuals in response to the information available on the status of the disease in the community. this is the main aim of this paper. the scenario depicted here is that of a community where a relatively small quantity of infectious is present at time of delivering the vaccine. 
the vaccination is assumed to be fully voluntary, and the choice to get vaccinated or not is assumed to depend in part on the available information and rumors concerning the spread of the disease in the community. the behavioral change of individuals is introduced by employing the method of information-dependent models [ , , ], which is based on the introduction of a suitable information index. such an approach has been applied to general infectious diseases [ , , , , ] as well as to specific ones, including childhood diseases like measles, mumps and rubella [ , ], and is currently under development (for very recent papers see [ , , ]). therefore, another goal of this manuscript is to provide an application of the information index to a simple model containing relevant features of a coronavirus disease. specifically, we use epidemiological parameter values based on early estimation of the novel coronavirus covid- [ ]. the rest of the paper is organized as follows: we first introduce the basic siri model and recall its main properties; we then implement the siri model by introducing the information-dependent vaccination; the epidemic and reinfection thresholds are discussed next; a further section is devoted to numerical investigations, where the effects of the information parameters on the time evolution of the outbreak are discussed; conclusions and future perspectives are given in the final section. since the disease of our interest has both reinfection and partial immunity after infection, we first consider the siri model, which is given by a system of nonlinear ordinary differential equations (the upper dot denotes the time derivative) [ ]. here s, i and r denote, respectively, the fractions of susceptible, infectious (and also infected) and recovered individuals at a time t (the dependence on t is omitted); β is the transmission rate; γ is the recovery rate; μ is the birth/death rate; σ ∈ (0, 1) is the reduction in susceptibility due to previous infection. this model assumes that the time-scale under consideration is such that demographic dynamics must be considered. however, epidemics caused by coronaviruses often occur quickly enough to neglect the demographic processes (as in the case of sars [ , ]). when the epidemic process is decoupled from demography, i.e. when μ = 0, one gets the reduced model. this very simple model has interesting properties. indeed, introduce the basic reproduction number r0 = β/γ. it has been shown that the solutions have the following behavior [ ]: if r0 ≤ 1, then no epidemic will occur, in the sense that the state variable i(t) denoting the fraction of infectious will decrease from the initial value to zero; if r0 ∈ (1, 1/σ), then an epidemic outbreak will follow, in the sense that the state variable i(t) will initially increase till a maximum value is reached and then it decreases to zero; if r0 > 1/σ, then an epidemic outbreak will follow and the disease will permanently remain within the population, in the sense that the state variable i(t) will approach (after a possibly non-monotone transient) an endemic equilibrium e. the equilibrium e is globally asymptotically stable [ ], and it is interesting to note that, since demography has been neglected, the disease will persist in the population due to the reservoir of partially susceptible individuals in the compartment r. from a mathematical point of view, the threshold r0 = rσ, where rσ = 1/σ, is a bifurcation value for the reduced model; this does not happen for the model with demography: in fact, when demography is included, the endemic equilibrium exists for r0 > 1, where r0 = β/(μ + γ), and therefore both below and above the reinfection threshold.
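before adding vaccination and information, it may help to see the reduced model in action. the following is a minimal simulation sketch; it is not code from the paper, the parameter values are purely illustrative, and the right-hand side is the standard reduced siri formulation (primary infections βsi, reinfections σβri, recoveries γi).

```python
# Illustrative sketch of the reduced SIRI model (no demography, no vaccination).
# Parameter values are NOT the paper's estimates; they only illustrate the
# regimes governed by R0 = beta/gamma and the reinfection threshold 1/sigma.
from scipy.integrate import solve_ivp

def reduced_siri(t, y, beta, gamma, sigma):
    s, i, r = y
    primary = beta * s * i            # new infections of naive susceptibles
    secondary = sigma * beta * r * i  # reinfections of recovered individuals
    return [-primary,
            primary + secondary - gamma * i,
            gamma * i - secondary]

beta, gamma, sigma = 0.5, 0.2, 0.5    # R0 = 2.5, reinfection threshold 1/sigma = 2
y0 = [0.999, 0.001, 0.0]              # small initial infectious fraction
sol = solve_ivp(reduced_siri, (0.0, 730.0), y0,
                args=(beta, gamma, sigma), max_step=1.0)

print(f"R0 = {beta / gamma:.2f}, 1/sigma = {1 / sigma:.2f}")
print(f"infectious fraction after two years: {sol.y[1, -1]:.3f}")
# Since R0 > 1/sigma here, i(t) settles near the endemic level
# 1 - gamma/(sigma*beta) = 0.2 instead of decaying to zero; rerunning with
# sigma = 0.1 (below the threshold) shows the epidemic dying out.
```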
the reduced and the full siri models are simple models able to describe the time evolution of the epidemic spread on a short time-scale; however, they do not take into account possible control measures. the simplest one to consider is vaccination. we consider the scenario where the vaccination is assumed to be fully voluntary. in order to emphasize the role of reinfection, we assume that only susceptible individuals (i.e. individuals who did not experience the infection) consider this protective option. when the vaccine is perfect (i.e. it is an ideal vaccine which confers 100 percent life-long immunity), one gets an extended model in which v denotes the fraction of vaccinated individuals and ϕ is the vaccination rate. in the next section we will modify the siri model to assess how a hypothetical vaccine could control the outbreak, taking into account the behavioral changes of individuals produced by the information available on the status of the disease in the community. we modify the siri model by employing the idea of information-dependent epidemic models [ , ]. we assume that the vaccination is fully voluntary and information-dependent, in the sense that the choice to get vaccinated or not depends on the available information and rumors concerning the spread of the disease in the community. the information is mathematically represented by an information index m(t), which summarizes the information about the current and past values of the disease and is given by a distributed delay [ , , ]. here, the function g̃ describes the information that individuals consider to be relevant for making their choice to vaccinate or not to vaccinate. it is often assumed that g̃ depends only on prevalence [ , , , ], i.e. g̃ = g(i), where g is a continuous, differentiable, increasing function such that g(0) = 0. in particular, we assume a form in which the parameter k is the information coverage; k may be seen as a 'summary' of two opposite phenomena, the disease under-reporting and the level of media coverage of the status of the disease, which tends to amplify the social alarm. the range of variability of k may be restricted to the interval (0, 1) (see [ ]). the delay kernel k(t) is a positive function whose integral over [0, +∞) equals 1 and which represents the weight given to the past history of the disease. we assume that the kernel is given by the first element erl1,a(t) of the erlangian family, called the weak kernel or exponentially fading memory. this means that the maximum weight is assigned to the current information and the delay is centered at the average 1/a. therefore, the parameter a takes the meaning of the inverse of the average time delay of the collected information on the disease. with this choice, by applying the linear chain trick [ ], the dynamics of m is ruled by a single ordinary differential equation (restated in the block below). we couple this equation with the siri model. the coupling is realized through an information-dependent vaccination rate composed of a constant ϕ0 ∈ (0, 1), representing the fraction of the population that chooses to get vaccinated regardless of rumors and information about the status of the disease in the population, and a component ϕ1(m(t)), representing the fraction of the population whose vaccination choice is influenced by the information. generally speaking, we require that ϕ1(0) = 0 and that ϕ1 is a continuous, differentiable and increasing function.
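the block below restates, in one place, the information-index machinery described in the prose; the linear dependence g(i) = ki on prevalence is an assumption consistent with the description of k as the information coverage, and the formulas should be checked against the original paper.

```latex
% Hedged restatement of the information-index equations described in the text;
% notation follows the prose (k: information coverage, a: inverse average delay).
\begin{align}
  M(t) &= \int_{0}^{+\infty} g\bigl(i(t-\tau)\bigr)\,K(\tau)\,\mathrm{d}\tau,
  \qquad g(i) = k\,i, \quad 0 < k \le 1, \\
  K(\tau) &= \mathrm{Erl}_{1,a}(\tau) = a\,e^{-a\tau},
  \qquad \int_{0}^{+\infty} K(\tau)\,\mathrm{d}\tau = 1, \\
  \dot{M} &= a\,\bigl(k\,i - M\bigr) \qquad \text{(linear chain trick)}, \\
  \varphi(M) &= \varphi_{0} + \varphi_{1}(M),
  \qquad \varphi_{1}(0) = 0,\ \varphi_{1}\ \text{increasing}.
\end{align}
```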
however, as done in [ , ], we take ϕ1 to be an increasing, saturating function of m depending on two parameters, ε > 0 and d > 0. this parametrization leads to an overall coverage of 1 − ε (asymptotically for m → ∞). here we take ε = . , which means a roof of % in vaccine uptake under circumstances of high perceived risk. we also take d = [ ]. note that this choice of parameter values implies that a . % vaccination coverage is obtained in correspondence of an information index m = . (see fig. ). finally, we assume that the vaccine is not perfect, which is a more realistic hypothesis, so that vaccinated individuals may be infected, but with a reduced susceptibility ψ. the siri epidemic model with information-dependent vaccination that we consider is therefore obtained by coupling all of these ingredients (a sketch of the resulting system appears below); the meaning of the state variables and of the parameters, together with their baseline values, is given in table . let us introduce the quantity p, the basic reproduction number of the model [ ]. from the equation for the infectious fraction it easily follows that, assuming i(0) > 0 and r(0) = v(0) = 0 (and therefore s(0) < 1), the initial behavior of the epidemic curve is governed by p: if p < 1/s(0), then the epidemic curve initially decays; if p > 1/s(0), the epidemic takes place, since the infectious curve initially grows. from the equation for the susceptibles it can be seen that at equilibrium it must be s = 0; therefore, all the possible equilibria are susceptible-free. since the solutions are clearly bounded, this means that for large time any individual who was initially susceptible has experienced the disease or has been vaccinated. looking for equilibria of the form e = (ĩ, r̃, ṽ, m̃), we first obtain the disease-free equilibria, for which ĩ = 0: there are infinitely many of them, differing in how the population is distributed between the recovered and the vaccinated compartments. endemic equilibrium: we then look for equilibria with ĩ > 0. this implies that ṽ = 0, and it follows that a unique susceptibles-free endemic equilibrium e exists, provided that the reduced susceptibility σ exceeds a critical quantity σc, the reinfection threshold. when σ > σc the disease may spread and persist inside the community where the individuals live. note that in classical sir models the presence of an endemic state is due to the replenishment of susceptibles ensured by demography [ ], which is not the case here. the local stability analysis of e requires the jacobian matrix of the system: taking into account that ṽ = 0, some eigenvalues can be computed explicitly, and the remaining ones are the eigenvalues of a submatrix whose trace is negative and whose determinant is positive, so that e is locally asymptotically stable. two remarks are in order. (i) the stable endemic state e can be realized thanks to the imperfection of the vaccine, in the sense that when ψ = 0 the variable v is always increasing. (ii) the information index, in the form described above, may be responsible for the onset of sustained oscillations in epidemic models, both in the case of delayed information (see e.g. [ , , , ]) and of instantaneous information (as happens when the latency time is included in the model [ ]). in all these cases, the epidemic spread is considered on a long time-scale and demography is taken into account. the analysis in this section clearly shows that sustained oscillations are not possible for the short time-scale siri model with information.
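to make the structure of the full model concrete, the following sketch assembles the system from the prose description: only susceptibles are vaccinated, vaccinated individuals have reduced susceptibility ψ, recovered individuals have reduced susceptibility σ, and the information index follows the weak-kernel equation restated earlier. the saturating form chosen for ϕ1 and every numerical value are illustrative assumptions, not the paper's estimates.

```python
# Sketch of the information-dependent SIRI vaccination model assembled from the
# prose description; phi1's saturating form and all parameter values are
# illustrative assumptions, not the paper's estimates.
from scipy.integrate import solve_ivp

def phi(m, phi0=0.01, eps=0.1, d=500.0):
    """Total vaccination rate: constant uptake phi0 plus an information-driven,
    saturating component; overall coverage tends to 1 - eps as m grows."""
    return phi0 + (1.0 - eps - phi0) * d * m / (1.0 + d * m)

def siri_info(t, y, beta, gamma, sigma, psi, k, a):
    s, i, r, v, m = y
    force = beta * i                       # force of infection
    ds = -force * s - phi(m) * s           # only susceptibles get vaccinated
    di = force * (s + sigma * r + psi * v) - gamma * i
    dr = gamma * i - sigma * force * r
    dv = phi(m) * s - psi * force * v      # imperfect vaccine (psi > 0)
    dm = a * (k * i - m)                   # information index, weak kernel
    return [ds, di, dr, dv, dm]

params = dict(beta=0.5, gamma=0.2, sigma=0.5, psi=0.15, k=0.8, a=1.0 / 3.0)
y0 = [0.99, 0.01, 0.0, 0.0, 0.0]
sol = solve_ivp(siri_info, (0.0, 365.0), y0,
                args=tuple(params.values()), max_step=0.5)

sigma_c = params["gamma"] / params["beta"]  # reinfection threshold 1/P, P = beta/gamma
print(f"reinfection threshold sigma_c = {sigma_c:.2f}")
print(f"final infectious fraction: {sol.y[1, -1]:.4f}")
```

with these illustrative values, increasing the information coverage k or shortening the average information delay 1/a strengthens the behavioral response, which is the qualitative effect explored in the numerical experiments that follow.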
the estimation, based on the use of a seir metapopulation model of infection within chinese cities, revealed that the transmission rate within the city of wuhan, the epicenter of the outbreak, was . day − , and the infectious period was . days (so that γ = . day − ). therefore the brn given in ( ) is p = . (of course, in agreement with the estimate in [ ] ), and the value σ c := /p = . is the threshold for the infection rate. for vaccinated individuals, the relative susceptibility (compared to an unvaccinated individuals) is set ψ = . , which means that vaccine administration reduces the transmission rate by % (vaccine efficacy = . ). this value falls within the estimates for the most common vaccine used in the usa, where vaccine efficacy ranges between . and . (see table . , p. , in [ ] ). as for the relative susceptibility of recovered individuals, we consider two relevant baseline cases: (i) case i: σ = . . this value is representative of a reinfection value below the reinfection threshold σ c ; (i) case ii: σ = . . this value is representative of a reinfection value above the reinfection threshold σ c . the information parameter values are mainly guessed or taken from papers where the information dependent vaccination is used [ , ] . the information coverage k ranges from a minimum of . (i.e. the public is aware of % of the prevalence) to . the average time delay of information ranges from the hypothetical case of immediate information (t = ) to a delay of days. the description and baseline values of the parameters are presented in table . the initial data reflect a scenario in which a small portion of infectious is present in the community at time of administrating the vaccine. furthermore, coherently with the initial data mentioned in sect. , we assume that: and, clearly, s( ) = − i ( ). according to the analysis made in sect. , values of σ below the threshold σ c implies that the epidemic will eventually die out. when σ is above σ c , then the disease is sustained endemically by reinfection. this behavior is illustrated in fig. , where it is considered the worst possible scenario, where k = . and t = days. in fig. , left panel, the continuous line is obtained for σ = . . vaccination is not able to influence the outbreak, due to the large delay. however, even though an epidemic peak occurs after three weeks, thereafter the disease dies out due to the low level of reinfection. the case σ = . is represented by the dotted line. as expected, the reinfection is able to 'restart' the epidemic. the trend (here captured for one year) would be to asymptotically converge to the endemic equilibrium e . the corresponding time evolution of the information index m is shown in fig. , right panel. in particular, in the elimination case (σ = . ), the information index reaches a maximum of . (approx.) which corresponds to a vaccination rate of . % (see fig. ). after that, it declines but, due to memory of past events, the information index is still positive months after the elimination of the disease. the 'social alarm' produced in the case σ = . is somehow represented by the increasing continuous curve in fig. , right panel. at the end of the time frame it is m ≈ . which corresponds to a vaccination rate of %. in summary, a large reinfection rate may produce a large epidemic. however, even in this worst scenario, the feedback produced by behavioral changes due to information may largely affect the outbreak evolution. in fig. 
table model ( ), is given by the quantity: more informed people react and vaccinate and this, in turn, contribute to the elimination of the disease. therefore, a threshold value k c exists above which the disease can be eliminated. an insight on the overall effect of parameter k on the epidemic may be determined by evaluating how it affects the cumulative incidence (ci), i.e. total number of new cases in the time frame [ , t f ]. we also introduce the following index which measures the relative change of cumulative incidence for two different values, say p and p , of a given parameter p over the simulated time frame (in other words, the percentage variation of the cumulative incidence varying p from p ). in fig. (first plot from the left) it is shown the case of a reinfection value σ = . , that is under the reinfection threshold. it can be seen how ci is declining with increasing k. in fig. (second plot from the left) a comparison with the case of low information coverage, k = . , is given: a reduction till % of ci may be reached by increasing the value of k till k = . . when the reinfection value is σ = . (fig. , third and fourth plot), that is above the reinfection threshold, the 'catastrofic case' is represented in correspondence of k = . . this case is quickly recovered by increasing k, as we already know from fig. , because of the threshold value k c , between . and . , which allows to pass from the endemic to no-endemic asymptotic state. then, again table ci is declining with increasing k. this means that when reinfection is high, the effect of information coverage is even more important. in fact, in this case the prevalence is high and a high value of k result in a greater behavioral response by the population. in fig. it is shown the influence of the information delay t on ci. in the case σ = . ci grows concavely with t (first plot from the left). in fig. (second plot) a comparison with the case of maximum information delay, t = days, is given: a reduction till % of ci may be reached by reducing the value of t till to very few days. when the reinfection value is σ = . (fig. , third and fourth plot), that is above the reinfection threshold, ci increases convexly with t . a stronger decreasing effect on ci can be seen by reducing the delay from t = days to t ≈ , and a reduction till % of ci may be reached by reducing the value of t till to very few days. in this paper we have investigated how a hypothetical vaccine could affect a coronavirus epidemic, taking into account of the behavioral changes of individuals in response to the information about the disease prevalence. we have first considered a basic siri model. such a model contains the specific feature of reinfection, which is typical of coronaviruses. reinfection may allow the disease to persist even when the time-scale of the outbreak is small enough to neglect the demography (births and natural death). then, we have implemented the siri model to take into account of: (i) an available vaccine to be administrated on voluntary basis to susceptibles; (ii) the change in the behavioral vaccination in response to information on the status of the disease. we have seen that the disease burden, expressed through the cumulative incidence, may be significantly reduced when the information coverage is high enough and/or the information delay is short. when the reinfection rate is above the critical value, a relevant role is played by recovered individuals. 
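the cumulative incidence and the percentage-variation index described above can be computed directly from a simulated trajectory. the sketch below reuses the illustrative model of the earlier snippet (with the vaccination function redefined for self-containment) and interprets the index as the percentage change of ci when a parameter moves from p1 to p2, relative to the ci at p1; this reading, like all parameter values, is an assumption rather than the paper's exact formula.

```python
# Hedged sketch: cumulative incidence over [0, tf] and the percentage variation
# of CI between two values of a parameter (here the information coverage k).
from scipy.integrate import solve_ivp

def phi(m, phi0=0.01, eps=0.1, d=500.0):
    """Illustrative saturating vaccination rate, as in the previous sketch."""
    return phi0 + (1.0 - eps - phi0) * d * m / (1.0 + d * m)

def cumulative_incidence(k, tf=365.0, beta=0.5, gamma=0.2, sigma=0.5,
                         psi=0.15, a=1.0 / 3.0):
    """Integrate new infections beta*i*(s + sigma*r + psi*v) over [0, tf]."""
    def rhs(t, y):
        s, i, r, v, m, ci = y
        force = beta * i
        new_cases = force * (s + sigma * r + psi * v)
        vacc = phi(m)
        return [-force * s - vacc * s,
                new_cases - gamma * i,
                gamma * i - sigma * force * r,
                vacc * s - psi * force * v,
                a * (k * i - m),
                new_cases]                 # ci accumulates all new infections
    y0 = [0.99, 0.01, 0.0, 0.0, 0.0, 0.0]
    sol = solve_ivp(rhs, (0.0, tf), y0, max_step=0.5)
    return sol.y[-1, -1]

def relative_change(ci_p1, ci_p2):
    """Percentage variation of CI when the parameter moves from p1 to p2."""
    return 100.0 * (ci_p2 - ci_p1) / ci_p1

ci_low, ci_high = cumulative_incidence(k=0.2), cumulative_incidence(k=0.9)
print(f"CI(k=0.2) = {ci_low:.3f}, CI(k=0.9) = {ci_high:.3f}, "
      f"change = {relative_change(ci_low, ci_high):.1f}%")
```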
this compartment offers a reservoir of susceptibles (although with a reduced level of susceptibility) and if not vaccinate may contribute to the re-emergence of the disease. on the other hand, in this case a correct and quick information may play an even more important role since the social alarm produced by high level of prevalence results, in turn, in high level of vaccination rate and eventually in the reduction or elimination of the disease. the model on which this investigation is based is intriguing since partial immunity coupled to short-time epidemic behavior may lead to not trivial epidemic dynamics (see the 'delayed epidemic' case, where an epidemics initially may decrease to take off later [ ] ). however, it has many limitations in representing the covid- propagation. for example, the model represents the epidemics in a closed community over a relatively short time-interval and therefore it is unable to capture the complexity of global mobility, which is one of the main concerns related to covid- propagation. another limitation, which is again related to the global aspects of epidemics like sars and covid- , is that we assume that individuals are influenced by information on the status of the prevalence within the community where they live (i.e. the fraction i is part of the total population) whereas local communities may be strongly influenced also by information regarding far away communities, which are perceived as potential threats because of global mobility. moreover, in absence of treatment and vaccine, local authorities face with coronavirus outbreak using social distancing measures, that are not considered here: individuals are forced to be quarantined or hospitalized. nevertheless, contact pattern may be reduced also as response to information on the status of the disease. in this case the model could be modified to include an information-dependent contact rate, as in [ , ] . finally, the model does not include the latency time and the diseaseinduced mortality is also neglected (at the moment, the estimate for covid- is at around %). these aspects will be part of future investigations. mascherine sold out in farmacie roma data-based analysis, modelling and forecasting of the novel coronavirus ( -ncov) outbreak. medrxiv preprint infectious diseases of humans. dynamics and control mathematical epidemiology oscillations and hysteresis in an epidemic model with informationdependent imperfect vaccination global stability of an sir epidemic model with information dependent vaccination globally stable endemicity for infectious diseases with information-related changes in contact patterns modeling of pseudo-rational exemption to vaccination for seir diseases the time course of the immune response to experimental coronavirus infection of man mathematical structures of epidemic systems a new transmission route for the propagation of the sars-cov- coronavirus information-related changes in contact patterns may trigger oscillations in the endemic prevalence of infectious diseases bistable endemic states in a susceptible-infectious-susceptible model with behavior-dependent vaccination vaccinating behaviour, information, and the dynamics of sir vaccine preventable diseases. 
theor bifurcation thresholds in an sir model with informationdependent vaccination fatal sir diseases and rational exemption to vaccination the impact of vaccine side effects on the natural history of immunization programmes: an imitation-game approach infection, reinfection, and vaccination under suboptimal immune protection: epidemiological perspectives epidemiology of coronavirus respiratory infections epidemics with partial immunity to reinfection modeling infectious diseases in humans and animals nonlinear dynamics of infectious diseases via informationinduced vaccination and saturated treatment modeling the interplay between human behavior and the spread of infectious diseases an introduction to mathematical epidemiology bistability of evolutionary stable vaccination strategies in the reinfection siri model biological delay systems: linear stability theory the change of susceptibility following infection can induce failure to predict outbreak potential for r novel coronavirus -ncov: early estimation of epidemiological parameters and epidemic predictions world health organization: cumulative number of reported probable cases of sars middle east respiratory syndrome coronavirus world health organization: novel coronavirus world health organization: director-general's remarks at the media briefing on -ncov on statistical physics of vaccination duration of antibody responses after severe acute respiratory syndrome stability and bifurcation analysis on a delayed epidemic model with information-dependent vaccination publisher's note springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations conflict of interest the author states that there is no conflict of interest. key: cord- - fh mk authors: yasnoff, william a.; o'carroll, patrick w.; friede, andrew title: public health informatics and the health information infrastructure date: journal: biomedical informatics doi: . / - - - _ sha: doc_id: cord_uid: fh mk what are the three core functions of public health, and how do they help shape the different foci of public health and medicine? what are the current and potential effects of a) the genomics revolution; and b) / on public health informatics? what were the political, organizational, epidemiological, and technical issues that influenced the development of immunization registries? how do registries promote public health, and how can this model be expanded to other domains (be specific about those domains) ? how might it fail in others?why? what is the vision and purpose of the national health information infrastructure? what kinds of impacts will it have, and in what time periods? why don’t we have one already? what are the political and technical barriers to its implementation? what are the characteristics of any evaluation process that would be used to judge demonstration projects? biomedical informatics includes a wide range of disciplines that span information from the molecular to the population level. this chapter is primarily focused on the population level, which includes informatics applied to public health and to the entire health care system (health information infrastructure). population-level informatics has its own special problems, issues, and considerations. 
creating information systems at the population level has always been very difficult because of the large number of data elements and individuals that must be included, as well as the need to address data and information issues that affect health in the aggregate (e.g., environmental determinants of health). with faster and cheaper hardware and radically improved software tools, it has become financially and technically feasible to create information systems that will provide the information about individuals and populations necessary for optimized decision-making in medical care and public health. however, much work remains to fully achieve this goal. this chapter deals with public health informatics primarily as it relates to the medical care of populations. however, it should be emphasized that the domain of public health informatics is not limited to the medical care environment. for example, information technology is being applied to automatically detect threats to health from the food supply, water systems, and even driving conditions (such as obstacles on the roadway beyond the reach of visible headlight beams), and to assist in man-made or natural disaster management. monitoring the environment for health risks due to biological, chemical, and radiation exposures (natural and made-made) is of increasing concern to protecting the public's health. for example, systems are now being developed and deployed to rapidly detect airborne bioterror agents. although they do not directly relate to medical care, these applications designed to protect human health should properly be considered within the domain of public health informatics. public health informatics has been defined as the systematic application of information and computer science and technology to public health practice, research, and learning (friede et al., ; yasnoff et al., ) . public health informatics is distinguished by its focus on populations (versus the individual), its orientation to prevention (rather than diagnosis and treatment), and its governmental context, because public health nearly always involves government agencies. it is a large and complex area that is the focus of another entire textbook in this series (o'carroll et al., ) . the differences between public health informatics and other informatics specialty areas relate to the contrast between public health and medical care itself (friede & o'carroll, ; yasnoff et al., ) . public health focuses on the health of the community, as opposed to that of the individual patient. in the medical care system, individuals with specific diseases or conditions are the primary concern. in public health, issues related to the community as the patient may require "treatment" such as disclosure of the disease status of an individual to prevent further spread of illness or even quarantining some individuals to protect others. environmental factors, especially ones that that affect the health of populations over the long term (e.g. air quality), are also a special focus of the public health domain. public health places a large emphasis on the prevention of disease and injury versus intervention after the problem has already occurred. to the extent that traditional medical care involves prevention, its focus is primarily on delivery of preventive services to individual patients. public health actions are not limited to the clinical encounter. 
in public health, the nature of a given intervention is not predetermined by professional discipline, but rather by the cost, expediency, and social acceptability of intervening at any potentially effective point in the series of events leading to disease, injury, or disability. public health interventions have included (for example) wastewater treatment and solid waste disposal systems, housing and building codes, fluoridation of municipal water supplies, removal of lead from gasoline, and smoke alarms. contrast this with the modern healthcare system, which generally accomplishes its mission through medical and surgical encounters. public health also generally operates directly or indirectly through government agencies that must be responsive to legislative, regulatory, and policy directives, carefully balance competing priorities, and openly disclose their activities. in addition, certain public health actions involve authority for specific (sometimes coercive) measures to protect the community in an emergency. examples include closing a contaminated pond or a restaurant that fails inspection. community partners to provide such care. though there is great variation across jurisdictions, the fundamental assurance function is unchanged: to assure that all members of the community have adequate access to needed services. the assurance function is not limited to access to clinical care. rather, it refers to assurance of the conditions that allow people to be healthy and free from avoidable threats to health-which includes access to clean water, a safe food supply, well-lighted streets, responsive and effective public safety entities, and so forth. this "core functions" framework has proven to be highly useful in clarifying the fundamental, over-arching responsibilities of public health. but if the core functions describe what public health is for, a more detailed and grounded delineation was needed to describe what public health agencies do. to meet this need, a set of ten essential public health services (table . ) was developed through national and state level deliberations of public health providers and consumers (department of health and human services (dhhs), ). it is through these ten services that public health carries out its mission to assure the conditions in which people can be healthy. the core function of assessment, and several of the essential public health services rely heavily on public health surveillance, one of the oldest systematic activities of the public health sector. surveillance in the public health context refers to the ongoing collection, analysis, interpretation, and dissemination of data on health conditions (e.g., breast cancer) and threats to health (e.g., smoking prevalence). surveillance data represent one of the fundamental means by which priorities for public health action are set. surveillance data are useful not only in the short term (e.g., in surveillance for acute infectious diseases such as influenza, measles, and hiv/aids), but also in the longer term, e.g., in determining leading causes of premature death, injury, or disability. in either case, what distinguishes surveillance is that the data are collected for the purposes of action-either to guide a public health response (e.g., an outbreak investigation, or mitigation of a threat to a food or water source) or to help direct public health policy. a recent example of the latter is the surveillance data showing the dramatic rise in obesity in the united states. 
a tremendous amount of energy and public focus has been brought to bear on this problem, including a major dhhs program (the healthierus initiative), driven largely by compelling surveillance data.

table . the ten essential public health services:
1. monitor the health status of individuals in the community to identify community health problems
2. diagnose and investigate community health problems and community health hazards
3. inform, educate, and empower the community with respect to health issues
4. mobilize community partnerships in identifying and solving community health problems
5. develop policies and plans that support individual and community efforts to improve health
6. enforce laws and rules that protect the public health and ensure safety in accordance with those laws and rules
7. link individuals who have a need for community and personal health services to appropriate community and private providers
8. ensure a competent workforce for the provision of essential public health services
9. research new insights and innovative solutions to community health problems
10. evaluate the effectiveness, accessibility, and quality of personal and population-based health services in a community

the fundamental science of public health is epidemiology, which is the study of the prevalence and determinants of disability and disease in populations. hence, most public health information systems have focused on information about aggregate populations. almost all medical information systems focus almost exclusively on identifying information about individuals. for example, almost any clinical laboratory system can quickly find jane smith's culture results. what public health practitioners want to know is the time trend of antibiotic resistance for the population that the clinic serves, or the trend for the population that the clinic actually covers. most health care professionals are surprised to learn that there is no uniform national routine reporting, never mind information system, for most diseases, disabilities, risk factors, or prevention activities in the united states. in contrast, france, great britain, denmark, norway and sweden have comprehensive systems in selected areas, such as occupational injuries, infectious diseases, and cancer; no country, however, has complete reporting for every problem. in fact, it is only births, deaths, and, to a lesser extent, fetal deaths that are uniformly and relatively completely reported in the united states by the national vital statistics system, operated by the states and the centers for disease control and prevention (cdc). if you have an angioplasty and survive, nobody at the state or federal level necessarily knows. public health information systems have been designed with special features. for example, they are optimized for retrieval from very large (multi-million) record databases, and to be able to quickly cross-tabulate, study secular trends, and look for patterns. the use of personal identifiers in these systems is very limited, and their use is generally restricted to linking data from different sources (e.g., data from a state laboratory and a disease surveillance form).
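to make this population orientation concrete, the following is a minimal python sketch, using entirely synthetic records and invented field names, of the kind of query such systems are built for: cross-tabulating a set of culture results by year and region and summarizing the secular trend in antibiotic resistance, rather than looking up any one patient.

from collections import Counter

# each record is a (year, region, resistant) tuple; a real surveillance
# database would hold millions of such rows. the data here are synthetic.
records = [
    (2001, "north", True), (2001, "north", False), (2001, "south", False),
    (2002, "north", True), (2002, "south", True), (2002, "south", False),
    (2003, "north", True), (2003, "north", True), (2003, "south", True),
]

# cross-tabulate isolates and resistant isolates by (year, region)
totals = Counter((year, region) for year, region, _ in records)
resistant = Counter((year, region) for year, region, r in records if r)

# secular trend: percent resistant per year, aggregated over all regions
for year in sorted({y for y, _, _ in records}):
    n = sum(v for (y, _), v in totals.items() if y == year)
    k = sum(v for (y, _), v in resistant.items() if y == year)
    print(f"{year}: {k}/{n} isolates resistant ({100.0 * k / n:.0f}%)")

a production system would run the same kind of tabulation over millions of records and many more dimensions, which is why these databases are organized around fast aggregate retrieval rather than individual lookup.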
a few examples of these kinds of population-focused systems include cdc systems such as the hiv/aids reporting system, which collects millions of observations concerning people infected with the human immunodeficiency virus (hiv) and those diagnosed with acquired immunodeficiency syndrome (aids) and is used to conduct dozens of studies (and which does not collect personal identifiers; individuals are tracked by pseudo-identifiers); and the national notifiable disease surveillance system, which state epidemiologists use to report some diseases (the exact number varies as conditions wax and wane) every week to the cdc (and which makes up the center tables in the morbidity and mortality weekly report [mmwr]). the cdc wonder system (friede et al., ), which contains tens of millions of observations drawn from some databases, explicitly blanks cells with fewer than three to five observations (depending on the dataset), specifically to prevent individuals with unusual characteristics from being identified.

if there is no national individual reporting, how are estimates obtained for, say, the trends in teenage smoking or in the incidence of breast cancer? how are epidemics found? data from periodic surveys and special studies, surveillance systems, and disease registries are handled by numerous stand-alone information systems. these systems, usually managed by state health departments and federal health agencies (largely the cdc) or their agents, provide periodic estimates of the incidence and prevalence of diseases and of certain risk factors (for example, smoking and obesity); however, because the data are from population samples, it is usually impossible to obtain estimates at a level of geographic detail finer than a region or state. moreover, many of the behavioral indices are patient self-reported (although extensive validation studies have shown that they are good for trends and sometimes are more reliable than are data obtained from clinical systems). in the case of special surveys, such as cdc's national health and nutrition examination survey (nhanes), there is primary data entry into a cdc system. the data are complete, but the survey costs many millions of dollars, is done only every few years, and it takes years for the data to be made available. there are also disease registries that track, often completely, the incidence of certain conditions, especially cancers, birth defects, and conditions associated with environmental contamination. they tend to focus on one topic or to cover certain diseases for specific time periods. the cdc maintains dozens of surveillance systems that attempt to track completely the incidence of many conditions, including lead poisoning, injuries and deaths in the workplace, and birth defects. (some of these systems use samples or cover only certain states or cities). as discussed above, there is also a list of about notifiable diseases (revised every year) that the state epidemiologists and the cdc have determined are of national significance and warrant routine, complete reporting; however, it is up to providers to report the data, and reporting is still often done by telephone or mail, so the data are incomplete. finally, some states do collect hospital discharge summaries, but now that more care is being delivered in the ambulatory setting, these data capture only a small fraction of medical care. they are also notoriously difficult to access. what all these systems have in common is that they rely on special data collection.
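the cell-blanking safeguard just described for cdc wonder can be illustrated with a short sketch; the threshold, table, and counts below are invented, but the rule, suppressing any cell whose count falls below a minimum number of observations before release, is the one described above.

# suppress small cells in a cross-tabulated table before publication.
# the threshold here is hypothetical; as noted above, wonder blanks cells
# with fewer than three to five observations, depending on the dataset.
SUPPRESSION_THRESHOLD = 5

table = {
    ("county a", "age 0-17"): 132,
    ("county a", "age 18+"): 2,    # too few observations; could identify individuals
    ("county b", "age 0-17"): 87,
    ("county b", "age 18+"): 41,
}

published = {
    cell: (count if count >= SUPPRESSION_THRESHOLD else None)  # None means a blanked cell
    for cell, count in table.items()
}

for cell, count in sorted(published.items()):
    print(cell, "suppressed" if count is None else count)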
it is rare that they are seamlessly linked to ongoing clinical information systems. even clinical data such as hospital infections are reentered. why? all these systems grew up at the same time that information systems were being put in hospitals and clinics. hence, there is duplicate data entry, which can result in the data being shallow, delayed, and subject to input error and recall bias. furthermore, the systems themselves are often unpopular with state agencies and health care providers precisely because they require duplicate data entry (a child with lead poisoning and salmonella needs to be entered in two different cdc systems). the national electronic disease surveillance system (nedss) is a major cdc initiative that addresses this issue by promoting the use of data and information system standards to advance the development of efficient, integrated, and interoperable surveillance systems at federal, state and local levels (see www.cdc.gov/nedss). this activity is designed to facilitate the electronic transfer of appropriate information from clinical information systems in the health care industry to public health departments, reduce provider burden in the provision of information, and enhance both the timeliness and quality of information provided.

now that historical and epidemiological forces are making the world smaller and causing lines between medicine and public health to blur, systems will need to be multifunctional, and clinical and public health systems will, of necessity, coalesce. what is needed are systems that can tell us about individuals and the world in which those individuals live. to fill that need, public health and clinical informaticians will need to work closely together to build the tools to study and control new and emerging threats such as bioterror, hiv/aids, sars and its congeners, and the environmental effects of the shrinking ozone layer and greenhouse gases. it can be done. for example, in the late 's, columbia presbyterian medical center and the new york city department of health collaborated on the development of a tuberculosis registry for northern manhattan, and the emory university system of health care and the georgia department of public health built a similar system for tuberculosis monitoring and treatment in atlanta. it is not by chance that these two cities each developed tuberculosis systems; rather, tuberculosis is a perfect example of what was once a public health problem (that affected primarily the poor and underserved) coming into the mainstream population as a result of an emerging infectious disease (aids), immigration, increased international travel, multidrug resistance, and our growing prison population. hence, the changing ecology of disease, coupled with revolutionary changes in how health care is managed and paid for, will necessitate information systems that serve both individual medical and public health needs.

immunization registries are confidential, population-based, computerized information systems that contain data about children and vaccinations (national vaccine advisory committee, ). they represent a good example for illustrating the principles of public health informatics. in addition to their orientation to prevention, they can only function properly through continuing interaction with the health care system. they also must exist in a governmental context because there is little incentive (and significant organizational barriers) for the private sector to maintain such registries.
although immunization registries are among the largest and most complex public health information systems, the successful implementations show conclusively that it is possible to overcome the challenging informatics problems they present. childhood immunizations have been among the most successful public health interventions, resulting in the near elimination of nine vaccine-preventable diseases that historically exacted a major toll in terms of both morbidity and mortality (iom, a). the need for immunization registries stems from the challenge of assuring complete immunization protection for the approximately , children born each day in the united states in the context of three complicating factors: the scattering of immunization records among multiple providers; an immunization schedule that has become increasingly complex as the number of vaccines has grown; and the conundrum that the very success of mass immunization has reduced the incidence of disease, lulling parents and providers into a sense of complacency. the - u.s. measles outbreak, which resulted in , cases and preventable deaths (atkinson et al., ), helped stimulate the public health community to expand the limited earlier efforts to develop immunization registries. because cdc was proscribed by congress from creating a single national immunization registry (due to privacy concerns), the robert wood johnson foundation, in cooperation with several other private foundations, established the all kids count (akc) program that awarded funds to states and communities in to assist in the development of immunization registries. akc funded the best projects through a competitive process, recruited a talented staff to provide technical assistance, and made deliberate efforts to ensure sharing of the lessons learned, such as regular, highly interactive meetings of the grantees. subsequent funding of states by cdc and the woodruff foundation via the information network for public health officials (inpho) project (baker et al., ) was greatly augmented by a presidential commitment to immunization registries announced in (white house, ). this resulted in every state's involvement in registry development. immunization registries must be able to exchange information to ensure that children who relocate receive needed immunizations. to accomplish this, standards were needed to prevent the development of multiple, incompatible immunization transmission formats. beginning in , cdc worked closely with the health level standards development organization (see chapter ) to define hl messages and an implementation guide for immunization record transactions. the initial data standard was approved by hl in and an updated implementation guide was developed in . cdc continues its efforts to encourage the standards-based exchange of immunization records among registries. as more experience accumulated, akc and cdc collaborated to develop an immunization registry development guide (cdc, ) that captured the hard-won lessons developed by dozens of projects over many years. by , a consensus on the needed functions of immunization registries had emerged (table . ), codifying years of experience in refining system requirements. cdc also established a measurement system for tracking progress that periodically assesses the percentage of immunization registries that have operationalized each of the functions (figure . ).
table . functions of immunization registries:
1. electronically store data regarding all national vaccine advisory committee-approved core data elements
2. establish a registry record within weeks of birth for each child born in the catchment area
3. enable access to vaccine information from the registry at the time of the encounter
4. receive and process vaccine information within month of vaccine administration
5. protect the confidentiality of medical information
6. protect the security of medical information
7. exchange vaccination records by using health level standards
8. automatically determine the immunization(s) needed when a person is seen by the health care provider for a scheduled vaccination
9. automatically identify persons due or late for vaccinations to enable the production of reminder and recall notices
10. automatically produce vaccine coverage reports by providers, age groups, and geographic areas
11. produce authorized immunization records
12. promote accuracy and completeness of registry data

the national healthy people objectives include the goal of having % of all u.s. children covered by fully functioning immunization registries (dhhs, ).

the development and implementation of immunization registries present challenging informatics issues in at least four areas: 1) interdisciplinary communication; 2) organizational and collaborative issues; 3) funding and sustainability; and 4) system design. while the specific manifestations of these issues are unique to immunization registries, these four areas represent the typical domains that must be addressed and overcome in public health informatics projects. interdisciplinary communication is a key challenge in any biomedical informatics project; it is certainly not specific to public health informatics. to be useful, a public health information system must accurately represent and enable the complex concepts and processes that underlie the specific business functions required. information systems represent a highly abstract and complex set of data, processes, and interactions. this complexity needs to be discussed, specified, and understood in detail by a variety of personnel with little or no expertise in the terminology and concepts of information technology. therefore, successful immunization registry implementation requires clear communication among public health specialists, immunization specialists, providers, it specialists, and related disciplines, an effort complicated by the lack of a shared vocabulary and differences in the usage of common terms from the various domains. added to these potential communication problems are the anxieties and concerns inherent in the development of any new information system. change is an inevitable part of such a project, and change is uncomfortable for everyone involved. such concerns commonly accompany the implementation of information systems, and in this context tensions and anxieties can further degrade communications. to deal with the communications challenges, particularly between it and public health specialists, it is essential to identify an interlocutor who has familiarity with both information technology and public health. the interlocutor should spend sufficient time in the user environment to develop a deep understanding of the information processing context of both the current and proposed systems. it is also important for individuals from all the disciplines related to the project to have representation in the decision-making processes. the organizational and collaborative issues involved in developing immunization registries are daunting because of the large number and wide variety of partners.
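as a concrete, deliberately simplified illustration of functions 8 and 9 in the table above (determining needed immunizations and identifying children who are due or late), the following python sketch evaluates a single registry record against a made-up three-dose schedule; the intervals, field names, and dose structure are invented for illustration and are not actual acip recommendations, which, as discussed later, are considerably more complex.

from datetime import date, timedelta

# grossly simplified, hypothetical schedule: dose number -> recommended age in days.
# real schedules involve minimum intervals, catch-up rules, contraindications,
# and combination vaccines.
SCHEDULE_DAYS = {1: 60, 2: 120, 3: 180}

def doses_due(birth_date, doses_received, as_of=None):
    """return the dose numbers that are due or late as of the given date."""
    as_of = as_of or date.today()
    age_days = (as_of - birth_date).days
    return [
        dose for dose, rec_age in SCHEDULE_DAYS.items()
        if dose not in doses_received and age_days >= rec_age
    ]

# a synthetic registry record: child born about five months ago, one dose on file
child_birth = date.today() - timedelta(days=150)
print(doses_due(child_birth, doses_received={1}))   # -> [2]: dose 2 due, dose 3 not yet

a registry would run this kind of determination across every record in its catchment area to drive reminder and recall notices, and at the point of care to tell the provider what is needed at today's visit.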
both public and private sector providers and other organizations are likely participants. for the providers, particularly in the private sector, immunization is just one of many concerns. however, it is essential to mobilize private providers to submit immunization information to the registry. in addition to communicating regularly with this group about the goals, plans, and progress of the registry, an invaluable tool to enlist their participation is a technical solution that minimizes their time and expense for registry data entry, while maximizing the benefit in terms of improved information about their patients. it is critical to recognize the constraints of the private provider environment, where income is generated mostly from "piecework" and time is the most precious resource. governance issues are also critical to success. all the key stakeholders need to be represented in the decision-making processes, guided by a mutually acceptable governance mechanism. large information system projects involving multiple partners, such as immunization registries, often require multiple committees to ensure that all parties have a voice in the development process. in particular, all decisions that materially affect a stakeholder should be made in a setting that includes their representation. legislative and regulatory issues must be considered in an informatics context because they impact the likelihood of success of projects. with respect to immunization registries, the specific issues of confidentiality, data submission, and liability are critical. the specific policies with respect to confidentiality must be defined to allow access to those who need it while denying access to others. regulatory or legislative efforts in this domain must also operate within the context of the federal health insurance portability and accountability act (hipaa), which sets national minimum privacy requirements for personal health information. some jurisdictions have enacted regulations requiring providers to submit immunization data to the registry. the effectiveness of such actions on the cooperation of providers must be carefully evaluated. liability of the participating providers and of the registry operation itself may also require legislative and/or regulatory clarification. funding and sustainability are continuing challenges for all immunization registries. in particular, without assurances of ongoing operational funding, it will be difficult to secure the commitments needed for the development work. naturally, an important tool for securing funding is development of a business case that shows the anticipated costs and benefits of the registry. while a substantial amount of information now exists about costs and benefits of immunization registries (horne et al., ), many of the registries that are currently operational had to develop their business cases prior to the availability of good quantitative data. specific benefits associated with registries include preventing duplicative immunizations, eliminating the necessity to review the vaccination records for school and day care entry, and efficiencies in provider offices from the immediate availability of complete immunization history information and patient-specific vaccine schedule recommendations. the careful assessment of costs and benefits of specific immunization registry functions may also be helpful in prioritizing system requirements.
as with all information systems, it is important to distinguish "needs" (those things people will pay for) from "wants" (those things people would like to have but are not willing to spend money on) (rubin, ). information system "needs" are typically supported by a strong business case, whereas "wants" often are not. system design is also an important factor in the success of immunization registries. difficult design issues include data acquisition, database organization, identification and matching of children, generating immunization recommendations, and access to data, particularly for providers. acquiring immunization data is perhaps the most challenging system design issue. within the context of busy pediatric practices (where the majority of childhood immunizations are given), the data acquisition strategy must of necessity be extremely efficient. ideally, information about immunizations would be extracted from existing electronic medical records or from streams of electronic billing data; either strategy should result in no additional work for participating providers. unfortunately, neither of these options is typically available. electronic medical records are currently implemented only in roughly - % of physician practices. while the use of billing records is appealing, it is often difficult to get such records on a timely basis without impinging on their primary function, namely, to generate revenue for the practice. also, data quality, particularly with respect to duplicate records, is often a problem with billing information. a variety of approaches have been used to address this issue, including various forms of direct data entry as well as the use of bar codes (yasnoff, ). database design also must be carefully considered. once the desired functions of an immunization registry are known, the database design must allow efficient implementation of these capabilities. the operational needs for data access and data entry, as well as producing individual assessments of immunization status, often require different approaches to design compared to requirements for population-based immunization assessment, management of vaccine inventory, and generating recall and reminder notices. one particularly important database design decision for immunization registries is whether to represent immunization information by vaccine or by antigen. vaccine-based representations map each available preparation, including those with multiple antigens, into its own specific data element. antigen-based representations translate multi-component vaccines into their individual antigens prior to storage. in some cases, it may be desirable to represent the immunization information both ways. specific consideration of required response times for specific queries must also be factored into key design decisions. identification and matching of individuals within immunization registries is another critical issue. because it is relatively common for a child to receive immunizations from multiple providers, any system must be able to match information from multiple sources to complete an immunization record. in the absence of a national unique patient identifier, most immunization registries will assign an arbitrary number to each child. of course, provisions must be made for the situation where this identification number is lost or unavailable. this requires a matching algorithm, which utilizes multiple items of demographic information to assess the probability that two records are really data from the same person.
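a minimal sketch of the kind of score such a matching algorithm might compute is shown below; the fields, agreement weights, and threshold are hypothetical, and, as the next paragraph notes, real deduplication algorithms are the product of considerable tuning and validation.

# score the likelihood that two demographic records describe the same child by
# summing agreement weights over matching fields. weights and threshold are
# hypothetical; production registries calibrate them empirically.
FIELD_WEIGHTS = {"last_name": 4.0, "first_name": 3.0, "birth_date": 5.0, "zip": 1.5}
MATCH_THRESHOLD = 9.0

def match_score(rec_a, rec_b):
    return sum(
        weight
        for field, weight in FIELD_WEIGHTS.items()
        if rec_a.get(field) and rec_a.get(field) == rec_b.get(field)
    )

a = {"last_name": "smith", "first_name": "jane", "birth_date": "2004-03-01", "zip": "46202"}
b = {"last_name": "smith", "first_name": "jane", "birth_date": "2004-03-01", "zip": "46204"}

score = match_score(a, b)
print(score, "probable match" if score >= MATCH_THRESHOLD else "no match")   # -> 12.0 probable match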
development of such algorithms and optimization of their parameters have been the subject of active investigation in the context of immunization registries, particularly with respect to deduplication (miller et al., ). another critical design issue is generating vaccine recommendations from a child's prior immunization history, based on guidance from the cdc's advisory committee on immunization practices (acip). as more childhood vaccines have become available, both individually and in various combinations, the immunization schedule has become increasingly complex, especially if any delays occur in receiving doses, a child has a contraindication, or local issues require special consideration. the language used in the written guidelines is sometimes incomplete, not covering every potential situation. in addition, there is often some ambiguity with respect to definitions, e.g., for ages and intervals, making implementation of decision support systems problematic. considering that the recommendations are updated relatively frequently, sometimes several times each year, maintaining software that produces accurate immunization recommendations is a continuing challenge. accordingly, the implementation, testing, and maintenance of decision support systems to produce vaccine recommendations has been the subject of extensive study (yasnoff & miller, ). finally, easy access to the information in an immunization registry is essential. while this may initially seem to be a relatively simple problem, it is complicated by private providers' lack of high-speed connectivity. even if a provider office has the capability for internet access, for example, it may not be immediately available at all times, particularly in the examination room. immunization registries have developed alternative data access methods such as fax-back and telephone query to address this problem. since the primary benefit of the registry to providers is manifest in rapid access to the data, this issue must be addressed. ready access to immunization registry information is a powerful incentive to providers for entering the data from their practice.

in the united states, the first major report calling for a health information infrastructure was issued by the institute of medicine of the national academy of sciences in (iom, ). this report, "the computer-based patient record," was the first in a series of national expert panel reports recommending transformation of the health care system from reliance on paper to electronic information management. in response to the iom report, the computer-based patient record institute (cpri), a private not-for-profit corporation, was formed for the purpose of facilitating the transition to computer-based records. a number of community health information networks (chins) were established around the country in an effort to coalesce the multiple community stakeholders in common efforts towards electronic information exchange. the institute of medicine updated its original report in (iom, ), again emphasizing the urgency to apply information technology to the information-intensive field of health care. however, most of the community health information networks were not successful. perhaps the primary reason for this was that the standards and technology were not yet ready for cost-effective community-based electronic health information exchange.
another problem was the focus on availability of aggregated health information for secondary users (e.g., policy development), rather than individual information for the direct provision of patient care. also, there was neither a sense of extreme urgency nor were there substantial funds available to pursue these endeavors. however, at least one community, indianapolis, continued to move forward throughout this period and has now emerged as a national example of the application of information technology to health care both in individual health care settings and throughout the community. the year brought widespread attention to this issue with the iom report "to err is human" (iom, b). in this landmark study, the iom documented the accumulating evidence of the high error rate in the medical care system, including an estimated , to , preventable deaths each year in hospitals alone. this report has proven to be a milestone in terms of public awareness of the consequences of paper-based information management in health care. along with the follow-up report, "crossing the quality chasm" (iom, ), the systematic inability of the health care system to operate at a high degree of reliability has been thoroughly elucidated. the report clearly placed the blame on the system, not the dedicated health care professionals who work in an environment without effective tools to promote quality and minimize errors. several additional national expert panel reports have emphasized the iom findings. in , the president's information technology advisory committee (pitac) issued a report entitled "transforming health care through information technology" (pitac, ). that same year, the computer science and telecommunications board of the national research council (nrc) released "networking health: prescriptions for the internet" (nrc, ), which emphasized the potential for using the internet to improve electronic exchange of health care information. finally, the national committee on vital and health statistics (ncvhs) outlined the vision and strategy for building a national health information infrastructure (nhii) in its report, "information for health" (ncvhs, ). ncvhs, a statutory advisory body to dhhs, indicated that federal government leadership was needed to facilitate further development of an nhii. on top of this bevy of national expert panel reports, there has been continuing attention in both scientific and lay publications to cost, quality, and error issues in the health care system. the anthrax attacks of late further sensitized the nation to the need for greatly improved disease detection and emergency medical response capabilities. what has followed has been the largest-ever investment in public health information infrastructure in the history of the united states. some local areas, such as indianapolis and pittsburgh, have begun to actively utilize electronic information from the health care system for early detection of bioterrorism and other disease outbreaks. in , separate large national conferences were devoted to both the cdc's public health information network (phin) (cdc, ) and the dhhs nhii initiative (dhhs, ; yasnoff et al., ). while the discussion here has focused on the development of nhii in the united states, many other countries are involved in similar activities and in fact have progressed further along this road. canada, australia, and a number of european nations have devoted considerable time and resources to their own national health information infrastructures.
the united kingdom, for example, has announced its intention to allocate several billion pounds over the next few years to substantially upgrade its health information system capabilities. it should be noted, however, that all of these nations have centralized, government-controlled health care systems. this organizational difference from the multifaceted, mainly private health care system in the u.s. results in a somewhat different set of issues and problems. hopefully, the lessons learned from health information infrastructure development activities across the globe can be effectively shared to ease the difficulties of everyone who is working toward these important goals.

the vision of the national health information infrastructure is anytime, anywhere health care information at the point of care. the intent is to create a distributed system, not a centralized national database. patient information would be collected and stored at each care site. when a patient presented for care, the various existing electronic records would be located, collected, integrated, and immediately delivered to allow the provider to have complete and current information upon which to base clinical decisions. in addition, clinical decision support (see chapter ) would be integrated with information delivery. in this way, clinicians could receive reminders of the most recent clinical guidelines and research results during the patient care process, thereby avoiding the need for superhuman memory capabilities to assure the effective practice of medicine. the potential benefits of nhii are both numerous and substantial. perhaps most important are error reduction and improved quality of care. numerous studies have shown that the complexity of present-day medical care results in very frequent errors of both omission and commission. this problem was clearly articulated at the meeting of the institute of medicine: "current practice depends upon the clinical decision making capacity and reliability of autonomous individual practitioners, for classes of problems that routinely exceed the bounds of unaided human cognition" (masys, ). electronic health information systems can contribute significantly to improving this problem by reminding practitioners about recommended actions at the point of care. this can include both notifications of actions that may have been missed, as well as warnings about planned treatments or procedures that may be harmful or unnecessary. literally dozens of research studies have shown that such reminders improve safety and reduce costs (kass, ; bates, ). in one such study (bates et al., ), medication errors were reduced by %. a more recent study by the rand corporation showed that only % of u.s. adults were receiving recommended care (mcglynn et al., ). the same techniques used to reduce medical errors with electronic health information systems also contribute substantially to ensuring that recommended care is provided. this is becoming increasingly important as the population ages and the prevalence of chronic disease increases. guidelines and reminders also can improve the effectiveness of dissemination of new research results. at present, widespread application of new research in the clinical setting takes an average of years (balas & boren, ). patient-specific reminders delivered at the point of care highlighting important new research results could substantially increase the adoption rate.
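to make the reminder idea concrete, here is a deliberately simple python sketch of one point-of-care rule; the rule itself (flagging an overdue laboratory test for patients with a particular diagnosis), the 180-day interval, and the record fields are hypothetical illustrations rather than an actual clinical guideline, and real decision support draws on far richer patient context.

from datetime import date

# hypothetical rule: flag patients with a diabetes diagnosis whose most recent
# hba1c test is more than REMINDER_INTERVAL_DAYS old (or missing entirely).
REMINDER_INTERVAL_DAYS = 180

def hba1c_reminder(patient, as_of):
    if "diabetes" not in patient["problems"]:
        return None
    last = patient.get("last_hba1c")
    if last is None or (as_of - last).days > REMINDER_INTERVAL_DAYS:
        return "hba1c overdue: consider ordering at today's visit"
    return None

patient = {"problems": {"diabetes", "hypertension"}, "last_hba1c": date(2003, 1, 10)}
print(hba1c_reminder(patient, as_of=date(2003, 9, 1)))   # -> reminder text

in an nhii setting, the point is that rules like this could be evaluated against a complete, integrated record at the moment of care rather than against whatever fragment of the chart happens to be on hand.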
another important contribution of nhii to the research domain is improving the efficiency of clinical trials. at present, most clinical trials require creation of a unique information infrastructure to ensure protocol compliance and collect essential research data. with nhii, where every practitioner would have access to a fully functional electronic health record, clinical trials could routinely be implemented through the dissemination of guidelines that specify the research protocol. data collection would occur automatically in the course of administering the protocol, reducing time and costs. in addition, there would be substantial value in analyzing deidentified aggregate data from routine patient care to assess the outcomes of various treatments, and monitor the health of the population. another critical function for nhii is early detection of patterns of disease, particularly early detection of possible bioterrorism. our current system of disease surveillance, which depends on alert clinicians diagnosing and reporting unusual conditions, is both slow and potentially unreliable. most disease reporting still occurs using the postal service, and the information is relayed from local to state to national public health authorities. even when fax or phone is employed, the system still depends on the ability of clinicians to accurately recognize rare and unusual diseases. even assuming such capabilities, individual clinicians cannot discern patterns of disease beyond their sphere of practice. these problems are illustrated by the seven unreported cases of cutaneous anthrax in the new york city area two weeks before the so-called "index" case in florida in the fall of (lipton & johnson, ). since all the patients were seen by different clinicians, the pattern could not have been evident to any of them even if the diagnosis had immediately been made in every case. wagner et al. have elucidated nine categories of requirements for surveillance systems for potential bioterrorism outbreaks; several categories must have immediate electronic reporting to ensure early detection (wagner et al., ). nhii would allow immediate electronic reporting of both relevant clinical events and laboratory results to public health. not only would this be an invaluable aid in early detection of bioterrorism, it would also serve to improve the detection of the much more frequent naturally occurring disease outbreaks. in fact, early results from a number of electronic reporting demonstration projects show that disease outbreaks can routinely be detected sooner than was ever possible using the current system (overhage et al., ). while early detection has been shown to be a key factor in reducing morbidity and mortality from bioterrorism (kaufmann et al., ), it will also be extremely helpful in reducing the negative consequences from other disease outbreaks. this aspect of nhii is discussed in more detail in section . . finally, nhii can substantially reduce health-care costs. the inefficiencies and duplication in our present paper-based health care system are enormous. a recent study showed that the anticipated nationwide savings from implementing advanced computerized provider order entry (cpoe) systems in the outpatient environment would be $ billion per year (johnston et al., ), while a related study (walker et al., ) estimated $ billion more in savings from health information exchange (for a total of $ billion per year).
substantial additional savings are possible in the inpatient setting; numerous hospitals have reported large net savings from implementation of electronic health records. another example, electronic prescribing, would not only reduce medication errors from transcription, but also drastically decrease the administrative costs of transferring prescription information from provider offices to pharmacies. a more recent analysis concluded that the total efficiency and patient safety savings from nhii would be in the range of $ - billion each year (hillestad et al., ). while detailed studies of the potential savings from comprehensive implementation of nhii, including both electronic health records and effective exchange of health information, are still ongoing, it is clear that the cost reductions will amount to hundreds of billions of dollars each year. it is important to note that much of the savings depends not just on the widespread implementation of electronic health records, but also on the effective interchange of this information to ensure that the complete medical record for every patient is immediately available in every care setting.

there are a number of significant barriers and challenges to the development of nhii. perhaps the most important of these relates to protecting the confidentiality of electronic medical records. the public correctly perceives that all efforts to make medical records more accessible for appropriate and authorized purposes simultaneously carry the risk of increased availability for unscrupulous use. while the implementation of the hipaa privacy and security rules (see chapter ) has established nationwide policies for access to medical information, maintaining public confidence requires mechanisms that affirmatively prevent privacy and confidentiality breaches before they occur. development, testing, and implementation of such procedures must be an integral part of any nhii strategy. another important barrier to nhii is the misalignment of financial incentives in the health care system. although the benefits of nhii are substantial, they do not accrue equally across all segments of the system. in particular, the benefits are typically not proportional to the required investments for a number of specific stakeholder groups. perhaps most problematic is the situation for individual and small group health care providers, who are being asked to make substantial allocations of resources to electronic health record systems that mostly benefit others. mechanisms must be found to assure the equitable distribution of nhii benefits in proportion to investments made. while this issue is the subject of continuing study, early results indicate that most of the nhii financial benefit accrues to payers of care. therefore, programs and policies must be established to transfer appropriate savings back to those parties who have expended funds to produce them. one consequence of the misaligned financial incentives is that the return on investment for health information technology needed for nhii is relatively uncertain. while a number of health care institutions, particularly large hospitals, have reported substantial cost improvements from electronic medical record systems, the direct financial benefits are by no means a foregone conclusion, especially for smaller organizations. the existing reimbursement system in the united states does not provide ready access to the substantial capital required by many institutions.
for health care organizations operating on extremely thin margins, or even in the red, investments in information technology are impractical regardless of the potential return. in addition, certain legal and regulatory barriers prevent the transfer of funds from those who benefit from health information technology to those who need to invest but have neither the means nor the incentive of substantial returns. laws and regulations designed to prevent fraud and abuse, payments for referrals, and private distribution of disguised "profits" from nonprofit organizations are among those needing review. it is important that mechanisms be found to enable appropriate redistribution of savings generated from health information technology without creating loopholes that would allow abusive practices. another key barrier to nhii is that many of the benefits relate to exchanges of information between multiple health care organizations. the lack of interoperable electronic medical record systems that provide for easy transfer of records from one place to another is a substantial obstacle to achieving the advantages of nhii. also, there is a "first mover disadvantage" in such exchange systems. the largest value is generated when all health care organizations in a community participate in electronic information exchange. therefore, if only a few organizations begin the effort, their costs may not be offset by the benefits.

a number of steps are currently under way to accelerate the progress towards nhii in the united states. these include establishing standards, fostering collaboration, funding demonstration projects in communities that include careful evaluation, and establishing consensus measures of progress. establishing electronic health record standards that would promote interoperability is the most widely recognized need in health information technology at the present time. within institutions that have implemented specific departmental applications, extensive time and energy is spent developing and maintaining interfaces among the various systems. although much progress has been made in this area by organizations such as health level , even electronic transactions of specific health care data (such as laboratory results) are often problematic due to differing interpretations of the implementation of existing standards. recently, the u.s. government has made substantial progress in this area. ncvhs, the official advisory body on these matters to dhhs, has been studying the issues of both message and content standards for patient medical record information for several years (ncvhs, ). the consolidated healthcare informatics (chi) initiative recommended five key standards (hl version .x, loinc, dicom, ieee , and ncpdp script) that were adopted for government-wide use in early , followed by more that were added in . in july, , the federal government licensed the comprehensive medical vocabulary known as snomed (systematized nomenclature of medicine; see chapter ), making it available to all u.s. users at no charge. this represents a major step forward in the deployment of vocabulary standards for health information systems. unlike message format standards, such as hl , vocabulary standards are complex and expensive to develop and maintain and therefore require ongoing financial support. deriving the needed funding from end users creates a financial obstacle to deployment of the standard. removing this key barrier to adoption should promote much more widespread use over the next few years.
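to illustrate why shared vocabularies matter for interoperability, the following sketch maps site-specific laboratory codes from two hypothetical systems onto a common code before pooling results; the local and "std:" codes are placeholders invented for the example, not actual loinc or snomed identifiers, and real terminology services are far more elaborate.

# hypothetical local-to-standard code maps for two different laboratory systems;
# the "std:" codes are placeholders, not real loinc or snomed identifiers.
LAB_A_MAP = {"GLU": "std:glucose", "NA+": "std:sodium"}
LAB_B_MAP = {"GLUCOSE_SER": "std:glucose", "SODIUM": "std:sodium"}

lab_a_results = [("GLU", 105), ("NA+", 140)]
lab_b_results = [("GLUCOSE_SER", 98)]

def normalize(results, code_map):
    """translate local result codes into the shared standard codes."""
    return [(code_map[code], value) for code, value in results]

pooled = normalize(lab_a_results, LAB_A_MAP) + normalize(lab_b_results, LAB_B_MAP)
glucose_values = [value for code, value in pooled if code == "std:glucose"]
print(glucose_values)   # -> [105, 98]: comparable only after normalization

without an agreed vocabulary, each pair of systems needs its own mapping, which is exactly the interface-maintenance burden described above.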
another important project now under way is the joint effort of the institute of medicine and hl to develop a detailed functional definition of the electronic health record (ehr). these functional standards will provide a benchmark for comparison of existing and future ehr systems, and also may be utilized as criteria for possible financial incentives that could be provided to individuals and organizations that implement such systems. the elucidation of a consensus functional definition of the ehr also should help prepare the way for its widespread implementation by engaging all the stakeholders in an extended discussion of its desired capabilities. this functional standardization of the ehr is expected to be followed by the development of a formal interchange format standard (ifs) to be added to hl version . this standard would enable full interoperability of ehr systems through the implementation of an import and export capability to and from the ifs. while it is possible at the present time to exchange complete electronic health records with existing standards, doing so is both difficult and inconvenient. the ifs will greatly simplify the process, making it easy to accomplish the commonly needed operation of transferring an entire electronic medical record from one facility to another. another key standard that is needed involves the representation of guideline recommendations. while the standard known as arden syntax (hl , ; see chapter ) partially addresses this need, many real-world medical care guidelines are too complex to be represented easily in this format. at the present time, the considerable effort required to translate written guidelines and protocols into computer-executable form must be repeated at every health care organization wishing to incorporate them in their ehr. development of an effective guideline interchange standard would allow medical knowledge to be encoded once and then distributed widely, greatly increasing the efficiency of the process (peleg et al., ).

collaboration is another important strategy in promoting nhii. to enable the massive changes needed to transform the health care system from its current paper-based operation to the widespread utilization of electronic health information systems, the support of a very large number of organizations and individuals with highly varied agendas is required. gathering and focusing this support requires extensive cooperative efforts and specific mechanisms for ensuring that everyone's issues and concerns are expressed, appreciated, and incorporated into the ongoing efforts. this process is greatly aided by a widespread recognition of the serious problems that exist today in the u.s. healthcare system. a number of private collaboration efforts have been established, such as the e-health initiative and the national alliance for health information technology (nahit). in the public sector, the national health information infrastructure (nhii) has become a focus of activity at dhhs. as part of this effort, the first ever national stakeholders meeting for nhii was convened in mid- to develop a consensus national agenda for moving forward (yasnoff et al., ). these multiple efforts are having the collective effect of both catalyzing and promoting organizational commitment to nhii. for example, many of the key stakeholders are now forming high-level committees to specifically address nhii issues.
for some of these organizations, this represents the first formal recognition that this transformational process is underway and will have a major impact on their activities. it is essential to include all stakeholders in this process. in addition to the traditional groups such as providers, payers, hospitals, health plans, health it vendors, and health informatics professionals, representatives of groups such as consumers (e.g., aarp) and the pharmaceutical industry must be brought into the process. the most concrete and visible strategy for promoting nhii is the encouragement of demonstration projects in communities, including the provision of seed funding. by establishing clear examples of the benefits and advantages of comprehensive health information systems in communities, additional support for widespread implementation can be garnered at the same time that concerns of wary citizens and skeptical policymakers are addressed. there are several important reasons for selecting a community-based strategy for nhii implementation. first and foremost, the existing models of health information infrastructures (e.g., indianapolis and spokane, wa) are based in local communities. this provides proof that it is possible to develop comprehensive electronic health care information exchange systems in these environments. in contrast, there is little or no evidence that such systems can be directly developed on a larger scale. furthermore, increasing the size of informatics projects disproportionately increases their complexity and risk of failure. therefore, keeping projects as small as possible is always a good strategy. since nhii can be created by effectively connecting communities that have developed local health information infrastructures (lhiis), it is not necessary to invoke a direct national approach to achieve the desired end result. a good analogy is the telephone network, which is composed of a large number of local exchanges that are then connected to each other to form community and then national and international networks. another important element in the community approach is the need for trust to overcome confidentiality concerns. medical information is extremely sensitive and its exchange requires a high degree of confidence in everyone involved in the process. the level of trust needed seems most likely to be a product of personal relationships developed over time in a local community and motivated by a common desire to improve health care for everyone located in that area. while the technical implementation of information exchange is non-trivial, it pales in comparison to the challenges of establishing the underlying legal agreements and policy changes that must precede it. for example, when indianapolis implemented sharing of patient information in hospital emergency rooms throughout the area, as many as institutional lawyers needed to agree on the same contractual language (overhage, ). the community approach also benefits from the fact that the vast majority of health care is delivered locally. while people do travel extensively, occasionally requiring medical care while away from home, and there are a few out-of-town consultations for difficult and unusual medical problems, for the most part people receive their health care in the community in which they reside. the local nature of medical care results in a natural interest of community members in maintaining and improving the quality and efficiency of their local health care system.
for the same reasons, it is difficult to motivate interest in improving health care beyond the community level. focusing nhii efforts on one community at a time also keeps the implementation problem more reasonable in its scope. it is much more feasible to enable health information interchange among a few dozen hospitals and a few hundred or even a few thousand providers than to consider such a task for a large region or the whole country. this also allows for customized approaches sensitive to the specific needs of each local community. the problems and issues of medical care in a densely populated urban area are clearly vastly different from those in a rural environment. similarly, other demographic and organizational differences as well as the presence of specific highly specialized medical care institutions make each community's health care system unique. a local approach to hii development allows all these complex and varied factors to be considered and addressed, and respects the reality of the american political landscape, which gives high priority to local controls. the community-based approach to hii development also benefits from the establishment of national standards. the same standards that allow effective interchange of information between communities nationwide can also greatly facilitate establishing effective communication of medical information within a community. in fact, by encouraging (and even requiring) communities to utilize national standards in building their own lhiis, the later interconnection of those systems to provide nationwide access to medical care information becomes a much simpler and easier process. demonstration projects also are needed to develop and verify a replicable strategy for lhii development. while there are a small number of existing examples of lhii systems, no organization or group has yet demonstrated the ability to reliably and successfully establish such systems in multiple communities. from the efforts of demonstration projects in numerous communities, it should be possible to define a set of strategies that can be applied repeatedly across the nation. seed funding is essential in the development of lhii systems. while health care in the united states is a huge industry, spending approximately $ . trillion each year and representing % of the gdp, shifting any of the existing funds into substantial it investments is problematic. the beneficiaries of all the existing expenditures seem very likely to strongly oppose any such efforts. on the other hand, once initial investments begin to generate the expected substantial savings, it should be possible to develop mechanisms to channel those savings into expanding and enhancing lhii systems. careful monitoring of the costs and benefits of local health information interchange systems will be needed to verify the practicality of this approach to funding and sustaining these projects. finally, it is important to assess and understand the technical challenges and solutions applied to lhii demonstration projects. while technical obstacles are usually not serious in terms of impeding progress, understanding and disseminating the most effective solutions can result in smoother implementation as experience is gained throughout the nation. the last element in the strategy for promoting a complex and lengthy project such as nhii is careful measurement of progress. the measures used to gauge progress define the end state and therefore must be chosen with care.
measures may also be viewed as the initial surrogate for detailed requirements. progress measures should have certain key features. first, they should be sufficiently sensitive so that their values change at a reasonable rate (a measure that only changes value after five years will not be particularly helpful). second, the measures must be comprehensive enough to reflect activities that impact most of the stakeholders and activities needing change. this ensures that efforts in every area will be reflected in improved measures. third, the measures must be meaningful to policymakers. fourth, periodic determinations of the current values of the measures should be easy so that the measurement process does not detract from the actual work. finally, the totality of the measures must reflect the desired end state so that when the goals for all the measures are attained, the project is complete. a number of different types or dimensions of measures for nhii progress are possible. aggregate measures assess nhii progress over the entire nation. examples include the percentage of the population covered by an lhii and the percentage of health care personnel whose training occurs in institutions that utilize electronic health record systems. another type of measure is based on the setting of care. progress in implementation of electronic health record systems in the inpatient, outpatient, long-term care, home, and community environments could clearly be part of an nhii measurement program. yet another dimension is health care functions performed using information systems support, including, for example, registration systems, decision support, cpoe, and community health information exchange. it is also important to assess progress with respect to the semantic encoding of electronic health records. clearly, there is a progression from the electronic exchange of images of documents, where the content is only readable by the end user viewing the image, to fully encoded electronic health records where all the information is indexed and accessible in machine-readable form using standards. finally, progress can also be benchmarked based on usage of electronic health record systems by health care professionals. the transition from paper records to available electronic records to fully used electronic records is an important signal with respect to the success of nhii activities.

to illustrate some of the informatics challenges inherent in nhii, the example of its application to homeland security will be used. bioterrorism preparedness in particular is now a key national priority, especially following the anthrax attacks that occurred in the fall of . early detection of bioterrorism is critical to minimize morbidity and mortality. this is because, unlike other terrorist attacks, bioterrorism is usually silent at first. its consequences are usually the first evidence that an attack has occurred. traditional public health surveillance depends on alert clinicians reporting unusual diseases and conditions. however, it is difficult for clinicians to detect rare and unusual diseases since they are neither familiar with their manifestations nor suspicious of the possibility of an attack. also, it is often difficult to differentiate potential bioterrorism from more common and benign manifestations of illness. this is clearly illustrated by the seven cases of cutaneous anthrax that occurred in the new york city area two weeks prior to the "index" case in florida in the fall of (lipton & johnson, ).
all these cases presented to different clinicians, none of whom recognized the diagnosis of anthrax with sufficient confidence to notify any public health authority. furthermore, such a pattern involving similar cases presenting to multiple clinicians could not possibly be detected by any of them. it seems likely that had all seven of these patients utilized the same provider, the immediately evident pattern of unusual signs and symptoms alone would have been sufficient to result in an immediate notification of public health authorities even in the absence of any diagnosis. traditional public health surveillance also has significant delays. much routine reporting is still done via postcard and fax to the local health department, and further delays occur before information is collated, analyzed, and reported to state and finally to federal authorities. there is also an obvious need for a carefully coordinated response after a bioterrorism event is detected. health officials, in collaboration with other emergency response agencies, must carefully assess and manage health care assets and ensure rapid deployment of backup resources. also, the substantial increase in workload created from such an incident must be distributed effectively among available hospitals, clinics, and laboratories, often including facilities outside the affected area. the vision for the application of nhii to homeland security involves both early detection of bioterrorism and the response to such an event. clinical information relevant to public health would be reported electronically in near real-time. this would include clinical lab results, emergency room chief complaints, relevant syndromes (e.g., flu-like illness), and unusual signs, symptoms, or diagnoses. by generating these electronic reports automatically from electronic health record systems, the administrative reporting burden currently placed on clinicians would be eliminated. in addition, the specific diseases and conditions reported could be dynamically adjusted in response to an actual incident or even information related to specific threats. this latter capability would be extremely helpful in carefully tracking the development of an event from its early stages. nhii could also provide much more effective medical care resource management in response to events. this could include automatic reporting of all available resources so they could be allocated rapidly and efficiently, immediate operational visibility of all health care assets, and effective balancing of the tremendous surge in demand for medical care services. this would also greatly improve decision making about deployment of backup resources. using nhii for these bioterrorism preparedness functions avoids developing a separate, very expensive infrastructure dedicated to these rare events. as previously stated, the benefits of nhii are substantial and fully justify its creation even without these bioterrorism preparedness capabilities, which would be an added bonus. furthermore, the same infrastructure that serves as an early detection system for bioterrorism also will allow earlier and more sensitive detection of routine naturally occurring disease outbreaks (which are much more common) as well as better management of health care resources in other disaster situations. the application of nhii to homeland security involves a number of difficult informatics challenges. first, this activity requires participation from a very wide range of both public and private organizations. 
this includes all levels of government and organizations that have not had significant prior interactions with the health care system, such as agriculture, police, fire, and animal health. needless to say, these organizations have divergent objectives and cultures that do not necessarily mesh easily. health and law enforcement in particular have a significantly different view of a bioterrorism incident. for example, an item that is considered a "specimen" in the health care system may be regarded as "evidence" by law enforcement. naturally, this wide variety of organizations has incompatible information systems, since for the most part they were designed and deployed without consideration for the issues raised by bioterrorism. not only do they have discordant design objectives, but they lack standardized terminology and messages to facilitate electronic information exchange. furthermore, there are serious policy conflicts among these various organizations, for example, with respect to access to information. in the health care system, access to information is generally regarded as desirable, whereas in law enforcement it must be carefully protected to maintain the integrity of criminal investigations. complicating these organizational, cultural, and information systems issues, bioterrorism preparedness has an ambiguous governance structure. many agencies and organizations have legitimate and overlapping authority and responsibility, so there is often no single clear path to resolve conflicting issues. therefore, a high degree of collaboration and collegiality is required, with extensive pre-event planning so that roles and responsibilities are clarified prior to any emergency. within this complex environment, there is also a need for new types of systems with functions that have never before been performed. bioterrorism preparedness results in new requirements for early disease detection and coordination of the health care system. precisely because these requirements are new, there are few (if any) existing systems that have similar functions. therefore, careful consideration of the design requirements of bioterrorism preparedness systems is essential to ensure success. most importantly, there is an urgent need for interdisciplinary communication among an even larger number of specialty areas than is typically the case with health information systems. all participants must recognize that each domain has its own specific terminology and operational approaches. as previously mentioned in the public health informatics example, the interlocutor function is vital. since it is highly unlikely that any single person will be able to span all or even most of the varied disciplinary areas, everyone on the team must make a special effort to learn the vocabulary used by others. as a result of these extensive and difficult informatics challenges, there are few operational information systems supporting bioterrorism preparedness. it is interesting to note that all the existing systems developed to date are local. this is most likely a consequence of the same issues previously delineated in the discussion of the advantages of community-based strategies for nhii development. one such system performs automated electronic lab reporting in indianapolis (overhage et al., ). the development of this system was led by the same active informatics group that developed the lhii in the same area.
nevertheless, it took several years of persistent and difficult efforts to overcome the technical, organizational, and legal issues involved. for example, even though all laboratories submitted data in "standard" hl7 format, it turned out that many of them were interpreting the standard in such a way that the electronic transactions could not be effectively processed by the recipient system. to address this problem, extensive reworking of the software that generated these transactions was required for many of the participating laboratories. another example of a bioterrorism preparedness system involves emergency room chief complaint reporting in pittsburgh (tsui et al., ). this is a collaborative effort of multiple institutions with existing electronic medical record systems. it has also been led by an active informatics group that has worked long and hard to overcome technical, organizational, and legal challenges. it provides a near real-time "dashboard" for showing the incidence rates of specific types of syndromes, such as gastrointestinal and respiratory. this information is very useful for monitoring the patterns of diseases presenting to the area's emergency departments. note that both of these systems were built upon extensive prior work done by existing informatics groups. they also took advantage of existing local health information infrastructures that provided available, or at least accessible, electronic data streams. in spite of these advantages, it is clear from these and other efforts that the challenges in building bioterrorism preparedness systems are immense. however, having an existing health information infrastructure appears to be a key prerequisite. such an infrastructure implies the existence of a capable informatics group and available electronic health data in the community. public health informatics may be viewed as the application of biomedical informatics to populations. in a sense, it is the ultimate evolution of biomedical informatics, which has traditionally focused on applications related to individual patients. public health informatics highlights the potential of the health informatics disciplines as a group to integrate information from the molecular to the population level. public health informatics and the development of health information infrastructures are closely related. public health informatics deals with public health applications, whereas health information infrastructures are population-level applications primarily focused on medical care. while the information from these two areas overlaps, the orientation of both is the community rather than the individual. public health and health care have not traditionally interacted as closely as they should. in a larger sense, both really focus on the health of communities-public health does this directly, while the medical care system does it one patient at a time. however, it is now clear that medical care must also focus on the community to integrate the effective delivery of services across all care settings for all individuals. the informatics challenges inherent in both public health informatics and the development of health information infrastructures are immense. they include the challenge of large numbers of different types of organizations including government at all levels. this results in cultural, strategic, and personnel challenges. the legal issues involved in interinstitutional information systems, especially with regard to information sharing, can be daunting.
finally, communications challenges are particularly difficult because of the large number of areas of expertise represented, including those that go beyond the health care domain (e.g., law enforcement). to deal with these communication issues, the interlocutor function is particularly critical. however, the effort required to address the challenges of public health informatics and health information infrastructures is worthwhile because the potential benefits are so substantial. effective information systems in these domains can help to assure effective prevention, high-quality care, and minimization of medical errors. in addition to the resultant decreases in both morbidity and mortality, these systems also have the potential to save hundreds of billions of dollars in both direct and indirect costs. it has been previously noted that one of the key differences between public health informatics and other informatics disciplines is that it includes interventions beyond the medical care system, and is not limited to medical and surgical treatments (yasnoff et al., ). so despite the focus of most current public health informatics activities on population-based extensions of the medical care system (leading to the orientation of this chapter), applications beyond this scope are both possible and desirable. indeed, the phenomenal contributions to health made by the hygienic movement of the th and early th centuries suggest the power of large-scale environmental, legislative, and social changes to promote human health (rosen, ). public health informatics must explore these dimensions as energetically as those associated with prevention and clinical care at the individual level. the effective application of informatics to populations through its use in both public health and the development of health information infrastructures is a key challenge of the st century. it is a challenge we must accept, understand, and overcome if we want to create an efficient and effective health care system as well as truly healthy communities for all.
questions for further study:
1. what are the current and potential effects of a) the genomics revolution and b) 9/11 on public health informatics?
2. how can the successful model of immunization registries be used in other domains of public health (be specific about those domains)? how might it fail in others? why?
3. fourteen percent of the us gdp is spent on medical care (including public health). how could public health informatics help use those monies more efficiently, or lower the figure absolutely?
4. compare and contrast the database desiderata for clinical versus public health information systems.
5. explain the nhii from non-technical and technical perspectives.
6. make the case for and against investing billions in an nhii.
7. what organizational options would you consider if you were beginning the development of a local health information infrastructure? what are the pros and cons of each? how would you proceed with making a decision about which one to use?
8. if public health informatics (phi) involves the application of information technology in any manner that improves or promotes human health, does this necessarily involve a human "user" that interacts with the phi application? for example, could the information technology underlying anti-lock braking systems be considered a public health informatics application?
suggested readings: can electronic medical record systems transform health care? potential health benefits; a consensus action agenda for achieving the national health information infrastructure; public health informatics: how information-age technology can strengthen public health; public health for informaticians; public health informatics and information systems; the value of healthcare information exchange and interoperability; public health informatics: improving and transforming public health in the information age.
bioinformatics. authors: russ b. altman and sean d. mooney. journal: biomedical informatics.
why is sequence, structure, and biological pathway information relevant to medicine? where on the internet should you look for a dna sequence, a protein sequence, or a protein structure? what are two problems encountered in analyzing biological sequence, structure, and function? how has the age of genomics changed the landscape of bioinformatics? what two changes should we anticipate in the medical record as a result of these new information sources? what are two computational challenges in bioinformatics for the future? the pace and volume of discovery in the basic sciences-in particular, molecular biology and genomics-have increased dramatically in the past decade. history has shown that scientific developments within the basic sciences tend to lag about a decade before their influence on clinical medicine is fully appreciated. the types of information being gathered by biologists today will drastically alter the types of information and technologies available to the health care workers of tomorrow. there are three sources of information that are revolutionizing our understanding of human biology and that are creating significant challenges for computational processing. the most dominant new type of information is the sequence information produced by the human genome project, an international undertaking intended to determine the complete sequence of human dna as it is encoded in each of the chromosomes. the first draft of the sequence was published in (lander et al., ) and a final version was announced in , coincident with the th anniversary of the solving of the watson and crick structure of the dna double helix. now efforts are under way to finish the sequence and to determine the variations that occur between the genomes of different individuals. essentially, the entire set of events from conception through embryonic development, childhood, adulthood, and aging is encoded by the dna blueprints within most human cells. given a complete knowledge of these dna sequences, we are in a position to understand these processes at a fundamental level and to consider the possible use of dna sequences for diagnosing and treating disease. while we are studying the human genome, a second set of concurrent projects is studying the genomes of numerous other biological organisms, including important experimental animal systems (such as mouse, rat, and yeast) as well as important human pathogens (such as mycobacterium tuberculosis or haemophilus influenzae). many of these genomes have recently been completely determined by sequencing experiments. these allow two important types of analysis: the analysis of mechanisms of pathogenicity and the analysis of animal models for human disease.
in both cases, the functions encoded by genomes can be studied, classified, and categorized, allowing us to decipher how genomes affect human health and disease. these ambitious scientific projects not only are proceeding at a furious pace, but also are accompanied in many cases by a new approach to biology, which produces a third new source of biomedical information: proteomics. in addition to small, relatively focused experimental studies aimed at particular molecules thought to be important for disease, large-scale experimental methodologies are used to collect data on thousands or millions of molecules simultaneously. scientists apply these methodologies longitudinally over time and across a wide variety of organisms or (within an organism) organs to watch the evolution of various physiological phenomena. new technologies give us the abilities to follow the production and degradation of molecules on dna arrays (lashkari et al., ), to study the expression of large numbers of proteins with one another (bai and elledge, ), and to create multiple variations on a genetic theme to explore the implications of various mutations on biological function (spee et al., ). all these technologies, along with the genome-sequencing projects, are conspiring to produce a volume of biological information that at once contains secrets to age-old questions about health and disease and threatens to overwhelm our current capabilities of data analysis. thus, bioinformatics is becoming critical for medicine in the twenty-first century. the effects of this new biological information on clinical medicine and clinical informatics are difficult to predict precisely. it is already clear, however, that some major changes to medicine will have to be accommodated. with the first set of human genomes now available, it will soon become cost-effective to consider sequencing or genotyping at least sections of many other genomes. the sequence of a gene involved in disease may provide the critical information that we need to select appropriate treatments. for example, the set of genes that produces essential hypertension may be understood at a level sufficient to allow us to target antihypertensive medications based on the precise configuration of these genes. it is possible that clinical trials may use information about genetic sequence to define precisely the population of patients who would benefit from a new therapeutic agent. finally, clinicians may learn the sequences of infectious agents (such as of the escherichia coli strain that causes recurrent urinary tract infections) and store them in a patient's record to record the precise pathogenicity and drug susceptibility observed during an episode of illness. in any case, it is likely that genetic information will need to be included in the medical record and will introduce special problems. raw sequence information, whether from the patient or the pathogen, is meaningless without context and thus is not well suited to a printed medical record. like images, it can come in high information density and must be presented to the clinician in novel ways. as there are for laboratory tests, there may be a set of nondisease (or normal) values to use as comparisons, and there may be difficulties in interpreting abnormal values. fortunately, most of the human genome is shared and identical among individuals; less than percent of the genome seems to be unique to individuals. nonetheless, the effects of sequence information on clinical databases will be significant.
new diagnostic and prognostic information sources. one of the main contributions of the genome-sequencing projects (and of the associated biological innovations) is that we are likely to have unprecedented access to new diagnostic and prognostic tools. single nucleotide polymorphisms (snps) and other genetic markers are used to identify how a patient's genome differs from the draft genome. diagnostically, the genetic markers from a patient with an autoimmune disease, or of an infectious pathogen within a patient, will be highly specific and sensitive indicators of the subtype of disease and of that subtype's probable responsiveness to different therapeutic agents. for example, the severe acute respiratory syndrome (sars) virus was determined to be a coronavirus using a gene expression array containing the genetic information from several common pathogenic viruses. in general, diagnostic tools based on the gene sequences within a patient are likely to increase greatly the number and variety of tests available to the physician. physicians will not be able to manage these tests without significant computational assistance. moreover, genetic information will be available to provide more accurate prognostic information to patients. what is the standard course for this disease? how does it respond to these medications? over time, we will be able to answer these questions with increasing precision, and will develop computational systems to manage this information. several genotype-based databases have been developed to identify markers that are associated with specific phenotypes and identify how genotype affects a patient's response to therapeutics. the human gene mutation database (hgmd) annotates mutations with disease phenotype. this resource has become invaluable for genetic counselors, basic researchers, and clinicians. additionally, the pharmacogenomics knowledge base (pharmgkb) collects genetic information that is known to affect a patient's response to a drug. as these data sets, and others like them, continue to improve, the first clinical benefits from the genome projects will be realized. ethical considerations. one of the critical questions facing the genome-sequencing projects is "can genetic information be misused?" the answer is certainly yes. with knowledge of a complete genome for an individual, it may be possible in the future to predict the types of disease for which that individual is at risk years before the disease actually develops. if this information fell into the hands of unscrupulous employers or insurance companies, the individual might be denied employment or coverage due to the likelihood of future disease, however distant. there is even debate about whether such information should be released to a patient even if it could be kept confidential. should a patient be informed that he or she is likely to get a disease for which there is no treatment? this is a matter of intense debate, and such questions have significant implications for what information is collected and for how and to whom that information is disclosed (durfy, ; see chapter ). a brief review of the biological basis of medicine will bring into focus the magnitude of the revolution in molecular biology and the tasks that are created for the discipline of bioinformatics. the genetic material that we inherit from our parents, that we use for the structures and processes of life, and that we pass to our children is contained in a sequence of chemicals known as deoxyribonucleic acid (dna).
the total collection of dna for a single person or organism is referred to as the genome. dna is a long polymer chemical made of four basic subunits. the sequence in which these subunits occur in the polymer distinguishes one dna molecule from another, and the sequence of dna subunits in turn directs a cell's production of proteins and all other basic cellular processes. genes are discrete units encoded in dna and they are transcribed into ribonucleic acid (rna), which has a composition very similar to dna. genes are transcribed into messenger rna (mrna) and a majority of mrna sequences are translated by ribosomes into protein. not all rnas are messengers for the translation of proteins. ribosomal rna, for example, is used in the construction of the ribosome, the huge molecular engine that translates mrna sequences into protein sequences. understanding the basic building blocks of life requires understanding the function of genomic sequences, genes, and proteins. when are genes turned on? once genes are transcribed and translated into proteins, into what cellular compartment are the proteins directed? how do the proteins function once there? equally important, how are the proteins turned off? experimentation and bioinformatics have divided the research into several areas, and the largest are: (1) genome and protein sequence analysis, (2) macromolecular structure-function analysis, (3) gene expression analysis, and (4) proteomics. practitioners of bioinformatics have come from many backgrounds, including medicine, molecular biology, chemistry, physics, mathematics, engineering, and computer science. it is difficult to define precisely the ways in which this discipline emerged. there are, however, two main developments that have created opportunities for the use of information technologies in biology. the first is the progress in our understanding of how biological molecules are constructed and how they perform their functions. this dates back as far as the s with the invention of electrophoresis, and then in the s with the elucidation of the structure of dna and the subsequent sequence of discoveries in the relationships among dna, rna, and protein structure. the second development has been the parallel increase in the availability of computing power. starting with mainframe computer applications in the s and moving to modern workstations, there have been hosts of biological problems addressed with computational methods. the human genome project was completed and a nearly finished sequence was published in . the benefit of the human genome sequence to medicine is both in the short and in the long term. the short-term benefits lie principally in diagnosis: the availability of sequences of normal and variant human genes will allow for the rapid identification of these genes in any patient (e.g., babior and matzner, ). the long-term benefits will include a greater understanding of the proteins produced from the genome: how the proteins interact with drugs; how they malfunction in disease states; and how they participate in the control of development, aging, and responses to disease. the effects of genomics on biology and medicine cannot be overstated. we now have the ability to measure the activity and function of genes within living cells. genomics data and experiments have changed the way biologists think about questions fundamental to life.
where in the past, reductionist experiments probed the detailed workings of specific genes, we can now assemble those data together to build an accurate understanding of how cells work. this has led to a change in thinking about the role of computers in biology. before, they were optional tools that could help provide insight to experienced and dedicated enthusiasts. today, they are required by most investigators, and experimental approaches rely on them as critical elements. twenty years ago, the use of computers was proving to be useful to the laboratory researcher. today, computers are an essential component of modern research. this is because advances in research methods such as microarray chips, drug screening robots, x-ray crystallography, nuclear magnetic resonance spectroscopy, and dna sequencing experiments have resulted in massive amounts of data. these data need to be properly stored, analyzed, and disseminated. the volume of data being produced by genomics projects is staggering. there are now more than . million sequences in genbank comprising more than billion digits. but these data do not stop with sequence data: pubmed contains over million literature citations, the pdb contains three-dimensional structural data for over , protein sequences, and the stanford microarray database (smd) contains over , experiments ( million data points). these data are of incredible importance to biology, and in the following sections we introduce and summarize the importance of sequences, structures, gene expression experiments, systems biology, and their computational components to medicine. sequence information (including dna sequences, rna sequences, and protein sequences) is critical in biology: dna, rna, and protein can be represented as a set of sequences of basic building blocks (bases for dna and rna, amino acids for proteins). computer systems within bioinformatics thus must be able to handle biological sequence information effectively and efficiently. one major difficulty within bioinformatics is that standard database models, such as relational database systems, are not well suited to sequence information. the basic problem is that sequences are important both as a set of elements grouped together and treated in a uniform manner and as individual elements, with their relative locations and functions. any given position in a sequence can be important because of its own identity, because it is part of a larger subsequence, or perhaps because it is part of a large set of overlapping subsequences, all of which have different significance. it is necessary to support queries such as, "what sequence motifs are present in this sequence?" it is often difficult to represent these multiple, nested relationships within standard relational database schema. in addition, the neighbors of a sequence element are also critical, and it is important to be able to perform queries such as, "what sequence elements are seen elements to the left of this element?" for these reasons, researchers in bioinformatics are developing object-oriented databases (see chapter ) in which a sequence can be queried in different ways, depending on the needs of the user (altman, ) . the sequence information mentioned in section . . is rapidly becoming inexpensive to obtain and easy to store. on the other hand, the three-dimensional structure information about the proteins that are produced from the dna sequences is much more difficult and expensive to obtain, and presents a separate set of analysis challenges. 
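before turning to structural data, it may help to make concrete the kind of sequence query mentioned above (e.g., "what sequence motifs are present in this sequence?"). the following is a minimal sketch, not tied to any particular database system; the sequence, the tata-like motif pattern, and the five-base neighborhood window are illustrative assumptions rather than values from the chapter.

```python
import re

def find_motif(sequence: str, motif_pattern: str):
    """return (start, end, matched_text) for each occurrence of a motif,
    expressed as a regular expression, in a dna sequence."""
    return [(m.start(), m.end(), m.group())
            for m in re.finditer(motif_pattern, sequence.upper())]

# made-up sequence and a toy motif: "TATA" followed by A or T
seq = "GGCGTATAAAGGCTTATATTCCG"
print(find_motif(seq, r"TATA[AT]"))

# a neighborhood query: which bases lie immediately to the left of each hit?
for start, end, text in find_motif(seq, r"TATA[AT]"):
    left_context = seq[max(0, start - 5):start]   # 5-base window, arbitrary choice
    print(text, "preceded by", left_context)
```

such position-and-context queries are exactly the kind of access pattern that is awkward to express against a purely relational representation of a sequence, which is one motivation for the alternative database designs mentioned above.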
currently, only about , three-dimensional structures of biological macromolecules are known (for more information, see http://www.rcsb.org/pdb/). these models are incredibly valuable resources, however, because an understanding of structure often yields detailed insights about biological function. as an example, the structure of the ribosome has been determined for several species and contains more atoms than any other to date. this structure, because of its size, took two decades to solve, and presents a formidable challenge for functional annotation (cech, ). yet, the functional information for a single structure is vastly outsized by the potential for comparative genomics analysis between the structures from several organisms and from varied forms of the functional complex, since the ribosome is ubiquitously required for all forms of life. thus a wealth of information comes from relatively few structures. to address the problem of limited structure information, the publicly funded structural genomics initiative aims to identify all of the common structural scaffolds found in nature and grow the number of known structures considerably. in the end, it is the physical forces between molecules that determine what happens within a cell; thus the more complete the picture, the better the functional understanding. in particular, understanding the physical properties of therapeutic agents is the key to understanding how agents interact with their targets within the cell (or within an invading organism). these are the key questions for structural biology within bioinformatics:
1. how can we analyze the structures of molecules to learn their associated function? approaches range from detailed molecular simulations (levitt, ) to statistical analyses of the structural features that may be important for function (wei and altman, ).
2. how can we extend the limited structural data by using information in the sequence databases about closely related proteins from different organisms (or within the same organism, but performing a slightly different function)? there are significant unanswered questions about how to extract maximal value from a relatively small set of examples.
3. how should structures be grouped for the purposes of classification? the choices range from purely functional criteria ("these proteins all digest proteins") to purely structural criteria ("these proteins all have a toroidal shape"), with mixed criteria in between.
one interesting resource available today is the structural classification of proteins (scop), which classifies proteins based on shape and function. the development of dna microarrays has led to a wealth of data and unprecedented insight into the fundamental biological machine. the premise is relatively simple: up to , gene sequences derived from genomic data are fixed onto a glass slide or filter. an experiment is performed where two groups of cells are grown in different conditions: one group is a control group and the other is the experimental group. the control group is grown normally, while the experimental group is grown under experimental conditions. for example, a researcher may be trying to understand how a cell compensates for a lack of sugar. the experimental cells will be grown with limited amounts of sugar. as the sugar depletes, some of the cells are removed at specific intervals of time. when the cells are removed, all of the mrna from the cells is separated and converted back to dna, using special enzymes.
this leaves a pool of dna molecules that are only from the genes that were turned on (expressed) in that group of cells. using a chemical reaction, the experimental dna sample is attached to a red fluorescent molecule and the control group is attached to a green fluorescent molecule. these two samples are mixed and then washed over the glass slide. the two samples contain only genes that were turned on in the cells, and they are labeled either red or green depending on whether they came from the experimental group or the control group. the labeled dna in the pool sticks or hybridizes to the same gene on the glass slide. this leaves the glass slide with up to , spots, and genes that were turned on in the cells are now bound with a label to the appropriate spot on the slide. using a scanning confocal microscope and a laser to fluoresce the linkers, the amount of red and green fluorescence in each spot can be measured. the ratio of red to green determines whether that gene is being turned off (downregulated) in the experimental group or whether the gene is being turned on (upregulated). the experiment has now measured the activity of genes in an entire cell due to some experimental change. figure . illustrates a typical gene expression experiment from smd. computers are critical for analyzing these data, because it is impossible for a researcher to comprehend the significance of those red and green spots. currently, scientists are using gene expression experiments to study how cells from different organisms compensate for environmental changes, how pathogens fight antibiotics, and how cells grow uncontrollably (as is found in cancer). a new challenge for biological computing is to develop methods to analyze these data, tools to store these data, and computer systems to collect the data automatically. with the completion of the human genome and the abundance of sequence, structural, and gene expression data, a new field of systems biology that tries to understand how proteins and genes interact at a cellular level is emerging. the basic algorithms for analyzing sequence and structure are now leading to opportunities for more integrated analysis of the pathways in which these molecules participate and ways in which molecules can be manipulated for the purpose of combating disease. a detailed understanding of the role of a particular molecule in the cell requires knowledge of the context-of the other molecules with which it interacts-and of the sequence of chemical transformations that take place in the cell. thus, major research areas in bioinformatics are elucidating the key pathways by which chemicals are transformed, defining the molecules that catalyze these transformations, identifying the input compounds and the output compounds, and linking these pathways into networks that we can then represent computationally and analyze to understand the significance of a particular molecule. the alliance for cell signaling is generating large amounts of data related to how signal molecules interact and affect the concentration of small molecules within the cell. there are a number of common computations that are performed in many contexts within bioinformatics. in general, these computations can be classified as sequence alignment, structure alignment, pattern analysis of sequence/structure, gene expression analysis, and pattern analysis of biochemical function.
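returning briefly to the microarray experiment described above, the sketch below shows, in minimal form, how the red/green ratio for each spot can be turned into an up- or downregulation call. the gene names, intensity values, and the two-fold change cutoff are illustrative assumptions, not data from any actual experiment.

```python
import math

# hypothetical background-corrected fluorescence intensities per spot
spots = {
    "geneA": {"red": 5200.0, "green": 1300.0},   # experimental vs. control signal
    "geneB": {"red": 800.0,  "green": 3100.0},
    "geneC": {"red": 1500.0, "green": 1450.0},
}

def call_regulation(red: float, green: float, fold_cutoff: float = 2.0) -> str:
    """classify a spot as up- or downregulated from its red/green ratio."""
    log2_ratio = math.log2(red / green)
    if log2_ratio >= math.log2(fold_cutoff):
        return "upregulated"
    if log2_ratio <= -math.log2(fold_cutoff):
        return "downregulated"
    return "unchanged"

for gene, channels in spots.items():
    ratio = channels["red"] / channels["green"]
    print(gene, round(ratio, 2), call_regulation(**channels))
```

collecting these per-gene log ratios across many experiments yields the expression vectors that the clustering methods discussed later in this chapter operate on.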
as it became clear that the information from dna and protein sequences would be voluminous and difficult to analyze manually, algorithms began to appear for automating the analysis of sequence information. the first requirement was to have a reliable way to align sequences so that their detailed similarities and distances could be examined directly. needleman and wunsch ( ) published an elegant method for using dynamic programming techniques to align sequences in time related to the cube of the number of elements in the sequences. smith and waterman ( ) published refinements of these algorithms that allowed for searching both the best global alignment of two sequences (aligning all the elements of the two sequences) and the best local alignment (searching for areas in which there are segments of high similarity surrounded by regions of low similarity). a key input for these algorithms is a matrix that encodes the similarity or substitutability of sequence elements: when there is an inexact match between two elements in an alignment of sequences, it specifies how much "partial credit" we should give the overall alignment based on the similarity of the elements, even though they may not be identical. looking at a set of evolutionarily related proteins, dayhoff et al. ( ) published one of the first matrices derived from a detailed analysis of which amino acids (elements) tend to substitute for others. within structural biology, the vast computational requirements of the experimental methods (such as x-ray crystallography and nuclear magnetic resonance) for determining the structure of biological molecules drove the development of powerful structural analysis tools. in addition to software for analyzing experimental data, graphical display algorithms allowed biologists to visualize these molecules in great detail and facilitated the manual analysis of structural principles (langridge, ; richardson, ) . at the same time, methods were developed for simulating the forces within these molecules as they rotate and vibrate (gibson and scheraga, ; karplus and weaver, ; levitt, ) . the most important development to support the emergence of bioinformatics, however, has been the creation of databases with biological information. in the s, structural biologists, using the techniques of x-ray crystallography, set up the protein data bank (pdb) of the cartesian coordinates of the structures that they elucidated (as well as associated experimental details) and made pdb publicly available. the first release, in , contained structures. the growth of the database is chronicled on the web: the pdb now has over , detailed atomic structures and is the primary source of information about the relationship between protein sequence and protein structure. similarly, as the ability to obtain the sequence of dna molecules became widespread, the need for a database of these sequences arose. in the mid- s, the genbank database was formed as a repository of sequence information. starting with sequences and , bases in , the genbank has grown by much more than million sequences and billion bases. the genbank database of dna sequence information supports the experimental reconstruction of genomes and acts as a focal point for experimental groups. numerous other databases store the sequences of protein molecules and information about human genetic diseases. 
included among the databases that have accelerated the development of bioinformatics is the medline database of the biomedical literature and its paper-based companion index medicus (see chapter ). including articles as far back as and brought online free on the web in , medline provides the glue that relates many high-level biomedical concepts to the low-level molecule, disease, and experimental methods. in fact, this "glue" role was the basis for creating the entrez and pubmed systems for integrating access to literature references and the associated databases. perhaps the most basic activity in computational biology is comparing two biological sequences to determine ( ) whether they are similar and ( ) how to align them. the problem of alignment is not trivial but is based on a simple idea. sequences that perform a similar function should, in general, be descendants of a common ancestral sequence, with mutations over time. these mutations can be replacements of one amino acid with another, deletions of amino acids, or insertions of amino acids. the goal of sequence alignment is to align two sequences so that the evolutionary relationship between the sequences becomes clear. if two sequences are descended from the same ancestor and have not mutated too much, then it is often possible to find corresponding locations in each sequence that play the same role in the evolved proteins. the problem of solving correct biological alignments is difficult because it requires knowledge about the evolution of the molecules that we typically do not have. there are now, however, well-established algorithms for finding the mathematically optimal alignment of two sequences. these algorithms require the two sequences and a scoring system based on ( ) exact matches between amino acids that have not mutated in the two sequences and can be aligned perfectly; ( ) partial matches between amino acids that have mutated in ways that have preserved their overall biophysical properties; and ( ) gaps in the alignment signifying places where one sequence or the other has undergone a deletion or insertion of amino acids. the algorithms for determining optimal sequence alignments are based on a technique in computer science known as dynamic programming and are at the heart of many computational biology applications (gusfield, ) . figure . shows an example of a smith-waterman matrix. unfortunately, the dynamic programming algorithms are computationally expensive to apply, so a number of faster, more heuristic methods have been developed. the most popular algorithm is the basic local alignment search tool (blast) (altschul et al., ) . blast is based on the observations that sections of proteins are often conserved without gaps (so the gaps can be ignored-a critical simplification for speed) and that there are statistical analyses of the occurrence of small subsequences within larger sequences that can be used to prune the search for matching sequences in a large database. another tool that has found wide use in mining genome sequences is blat (kent, ) . blat is often used to search long genomic sequences with significant performance increases over blast. it achieves its -fold increase in speed over other tools by storing and indexing long sequences as nonoverlapping k-mers, allowing efficient storage, searching, and alignment on modest hardware. 
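to make the dynamic programming idea concrete, here is a minimal local-alignment sketch in the spirit of the smith-waterman approach. the simple match/mismatch/gap scores stand in for a full substitution matrix such as the one derived by dayhoff and colleagues, the example sequences are made up, and traceback of the actual alignment is omitted to keep the sketch short.

```python
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2):
    """fill the local-alignment score matrix and return the best score
    and the cell where it occurs."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(
                0,                          # local alignment: scores never go negative
                score[i - 1][j - 1] + s,    # align a[i-1] with b[j-1]
                score[i - 1][j] + gap,      # gap in sequence b
                score[i][j - 1] + gap,      # gap in sequence a
            )
            if score[i][j] > best:
                best, best_pos = score[i][j], (i, j)
    return best, best_pos

# toy example with made-up protein fragments
print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```

the quadratic cost of filling this matrix for every database sequence is exactly what motivates the heuristic shortcuts used by blast and the k-mer indexing used by blat.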
one of the primary challenges in bioinformatics is taking a newly determined dna sequence (as well as its translation into a protein sequence) and predicting the structure of the associated molecules, as well as their function. both problems are difficult, being fraught with all the dangers associated with making predictions without hard experimental data. nonetheless, the available sequence data are starting to be sufficient to allow good predictions in a few cases. for example, there is a web site devoted to the assessment of biological macromolecular structure prediction methods. recent results suggest that when two protein molecules have a high degree (more than percent) of sequence similarity and one of the structures is known, a reliable model of the other can be built by analogy. in the case that sequence similarity is less than percent, however, performance of these methods is much less reliable. when scientists investigate biological structure, they commonly perform a task analogous to sequence alignment, called structural alignment. given two sets of three-dimensional coordinates for a set of atoms, what is the best way to superimpose them so that the similarities and differences between the two structures are clear? such computations are useful for determining whether two structures share a common ancestry and for understanding how the structures' functions have subsequently been refined during evolution. there are numerous published algorithms for finding good structural alignments. we can apply these algorithms in an automated fashion whenever a new structure is determined, thereby classifying the new structure into one of the protein families (such as those that scop maintains). one of these algorithms is minrms (jewett et al., ). minrms works by finding the minimal root-mean-squared-distance (rmsd) alignments of two protein structures as a function of matching residue pairs. minrms generates a family of alignments, each with a different number of residue position matches. this is useful for identifying local regions of similarity in a protein with multiple domains. minrms solves two problems. first, it determines which structural superpositions, or alignments, to evaluate. then, given this superposition, it determines which residues should be considered "aligned" or matched. computationally, this is a very difficult problem. minrms reduces the search space by limiting superpositions to be the best superposition between four atoms. it then exhaustively determines all potential four-atom matched superpositions and evaluates the alignment. given this superposition, the number of aligned residues is determined, counting as aligned any two residues whose c-alpha carbons (the central atom in all amino acids) are less than a certain threshold apart. the minimum average rmsd for all matched atoms is the overall score for the alignment. in figure . , an example of such a comparison is shown.
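to make the rmsd idea concrete, here is a minimal sketch (it is not the minrms algorithm itself) that scores two sets of c-alpha coordinates that are assumed to be already superimposed; finding the optimal rotation and translation, and choosing which residues to pair, are the harder problems that methods such as minrms address. the coordinates and the distance threshold below are illustrative assumptions.

```python
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """root-mean-squared distance between two (n, 3) arrays of
    already-superimposed atomic coordinates (e.g., c-alpha atoms)."""
    assert coords_a.shape == coords_b.shape
    diff = coords_a - coords_b                      # per-atom displacement vectors
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def matched_residues(coords_a, coords_b, threshold=3.0):
    """indices of residue pairs whose c-alpha atoms lie within a
    distance threshold (in angstroms, arbitrary choice) of one another."""
    dists = np.linalg.norm(coords_a - coords_b, axis=1)
    return [i for i, d in enumerate(dists) if d < threshold]

# toy example: a 4-residue backbone and a slightly displaced copy
a = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [7.6, 0.0, 0.0], [11.4, 0.0, 0.0]])
b = a + np.array([[0.2, 0.1, 0.0], [0.1, 0.3, 0.0], [0.4, 0.2, 0.1], [5.0, 0.0, 0.0]])
print(round(rmsd(a, b), 3), matched_residues(a, b))
```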
a related problem is that of using the structure of a large biomolecule and the structure of a small organic molecule (such as a drug or cofactor) to try to predict the ways in which the molecules will interact. an understanding of the structural interaction between a drug and its target molecule often provides critical insight into the drug's mechanism of action. the most reliable way to assess this interaction is to use experimental methods to solve the structure of a drug-target complex. once again, these experimental approaches are expensive, so computational methods play an important role. typically, we can assess the physical and chemical features of the drug molecule and can use them to find complementary regions of the target. for example, a highly electronegative drug molecule will be most likely to bind in a pocket of the target that has electropositive features. prediction of function often relies on use of sequence or structural similarity metrics and subsequent assignment of function based on similarities to molecules of known function. these methods can guess at general function for roughly to percent of all genes, but leave considerable uncertainty about the precise functional details even for those genes for which there are predictions, and have little to say about the remaining genes. analysis of gene expression data often begins by clustering the expression data. a typical experiment is represented as a large table, where the rows are the genes on each chip and the columns represent the different experiments, whether they be time points or different experimental conditions. within each cell is the red to green ratio of that gene's experimental results. each row is then a vector of values that represent the results of the experiment with respect to a specific gene. clustering can then be performed to determine which genes are being expressed similarly. genes that are associated with similar expression profiles are often functionally associated. for example, when a cell is subjected to starvation (fasting), ribosomal genes are often downregulated in anticipation of lower protein production by the cell. it has similarly been shown that genes associated with neoplastic progression could be identified relatively easily with this method, making gene expression experiments a powerful assay in cancer research (see guo, , for review). in order to cluster expression data, a distance metric must be determined to compare a gene's profile with another gene's profile. if the vector data are a list of values, euclidean distance or correlation distances can be used. if the data are more complicated, more sophisticated distance metrics may be employed. clustering methods fall into two categories: supervised and unsupervised. supervised learning methods require some preconceived knowledge of the data at hand. usually, the method begins by selecting profiles that represent the different groups of data, and then the clustering method associates each of the genes with the representative profile to which they are most similar. unsupervised methods are more commonly applied because these methods require no knowledge of the data, and can be performed automatically. two such unsupervised learning methods are the hierarchical and k-means clustering methods. hierarchical methods build a dendrogram, or a tree, of the genes based on their expression profiles. these methods are agglomerative and work by iteratively joining close neighbors into a cluster. the first step often involves connecting the closest profiles, building an average profile of the joined profiles, and repeating until the entire tree is built. k-means clustering builds k clusters or groups automatically. the algorithm begins by picking k representative profiles randomly. then each gene is associated with the representative to which it is closest, as defined by the distance metric being employed. then the center of mass of each cluster is determined using all of the member genes' profiles. depending on the implementation, either the center of mass or the nearest member to it becomes the new representative for that cluster. the algorithm then iterates until the new center of mass and the previous center of mass are within some threshold. the result is k groups of genes that are regulated similarly. one drawback of k-means is that one must choose the value for k. if k is too large, logical "true" clusters may be split into pieces, and if k is too small, there will be clusters that are merged. one way to determine whether the chosen k is correct is to estimate the average distance from any member profile to the center of mass. by varying k, it is best to choose the lowest k where this average is minimized for each cluster. another drawback of k-means is that different initial conditions can give different results; therefore it is often prudent to test the robustness of the results by running multiple runs with different starting configurations (figure . ).
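as a concrete illustration of the procedure just described, the sketch below clusters a handful of toy expression profiles (log ratios) with a bare-bones k-means loop using euclidean distance. the profiles, the choice of k, the iteration limit, and the convergence tolerance are all illustrative assumptions rather than values from the chapter.

```python
import random
import math

def euclidean(p, q):
    """euclidean distance between two expression profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(profiles, k, n_iter=100, tol=1e-6, seed=0):
    """cluster expression profiles (lists of log-ratios) into k groups."""
    rng = random.Random(seed)
    centers = rng.sample(profiles, k)            # pick k random representatives
    clusters = []
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in profiles:                       # assign each gene to nearest center
            idx = min(range(k), key=lambda i: euclidean(p, centers[i]))
            clusters[idx].append(p)
        new_centers = [
            [sum(vals) / len(vals) for vals in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)      # recompute each center of mass
        ]
        shift = max(euclidean(a, b) for a, b in zip(centers, new_centers))
        centers = new_centers
        if shift < tol:                          # stop when centers no longer move
            break
    return centers, clusters

# toy profiles: three genes induced over three time points, three repressed
profiles = [[0.1, 1.0, 2.0], [0.0, 1.2, 1.8], [0.2, 0.9, 2.1],
            [0.0, -1.1, -2.0], [-0.1, -0.9, -1.8], [0.1, -1.0, -2.2]]
centers, clusters = kmeans(profiles, k=2)
print(centers)
```

running the same loop from several random seeds and comparing the resulting clusters is one simple way to carry out the robustness check mentioned above.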
the future clinical usefulness of these algorithms cannot be overstated. van't veer et al. ( ) found that a gene expression profile could predict the clinical outcome of breast cancer. the global analysis of gene expression showed that some cancers were associated with different prognoses, not detectable using traditional means. another exciting advancement in this field is the potential use of microarray expression data to profile the molecular effects of known and potential therapeutic agents. this molecular understanding of a disease and its treatment will soon help clinicians make more informed and accurate treatment choices. biologists have embraced the web in a remarkable way and have made internet access to data a normal and expected mode for doing business. hundreds of databases curated by individual biologists create a valuable resource for the developers of computational methods who can use these data to test and refine their analysis algorithms. with standard internet search engines, most biological databases can be found and accessed within moments. the large number of databases has led to the development of meta-databases that combine information from individual databases to shield the user from the complex array that exists. there are various approaches to this task. the entrez system from the national center for biotechnology information (ncbi) gives integrated access to the biomedical literature, protein, and nucleic acid sequences, macromolecular and small molecular structures, and genome project links (including both the human genome project and sequencing projects that are attempting to determine the genome sequences for organisms that are either human pathogens or important experimental model organisms) in a manner that takes advantage of either explicit or computed links between these data resources. the sequence retrieval system (srs) from the european molecular biology laboratory allows queries from one database to another to be linked and sequenced, thus allowing relatively complicated queries to be evaluated. newer technologies are being developed that will allow multiple heterogeneous databases to be accessed by search engines that can combine information automatically, thereby processing even more intricate queries requiring knowledge from numerous data sources. the main types of sequence information that must be stored are dna and protein. one of the largest dna sequence databases is genbank, which is managed by ncbi.
genbank is growing rapidly as genome-sequencing projects feed their data (often in an automated procedure) directly into the database. figure . shows the logarithmic growth of data in genbank since . entrez gene curates some of the many genes within genbank and presents the data in a way that is easy for the researcher to use (figure . ). figure . . the exponential growth of genbank. this plot shows that since the number of bases in genbank has grown by five full orders of magnitude and continues to grow by a factor of every years. in addition to genbank, there are numerous special-purpose dna databases for which the curators have taken special care to clean, validate, and annotate the data. the work required of such curators indicates the degree to which raw sequence data must be interpreted cautiously. genbank can be searched efficiently with a number of algorithms and is usually the first stop for a scientist with a new sequence who wonders "has a sequence like this ever been observed before? if one has, what is known about it?" there are increasing numbers of stories about scientists using genbank to discover unanticipated relationships between dna sequences, allowing their research programs to leap ahead while taking advantage of information collected on similar sequences. a database that has become very useful recently is the university of california santa cruz genome assembly browser (figure . ). this data set allows users to search for specific sequences in the ucsc version of the human genome. powered by the similarity search tool blat, users can quickly find annotations on the human genome that contain their sequence of interest. these annotations include known variations (mutations and snps), genes, comparative maps with other organisms, and many other important data. although sequence information is obtained relatively easily, structural information remains expensive on a per-entry basis. the experimental protocols used to determine precise molecular structural coordinates are expensive in time, materials, and human power. therefore, we have only a small number of structures for all the molecules characterized in the sequence databases. the two main sources of structural information are the cambridge structural database for small molecules (usually less than atoms) and the pdb for macromolecules (see section . . ), including proteins and nucleic acids, and combinations of these macromolecules with small molecules (such as drugs, cofactors, and vitamins). the pdb has approximately , high-resolution structures, but this number is misleading because many of them are small variants on the same structural architecture (figure . ). if an algorithm is applied to the database to filter out redundant structures, less than , structures remain. there are approximately , proteins in humans; therefore many structures remain unsolved (e.g., burley and bonanno, ; gerstein et al., ). figure . . a stylized diagram of the structure of chymotrypsin, here shown with two identical subunits interacting. the red portion of the protein backbone shows α-helical regions, the blue portion shows β-strands, and the white denotes connecting coils, while the molecular surface is overlaid in gray. the detailed rendering of all the atoms in chymotrypsin would make this view difficult to visualize because of the complexity of the spatial relationships between thousands of atoms.
in the pdb, each structure is reported with its biological source, reference information, manual annotations of interesting features, and the cartesian coordinates of each atom within the molecule. given knowledge of the three-dimensional structure of molecules, the function sometimes becomes clear. for example, the ways in which the medication methotrexate interacts with its biological target have been studied in detail for two decades. methotrexate is used to treat cancer and rheumatologic diseases, and it is an inhibitor of the protein dihydrofolate reductase, an important molecule for cellular reproduction. the three-dimensional structure of dihydrofolate reductase has been known for many years and has thus allowed detailed studies of the ways in which small molecules, such as methotrexate, interact at an atomic level. as the pdb increases in size, it becomes important to have organizing principles for thinking about biological structure. scop provides a classification based on the overall structural features of proteins. it is a useful method for accessing the entries of the pdb. the ecocyc project is an example of a computational resource that has comprehensive information about biochemical pathways. ecocyc is a knowledge base of the metabolic capabilities of e. coli; it has a representation of all the enzymes in the e. coli genome and of the compounds on which they work. it also links these enzymes to their position on the genome to provide a useful interface into this information. the network of pathways within ecocyc provides an excellent substrate on which useful applications can be built. for example, they could provide: ( ) the ability to guess the function of a new protein by assessing its similarity to e. coli genes with a similar sequence, ( ) the ability to ask what the effect on an organism would be if a critical component of a pathway were removed (would other pathways be used to create the desired function, or would the organism lose a vital function and die?), and ( ) the ability to provide a rich user interface to the literature on e. coli metabolism. similarly, the kyoto encyclopedia of genes and genomes (kegg) provides pathway datasets for organism genomes. a postgenomic database bridges the gap between molecular biological databases and those of clinical importance. one excellent example of a postgenomic database is the online mendelian inheritance in man (omim) database, which is a compilation of known human genes and genetic diseases, along with manual annotations describing the state of our understanding of individual genetic disorders. each entry contains links to special-purpose databases and thus provides links between clinical syndromes and basic molecular mechanisms (figure . ). the smd is another example of a postgenomic database that has proven extremely useful, but has also addressed some formidable challenges. as discussed previously in several sections, expression data are often represented as vectors of data values. in addition to the ratio values, the smd stores images of individual chips, complete with annotated gene spots (see figure . ). further, the smd must store experimental conditions, the type and protocol of the experiment, and other data associated with the experiment. arbitrary analysis can be performed on different experiments stored in this unique resource. a critical technical challenge within bioinformatics is the interconnection of databases. 
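before turning to the interconnection of databases, a brief sketch of reading a pdb entry programmatically may be useful, echoing the dihydrofolate reductase example above. it assumes biopython is installed; the identifier "4dfr" is offered only as an illustrative dihydrofolate reductase entry, and any pdb code of interest could be substituted.

```python
from Bio.PDB import PDBList, MMCIFParser

# download a structure file; "4dfr" is used purely as an illustrative
# dihydrofolate reductase entry -- substitute any PDB identifier of interest
pdb_id = "4dfr"
path = PDBList().retrieve_pdb_file(pdb_id, pdir=".", file_format="mmCif")

structure = MMCIFParser(QUIET=True).get_structure(pdb_id, path)

# walk the hierarchy (model -> chain -> residue) and list heteroatom
# residues other than water, i.e., bound ligands and cofactors
for model in structure:
    for chain in model:
        ligands = [res.get_resname() for res in chain
                   if res.id[0] != " " and res.get_resname() != "HOH"]
        print(f"chain {chain.id}: {len(list(chain))} residues; ligands: {ligands}")
    break  # the first model is sufficient for a crystal structure
```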
as biological databases have proliferated, researchers have been increasingly interested in linking them to support more complicated requests for information. some of these links are natural because of the close connection of dna sequence to protein structure (a straightforward translation). other links are much more difficult because the semantics of the data items within the databases are fuzzy or because good methods for linking certain types of data simply do not exist. for example, in an ideal world, a protein sequence would be linked to a database containing information about that sequence's function. unfortunately, although there are databases about protein function, it is not always easy to assign a function to a protein based on sequence information alone, and so the databases are limited by gaps in our understanding of biology. some excellent recent work in the integration of diverse biological databases has been done in connection with the ncbi entrez/pubmed systems, the srs resource, discoverylink, and the biokleisli project. the human genome sequencing projects will be complete within a decade, and if the only raison d'etre for bioinformatics is to support these projects, then the discipline is not well founded. if, on the other hand, we can identify a set of challenges for the next generations of investigators, then we can more comfortably claim disciplinary status for the field. fortunately, there is a series of challenges for which the completion of the first human genome sequence is only the beginning. with the first human genome in hand, the possibilities for studying the role of genetics in human disease multiply. a new challenge immediately emerges, however: collecting individual sequence data from patients who have disease. researchers estimate that more than percent of the dna sequences within humans are identical, but the remaining sequences are different and account for our variability in susceptibility to and development of disease states. it is not unreasonable to expect that for particular disease syndromes, the detailed genetic information for individual patients will provide valuable information that will allow us to tailor treatment protocols and perhaps let us make more accurate prognoses. there are significant problems associated with obtaining, organizing, analyzing, and using this information. there is currently a gap in our understanding of disease processes. although we have a good understanding of the principles by which small groups of molecules interact, we are not able to fully explain how thousands of molecules interact within a cell to create both normal and abnormal physiological states. as the databases continue to accumulate information ranging from patient-specific data to fundamental genetic information, a major challenge is creating the conceptual links between these databases to create an audit trail from molecular-level information to macroscopic phenomena, as manifested in disease. the availability of these links will facilitate the identification of important targets for future research and will provide a scaffold for biomedical knowledge, ensuring that important literature is not lost within the increasing volume of published data. an important opportunity within bioinformatics is the linkage of biological experimental data with the published papers that report them. electronic publication of the biological literature provides exciting opportunities for making data easily available to scientists. 
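as a small illustration of this kind of linkage, the entrez programming utilities can be used to follow precomputed cross-links between databases. the sketch below assumes biopython, network access, and a purely illustrative query; the contact e-mail address is a placeholder. it starts from a protein search and follows entrez links into the structure database.

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI asks for a real contact address

# step 1: find an internal identifier (UID) for a protein of interest;
# the query string is purely illustrative
handle = Entrez.esearch(db="protein",
                        term="dihydrofolate reductase AND human[Organism]",
                        retmax=1)
search = Entrez.read(handle)
handle.close()
uid = search["IdList"][0]

# step 2: follow Entrez's precomputed cross-links from that protein record
# into the structure database
handle = Entrez.elink(dbfrom="protein", db="structure", id=uid)
links = Entrez.read(handle)
handle.close()

structure_ids = [link["Id"]
                 for linkdb in links[0].get("LinkSetDb", [])
                 for link in linkdb["Link"]]
print(f"protein UID {uid} is linked to {len(structure_ids)} structure records")
```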
already, certain types of simple data that are produced in large volumes are expected to be included in manuscripts submitted for publication, including new sequences that are required to be deposited in genbank and new structure coordinates that are deposited in the pdb. however, there are many other experimental data sources that are currently difficult to provide in a standardized way, because the data either are more intricate than those stored in genbank or pdb or are not produced in a volume sufficient to fill a database devoted entirely to the relevant area. knowledge base technology can be used, however, to represent multiple types of highly interrelated data. knowledge bases can be defined in many ways (see chapter ); for our purposes, we can think of them as databases in which ( ) the ratio of the number of tables to the number of entries per table is high compared with usual databases, ( ) the individual entries (or records) have unique names, and ( ) the values of many fields for one record in the database are the names of other records, thus creating a highly interlinked network of concepts. the structure of knowledge bases often leads to unique strategies for storage and retrieval of their content. to build a knowledge base for storing information from biological experiments, there are some requirements. first, the set of experiments to be modeled must be defined. second, the key attributes of each experiment that should be recorded in the knowledge base must be specified. third, the set of legal values for each attribute must be specified, usually by creating a controlled terminology for basic data or by specifying the types of knowledge-based entries that can serve as values within the knowledge base. the development of such schemes necessitates the creation of terminology standards, just as in clinical informatics. the riboweb project is undertaking this task in the domain of rna biology (chen et al., ) . riboweb is a collaborative tool for ribosomal modeling that has at its center a knowledge base of the ribosomal structural literature. riboweb links standard bibliographic references to knowledge-based entries that summarize the key experimental findings reported in each paper. for each type of experiment that can be performed, the key attributes must be specified. thus, for example, a cross-linking experiment is one in which a small molecule with two highly reactive chemical groups is added to an ensemble of other molecules. the reactive groups attach themselves to two vulnerable parts of the ensemble. because the molecule is small, the two vulnerable areas cannot be any further from each other than the maximum stretched-out length of the small molecule. thus, an analysis of the resulting reaction gives information that one part of the ensemble is "close" to another part. this experiment can be summarized formally with a few features-for example, target of experiment, cross-linked parts, and cross-linking agent. the task of creating connections between published literature and basic data is a difficult one because of the need to create formal structures and then to create the necessary content for each published article. the most likely scenario is that biologists will write and submit their papers along with the entries that they propose to add to the knowledge base. thus, the knowledge base will become an ever-growing communal store of scientific knowledge. 
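the flavor of such a knowledge base can be conveyed with a toy sketch. the record and attribute names below are invented for illustration (they are not riboweb's actual schema); the point is simply that entries have unique names, that many attribute values are the names of other entries, and that attribute values are drawn from a controlled set.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    name: str                                        # unique name within the knowledge base
    kind: str                                        # drawn from a controlled terminology
    attributes: dict = field(default_factory=dict)   # values may name other records

kb = {}

def add(record: Record) -> None:
    assert record.name not in kb, "record names must be unique"
    kb[record.name] = record

# invented entries illustrating a cross-linking experiment and its links
add(Record("16S-rRNA", "molecule"))
add(Record("agent-X", "cross-linking-agent"))
add(Record("paper-0001", "publication", {"title": "hypothetical reference"}))
add(Record("xlink-exp-7", "cross-linking-experiment", {
    "target": "16S-rRNA",               # names another record
    "cross-linking-agent": "agent-X",   # names another record
    "cross-linked-parts": ["site-A", "site-B"],
    "reported-in": "paper-0001",        # names another record
}))

# following links: from an experiment to the publication that reported it
exp = kb["xlink-exp-7"]
print(exp.attributes["target"], "was cross-linked; reported in",
      kb[exp.attributes["reported-in"]].attributes["title"])
```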
reviewers of the work will examine the knowledge-based elements, perhaps will run a set of automated consistency checks, and will allow the knowledge base to be modified if they deem the paper to be of sufficient scientific merit. riboweb in prototype form can be accessed on the web. one of the most exciting goals for computational biology and bioinformatics is the creation of a unified computational model of physiology. imagine a computer program that provides a comprehensive simulation of a human body. the simulation would be a complex mathematical model in which all the molecular details of each organ system would be represented in sufficient detail to allow complex "what if ?" questions to be asked. for example, a new therapeutic agent could be introduced into the system, and its effects on each of the organ subsystems and on their cellular apparatus could be assessed. the side-effect profile, possible toxicities, and perhaps even the efficacy of the agent could be assessed computationally before trials are begun on laboratory animals or human subjects. the model could be linked to visualizations to allow the teaching of medicine at all grade levels to benefit from our detailed understanding of physiological processes-visualizations would be both anatomic (where things are) and functional (what things do). finally, the model would provide an interface to human genetic and biological knowledge. what more natural user interface could there be for exploring physiology, anatomy, genetics, and biochemistry than the universally recognizable structure of a human that could be browsed at both macroscopic and microscopic levels of detail? as components of interest were found, they could be selected, and the available literature could be made available to the user. the complete computational model of a human is not close to completion. first, all the participants in the system (the molecules and the ways in which they associate to form higher-level aggregates) must be identified. second, the quantitative equations and symbolic relationships that summarize how the systems interact have not been elucidated fully. third, the computational representations and computer power to run such a simulation are not in place. researchers are, however, working in each of these areas. the genome projects will soon define all the molecules that constitute each organism. research in simulation and the new experimental technologies being developed will give us an understanding of how these molecules associate and perform their functions. finally, research in both clinical informatics and bioinformatics will provide the computational infrastructure required to deliver such technologies. bioinformatics is closely allied to clinical informatics. it differs in its emphasis on a reductionist view of biological systems, starting with sequence information and moving to structural and functional information. the emergence of the genome sequencing projects and the new technologies for measuring metabolic processes within cells is beginning to allow bioinformaticians to construct a more synthetic view of biological processes, which will complement the whole-organism, top-down approach of clinical informatics. more importantly, there are technologies that can be shared between bioinformatics and clinical informatics because they both focus on representing, storing, and analyzing biological data. 
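the unified physiological model is far off, but the kind of "what if?" question it would answer can be hinted at with a deliberately tiny simulation: a one-compartment model of a drug with first-order absorption and elimination, with all parameter values invented. it assumes numpy and scipy are available.

```python
import numpy as np
from scipy.integrate import solve_ivp

# invented parameters: absorption rate (1/h), elimination rate (1/h),
# oral dose (mg), and volume of distribution (L)
ka, ke, dose, volume = 1.0, 0.2, 100.0, 40.0

def model(t, y):
    gut, plasma = y
    return [-ka * gut,               # drug leaving the gut
            ka * gut - ke * plasma]  # drug entering and leaving the plasma

sol = solve_ivp(model, (0, 24), [dose, 0.0], t_eval=np.linspace(0, 24, 9))
for t, amount in zip(sol.t, sol.y[1]):
    print(f"t = {t:4.1f} h   plasma concentration ~ {amount / volume:5.2f} mg/L")
```

a genuine whole-body model would couple many thousands of such equations across organ systems, which is precisely why the participants, the equations, and the computational machinery listed above are all still needed.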
these technologies include the creation and management of standard terminologies and data representations, the integration of heterogeneous databases, the organization and searching of the biomedical literature, the use of machine learning techniques to extract new knowledge, the simulation of biological processes, and the creation of knowledge-based systems to support advanced practitioners in the two fields. suggested readings: the proceedings of one of the principal meetings in bioinformatics are an excellent source for up-to-date research reports; other important meetings include those sponsored by the . one introduction to the field of bioinformatics focuses on the use of statistical and artificial intelligence techniques in machine learning. another text introduces the different microarray technologies and how they are analyzed. dna and protein sequence analysis-a practical approach provides an introduction to sequence analysis for the interested biologist with limited computing experience. an edited volume provides an excellent introduction to the use of probabilistic representations of sequences for the purposes of alignment and multiple alignment. a primer provides a good introduction to the basic algorithms used in sequence analysis, including dynamic programming for sequence alignment. gusfield's text, algorithms on strings, trees and sequences: computer science and computational biology, provides an excellent introduction to the algorithmics of sequence and string analysis, with special attention paid to biological sequence analysis problems. artificial intelligence and molecular biology shows a variety of ways in which artificial intelligence techniques have been used to solve problems in biology. genotype to phenotype offers a useful collection of recent work in bioinformatics. another introduction to bioinformatics was written for computer scientists. the textbook by stryer is well written, illustrated, and updated on a regular basis; it provides an excellent introduction to basic molecular biology and biochemistry. questions for discussion: in what ways will bioinformatics and medical informatics interact in the future? will the research agendas of the two fields merge? will the introduction of dna and protein sequence information change the way that medical records are managed in the future? which types of systems will be most affected (laboratory, radiology, admission and discharge, financial )? it has been postulated that clinical informatics and bioinformatics are working on the same problems, but in some areas one field has made more progress than the other. why should an awareness of bioinformatics be expected of clinical informatics professionals? should a chapter on bioinformatics appear in a clinical informatics textbook? explain your answers. one major problem with introducing computers into clinical medicine is the extreme time and resource pressure placed on physicians and other health care workers; will the same problems arise in basic biomedical research? why have biologists and bioinformaticians embraced the web as a vehicle for disseminating data so quickly, whereas clinicians and clinical informaticians have been more hesitant to put their primary data online? key: cord- - athnjkh authors: etemad, hamid title: managing uncertain consequences of a global crisis: smes encountering adversities, losses, and new opportunities date: - - journal: j int entrep doi: . 
/s - - -z sha: doc_id: cord_uid: athnjkh nan more and faster, they faced uncertainties concerning the length of time that the higher demand would continue to justify the additional investments. the phenomenon of newly found (or lost) opportunities and associated uncertainties occupied most smes. generally, smes depend intensely on their buyers, supplies, employees, and resource providers without much slack in their optimally tuned value creation system. a fault, disruption, slow-down, strike, or the likes anywhere in the system would travel rapidly upstream and downstream within a value creation cycle with minor differences in its adverse impact on nearly every member. when a crisis strikes somewhere in the value creation stream, all members would soon suffer the pains. consider, for example, the impact of national border closures on international supplies and sales. generally, disruptions in logistics, including international closures, could stop ismes' flow of international supplies and, after the depletion of inventories, shipping and international deliveries would be forced to stop, which in turn would be exposing nearly all other members of the value-net slow-downs and stoppages indirectly, if not directly, sooner rather than later. in spite of many advantages of smes relying on collaborative international networks, the covid- crisis pointed out that all members will need to devise alternative contingency plans for disruptions that may affect them more severely than otherwise. the rapidly emerging evidence suggests that the capable, far-sighted, and innovative enterprises perceived the slow-downs, or stoppages in some cases, as an opportunity for starting, or increasing, their alternative ways of sustaining activities, including on-line and remote activities and involvements, in order to compensate for the shrinkage in their pre-covid demands, while the short-sighted or severely resource-constrained smes faced the difficult decision of closure in favor of "survival or self-preservation" strategy, thus losing expansion opportunities. the silver lining of the covid darkness is that we have collectively learned invaluable lessons that deserve a review and re-examination from entrepreneurial and internationalization perspectives in order to prepare for the next unexpected crises, regardless of its cause, location, magnitude, and timing. in few words, the world experienced a crisis of massive scale for which it was unprepared and even after some months there is no effective remedial strategy (or solution) for crises caused by the covid- pandemic in sight. the inevitable lesson of the above exposition is that even the most-prepared institutions of the society's last resort nearly collapsed. given such societal experiences, the sufferings of industries and enterprises, especially smaller ones, are understandable and the scholarly challenge before us is what should be researched and learned about, or from, this crisis to avoid the near collapse of smaller enterprises and industries, on which the society depends and may not easily absorb another crisis of similar, or even smaller, scale. the main intention of this introduction is not to review the emergence and unfolding of a major global crisis that inflicted massive damage on smes in general and on ismes in particular, but to search for pathways out of the covid- 's darkness to brighter horizons. 
accordingly, the logical questions in need of answers are: were there strategies that could reduce, if not minimize, the adverse impact of the crisis. could (or should) smes or isme have prepared alternative plans to protect themselves or possibly avoid the crippling impact of the crisis. why were smes affected so badly. are there lessons to be learned to fight the next one, regardless of location, time, and scale? in spite of the dominating context of the ongoing and still unfolding covid- crises, there is a need to learn about the world's both effective and difficult experiences at this point in time, which are beyond the aims and scope of this journal. rather, it aims to analyze and learn about the bright rays of light that can potentially enlighten entrepreneurial and human innovative ingenuity to find pathways from the darker to the brighter side of this global and non-discriminatory crisis, within the scope of international entrepreneurship. naturally, in seeking those pathways, one is expected to encounter barriers, obstacles, and institutional rigidities that could still pose nearly insurmountable challenges to the society, and especially to smes and ismes, have experienced in the past which were partially due to endemic rigidities (aparicio et al. ; north ). on the positive side of the ledger is that many of the above adverse factors are among the host of smaller crisis-like challenges that entrepreneurial enterprises face regularly and manage to bridge across them to realize fertile and promising opportunities. learning how such bridges are built and maintained not only is entrepreneurial, but also may help the causes of humanity by showing the way out of this, and other similar, crises. this will be a noble objective if it can be accomplished, which should motivate many to take up corresponding challenges in spite of the low chances of its success. we will return to this topic at the end of this article. a cautionary note it is very important to note that the next four articles appearing in this issue were neither invited for this issue nor received any special considerations. they are included in this issue as they offer concepts, contents, contexts, and issues that are relevant to the overriding theme of this issue and may assist smes trying to manage a crisis facing them and scholars interested in investigating related issues. without exceptions, they were subjected to the journal's rigorous and routine double-blind peerreview processes prior to their acceptance. they were then published in the journal website's "on-line first" option waiting for placement in an issue with a coherent theme drawing on the research of each and all the selected articles for that issue. the highlights of the four articles that follow are presented in the next section of this article. they offer promising argument and plausible pathways based on their scholarly research relevant to an emerging or unfolding crisis. structurally, this article comprises five parts. a developmental discussion of uncertainties and their types, causes, and remedies as well as enabling topics relating to crisis-like challenges follows this brief introduction in "developmental arguments." a brief highlight of each of the four articles appearing in this issue, and their interrelationships, will be presented in "the summary highlight of articles in this issue." "discussions" provides discussions related to the overriding theme of this article. 
conclusion and implications for further research, management, and conducive public policy will appear at "conclusion and implications." the extra-ordinary socio-economic pains and the added stress of the covid- crisis exposed entrepreneurs, smes, larger enterprises, and even national governments to unprecedented conditions and issues. as stated earlier, there is a need for understanding how and why it became such a major world crisis and what factors contributed to expanding and amplifying its impact in the early quarters of the year . although the primary aim of this issue is not reviewing the crippling impact of the covid- crisis, for it is done elsewhere (e.g., etemad ; surico and galeotti ), some of its influential factors emerged and stood out in the early days of that effected international business and entrepreneurial institutions from the very beginning; and yet, it took some time to enact defensive private and public actions against it. although covid- was not the first world-wide health crisis, many enterprises, one-afteranother, were defenselessly affected by it, even after a few months. while we have learned about some of the contributing factors to this expansive crisis, we are still in the dark as to why the broader community, and even resourceful enterprises, had failed to foresee the emergence and unfolding of such a crisis (surico and galeotti ) or prepare potential defenses against it. the crisis' high magnitude and broad scope involved nearly everyone worrying and learning about its impact first-hand as it unfolded; but it appears that top management teams (tmts) had not learned fully from the past or taken precautions against the emergence of potential crises, and for this one, in a timely fashion. however, the literature of managing a major crisis of the past, mainly in the large enterprises, has pointed out a few known forces, or potential factors, and issues that had contributed to past crises and are briefly reviewed below as follows. uncertainties as similar broad world-wide crisis-like challenges involving nearly all institutions have not been experienced in the recent past, enterprises and especially smaller firms found themselves unprepared and encountered high levels of discomfort and taxing uncertainties in their daily lives. generally, such effects are more disabling when enterprises are in the earlier stages of their life cycle when they suffer from lack of rich experience to guide them forward, and they do not have access to the necessary capabilities and resources to support them through (etemad a (etemad , b, . entrepreneurial enterprises that have already internationalized, or aspire to internationalize, encounter the customary risks and uncertainties of "foreignness" (e.g., hymer ; zaheer ; zaheer and mosakowski ) , lack of adequate "experiential experience"(e.g., eriksson et al. ; johanson and vahlne ) , "outsidership" (johanson and vahlne ) , and the liability of "newness" (stinchcombe ) . the covid crisis added risks and uncertainties arising from national lockdowns, unprecedented regulatory restrictions, closure of international borders not experienced since the second world war, the near collapse of international supply chains and logistics, among many others, most of which became effective without much prior notice, and each of which alone could push smaller enterprises to the edges of demise due to the consequent shortages, operational dysfunctions, closure, and potential bankruptcies. 
survival during the heights of the covid- crisis required rapid strategic adaptations, mostly technological, and use of alternative online facilities, capabilities, and novel strategies to reach stakeholders (customers, supplier, employees, investors, and the likes) to quickly compensate, if not substitute, for their previous arrangements that had become dysfunctional. smaller firms that had prepared alternative contingency plans, supported with reserved dynamic capabilities and resources (eisenhardt and martin ; jantunen et al. ) , viewed the dysfunctionality of rivals as opening opportunities and managed rapidly a successful transition to exploit them in a timely fashion, either through their own or through established others' functional "platform-type operations" (e.g., amazon, alibaba, shopify, spotify, and many similar multi-sided platforms). they were viewed by others as exceptional, progressive, even somewhat "disruptive" (utterback and acee ) and creatively destructive to others (chiles et al. ) in some cases as internationalized firms restrategized and refocused on their domestic markets in reaction to the closure of border and international logistics dysfunctions. however, such adaptations, deploying innovative and additive technologies and other innovative industry . (e.g., additive technologies, artificial intelligence, internet of things (iot), robotic, -d printing, and other i. . technologies) (hannibal , forthcoming) or collaboration with established on-line or off-line establishments, faced their own unexpected operational difficulties nationally, while their counterparts experienced them internationally, including "cross-cultural communication and misunderstandings" (noguera et al. ; mitchell et al. ; mcgrath et al. ) , national and international logistic problems, supply chain disruption, among many others, mostly attributable to covid-related restrictions. among such unexpected international factors were forced rapid change in consumer behavior and national preferences in exporting countries (verging on implicit discriminatory practices ), worsening diplomatic relations, rising international disputes, regulatory restriction, and a host of other well-documented causes, exposing firms to unforeseen risks and uncertainties not experienced for decades. therefore, the concepts of risk, uncertainty, and the way for mitigating, or getting over, true or perceived crises deserve discussion as they are pertinent to resolving crisis-like challenges facing smaller firms, regardless of their particular timing and situation. similarly, factors contributing to, or mitigating against, the experience level(s) of ex-ante unknowns, or "un-knowables" (huang and pearce ) , contributing to uncertainties merit equal reportedly, rapidly growing internationalized medium-sized enterprises reconfigured and redeployed parts of their facilities rapidly to fabricate and provide goods locally to reduce shortages in products previously imported from international markets. for example, canada goose, manufacturer of luxury winter clothing, began making personal protective garments for hospital staff (see an article entitled as "toronto -canada goose holdings inc. is moving to increase its domestic production of personal protective equipment for health-care workers across canada at https://globalnews.ca/news/ /canada-goose-production-medicalppe-coronavirus/ visited on april , ). 
similarly, many other companies, including ccm sporting equipment and yoga jeans, began producing protective visors, glasses, and gowns for essential workers and hospital staff members (see article entitled as "quebec companies answer the call to provide protective equipments" at https://montrealgazette.com/business/local-business/masks-and-ppes...visited on june , ). for all of the above companies, their sales required very different distribution channels, such as pharmacies and hospital supply companies that are far from clothing and sport equipment. the us-based m was ordered not to ship n face mask to canada in march-april . similarly, some chinese suppliers refused to ship previously placed and paid-for ordered supplies. considerations. nearly all articles appearing in this issue relate to such contributing factors and offer different bridging pathways, if not causeways, over the sea of scholarly challenges faced by international entrepreneurs in quests for their success in entrepreneurial internationalization. in the context of the ongoing crisis, the pertinent discussion of uncertainties is extensive (liesch et al. ; bylund and mccaffrey ; coff ; dimov ; dow ; huff et al. ; liesch et al. ; matanda and freeman ; mckelvie et al. ; mcmullen and shepherd ) and ranges from one extreme to another classic view at the other extreme-namely, from the akerlofian cross-sectional (akerlof ) to the knightian longitudinal uncertainties (knight ) . at the root of both is in the absence of objective, or reliable information and knowledge with very different density distributions. the akerlof's cross-sectional uncertainty relates to a relatively shorter term and the information and knowledge (erikson and korsgaard ) discrepancies (between or among agents) favoring those who have more of them and exposing those who have less, or lack of them. consider, for example, the case of buying a used car (or second-hand equipment). generally, the seller has more reliable, if not near perfect, knowledge about the conditions of his offerings in terms of its age, performance, repairs, faults, and the likes than a potential buyer who will have to assume, predict, or perceive the offer's conditions without reliable information in order to justify his decision to either buy the car (or the equipment) or not. the potential buyer may ask for more detailed information about the offers' conditions or seek assurances (or even guarantees) against its dysfunctions to pursue the transaction or not when he is in doubt. the noteworthy point is that the objective information(or knowledge) is available but the buyer cannot access it to assess it objectively-thus, the cross-sectional uncertainty is due to the asymmetric state of information and knowledge among parties involved in a transaction (townsend et al. ) , which clears soon after the transaction is consummated. in williamson's transaction cost approach (williamson ) , such discrepancies are viewed as transaction frictions between the parties, where at least one party acts opportunistically to maximize self-interest at the cost to the other(s), while the other party(ies) is incapable of securing the necessary objective information to form the required knowledge for enabling a prudent decision. 
within the uncertain state of covid- crisis, both of the above phenomena (asymmetry and opportunistic behavior) were clearly observable and contributed to creating subsidiary crises of different magnitudes-relatively larger ones for smaller enterprises and smaller ones for the larger companies, some of which were unduly amplified due to the lack of objective information and opportunistic behavior at the time. retrospectively, we collectively learned, for example, that there was no worldwide shortage of health-care equipment and supplies but major suppliers, or intermediaries, withheld their supplies and did not ship orders on time as they usually would have previously done, which created the perception of acute shortages forcing prices higher, knowing well that buyers were incapable of assessing the true availability of inventoried supplies for demanding lower prices, especially when the richer buyers (e.g., national governments) were willing to bid-up prices due to urgency of their situations. this is not far from a discriminating monopolist taking advantage of its uninformed buyers. similar situation happens when a small company fails to plan for contingencies to cover for emerging uncertainties by ordering just sufficient for their regularly needed supplies (e.g., the minimum order quantity) to minimize the short-term costs of holding inventory. the longer term overall costs of over-ordering to build contingency supplies is the cumulative cost of holding excess inventory over time, which could be viewed as an insurance premium for avoiding supply shortages, or stock-outs, while the true costs of such internal imprudent strategies become much higher when, for example, potential customer switch to other available brands, or there are uncertain and adverse external conditions, including artificially created shortages, as discussed earlier. generally, the top management of resource-constrained smaller companies aims to ensure the efficiency of their resources, including supplies, and to preserve adequate cash flows, to avoid short-term uncertainties of insolvency, akin to the akerlofian type (akerlof ) . in contrast, the absence of reliable information (or assurances) about steady supplies may contribute to, if not cause, a change in potential buyers' consumer behavior and further contribute to suppliers' over-estimation of buyers' demand trajectory over a longer time period fearing from facing acute adverse conditions such as those discussed in the previous case. however, such internal (e.g., management oversights) or external (e.g., suppliers withholding shipments or change in consumer behavior) causes due to absence of the required information, reliable forecasts or estimates, and imperfect knowledge over time begin to pertain more to knightian uncertainties than akerlofian types (i.e., across transacting agents, which is comparatively shorter term and more frequently encountered uncertainties). the impact of resources and capabilities naturally, a firm's level of resources (wernerfelt (wernerfelt , barney et al. ) or institutional inadequacies and restrictions (bruton et al. ; north dc ; kostova ; yousafzai et al. ) may play influential roles in mitigation of encountered uncertainties. consider, for example, the difference in abilities of smaller resource-constrained enterprises in continual need of minimizing fixed costs and larger and richer institutions (such as national governments) with higher priorities (than costs) to enforce performance contract(s). 
the richer resources of larger institutions pose a more credible threat of suing the supplier(s) lateron for potential damages of higher costs or delayed shipments than those of smaller firms, thus reducing the temptation for opportunistic behaviors (williamson ) over time. as the transaction cost theory suggests (williamson ) , the ever-presence of such threats may dissuade suppliers from delaying and withholding shipments for the hope of higher revenues. furthermore, even the opportunists may be exposed as other lower costs suppliers may realize the opportunities and respond with lower prices. time, timing, and longer term uncertainties the above demonstrative discussions point to the critical role of timely-planned acquisition of capabilities and resources over time before emergencies, or shortages, become acute. the time dimension of this discussion relates to knight's ( ) longitudinal uncertainties. the future is inherently uncertain; but one's needs and their corresponding transactions costs are more predictable at the time as, for example, transactions can be consummated at the prevailing prices. delaying a transaction in the hope of buying at lower costs exposes the transaction to longitudinal uncertainties, as the uncertainties' ex-post costs and the true prices are only revealed in the due course of time. similarly, the longer term costs of preparedness and security can minimize the short-term costs to individual employees and other corporate persons. accordingly, the intensity of a crisis, and its cumulative costs, may force national and local authorities to bid-up prices and absorb the much higher short-term costs at the time to ensure acquisition of essential supplies in order to avoid the difficult-to-predict costs of longitudinal uncertainties. for smaller enterprises, however, the state of their resources and the extent of prior experiences may influence their decision at the time. this will be discussed below. past experience and the firm's stages of life-cycle generally, smaller and younger companies are short of excess resources and lack rich experience to provide them with a longer and far-sighted outlook for avoiding longitudinal uncertainties of the knightian types by, for example, keeping a level of contingency inventories for difficult conditions and rainy days. however, even smaller start-ups with experienced serial entrepreneurs at the helm can benefit from the past experiences of their founding entrepreneur(s) through what etemad calls as "the carry-over of paternal heritage" (etemad a) to enable planning and providing for their necessary resources. the state of competition on one extreme, a monopolist can control supplies and create artificial shortages to force prices up in normal conditions. under distress and unusual conditions, customers may bid-up prices to have priority access to the available supplies. in the perfect competition, on the other extreme, many suppliers compete to attract demand and prices remain relatively competitive due to highly elastic demands. practically, however, the state of global competition is likely to be closer to a combination of regional oligopolistic (or monopolistic) competition and global competitive conditions, where suppliers perceive to have certain monopolistic powers to manipulate prices (e.g., due to their brand equity, location, or product quality), while they need to compete in a nearly hyper-competitive state (chen et al. ) with other competitors who provide similar offerings. 
the knowledge of the competitive and institutional structures (jepperson ; yousafzai et al. ; welter and smallbone ) is, therefore, essential, especially to smes for deciding as optimally as possible, which all depends on both the buyers and suppliers state of information, communication, and knowledge, which is further discussed below. the state of communication and information the advanced state of a firm's information and communication technology (ict) is highly likely to enable it to decide prudently. as discussed earlier, uncertainties depend on one's state of reliable information impacting the achievement of optimality, which in turn depends on the state of information at the time. in short, a small firm's potential exposure to cross-sectional and longitudinal risks and uncertainties is also likely to depend on information on a combination of influential factors, some of which are discussed above; prominent similar arguments apply to national preparedness and national security over time to shield individual and corporate citizens from bearing short-term or long-term high costs-the national costs per capita may pale relative to the immeasurable costs of human mortalities paid by the deceased people and their families, the massive unemployment, or high costs related to shortages in major crises, such as the covid- pandemic. among such influential factors is reliable information about their operating context at the time and its probable trajectory in the near future. furthermore, nearly all of the emerging advances in management and production, including additive technologies, depend heavily on information (hannibal , forthcoming) finally, the next section will seek to discuss the above elements within the articles that follow. this part consists of summary highlights of the contribution of the four double-blind, peer-reviewed articles with relevant materials to an emerging or an unfolding crisis. the second article in this issue is entitled "muddling through akerlofian and knightian uncertainty: the role of socio-behavioral integration, positive affective tone, and polychronicity" and is co-authored by daniel leunbach, truls erikson, and max rapp-ricciard. as discussed earlier, uncertainty and risk-taking propensity have been recognized as integral parts of general entrepreneurship for a long time (e.g., gartner and liao ) and this article focuses on studying them as they relate to individual entrepreneurs' affective socio-behavioral and also the way entrepreneurs function, including how they perceive their situation, manage, progress, and adjust their outlook within the environment(s) that exposes them to perceived risks and uncertainties. from an entrepreneurial perspective, a combined interaction of time and flow of information, or lack thereof, forming a knowledge base, is what entrepreneurial decisions depend. when the entrepreneurs need to make decisions without perfect cognition (based on his information and knowledge about the state of affairs at the time) within a relatively short time period, they and their decisions are exposed to an uncertain state of the world. such uncertainty(ies) within a relatively short time span is (are) termed as crosssectional uncertainty. generally, it is difficult to acquire nearly perfect information due to the shortage of time or the cost of searching for the information. 
george akerlof ( ) suggested that such perceived uncertainty would not be as much due to lack of pertinent information as it would be due to the asymmetric distribution of information (brown ) and the corresponding knowledge among agents-i.e., those who had more potent information and those who did not but needed it. consider a typical entrepreneur in need of acquiring a good, or service, from a supplier or service provider, who has nearly perfect information about, or the knowledge of, the good or service he offers but does not fully disclose them to the entrepreneur in self-interest, which gives rise to the asymmetric distribution of the information (or knowledge) between the supplier and the potential buyer. generally, this is also termed as "akerlofian uncertainty." time is an important factor in entrepreneurial decisions, and the influence of time is as significant as the state of information. for example, the urgency of a decision deprives the entrepreneur of sufficient time for conducting informative research to enrich his state of information and forces the entrepreneur to decide earlier, rather than later, with some discomfort and reservation due to his insufficient information. with more time for acquiring sufficient information for forming a supportive knowledge base, he can comfortably decide to consummate a particular transaction or not. the time cost of switching to another supplier or conducting research may increase the transaction costs and expose the transaction to longitudinal uncertainties as well. in contrast to the asymmetric distribution of information across individuals in the short term (brown ) , the required time to acquire, or develop, the required information about the relevant state of affairs for forming the corresponding knowledge for portraying the future, or the near future, gives rise to longitudinal uncertainty, as suggested by frank knight's article in (knight ) and is viewed as "knightian uncertainty." generally, the future is uncertain and it is not prudent to assume that it will be a linear extension of the current state of affairs, or alternatively, it will be predictable perfectly. again, both the information and time are influential factors as more pertinent information is revealed over time. entrepreneurial start-ups, and new ventures, for example, suffer from both the shortage of time and information and thus offer fertile context for exploring not only the interaction of time and information, but also how new venture teams (nvts) perceive the gravity of risks and uncertainties facing them. therefore, exploring uncertainty within new venture teams, especially those based on new science and technology, which usually encounter higher uncertainties and commercialization risks, enables a deeper understanding of how uncertainties are perceived and managed by the new venture teams. as discussed in some length in the paper, the authors' research methodology enabled them to observe the impact of nvt's socio-behavioral and psychological characteristic and explore their reactions and response to both the shorter and longer term uncertainties facing them. in the context of a major crisis, smes' top management teams (tmts) suffer from both the inadequacies of information and shortage of time in most cases, neither of which they can control or extend into the future. 
in a major crisis, there are complex uncertainties with unclear prospects and come without prior warning-e.g., what will have an immediate adverse impact, will it increase or subside, how long will it last, and what will be the magnitude of accumulated damage when the crisis is nearly over? these are among many other questions that have no certain answers at the time. similar to young start-ups, where founder-entrepreneurs suffer from shortages of time, knowledge (or reliable information), and resources, in addition to uncertainties associated with consumer behavior and market reactions as well as regulatory restriction, smes, and especially ismes, suffer from complex uncertainties for which they were not prepared, nor would they have sufficient time and resource to deal with the unfolding crisis satisfactorily. furthermore, the normal sources of previous help and advice, including their social networks and support agencies, such as lenders, service providers, and suppliers, would be facing even larger problems of their own and incapable of assisting them in a timely fashion, which call for adequate alternative contingency plans for the rainy days as discussed earlier and further reviewed in a later section. this discussion points to a need for examining the potential role of environmental context in increasing uncertainties or mitigating against them. the next article examines this very topic next. the third article of this issue examines the context within which entrepreneurial decisions are made. it is entitled "home country institutional context and entrepreneurial internationalization: the significance of human capital attributes" and co-authored by the team of vahid jafari-sadeghi, jean-marie nkongolo-bakenda, léo-paul dana, robert b. anderson, and paolo pietro biancone. nearly all decisions are embedded in a context, and the context for most international entrepreneurship decisions is perceived as more complex than those in the home market, as extensively discussed by internationalization literature. internationalization involves at least two contextsone characterized by formal institutional structures and informal socio-cultural values (hofstede (hofstede , (hofstede , hofstede et al. ) at home, both helpful and restrictive, and the other at the host country environment, where each country's institutional structures differ from the others (chen et al. ; li and zahra ) . even in the european union's (eu) single market, where eu has increasing harmonized intercountry-wide regulatory and institutional requirements ever since , different local socio-cultural and behavioral forces influence decisions differently, especially those affecting consumer behavior and market-sensitive aspects, which are more deeply embedded in their country's institutional structures than others, encouraging or restricting certain entrepreneurial actions. generally, international entrepreneurship and their corresponding entrepreneurial actions are deeply embedded in their more complex contexts (barrett, jones, mcevoy, granovetter ; wang and altinay ; yousafzai et al. ) and highly internationalized smes (ismes), and even larger firms, need to respond sensitively to the various local (i.e., contextual) facets and adapt their practices accordingly (welter ) , which in turn add incremental complexities and expose the firm's early entrepreneurial, and especially marketing, to a higher degree of risk and cross-sectional uncertainties than those at home which is more familiar than elsewhere. 
however, firms learn from their host environment and also from their competitors as to how to mitigate their risks and remove the information (and knowledge) asymmetries over time in order to operate successfully after their early days in the host county environment. naturally, entrepreneurial activities of innovative start-ups face higher risks and uncertainties at the outset, as discussed earlier. although the cross-sectional methodology of this article's research across european country-environments, using structural equation modeling (sem), could not examine the specific impact of various environmental characteristics on entrepreneurial orientation and entrepreneurial practices at the local levels in each country context over time, the overall indicators pointed to contextual influences strongly affecting various facets of internationalization and international entrepreneurship. it is noteworthy that the entrepreneurial intentions and orientations of "non-entrepreneurs" that portray their context had a significant positive influence on the creation of entrepreneurial businesses and their internationalizations. in summary, the findings of this research strongly support the notion that the true, or perceived, state of the firms' environment influences their strategic management of their regular affairs as well as the management of an emerging or unfolding crisis, regardless of its magnitude and timing. the fourth article in this issue complements the previous article through a deeper examination of institutional impacts from a women entrepreneurs' perspective. it is entitled "the neglected role of formal and informal institutions in women's entrepreneurship: a multi-level analysis" and is co-authored by daniela gimenez-jimenez, gimenez-jimenez, andrea calabrò, and david urbano. this article draws on, and extends, the impact of institutional context, discussed above, to include what the authors termed as "informal institutions' impact on women entrepreneurs." as discussed in the context of european countries earlier, socio-cultural and behavioral aspects of the various societies vary and influence different entrepreneurship initiatives differently. in contrast to the tangible influences and effects of formal institutions, the socio-cultural values of a society remain nearly invisible, but quite influential. what the authors call as neglected "informal institutions" are widely portrayed as a society's "software" by cultural anthropologists, such as geert hofstede, among others (hofstede (hofstede , (hofstede , hofstede et al. ) . in contrast to the "hardware" that is structural and tangible, the "software" remains hidden, if not intangible, neglected, and ignored, that it functions consistent with its socio-cultural values and daily behavioral routine, which act as design parameters woven into the software's programs that control social functions quietly. the underlying multi-level research methodology analyzing the entrepreneurial experience of more than , women in countries in this article suggests that both of the societal formal and informal institutions impact entrepreneurship, and especially women entrepreneurs more significantly and profoundly, and yet they have remained "neglected." in the context of a major crisis, facing society in general, and entrepreneurial smes in particular, the question of how do the formal and informal institutions of a society assist or hamper effective crisis management, especially by women executives, resumes high importance. 
the casual observation of conditions imposed by the covid- crisis in the past months, or so, suggests that both the society's formal and informal institutions of the affected environments imposed higher expectation, if not more responsibilities, on women than their previous family setting transformed into home and office at the same time, which have adversely affected women's time, effort, and attention in effectively managing their firm's crisis and also attending to their family as they did previously. assuming that crisis management requires management's more intensive attention and effort than those of normal times, the important question for women executives is, how should the required additional efforts by busier women executives be assisted? and if they cannot be, who should be bearing the additional costs and the consequent damages to both the women's family and their firms? specifically, what should be the uncodified, but understood, societal expectations of women executives? are they expected to sacrifice their family's wellbeing or not concentrate fully on managing their enterprises' crisis effectively? naturally, the preliminary answer lies in what is consistent with the society's informal sociocultural value systems as well as those formally codified in the society's laws, regulations, and broadly accepted behaviors. this discussion provides a socio-cultural bridge to the next article. the fifth article of this issue is entitled "market orientation and strategic decisions on immigrant and ethnic small firms" and is co-authored by eduardo picanço cruz, roberto pessoa de queiroz falcão, and rafael cuba mancebo. as the title suggests, this research is about entrepreneurs facing a new and possibly different environmental context than their familiar previous one at home, thus exposing them to the fear, if not the uncertainty, of unknowns, including the hidden and intangible socio-cultural value systems. they need to decide about their overall strategy, including marketing orientation, in their newly adopted environment. immigrant entrepreneurs face the challenges of belonging to two environments, one at home, which they left behind, and the other in their unfamiliar new home (the host country), in which they would aim to succeed (etemad b) . when there are significant differences between the two, they face a minor crisis in terms of the uncertainty of if their innate home strategic orientation or that of their host can serve them best. either of the two strategic choices exposes them to certain uncertain costs and benefits, which are not clear at the time. naturally, their familiarity with their previous home's socio-cultural environment, within which they feel comfort and may need nearly no new learning and adaptation, pushes them to operate in a similar environment to home to give them certain advantages and possibly lower risks and uncertainties. this orientation attracts them towards their ethnic and immigrant community, or enclave, based primarily on the perception that their ethnic communities, enclaves, and their market segments in their adopted home still resemble their home environment's context, which in turn suggest that they can capitalize on them by relying on their common ethnic social capital (davidsson and honig ) , using their home language, culture, and routine practices with minimal cognitive and psychic pain of adapting to the new context. 
however, that perception, or assumption, may not be valid or functional where the society's socio-cultural values encourage rapid adaptation and change so that immigrants become like other native citizens. although a market orientation that concentrates on the ethnic community in the adopted country has its advantages, including lower perceived short-term uncertainty (e.g., risk and uncertainty of the akerlofian type), it may not work, or may prove restrictive in the longer term, because, for example, the community may be small, decreasing in size, and gradually adapting to the host country's prevailing socio-cultural values, thus posing an uncertainty of the knightian type, where the future state is difficult to predict. in contrast, adopting a strategic and market orientation towards attractive market segments of the new home, with its different socio-cultural values and routine practices, may expose the young entrepreneurial firm to other well-documented risks and uncertainties, similar to the difficulties encountered by a firm starting a new operation in a foreign country (hymer; vahlne; zaheer; zaheer and mosakowski, among many others). this strategy may also force the nascent firm to compete with entrenched competition, of both immigrant and indigenous origins, unless it can offer innovative, or unique, products (or services), similar to other native innovative start-ups. the noteworthy point is that, as discussed earlier (in the "introductions" and "developmental arguments" sections), the state of the firm's resources and the extent of the entrepreneur's (or the firm's top management team's) experience, information, and knowledge may make the difference between ultimate success and mere survival under either of the above strategies. the rich multi-method and longitudinal research methodology of this article over a -year period, involving interviews, ethnographic observation, and regular data collection among ethnic and immigrant entrepreneurs in brazilian enclaves world-wide, enabled the authors to offer a conceptual framework and complementary insights based on their findings and experiential knowledge. in summary, the research supporting the articles in this part is both consistent with and supportive of the arguments presented in "introductions" and "developmental arguments." they will also serve as a basis for arguments in the following "discussions." as stated in the "introductions" section, this issue's release coincides with the world being in the midst of the coronavirus pandemic. initially, and on the face of it, the pandemic was perceived as a health-care problem in china, followed by other countries in east and south-east asia; but it soon turned into a world-wide crisis far beyond health care, quickly affecting nearly all aspects of life in other countries before inflicting them with unfolding crises of their own. generally, health-care institutions are viewed as society's institutions of last resort and are expected to deal with the potential crises of others, rather than becoming the epicenters of a crisis and posing challenges to others and to their respective societies as a whole. the health-care system in publicly financed countries is given resources; it is held in high regard because of its highly capable human resources; it is assumed to be well managed and ready to resolve health-care-related problems, if not crises; and it is consequently expected to solve all health-related challenges effectively as they arise.
however, regardless of their orientation, whether privately held or publicly supported, health-care systems traveled to the brink of breakdown and collapse. although they had previously dealt with similar but smaller outbreaks of regional and seasonal flu and other epidemics (e.g., the hiv/aids outbreak of the late s, now endemic world-wide, the sars epidemic of , and the n h flu that became a pandemic, among others in recent memory), the covid- pandemic overwhelmed them. retrospectively, the health-care institutions, and the system as a whole, were not the only sector experiencing high levels of systemic fatigue, stress, and strain nearing breakdown, suggesting that some countries were not prepared to deal with a major crisis. naturally, institutions less prominent than a country's health-care system, and likewise governments, were not spared; many ad-hoc experimental procedures had to be used, and valuable lessons had to be learned in a hurry, in various institutions and on many occasions, in the hope of saving precarious lives and livelihoods. such rapidly developing phenomena, seemingly beyond control initially, influenced the overall theme of this issue, although the already accepted articles waiting to be placed in a regular publication were not written on the topic of crisis management. given the gravity of the covid- pandemic pushing many institutions into crises of survival, this issue adopted the overriding thematic perspective of crisis management to enable a richer discussion of its different components, with a focus on smes and ismes, based on the specific research of each of the articles accepted through the journal's rigorous double-blind review process. expectedly, the resource-constrained small- and medium-sized enterprises (smes) and their internationalized counterparts (ismes) suffered deeply due to the lockdown of their customers, employees, and service providers. similarly, the sudden stoppage of major national and international economic activities in many advanced countries, including those on the european and american continents, paralyzed them initially, as the early impacts were totally unexpected. the health-care system was not the only sector experiencing dangerous levels of stress and strain. the entertainment, hotel, and lodging industries, performing and creative arts, hospitality and restaurant industries, and tourism, along with their complementary goods and service providers, mostly smaller enterprises, among many other smes, were caught off-guard and suffered deeply from lack of demand due to the rapid economic slow-down, fears of infection, and enforcement of lockdowns in many affected countries. similarly, integrated manufacturing systems, such as the automobile industries, where parts had to arrive from different international sources on time, if not just in time, came to a halt because of the near collapse of international supply chains, in addition to the national protectionism of the past showing its ugly face after some time. such conditions had not been seen for some seven decades, since the second world war triggered the multi-lateral agreement at bretton woods and the multi-national conferences that created the world's enduring institutions, such as gatt (replaced by the wto), the imf, and the world bank.
at the socio-cultural and economic levels, the imposed self-isolation and lockdown of cities and communities, to avoid further transmission of the coronavirus to unsuspecting others, entailed immobility, and the imposition of social distancing disrupted all normal routine behaviors. many industries could no longer operate as safe social distances could not be provided. international, national, and even regional travel was shut down as national borders were nearly closed. as a direct result, small- and medium-sized enterprises in the affected industries, which depend intensively on others, suffered a massive double whammy: their demand had collapsed, and their supplies had stopped. some were ordered closed and others had no reason to operate due to the immobility of employees, customers, and clients alike, as well as severe shortages of parts and supplies. consequently, they had to shut down to minimize unproductive fixed costs. in short, the world has been, and in some cases is still, struggling with the covid- crisis at the time of this writing in june . only months earlier, in december , not many people imagined the emergence of the crisis in their locale, let alone a massive global pandemic crippling community after community, which revealed deep, unattended socio-cultural, economic, and institutional faults. collectively, they pointed to the unpreparedness of many unsuspecting productive enterprises and institutions alike. in contrast, more far-sighted and conservative institutions, with alternative contingency plans based on previous and relatively minor crisis-like experiences (such as transportation and labor strikes, not comparable to covid- ), activated their relevant contingency plans. consider, for example, that the alternative of online marketing and sales could compensate for the immobility of customers and in-person sales transactions. naturally, enterprises with on-line capabilities either gained market share or suffered less severely. in short, the overriding lesson of this crisis, discussed at some length in "developmental arguments" and in "the summary highlight of articles in this issue" and in response to the queries raised in "introductions," is that institutional under- and unpreparedness, regardless of level, location, and size, inflicted far higher harms than the incremental costs of carrying alternative contingency plans for rainy days, as evidenced by the considerable success of on-line expansion and the quick reconfiguration of flexible manufacturing to accommodate the unexpected oddities of the unfolding crisis. aside from the global scale of the covid crisis, similar, if not more severe, crises had happened in different locations, some repeatedly, and humanity had suffered and should have learned. consider, for example, the torrential rains and subsequent flooding and mudslides in temperate regions; massive snowfalls in the northern hemisphere shutting down activities for days, if not weeks, at a time; massive earthquakes destroying residential and office buildings without warning (e.g., in christchurch, new zealand, and the ancient trading city of bam in southeastern iran); the heavy ice storm in eastern canada destroying electrical transmission lines and shutting down cities for many days; and the massive and widespread indian ocean tsunami destroying coastal areas in about countries with a quarter-million casualties, among many others; all should have served as wake-up calls.
while many of the past stricken areas still remain exposed and vulnerable to recurrence, reinforcement and warning systems are in place for only a minority of them. for example, the earthquake detection systems in the deep seas neighboring tsunami-prone areas have provided ample warning to vulnerable regions and avoided major damage. conversely, the destruction from the massive tsunami of eastern japan, which destroyed the fukushima daiichi nuclear power plant and caused large financial, property, and human losses as well as untold missing persons, could have been avoided. in an earthquake-prone country such as japan, the protective barrier walls, absent their design faults, should have protected the fukushima nuclear power plant and avoided the release of toxic nuclear emissions. the above discussions point to a few noteworthy lessons and implications as follows: the possibility of recurrence, possibly with higher striking probabilities than before, is not out of the realm of reality, thus calling for planned precautions and, where those are inadequate, preparations to improve preparedness. the organs, institutions, and systems weakened by a crisis, regardless of its magnitude and gravity, are in need of rebuilding and re-enforcement to endure the next adverse impacts; this also includes smes that came near demise and whose management systems were nearly compromised by the emerging covid crisis, its still uncertain unfolding, and its post-covid aftermath. the primary vital support systems, especially the support systems of last resort, including first responders, emergency systems, and warning and rescue systems, among others, need to develop alternative and functional contingencies and stay near readiness, as the timing of the next crisis may remain a true uncertainty (of the knightian type discussed earlier). the immediate support systems, agencies, or persons need to have planned redundancies and be ready to act as backups should their clients be affected by unforeseen events. sustainability and resilience need to become an integral part of all contingency plans, as the strength of the collectivity depends on the strength and resilience of the weakest link(s) (blatt). the prevention of a natural disaster of covid-scale possibly engulfing humanity requires supra-national institutions with effective plans, incentives, and sanctions to prevent self-interest at a cost to the larger collectivity, if not to humankind. the immediate implication of the above discussions for the post-mortem analysis of a crisis, regardless of its scale and magnitude, is to learn about the causes and the reasons for failing to stop, and possibly reverse, its effects in a timely fashion. in the context of smes and ismes, management training, simulation to test the efficacy and reliability of crisis scenarios for alternative contingency plans, and their feasibility and functionality, among others, are of critical importance, which points to the four equally important efforts: crisis management needs to become an indispensable part of education at all professional levels to enable individuals to protect themselves and assist others in need as well as to reduce the burdens and gravity of the collective harms. the societal backbone institutions and institutional infra-structures on which others depend must be strengthened so that they can withstand the impact of the next crisis, regardless of its timing and origin, and support their dependents.
the widespread lessons learned from the covid- crisis should be utilized to prepare for a potentially more massive crisis in the not-so-distant future. smes, and especially ismes, as socio-economic institutions with societal impact, need to re-examine their dependencies on others and take steps to avoid the recurrence of such vulnerabilities in ways consistent with their long-term aims and objectives. in the final analysis, the experience of the covid- pandemic indicates that humanity is fragile and only collective actions can provide the necessary capabilities and resources for dealing with the next potential disaster. similarly, the smaller institutions that provide the basic ingredients, parts, and support for the full functioning of their networks and the livelihood of their respective members need the assurance of mutual support in order to survive and to deliver the vital support needed of them. on a final note, it is a privilege for the journal of international entrepreneurship to take this opportunity to reflect on the ongoing crisis, which can still inflict further harm and damage nearly beyond the control of national governments. on behalf of the journal, i invite the scholarly community to take up the challenge of educating and preparing us for the next crisis, regardless of its nature, location, and timing. the journal is prepared to offer thematic and special issue(s) covering the management of crisis in smes and ismes alike.
the market for "lemons": quality uncertainty and the market mechanism
institutional factors, opportunity entrepreneurship and economic growth: panel data evidence
the resource-based view of the firm: ten years after
resilience in entrepreneurial teams: developing the capacity to pull through
the palgrave encyclopedia of strategic management
institutional theory and entrepreneurship: where are we now and where do we need to move in the future?
a theory of entrepreneurship and institutional uncertainty
navigating hypercompetitive environment: the role of action aggressiveness and tmt integration
home country institutions, social value orientation, and the internationalization of ventures
beyond creative destruction and entrepreneurial discovery: a radical austrian approach to entrepreneurship
how buyers cope with uncertainty when acquiring firms in knowledge-intensive industries: caveat emptor
the role of social and human capital among nascent entrepreneurs
uncertainty about uncertainty
foundations for new economic thinking
experiential knowledge and cost in the internationalization process
knowledge as the source of opportunity
early strategic heritage: the carryover effect on entrepreneurial firm's life cycle
advances and challenges in the evolving field of international entrepreneurship: the case of migrant and diaspora entrepreneurs
actions, actors, strategies and growth trajectories in international entrepreneurship
management of crisis by smes around the world
risk-takers and taking risks
economic action and social structure: the problem of embeddedness
the influence of additive manufacturing on early internationalization: considerations into potential avenues of ie research
the cultural relativity of organizational practices and theories
culture's consequences: comparing values, behaviors, institutions and organizations across nations
cultures and organizations: software of the mind
managing the unknowable: the effectiveness of early-stage investor gut feel in entrepreneurial investment decisions
a conversation on uncertainty in managerial and organizational cognition. in: uncertainty and strategic decision making
the international operations of national firms: a study of direct foreign investment
entrepreneurial orientation, dynamic capabilities and international performance
institutions, institutional effects, and institutionalism
the internationalization process of a firm - a model of knowledge foreign and increasing market commitments
the uppsala internationalization process model revisited: from liability of foreignness to liability of outsidership
country institutional profiles: concept and measurement
formal institutions, culture, and venture capital activity: a cross-country analysis
risk and uncertainty in internationalisation and international entrepreneurship studies. the multinational enterprise and the emergence of the global factory
effect of perceived environmental uncertainty on exporter-importer interorganizational relationships and export performance improvement
elitists, risk-takers, and rugged individualists? an exploratory analysis of cultural differences between entrepreneurs and non-entrepreneurs
unpacking the uncertainty construct: implications for entrepreneurial action
entrepreneurial action and the role of uncertainty in the theory of the entrepreneur
cross-cultural cognitions and the venture creation decision
socio-cultural factors and female entrepreneurship
institutions, institutional change and economic performance
social structure and organizations
the economics of a pandemic: the case of covid-
uncertainty, knowledge problems, and entrepreneurial action
disruptive technologies: an expanded view
social embeddedness, entrepreneurial orientation and firm growth in ethnic minority small businesses in the uk
contextualizing entrepreneurship-conceptual challenges and ways forward
institutional perspectives on entrepreneurial behavior in challenging environments
resource-based view of the firm
the resource-based view of the firm: ten years after
the economics of organization: the transaction cost approach
the new institutional economics: taking stock, looking ahead
institutional theory and contextual embeddedness of women's entrepreneurial leadership: evidence from countries
overcoming the liability of foreignness
the dynamics of the liability of foreignness: a global study of survival in financial services
key: cord- -w sb h authors: schumacher, garrett j.; sawaya, sterling; nelson, demetrius; hansen, aaron j. title: genetic information insecurity as state of the art date: - - journal: biorxiv doi: . / . . . sha: doc_id: cord_uid: w sb h
genetic information is being generated at an increasingly rapid pace, offering advances in science and medicine that are paralleled only by the threats and risks present within the responsible ecosystem. human genetic information is identifiable and contains sensitive information, but genetic data security is only recently gaining attention. genetic data is generated in an evolving and distributed cyber-physical ecosystem, with multiple systems that handle data and multiple partners that utilize the data. this paper defines security classifications of genetic information and discusses the threats, vulnerabilities, and risk found throughout the entire genetic information ecosystem. laboratory security was found to be especially challenging, primarily due to devices and protocols that were not designed with security in mind. likewise, other industry standards and best practices threaten the security of the ecosystem. a breach or exposure anywhere in the ecosystem can compromise sensitive information. extensive development will be required to realize the potential of this emerging field while protecting the bioeconomy and all of its stakeholders.
genetic information contained in nucleic acids, such as deoxyribonucleic acid (dna), has become ubiquitous in society, enabled primarily by rapid biotechnological development and drastic decreases in dna sequencing and dna synthesis costs (berger and schneck; naveed et al.). innovation in these industries has far outpaced regulatory capacity and remained somewhat isolated from the information security and privacy domains. a single human whole genome sequence can cost hundreds to thousands of dollars per sample, and when amassed, genetic information can be worth millions. this positions genetic information systems as likely targets for cyber and physical attacks.
human genetic information is identifiable (lowrance and collins) and also contains sensitive health information; yet it is not always defined in these capacities by law. unlike most other forms of data, it is immutable, remaining with an individual for their entire life. sensitive human genetic data necessitates protection for the sake of individuals, their relatives, and ethnic groups; genetic information in general must be protected to prevent national and global threats (sawaya et al.). therefore, human genetic information is a uniquely confidential form of data that requires increased security controls and scrutiny. furthermore, non-human biological sources of genetic data are also sensitive. for example, microbial genetic data can be used to create designer microbes with crispr-cas and other synthetic biology techniques (werner), presenting global and national security concerns. several genomics stakeholders have reported security incidents according to news sources and breach notifications. the most common reasons were misconfigurations of cloud security settings and email phishing attacks, and one resulted from a stolen personal computer containing sensitive information. the national health service's genomics england database in the united kingdom has been targeted by malicious nation-state actors, and andme's chief security officer said their database of around ten million individuals is of extreme value and therefore "certainly is of interest to nation states". despite this recognition, proper measures to protect genetic information are often lacking under current best practices in relevant industries and stakeholders. multi-stakeholder involvement and improved understanding of the security risks to biotechnology are required in order to develop appropriate countermeasures (millett et al.). towards these goals, this paper expands upon a microbiological genetic information system assessment by fayans et al. (fayans et al.) to include a broader range of genetic information, as well as novel concepts and additional threats to the ecosystem. confidentiality, integrity, and availability are the core principles governing the secure operation of a system (fayans et al.; international organization for standardization). confidentiality is the principle of ensuring access to information is restricted based upon the information's sensitivity. examples of confidentiality include encryption, access controls, and authorization. integrity is the concept of protecting information from unauthorized modification or deletion, while availability ensures information is accessible to authorized parties at all times. integrity examples include logging events, backups, minimizing material degradation, and authenticity verification. availability can be described as minimizing the chance of data or material destruction, as well as network, power, and other infrastructure outages. sensitive genetic information, which includes both biological material and digital genetic data, is the primary asset of concern, and associated assets, such as metadata, electronic health records and intellectual property, are also vulnerable within this ecosystem. genetic information can be classified into two primary levels, sensitive and nonsensitive, based upon value, confidentiality requirements, criticality, and inherent risk. sensitive genetic information can be further categorized into restricted and private sublevels.
❖ restricted sensitive genetic information can be expected to cause significant risk to a nation, ethnic group, individual, or stakeholder if it is disclosed, modified, or destroyed without authorization. the highest level of security controls should be applied to restricted sensitive genetic information. examples of restricted sensitive information are material and data sourced from humans, resources humans rely upon, and organisms and microbes that could cause harm to humans or resources humans rely upon. due to its identifiability, human genetic information can be especially sensitive and thus requires special security considerations.
❖ private sensitive genetic information can be expected to cause a moderate level of risk to a nation, ethnic group, individual, or stakeholder if it is disclosed, modified, or destroyed without authorization. genetic information that is not explicitly classified as restricted sensitive or nonsensitive should be treated as private sensitive information. a reasonable level of security controls should be applied to private sensitive information. examples of private sensitive information are intellectual property from research, breeding, and agricultural programs.
❖ nonsensitive (or public) genetic information can be expected to cause little risk if it is disclosed, modified, or destroyed without authorization. while few controls may be required to protect the confidentiality of nonsensitive genetic information, controls should be in place to prevent unauthorized modification or destruction of nonsensitive information. examples of nonsensitive information are material and data sourced from non-human entities that are excluded from the sensitive level if the resulting data are to be made publicly available within reason.
the genetic information ecosystem can be compromised in numerous ways, including purposefully adversarial activities and human error. organizations take steps to monitor and prevent error, and molecular biologists are skilled in laboratory techniques; however, they commonly do not have the expertise and resources to securely configure and operate these environments, nor are they enabled to do so by vendor service contracts and documentation. basic security features and tools, such as antivirus software, can easily be subverted, and advanced protections are not commonly implemented. much genetic data is already publicly available via open and semi-open databases, and dissemination practices are not properly addressed by regulations. there are wide-ranging motives behind adversaries targeting non-public genetic information (fayans et al.). numerous stakeholders, personnel, and insecure devices are relied upon along the path from sample collection to data dissemination. depending on the scale of an exploit, hundreds to millions of people could be compromised. local attacks could lead to certain devices, stakeholders, and individuals being affected, while supply chain and remote attacks could lead to global-scale impact. widespread public dissemination and lack of inherent security controls equate to millions of individuals and their relatives having substantial risk imposed upon them. genetic data can be used to identify an individual (lin et al.) and predict their physical characteristics (li et al.; lippert et al.), and capabilities for familial matching are increasing, with the ability to match individuals to distant relatives (edge et al.; ney et al.).
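the identifiability and familial-matching concerns noted above can be illustrated with a toy comparison of genotype profiles. the sketch below is a minimal illustration with invented data, not the method used by genealogy services, which rely on shared identity-by-descent segments and far larger marker panels.

```python
# Illustrative sketch only: a crude identity-by-state (IBS) comparison between two
# genotype profiles, to show why even "anonymous" variant data supports matching.
# Genotypes are coded 0/1/2 (copies of the alternate allele) at shared marker positions.

def ibs_similarity(genotypes_a, genotypes_b):
    """Return the fraction of markers at which two genotype profiles agree."""
    if len(genotypes_a) != len(genotypes_b) or not genotypes_a:
        raise ValueError("profiles must be non-empty and cover the same markers")
    matches = sum(1 for a, b in zip(genotypes_a, genotypes_b) if a == b)
    return matches / len(genotypes_a)

if __name__ == "__main__":
    # Hypothetical toy profiles over ten markers.
    person = [0, 1, 2, 1, 0, 0, 2, 1, 1, 0]
    relative = [0, 1, 2, 1, 1, 0, 2, 1, 0, 0]   # differs at two markers
    stranger = [2, 0, 0, 2, 1, 1, 0, 0, 2, 1]
    print("relative similarity:", ibs_similarity(person, relative))   # 0.8
    print("stranger similarity:", ibs_similarity(person, stranger))   # 0.0
```

with real panels of hundreds of thousands of markers, even much weaker similarity signals are enough to place an unknown sample within a family, which is why de-identification by removing names and metadata does not anonymize the genotype itself.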
identifiability of genetic information is a critical challenge leading to growing consumer privacy concerns (baig et al., ) , and behavioral predictions from genetic information are gaining traction to produce stronger predictors year over year (gard et al., ; johnson et al., ) . furthermore, many diseases and negative health outcomes have genetic determinants, meaning that genetic data can reveal sensitive health information about individuals and families (sawaya et al., ) . these issues pale in comparison to the weaponization of genetic information. genetics can inform both a doctor and an adversary in the same way, revealing weaknesses that can be used for treatment or exploited to cause disease (sawaya et al., ) . the creation of bioweapons utilizes the same processes as designing vaccines and medicines to mitigate infectious diseases, namely access to an original infectious organism or microbe and its genetic information (berger and roderick, ) . this alarming scenario was thought to be unlikely only six years ago as the necessary specialized skills and expertise were not widely distributed. since then, access to sensitive genetic data has increased, such as the genome sequences of the novel coronavirus (sars-cov- ) (sah et al., ) , african swine fever (mazur-panasiuk et al., ) , and the spanish influenza a (h n ) (tumpey et al., ) viruses. synthetic biology capabilities, skill sets, and resources have also proliferated (ney et al., ) . sars-cov- viral clone creation from synthetic dna fragments was possible only weeks after the sequences became publicly available (thao et al., ) . this same technology can be utilized to modify noninfectious microbes and microorganisms to create weaponizable infectious agents (berger and roderick, ; chosewood and wilson, ; salerno and koelm, ) . covid- susceptibility, symptoms, and mortality all have genetic components (taylor et al., ; ellinghaus et al., ; nguyen et al., ) , demonstrating how important it will be to safeguard genetic information in the future to avoid targeted biological weapons. additionally, microbiological data cannot be determined to have infectious origins until widespread infection occurs or until it is sequenced and deeply analyzed (chosewood and wilson, ; salerno and koelm, ) ; hence, data that is potentially sensitive also needs to be protected throughout the entire ecosystem. the genetic information ecosystem is a distributed cyber-physical system containing numerous stakeholders (supplementary material, appendix ), personnel, and devices for computing and networking purposes. the ecosystem is divided into the pre-analytical, analytical, and postanalytical phases that are synonymous with: (i) collection, storage, and distribution of biological samples, (ii) generation and processing of genetic data, and (iii) storage and sharing of genetic data (supplementary material, appendix ). this ecosystem introduces many pathways, or attack vectors, for malicious access to information and systems ( figure ). the genetic information ecosystem and accompanying threat landscape. the genetic information ecosystem is divided into three phases: pre-analytical, analytical, and post-analytical. the analytical phase is further divided into wet laboratory preparation, dna sequencing, and bioinformatic pipeline subphases. in its simplest form, this system is a series of inputs and outputs that are either biological material, data, or logical queries on data. 
every input, output, device, process, and stakeholder is vulnerable to exploitation via these attack vectors. unauthorized physical access or insider threats could allow for theft of assets or the use of other attack vectors on any phase of the ecosystem (walsh and streilein). small independent laboratories do not often have resources to implement strong physical security. large institutions are often enabled to maintain strong physical security, but the relatively large number of individuals and devices that need to be secured can create a complex attack surface. ultimately, the strongest cybersecurity can be easily circumvented by weak physical security. insider threats are a problem for information security because personnel possess deeper knowledge of an organization and its systems. many countries rely on foreign nationals working in biotechnological fields that may be susceptible to foreign influence. citizens can also be susceptible to foreign influence. personnel could introduce many exploits on-site if coerced or threatened. even when not acting in a purposefully malicious manner, personnel can unintentionally compromise the integrity and availability of genetic information through error (us office of the inspector general). appropriate safeguards should be in place to ensure that privileged individuals are empowered to do their work correctly and efficiently, but all activities should be documented and monitored when working with sensitive genetic information. sample collection, storage, and distribution processes have received little recognition as legitimate points for the compromise of genetic information. biological samples as inputs into this ecosystem can be modified maliciously to contain encoded malware (ney et al.), or they could be degraded, modified, or destroyed to compromise the material's and resulting data's integrity and availability. sample repository and storage equipment are usually connected to a local network for monitoring purposes. a remote or local network attack could sabotage connected storage equipment, causing samples to degrade or be destroyed. biorepositories and the collection and distribution of samples could be targeted to steal numerous biological samples, such as in known genetic testing scams. targeted exfiltration of small numbers of samples may be difficult to detect. sensitive biological material should be safeguarded in storage and transit, and when not needed for long-term biobanking, it should be destroyed following successful analysis. other organizations that handle genetic material could be targeted for the theft of samples and processed dna libraries. the wet laboratory preparation and dna sequencing subphases last several weeks and produce unused waste and stored material. at the conclusion of sequencing runs, the consumables that contain dna molecules are not always considered sensitive. these items can be found unwittingly maintained in many sequencing laboratories. several cases have been documented of dna being recovered and successfully sequenced after aging for years at room temperature and in non-controlled environments (colette et al.). dna sequencing systems and laboratories are multifaceted in their design and threat profile.
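the point above about malware encoded into synthesized dna (ney et al.) rests on the simple fact that arbitrary bytes map directly onto nucleotide sequences. the sketch below is a toy two-bits-per-base codec, ignoring the synthesis constraints and error correction a real payload would require; the payload string is purely hypothetical.

```python
# Illustrative sketch only: encode and decode arbitrary bytes as a DNA sequence,
# two bits per base. Real DNA data storage and the published exploit-in-DNA
# demonstration add redundancy and avoid problematic sequence motifs.

BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):                 # most-significant bit pair first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):                # four bases per byte
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    payload = b"#!/bin/sh"                         # hypothetical payload fragment
    encoded = bytes_to_dna(payload)
    print(encoded)
    assert dna_to_bytes(encoded) == payload
```

the implication for defenders is that a physical sample is also an untrusted data input: whatever is synthesized into it will eventually pass, as bytes, through the parsers downstream of the sequencer.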
dna sequencing instruments have varying scalability of throughput, cost, and unique considerations for secure operation (table ) . sequencing instruments have a built-in computer and commonly have connected computers and servers for data storage, networking, and analytics. these devices contain a number of different hardware components, firmware, an operating system, and other software. some contain insecure legacy versions of operating system distributions. sequencing systems usually have wireless or wired local network connections to the internet that are required for device monitoring, maintenance, data transmission, and analytics in most operations. wireless capabilities and bluetooth technology within laboratories present unnecessary threats to these systems, as any equipment connected to laboratory networks is a potential network entry point. device vendors obtain various internal hardware components from several sources and integrate them into laboratory devices that contain vendor-specific intellectual property and software. generic hardware components are often produced overseas, which is cost effective but leads to insecurities and a lack of hardening for specific end-use purposes. hardware vulnerabilities could be exploited on-site, or they can be implanted during manufacturing and supply-chain processes for widespread and unknown security issues (fayans et al., ; ender et al., ; shwartz et al., ; anderson and kuhn, ) . such hardware issues are unpatchable and will remain with devices forever until newer devices can be manufactured to replace older versions. unfortunately, adversaries can always shift their techniques to create novel vulnerabilities within new hardware in a continual vicious cycle. third-party manufacturers and device vendors implement firmware in these hardware components. embedded device firmware has been shown to be more susceptible to cyber-attacks than other forms of software (shwartz et al., ) . in-field upgrades are difficult to implement, and like hardware, firmware and operating systems of sequencing systems can be maliciously altered within the supply chain (fayans et al., ) . a firmware-level exploit would allow for the evasion of operating system, software, and application-level security features. firmware exploits can remain hidden for long periods, even after hardware replacements or wiping and restoring to default factory settings. furthermore, operating systems have specific disclosed common vulnerabilities and exposures (cves) that are curated by the mitre organization and backed by the us government . with ubiquitous implementation in devices across all phases of the ecosystem, these software issues are especially concerning but can be partially mitigated by frequent updates. however, operating systems and firmware are typically updated every six to twelve months by a field agent accessing a sequencing device on site. device operators are not allowed to modify the device in any way, yet they are responsible for some security aspects of this equipment. additionally, researchers have confirmed the possibility of index hopping, or index misassignment, by sequencing device software, resulting in customers receiving confidential data from other customers (ney et al., ) or downstream data processors inputting incorrect data into their analyses. dna sequencing infrastructure is proliferating. illumina, the largest vendor of dna sequencing instruments, accounted for % of the world's sequencing data in by their own account . 
in , illumina had , sequencers implemented globally capable of producing a total daily output of tb (erlich), with many of these instruments housed outside of the us and europe. in , technology developed by beijing genomics institute finally resulted in the $ human genome (drmanac), while us prices remain around $ ,. overseas organizations can be third-party sequencing service providers for direct-to-consumer (dtc) companies and other stakeholders. shipping internationally for analysis is less expensive than local services (office of the us trade representative), indicating that genetic data could be aggregated globally by nation-states and other actors during the analysis phase.
(see https://cve.mitre.org/ and https://www.cisa.gov/news/ / / /fbi-and-cisa-warn-against-chinese-targeting-covid- -research-organizations)
raw signal sequencing data are stored on a sequencing system's local memory and are transmitted to one or more endpoints. transmitting data across a local network requires internal information technology (it) configurations. vendor documentation usually depends upon implementing a firewall to secure sequencing systems, but doing so correctly requires deep knowledge of secure networking and vigilance of network activity. documentation also commonly mentions disabling and enabling certain network protocols and ports and further measures that can be difficult for most small- to medium-sized organizations if they lack dedicated it support. laboratories and dna sequencing systems are connected to many third-party services, and laboratories have little control over the security posture of these connections. independent cloud platforms and dna sequencing vendors' cloud platforms are implemented for bioinformatic processing, data storage, and device monitoring and maintenance capabilities (table ). a thorough security assessment of cloud services remains unfulfilled in the genomics context. multifactor authentication, role- and task-based access, and many other security measures are not common in these platforms. misconfigurations to cloud services and remote communications are a primary vulnerability to genetic information, demonstrated by prior breaches, remote desktop protocol issues affecting illumina devices, and a disclosed vulnerability in illumina's basespace application program interface. laboratory information management systems (lims) are also frequently implemented within laboratories and connected to sequencing systems and laboratory networks (roy et al.). dna sequencing vendors provide their own lims as part of their cloud offerings. even when lims and cloud platforms meet all regulatory requirements for data security and privacy, they are handling data that is not truly anonymized and therefore remains identifiable and sensitive. furthermore, specific cves have been disclosed for dnatools' dnalims product that were actively exploited by a foreign nation-state. phishing attacks are another major threat, as email services add to the attack surface in many ways. sequencing service providers often share links granting access to datasets via email. these email chains are a primary trail of transactions that could be exploited to exfiltrate data on clients, metadata of samples, or genetic data itself. some laboratories transmit raw data directly to an external hard drive per customer or regulatory requirements. reducing network activity in this way can greatly minimize the threat surface of sensitive genetic information.
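as one inexpensive control for the hard-drive hand-offs mentioned above, a checksum manifest written before a drive leaves the instrument and verified on receipt can detect files that were modified, removed, or injected in transit. the sketch below assumes hypothetical paths and a simple manifest format; it complements, rather than replaces, physical and procedural safeguards.

```python
# Illustrative sketch only: build and verify a SHA-256 manifest for run files
# copied to an external drive, so the receiving side can detect tampering.

import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(run_dir, manifest_path):
    """Record the SHA-256 of every file under run_dir before the drive leaves the lab."""
    with open(manifest_path, "w") as manifest:
        for root, _dirs, files in os.walk(run_dir):
            for name in sorted(files):
                full = os.path.join(root, name)
                rel = os.path.relpath(full, run_dir)
                manifest.write(f"{sha256_of(full)}  {rel}\n")

def verify_manifest(run_dir, manifest_path):
    """Return a list of files whose current hash no longer matches the manifest."""
    mismatches = []
    with open(manifest_path) as manifest:
        for line in manifest:
            expected, rel = line.rstrip("\n").split("  ", 1)
            full = os.path.join(run_dir, rel)
            if not os.path.exists(full) or sha256_of(full) != expected:
                mismatches.append(rel)
    return mismatches
```

the manifest itself must travel over a separate, trusted channel (or be signed); otherwise an adversary who controls the drive can simply rewrite it along with the data.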
separating networks and devices from other networks, or air gapping, while using hard drives is possible, but even air-gapped systems have been shown to be vulnerable to compromise (guri; guri et al.). sequencing devices are still required to be connected to the internet for maintenance and are often connected between offline operations. hard drives can be physically secured and transported; however, these methods are time and resource intensive, and external drives could be compromised for the injection of modified software or malware. bioinformatic software has not been commonly scrutinized in security contexts or subjected to the same adversarial pressure as other more mature software. open-source software is widely used across genomics, acquired from several online code repositories, and heavily modified for individual purposes, but it is only secure when security researchers are incentivized to assess these products. in a specialized and niche industry like genomics and bioinformatics this is typically not the case. bioinformatic programs have been found to be vulnerable due to poor coding practices, insecure function usage, and buffer overflows (ney et al.). many researchers have uncovered that algorithms can be forced to misclassify by intentionally modifying data inputs, breaking the integrity of any resulting outputs (finlayson et al.). nearly every imaginable algorithm, model type, and use case has been shown to be vulnerable to this kind of attack across many data types, especially those relevant to raw signal and sequencing data formats (biggio and roli). similar attacks could be carried out in the processing of raw signal data internal to a sequencing system or on downstream bioinformatic analyses accepting raw sequencing data or processed data as an input. alarming amounts of human and other sensitive genetic data are publicly available. several funding and publication agencies require public dissemination, so researchers commonly contribute to open and semi-open databases (shi and wu). healthcare providers either house their own internal databases or disseminate to third-party databases. their clinical data is protected like any other healthcare information as required by regulations; however, this data can be sold and aggregated by external entities. dtc companies keep their own internal databases closely guarded and can charge steep prices for third-party access. data sharing is prevalent when the price is right. data originators often have access to their genetic data and test results for download in plaintext. these reports can then be uploaded to public databases, such as gedmatch and dna.land, for further analyses, including finding distant genetic relatives with a shared ancestor. a well-known use of such identification tactics was the infamous golden state killer case (edge and coop). data sharing is dependent upon the data controller's wants and needs, barring any legal or business requirements from other involved stakeholders. genetic database vulnerabilities have been well-studied and disclosed (edge and coop; ney et al.; naveed et al.; erlich and narayanan; gymrek et al.). for example, the contents of the entire gedmatch database could be leaked by uploading artificial genomes (ney et al.). such an attack would violate the confidentiality of more than a million users' and their relatives' genetic data because the information is not truly anonymized.
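several of the bioinformatic weaknesses noted above stem from tools that trust input files blindly; a cheap first defense is to reject malformed records before they reach downstream parsers. the sketch below validates fastq records; the allowed alphabet and length bound are assumptions for illustration, not a standard, and real pipelines would add far richer checks.

```python
# Illustrative sketch only: defensive validation of FASTQ records (4 lines per
# record: @header, sequence, "+" separator, quality string) before downstream use.

ALLOWED_BASES = set("ACGTN")
MAX_READ_LENGTH = 1_000_000          # hypothetical sanity bound

def validate_fastq(path):
    """Yield (record_number, error_message) for every malformed record found."""
    with open(path) as handle:
        record = 0
        while True:
            header = handle.readline()
            if not header:
                break                                   # end of file
            seq = handle.readline().rstrip("\n")
            plus = handle.readline()
            qual = handle.readline().rstrip("\n")
            record += 1
            if not header.startswith("@") or not plus.startswith("+"):
                yield record, "malformed header or separator line"
                continue
            if not 0 < len(seq) <= MAX_READ_LENGTH:
                yield record, "sequence length out of bounds"
            elif set(seq.upper()) - ALLOWED_BASES:
                yield record, "unexpected characters in sequence"
            elif len(qual) != len(seq):
                yield record, "quality string length does not match sequence"

if __name__ == "__main__":
    import sys
    for number, problem in validate_fastq(sys.argv[1]):
        print(f"record {number}: {problem}")
```

input validation of this kind does not address adversarial inputs that are well-formed but crafted to mislead models or callers, which is a separate problem noted above; it simply narrows the attack surface exposed to fragile parsers.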
even social media posts can be filtered for keywords indicative of participation in genetic research studies to identify research participants in public databases (liu et al.). all told, tens of millions of research participants, consumers, and relatives are already at risk. adversarial targeting of genetic information largely depends upon the sensitivity, quantity, and efficiency of information compromise for attackers, leading to various states in the likelihood of a breach or exposure scenario. the impact of a compromise is determined by a range of factors, including the size of the population at risk, negative consequences to stakeholders, and the capabilities and scale of adversarial activity. likelihood and impact both ultimately inform the level of risk facing stakeholders during ecosystem phases (figure ).
figure : risk to the genetic information ecosystem, with likelihood judged by the threats and opportunities available to adversaries and the efficiency of an attack, and impact judged by the number of people affected and the current and emerging consequences to stakeholders; likelihood and impact are scored from low (+) to extreme (+ + + + +).
security is a spectrum; stakeholders must do everything they can to chase security as a best practice. securing genetic information is a major challenge in this rapidly evolving ecosystem. attention has primarily been placed on the post-analytical phase of the genetic information ecosystem for security and privacy, but adequate measures have yet to be universally adopted. the pre-analytical and analytical phases are also highly vulnerable points for data compromise that must be addressed. adequate national regulations are needed for security and privacy enforcement, incentivization, and liability, but legal protection is dictated by regulators' responses and timelines. however, data originators, controllers, and processors can take immediate action to protect their data. genetic information security is a shared responsibility between sequencing laboratories and device vendors, as well as all other involved stakeholders. to protect genetic information, laboratories, biorepositories, and other data processors need to create strong organizational policies and reinvest in their physical and cyber infrastructure. they also need to determine the sensitivity of their data and material and take necessary precautions to safeguard sensitive genetic information. data controllers, especially healthcare providers and dtc companies, should reevaluate their data sharing models and methods, with special consideration for the identifiability of genetic data. device vendors need to consider security when their products are being designed and manufactured. many of these recommendations go against the current paradigms in genetics and related industries and will therefore take time, motivation, and incentivization before being actualized, with regulation being a critical factor. in order to secure genetic information and protect all stakeholders in the genetic information ecosystem, further in-depth assessments of this threat surface will be required, and novel security and privacy approaches will need to be developed. sequencing systems, bioinformatics software, and other biotechnological infrastructure need to be analyzed to fully understand their vulnerabilities.
this will require collaborative engagement between stakeholders to implement improved security measures into genetic information systems (moritz et al., ; berger and schneck, ) . the development and implementation of genetic information security will foster a healthy and sustainable bioeconomy without damaging privacy or security. there can be security without privacy, but privacy requires security. these two can be at odds with one another in certain contexts. for example, personal security aligns with personal privacy, whereas public security can require encroachment on personal privacy. a similar story is unfolding within genetics. genetic data must be shared for public good, but this can jeopardize personal privacy. however, genetic data necessitates the strongest protections possible for public security and personal security. appropriate genetic information security will simultaneously protect everyone's safety, health, and privacy. the inspiration for this work occurred while performing several security assessments and penetration tests of dna sequencing laboratories and other stakeholders. initially, an analysis of available literature and technical documentation (n= ) was performed, followed by confidential semi-structured interviews (n= ) with key personnel from multiple relevant stakeholders. the study's population consisted of leaders and technicians from government agencies (n= ) and organizations in small, medium, and large enterprises (n= ) across the united states, including california, colorado, district of columbia, massachusetts, montana, and virginia. several stakeholders allowed access to their facilities for observing environments and further discussions. some stakeholders allowed in-depth assessments of equipment, networks, and services. gs, ss, and dn are founders and owners of geneinfosec inc. and are developing technology and services to protect genetic information. geneinfosec inc. has not received us federal research funding. ah declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. genetics stakeholders are categorized based upon their influence, contributions, and handling of biological samples and resulting genetic data (supplementary table ). asymmetries exist between stakeholders in these regards . data originators are humans that voluntarily or involuntarily are the source of biological samples or are investigators collecting samples from nonhuman specimens. examples of data originators include consumers, healthcare patients, military personnel, research subjects, migrants, criminals, and their relatives. data controllers are entities that are legally liable for and dictate the use of biological samples and resulting data. in humanderived contexts, data controllers are typically healthcare providers, researchers, law enforcement agencies, or dtc companies. data processors are entities that collect, store, generate, analyze, disseminate, and/or apply biological samples or genetic data. data processors may also be data originators and data controllers. examples include biorepositories, dna sequencing laboratories, researchers, cloud and other service providers, and supply chain entities responsible for devices, software and materials. regulators oversee this ecosystem and the application and use of biotechnology, biological samples, genetic data, and market/industry trends at the transnational, national, local, and organizational levels. 
biological samples and metadata from the samples must first be collected once a data originator or controller determines to proceed with genetic testing. biological samples can be sourced from any biological entity relying on nucleic acids for reproduction, replication, and other processes, including non-living microbes (e.g., viruses, prions), microorganisms (e.g., bacteria, fungi), and organisms (e.g., plants, animals). samples are typically de-identified of metadata and given a numeric identifier, but this is largely determined by the interests of data controllers and the regulations that may pertain to various sample types. metadata includes demographic details, inclusion and exclusion criteria, pedigree structure, health conditions critical for secondary analysis, and other identifying information . it can also be in the form of quality metrics obtained during the analysis phase. samples are then stored in controlled environments at decreased temperature, moisture, light, and oxygen to avoid degradation. sample repositories can be internal or third-party infrastructure housing small to extremely large quantities of material for short-and long-term storage. following storage, samples are distributed to an internal or third-party laboratory for dna sequencing preparations. the wet laboratory preparation phase chemically prepares biological samples for sequencing with sequencing-platform-dependent methods. this phase can be performed manually with time-and labor-intensive methods, or it can be highly automated to reduce costs, run-time, and error. common initial preparation steps involve removing contaminants and unwanted material from biological samples and extracting and purifying samples' nucleic acids. if rna is to be sequenced, it is usually converted into complementary dna. once dna has been isolated, a library for sequencing is created via size-selection, sequencing adapter ligation, and other chemical processes. adapters are synthetic dna molecules attached to dna fragments for sequencing and contain sample indexes, or identifiers. indexes allow for multiplexing sequencing runs with many samples at once to increase throughput, decrease costs, and to identify dna fragments to their sample source. to begin sequencing, prepared libraries are loaded into a dna sequencing instrument with the required materials and reagents. laboratory personnel must login to the instrument and any connected services, such as cloud services or information management systems, and configure a run to initiate sequencing. a single sequencing run can generate gigabytes to terabytes of raw sequencing data and last anywhere from a few hours to multiple days, requiring the devices to commonly be left unmonitored during operation. raw data can be stored on the instrument's local memory and are transmitted to one or more of the following endpoints during or following a sequencing run: (i) local servers, computers, or other devices within the laboratory; (ii) cloud services of the vendor or other service providers; and (iii) external hard drives directly tethered to the sequencer. data paths largely depend on the sequencing platform, the laboratory's capabilities and infrastructure, and the sensitivity of data being processed. certain regulations require external hard drive use and offline data storage, analysis, and transmission. bioinformatic pipelines convert raw data through a series of software tools into usable forms. 
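as a concrete illustration of such a tool chain (the stages are described in detail below), the following sketch wires together commonly used open-source tools for alignment and variant calling. the file names and reference genome are placeholders, the tool choice is an assumption for illustration rather than any vendor's pipeline, and production pipelines add duplicate marking, recalibration, and extensive quality control.

```python
# Illustrative sketch only: chaining bwa, samtools, and bcftools into the
# alignment and variant-calling stages of a short-read pipeline. Assumes the
# tools are installed and the reference has been indexed with `bwa index`.

import subprocess

REFERENCE = "reference.fa"            # hypothetical reference genome
READS_1 = "sample_R1.fastq.gz"        # hypothetical paired-end reads from the sequencer
READS_2 = "sample_R2.fastq.gz"

def run(cmd, **kwargs):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True, **kwargs)

# 1. Align basecalled reads to the reference genome.
with open("sample.sam", "w") as sam:
    run(["bwa", "mem", REFERENCE, READS_1, READS_2], stdout=sam)

# 2. Sort and index the alignments.
run(["samtools", "sort", "-o", "sample.bam", "sample.sam"])
run(["samtools", "index", "sample.bam"])

# 3. Call variants against the reference; only differences are kept (VCF output).
run(["bcftools", "mpileup", "-f", REFERENCE, "-Ou", "-o", "sample.pileup.bcf", "sample.bam"])
run(["bcftools", "call", "-mv", "-Oz", "-o", "sample.vcf.gz", "sample.pileup.bcf"])
```

each intermediate file in such a chain (sam, bam, vcf) is itself sensitive, which is why the storage, transmission, and eventual destruction of these artifacts matter as much as the security of the final results.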
raw signal data include images, chemical signal, electrical current signal, and other forms of signal data dependent upon the sequencing platform. primary analyses convert raw signal data into sequence data with accompanying quality metrics through a process known as basecalling. many sequencing instruments can perform these functions. the length of each dna molecule sequenced is orders of magnitude smaller than the genes or genomes of interest, so basecalled sequence data must then be aligned to determine each read's position within a genome or genomic region. this aligned sequence data is then compared to reference genomes sourced from databases through a procedure known as variant detection to determine differences between a sample's data and the accepted normal genomic sequence. only the unique genetic variants of a sample are retained in variant call format (vcf) files, a common final processed data form. vcf files are vastly smaller than the gigabytes to terabytes of raw data initially produced, making them an efficient format for long-term storage, dissemination, and analysis purposes. however, this file format exists as a security threat for sensitive genetic data because these files are personally identifiable and contain sensitive health information. following data analyses, processed data are integrated with metadata and ultimately interpreted for the data controller's purpose. metadata and genetic data are often housed together, and exploiting this combined information could lead to numerous risks and threats to the data originators, their relatives, and the liable entities involved along the data path. secondary analyses can be performed on datasets by data controllers and third-party data processors to answer any number of relevant research questions, such as in diagnostics or ancestry analysis. genetic research is only powerful when large datasets are created containing numerous data points from thousands to millions of samples. therefore, genetic data is widely distributed and accessible via remote means across numerous databases and stakeholders.
low cost attacks on tamper resistant devices
"i'm hoping they're an ethical company that won't do anything that i'll regret": users perceptions of at-home dna testing companies
national and transnational security implications of big data in the life sciences
national and transnational security implications of asymmetric access to and use of biological data
wild patterns: ten years after the rise of adversarial machine learning
biosafety in microbiological and biomedical laboratories. us department of health and human services
adverse effect of air exposure on the stability of dna stored at room temperature
first $ genome sequencing enabled by new extreme throughput dnbseq platform
how lucky was the genetic investigation in the golden state killer case
attacks on genetic privacy via uploads to genealogical databases
linkage disequilibrium matches forensic genetic records to disjoint genomic marker sets
genomewide association study of severe covid- with respiratory failure
the unpatchable silicon: a full break of the bitstream encryption of xilinx -series fpgas
routes for breaching and protecting genetic privacy
identity inference of genomic data using longrange familial searches
cyber security threats in the microbial genomics era: implications for public health
adversarial attacks on medical machine learning
genetic influences on antisocial behavior: recent advances and future directions. current opinion in psychology
the authors would like to acknowledge the confidential research participants and collaborators on this study for their time, resources, and interest in bettering genetic information security. thank you to cory cranford, arya thaker, ashish yadav, and dr. kevin gifford and dr. daniel massey of the department of computer science, formerly of the technology, cybersecurity and policy program, at the university of colorado boulder for their support of this work.

appendix: overview of the genetic information ecosystem processes

key: cord- - q eg z authors: keller, mikaela; blench, michael; tolentino, herman; freifeld, clark c.; mandl, kenneth d.; mawudeku, abla; eysenbach, gunther; brownstein, john s. title: use of unstructured event-based reports for global infectious disease surveillance date: - - journal: emerg infect dis doi: . /eid . sha: doc_id: cord_uid: q eg z

free or low-cost sources of unstructured information, such as internet news and online discussion sites, provide detailed local and near real-time data on disease outbreaks, even in countries that lack traditional public health surveillance. to improve public health surveillance and, ultimately, interventions, we examined three primary systems that process event-based outbreak information: global public health intelligence network, healthmap, and epispider. despite similarities among them, these systems are highly complementary because they monitor different data types, rely on varying levels of automation and human analysis, and distribute distinct information. future development should focus on linking these systems more closely to public health practitioners in the field and establishing collaborative networks for alert verification and dissemination. such development would further establish event-based monitoring as an invaluable public health resource that provides critical context and an alternative to traditional indicator-based outbreak reporting.

international travel and movement of goods increasingly facilitates the spread of pathogens across and among nations, enabling pathogens to invade new territories and adapt to new environments and hosts ( ) ( ) ( ) . officials now need to consider worldwide disease outbreaks when determining what potential threats might affect the health and welfare of their nations ( ) . in industrialized countries, unprecedented efforts have built on indicator-based public health surveillance, and monitoring of clinically relevant data sources now provides early indication of outbreaks ( ) . in many countries where public health infrastructure is rudimentary, deteriorating, or nonexistent, efforts to improve the ability to conduct electronic disease surveillance include more robust data collection methods and enhanced analysis capability ( , ) . however, in these parts of the world, basing timely and sensitive reporting of public health threats on conventional surveillance sources remains challenging. lack of resources and trained public health professionals poses a substantial roadblock ( ) ( ) ( ) . furthermore, reporting emerging infectious diseases has certain constraints, including fear of repercussions on trade and tourism, delays in clearance through multiple levels of government, a tendency to err on the conservative side, and inadequately functioning or nonexistent surveillance infrastructure ( ) .
even with the recent enactment of international health regulations in , no guarantee yet exists that broad compliance will be feasible, given the challenges associated with reporting mechanisms and multilateral coordination ( ) . in many countries, free or low-cost sources of unstructured information, including internet news and online discussion sites (figure), could provide detailed local and near real-time data on potential and confirmed disease outbreaks and other public health events ( , , ) ( ) ( ) ( ) ( ) ( ) . these event-based informal data sources provide insight into new and ongoing public health challenges in areas that have limited or no public health reporting infrastructure but have the highest risk for emerging diseases ( ) . in fact, event-based informal surveillance now represents a critical source of epidemic intelligence; almost all major outbreaks investigated by the world health organization (who) are first identified through these informal sources ( , ) . with a goal of improving public health surveillance and, ultimately, intervention efforts, we (the architects, developers, and methodologists for the information systems described herein) reviewed three of the primary active systems that process unstructured (free-text), event-based information on disease outbreaks: the global public health intelligence network (gphin), the healthmap system, and the epispider project (semantic processing and integration of distributed electronic resources for epidemics [and disasters]; www.epispider.net). our report is the result of a joint symposium from the american medical informatics association annual conference in . despite key differences, all systems face similar technologic challenges, including 1) topic detection and data acquisition from a high-volume stream of event reports (not all related to disease outbreaks); 2) data characterization, categorization, or information extraction; 3) information formatting and integration with other sources; and 4) information dissemination to clients or, more broadly, to the public. each system tackles these challenges in unique ways, highlighting the diversity of possible approaches and public health objectives. our goal was to draw lessons from these early experiences to advance overall progress in this recently established field of event-based public health surveillance. after summarizing these systems, we compared them within the context of this new surveillance framework and outlined goals for future development and research.

background: gphin took early advantage of advancements in communication technologies to provide coordinated, near real-time, multisource, and multilingual information for monitoring emerging public health events ( , ) . in , a prototype gphin system was developed in a partnership between the government of canada and who. the objective was to determine the feasibility and effectiveness of using news media sources to continuously gather information about possible disease outbreaks worldwide and to rapidly alert international bodies of such events. the sources included websites, news wires, and local and national newspapers retrieved through news aggregators in english and french. after the outbreak of severe acute respiratory syndrome (sars), a new, robust, multilingual gphin system was developed and was launched november , , at the united nations. the gphin software application retrieves relevant articles every minutes ( hours/day, days/week) from news-feed aggregators (al bawaba [www.albawaba.com] and factiva
[www.factiva.com]) according to established search queries that are updated regularly. the matching articles are automatically categorized into > gphin taxonomy categories, which cover the following topics: animal, human, or plant diseases; biologics; natural disasters; chemical incidents; radiologic incidents; and unsafe products. articles with a high relevancy score are automatically published on the gphin database. the gphin database is also augmented with articles obtained manually from open-access web sites. each day, gphin handles ≈ , articles. this number drastically increases when events with serious public health implications, such as the finding of melamine in various foods worldwide, are reported. although the gphin computerized processes are essential for the management of information about health threats worldwide, the linguistic, interpretive, and analytical expertise of the gphin analysts makes the system successful. articles with relevancy below the "publish" threshold are presented to a gphin analyst, who reviews the article and decides whether to publish it, issue an alert, or dismiss it. additionally, the gphin analyst team conducts more in-depth tasks, including linking events in different regions, identifying trends, and assessing the health risks to populations around the world. english articles are machine-translated into arabic, chinese (simplified and traditional), farsi, french, russian, portuguese, and spanish. non-english articles are machine-translated into english. gphin has adopted a best-of-breed approach in selecting engines for machine translation. the lexicons associated with the engines are constantly being improved to enhance the quality of the output. as such, the machine-translated outputs are edited by the appropriate gphin analysts. the goal is not to obtain a perfect translation but to ensure comprehensibility of the essence of the article. users can view the latest list of published articles or query the database by using both boolean and translingual metadata search capabilities. in addition, notifications about events that might have serious public health consequences are immediately sent by email to users in the form of an alert. as an initial assessment of data collected during july through august , who retrospectively verified outbreaks, of which % were initially picked up and disseminated by gphin ( ) . outbreaks were reported in countries, demonstrating gphin's capacity to monitor events occurring worldwide, despite the limitation of predominantly english (with some french) media sources. one of gphin's earliest achievements occurred in december , when the system was the first to provide preliminary information to the public health community about a new strain of influenza in northern people's republic of china ( ) . during the sars outbreak, declared by who in march , the gphin prototype demonstrated its potential as an early-warning system by detecting and informing the appropriate authorities (e.g., who, public health agency of canada) of an unusual respiratory illness outbreak occurring in guangdong province, china, as early as november , . gphin was further able to continuously monitor and provide information about the number of suspected and probable sars cases reported worldwide on a near real-time basis. gphin's information was ≈ - days ahead of the official who report of confirmed and probable cases worldwide.
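the publish/review/dismiss triage that gphin applies to relevancy-scored articles, described above, can be sketched in a few lines. the thresholds, category labels, and toy keyword scorer below are hypothetical placeholders, not gphin's actual queries or parameters.

```python
# hedged sketch of threshold-based triage for relevancy-scored articles.
# thresholds and keyword weights are invented; the real system relies on
# curated multilingual queries, a taxonomy, and human analysts.

PUBLISH_THRESHOLD = 0.8   # hypothetical: auto-publish at or above this score
REVIEW_THRESHOLD = 0.4    # hypothetical: route to an analyst at or above this

KEYWORDS = {"outbreak": 0.5, "quarantine": 0.3, "influenza": 0.4, "recall": 0.2}

def relevancy_score(text):
    """Toy scorer: capped sum of keyword weights found in the article text."""
    text = text.lower()
    return min(1.0, sum(w for kw, w in KEYWORDS.items() if kw in text))

def triage(article_text):
    """Return the disposition of an article and its score."""
    score = relevancy_score(article_text)
    if score >= PUBLISH_THRESHOLD:
        return "publish", score
    if score >= REVIEW_THRESHOLD:
        return "analyst_review", score
    return "dismiss", score

if __name__ == "__main__":
    samples = [
        "officials confirm influenza outbreak and order quarantine measures",
        "reports of an unusual outbreak under investigation in a rural district",
        "local festival draws record crowds",
    ]
    for text in samples:
        print(triage(text), "-", text)
```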
in addition to outbreak reporting, gphin has also provided information that enabled public health officials to track global effects of the outbreak such as worldwide prevention and control measures, concerns of the general public, and economic or political effects. gphin is used daily by organizations such as who, the us centers for disease control and prevention (cdc), and the un food and agriculture organization.

operating since september , healthmap ( , ) is an internet-based system designed to collect and display information about new outbreaks according to geographic location, time, and infectious agent ( ) ( ) ( ) . healthmap thus provides a structure to information flow that would otherwise be overwhelming to the user or obscure important elements of a disease outbreak. healthmap.org receives , - , visits/day from around the world. it is cited as a resource on sites of agencies such as the united nations, national institute of allergy and infectious diseases, us food and drug administration, and us department of agriculture. it has also been featured in mainstream media publications, such as wired news and scientific american, indicating the broad utility of such a system that extends beyond public health practice ( , ) . on the basis of usage tracking of healthmap's internet site, we can infer that its most avid users tend to come from government-related domains, including who, cdc, european centre for disease prevention and control, and other national, state, and local bodies worldwide. although the question of whether this information has been used to initiate action will be part of an in-depth evaluation, we know from informal communications that organizations (ranging from local health departments to such national organizations as the us department of health and human services and the us department of defense) are leveraging the healthmap data stream for day-to-day surveillance activities. for instance, cdc's biophusion program incorporates information from multiple data sources, including media reports, surveillance data, and informal reports of disease events, and disseminates it to public health leaders to enhance cdc's awareness of domestic and global health events ( ) . the system integrates outbreak data from multiple electronic sources, including online news wires (e.g., google news), really simple syndication (rss) feeds, expert-curated accounts (e.g., promed-mail, a global electronic mailing list that receives and summarizes reports on disease outbreaks) ( ) , multinational surveillance reports (e.g., eurosurveillance), and validated official alerts (e.g., from who). through this multistream approach, healthmap casts a unified and comprehensive view of global infectious disease outbreaks in space and time. fully automated, the system acquires data every hour and uses text mining to characterize the data to determine the disease category and location of the outbreak. alerts, defined as information on a previously unidentified outbreak, are geocoded to the country scale with province-, state-, or city-level resolution for select countries. surveillance is conducted in several languages, including english, spanish, russian, chinese, and french. the system is currently being ported to other languages, such as portuguese and arabic. after being collected, the data are aggregated by source, disease, and geographic location and then overlaid on an interactive map for user-friendly access to the original report.
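a minimal sketch of the automated characterization just described (assigning a disease and location tag to each incoming report, then aggregating alerts by source, disease, and location) is shown below. the keyword dictionaries and sample reports are hypothetical and far simpler than the actual text-mining pipeline.

```python
# hedged sketch of rule-based tagging and aggregation of outbreak reports.
# dictionaries and sample reports are invented; real systems use much richer
# text mining, gazetteers, and geocoding.

from collections import defaultdict

DISEASE_TERMS = {"cholera": "cholera", "avian influenza": "avian influenza",
                 "dengue": "dengue"}
LOCATION_TERMS = {"jakarta": "indonesia", "lagos": "nigeria", "hanoi": "viet nam"}

def tag_report(text):
    """Return (disease, country) tags found in a report, or None when absent."""
    text = text.lower()
    disease = next((d for term, d in DISEASE_TERMS.items() if term in text), None)
    country = next((c for term, c in LOCATION_TERMS.items() if term in text), None)
    return disease, country

def aggregate(reports):
    """Group tagged reports by (source, disease, country) for display or digests."""
    groups = defaultdict(list)
    for source, text in reports:
        disease, country = tag_report(text)
        if disease and country:
            groups[(source, disease, country)].append(text)
    return groups

if __name__ == "__main__":
    reports = [
        ("news wire", "suspected cholera cases reported near lagos markets"),
        ("mailing list", "avian influenza detected in poultry outside hanoi"),
        ("news wire", "more cholera admissions in lagos hospitals"),
    ]
    for key, items in aggregate(reports).items():
        print(key, len(items), "report(s)")
```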
healthmap also addresses the computational challenges of integrating multiple sources of unstructured information by generating meta-alerts, color coded on the basis of the data source's reliability and report volume. although information relating to infectious disease outbreaks is collected, not all information has relevance to every user. the system designers are especially concerned with limiting information overload and providing focused news of immediate interest. thus, after a first categorization step into locations and diseases, a second round of category tags is applied to the articles to improve filtering. the primary tags include 1) breaking news (e.g., a newly discovered outbreak); 2) warning (initial concerns of disease emergence, e.g., in a natural disaster area); 3) follow-up (reference to a past outbreak); 4) background/context (information on disease context, e.g., preparedness planning); and 5) not disease-related (information not relating to any disease [ - are filtered from display]). duplicate reports are also removed by calculating a similarity score based on text and category matching. finally, in addition to providing mapped content, each alert is linked to a related information window with details on reports of similar content as well as recent reports concerning either the same disease or location and links for further research (e.g., who, cdc, and pubmed). healthmap processes an average of . most alerts come from news media ( . %), followed by promed ( . %) and multinational agencies ( . %).

the epispider project was designed in january to serve as a visualization supplement to the promed-mail reports. through use of publicly available software, epispider was able to display the topic intensity of promed-mail reports on a map. additionally, epispider automatically converted the topic and location information of the reports into rss feeds. usage tracking showed, initially, that the rss feeds were more popular than the maps. transforming reports to a semantic online format (w3c semantic web) makes it possible to combine emerging infectious disease content with similarly transformed information from other internet sites such as the global disaster alert and coordination system (gdacs) website (www.gdacs.org). the broad effects of disasters often increase illness and death from communicable diseases, particularly where resources for healthcare infrastructure have been lacking ( , ) . by merging these online media sources (promed-mail and gdacs), epispider demonstrates how distributed, event-based, unstructured media sources can be integrated to complement situational awareness for disease surveillance. epispider connects to news sites and uses natural language processing to transform free-text content into structured information that can be stored in a relational database. for promed reports, the following fields are extracted: date of publication; list of locations (country, province, or city) mentioned in the report; and topic. epispider parses location names from these reports and georeferences them using the georeferencing services of yahoo maps (http://maps.yahoo.com), google maps (http://maps.google.com), and geonames (www.geonames.org). each news report that has location information can be linked to relevant demographic- and health-specific information (e.g., population, per capita gross domestic product, public health expenditure, and physicians/ , population).
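the location parsing and linking step just described can be illustrated with a small stand-alone sketch. the gazetteer, coordinates, and country indicators below are invented placeholders for the external georeferencing services and public country datasets that epispider actually queries.

```python
# hedged sketch of extracting location mentions from a report and linking them
# to country-level context. all lookup tables and values are illustrative only,
# not actual statistics.

GAZETTEER = {
    "jakarta": {"country": "indonesia", "lat": -6.2, "lon": 106.8},
    "hanoi": {"country": "viet nam", "lat": 21.0, "lon": 105.8},
}
COUNTRY_CONTEXT = {  # placeholder indicators for illustration
    "indonesia": {"population_millions": 270, "physicians_per_1000": 0.6},
    "viet nam": {"population_millions": 97, "physicians_per_1000": 0.8},
}

def extract_locations(text):
    """Return structured records for gazetteer entries mentioned in the text."""
    text = text.lower()
    return [dict(name=name, **info) for name, info in GAZETTEER.items() if name in text]

def link_context(report_text):
    """Attach country-level indicators to each georeferenced location."""
    return [{**loc, "context": COUNTRY_CONTEXT.get(loc["country"], {})}
            for loc in extract_locations(report_text)]

if __name__ == "__main__":
    report = "health authorities in jakarta report a cluster of dengue cases"
    for record in link_context(report):
        print(record)
```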
epispider extracts this information from the central intelligence agency (cia) factbook (www.cia.gov/library/publications/the-world-factbook/index.html) and the united nations development human development report (http://hdr.undp.org/en) internet sites. this feature provides different contexts for viewing emerging infectious disease information. by using askmedline ( ) , epispider also provides context-sensitive links to recent and relevant scientific literature for each promed-mail report topic. after epispider extracts the previously described information, it automatically transforms it to other formats, e.g., rss, keyhole markup language (kml; http://earth.google.com/kml), and javascript object notation (json, a human-readable format for representing simple data structures; www.json.org). publishing content using those formats enables the semantic linking of promed-mail content to country information and facilitates epispider's redistribution of structured data to services that can consume them. continuing along this transformation chain, the simile exhibit api (http://simile.mit.edu) that consumes json-formatted data files enables faceted browsing of information by using scatter plots, google maps, and timelines. recently, epispider began outsourcing some of its preprocessing and natural language processing tasks to external service providers such as opencalais (www.opencalais.com) and the unified medical language system (umls) web service for concept annotation. this action has enabled the screening of noncurated news sources as well. built on open-source software components, epispider has been operational since january . in response to feedback from users, additional custom data feeds have been incorporated, both topic oriented (by disease) and format specific (kml, rss, georss), as has semantic annotation using umls concept codes. for example, the epispider kml module was developed to enable the us directorate for national intelligence to distribute avian influenza event-based reports in google earth kml format to consumers worldwide and also to enable an integrated view of promed and world animal health information database reports. epispider is used by persons in north america, europe, australia, and asia, and it receives - visits/hour, originating from - sites and representing - countries worldwide. epispider has recorded daily visits from the us department of agriculture, us department of homeland security, us directorate for national intelligence, us cdc, uk health protection agency, and several universities and health research organizations. in the latter half of , daily access to graphs and exhibits surpassed access to data feeds. epispider's semantically linked data were also used for validating syndromic surveillance information in openrods (http://openrods.sourceforge.net) and populating disease detection portals, like www.intelink.gov and the research triangle institute (research triangle park, nc, usa). despite their similarities, the described event-based public health surveillance systems are highly complementary; they monitor different data types, rely on varying levels of automation and human analysis, and distribute distinct information. gphin, being the longest in use, is probably the most mature in terms of information extraction. in contrast, healthmap and epispider, being comparatively recent programs, focus on providing extra structure and automation to the information extracted.
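as described above, epispider republishes the information it extracts in machine-readable formats such as rss, kml, and json so that other services can consume it. the sketch below converts one hypothetical structured report into a json string and a minimal kml placemark; the field names are illustrative, not epispider's actual schema.

```python
# hedged sketch of re-serializing a structured outbreak report as json and as a
# minimal kml placemark. the report fields and values are invented.

import json

report = {
    "date": "2008-04-01",
    "topic": "dengue",
    "location": {"name": "jakarta", "country": "indonesia",
                 "lat": -6.2, "lon": 106.8},
}

def to_json(rec):
    """Serialize the record for services that consume json feeds."""
    return json.dumps(rec, indent=2)

def to_kml_placemark(rec):
    """Build a minimal kml placemark (kml uses lon,lat coordinate order)."""
    loc = rec["location"]
    return (
        "<Placemark>"
        f"<name>{rec['topic']} - {loc['name']}</name>"
        f"<Point><coordinates>{loc['lon']},{loc['lat']}</coordinates></Point>"
        "</Placemark>"
    )

if __name__ == "__main__":
    print(to_json(report))
    print(to_kml_placemark(report))
```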
their differences and similarities, summarized in the table, can be analyzed according to multiple characteristics: what data sources do they consider? how do they extract information from those sources? and in what format is the information redistributed and how? for completeness, the broadest range of sources is critical. gphin's data come from factiva and al bawaba, which are subscription-only news aggregators. their strategy is to rely on companies that sell the service of collecting event information from every pertinent news stream. in contrast, healthmap's strategy is to rely on open-access news aggregators (e.g., google news and moreover) and curated sources (e.g., promed and eurosurveillance). epispider, until recently, has concentrated on curated sources only (e.g., promed, gdacs, and the cia factbook). this distinction between free and paid sources raises the question of whether the systems have access to the same event information. after the data sources have been chosen, the next step is to extract useful information from the incoming reports. first, at the level of the report stream, the system must filter out reports that are not disease related and categorize the remaining (disease-related) reports into predefined sets. then, at a second level of triage, the information within each retrieved alert (e.g., an event's location or reported disease) is assessed. gphin does this data characterization through automatic processing and human analysis, whereas healthmap and epispider rely mainly on automated techniques (although a person performs a daily scan of all healthmap alerts and a sample of epispider alerts). after a report in the data stream is determined to be relevant, it is processed for dissemination. gphin automatically translates the reports into different languages and grants its clients access to the database through a custom search engine. gphin also decides which reports should be raised to the status of alerts and sent to its clients by email. healthmap provides a geographic and temporal panorama of ongoing epidemics through an open-access user interface. it automatically filters out the reports that do not correspond to breaking alerts. the remaining alerts are prepared for display (time codes and geocodes as well as disease category and data source) to allow faceted browsing and are linked to other information sources (e.g., the wikipedia definition of the disease). these data are also provided as daily email digests to users interested in specific diseases and locations. although gphin and healthmap provide their own user interfaces, epispider explores conventional formats for reports, adding time-coding, geocoding, and country metadata for automatic integration with other information sources and versatile browsing using existing open-source software. these reports are displayed under the name of web exhibits and include, for example, a mapping and a timeline view of the reports and a scatter plot of the alerts with respect to the originating country's human development index and gross domestic product per capita. a division arises between the healthmap and epispider strategies and the gphin strategy regarding the level of access granted to users. this division is due in part to the access policies of the data sources used by the systems, as discussed previously. a discrepancy also exists in the amount of human expertise, and thus in the cost, required by the systems.
these differences also raise the question of whether information from one system is more reliable than that of the others. undertaking an evaluation of the systems in parallel is a critical next step. also, all systems are inherently prone to noise because most of the data sources they use or plan to use (figure) for surveillance are not verified by public health professionals, so even if the system is supervised by a human analyst, it might still generate false alerts. false alerts need to be mitigated because they might have substantial undue economic and social consequences. event-based disease surveillance may also benefit from ontology-linked algorithms (an ontology being a formal representation of a set of concepts within a domain and the relationships between those concepts) that detect precursors of disease events. measurement and handling of input data's reliability is a critical research direction. future development should focus on linking these systems more closely to public health practitioners in the field and establishing collaborative networks for alert verification and dissemination. such development would ensure that event-based monitoring further establishes itself as an invaluable public health resource that provides critical context and an alternative to more traditional indicator-based outbreak reporting.

references:
- emerging and re-emerging infectious diseases
- emerging infections: microbial threats to health in the united states
- travel and the emergence of infectious diseases
- the challenge of emerging and re-emerging infectious diseases
- implementing syndromic surveillance: a practical guide informed by the early experience
- syndromic surveillance: adapting innovations to developing settings
- electronic public health surveillance in developing settings: meeting summary
- disease surveillance needs a revolution
- hot spots in a wired world: who surveillance of emerging and re-emerging infectious diseases
- global infectious disease surveillance and health intelligence
- official versus unofficial outbreak reporting through the internet
- the new international health regulations: considerations for global public health surveillance
- rumors of disease in the global village: outbreak verification
- use of the internet to enhance infectious disease surveillance and outbreak investigation
- global surveillance, national surveillance, and sars
- global public health intelligence network (gphin)
- epidemic intelligence: a new framework for strengthening disease surveillance in europe
- the internet and the global monitoring of emerging diseases: lessons from the first years of promed-mail
- global trends in emerging infectious diseases
- the global public health intelligence network
- the global public health intelligence network and early warning outbreak detection: a canadian contribution to global public health
- healthmap: internet-based emerging infectious disease intelligence. in: infectious disease surveillance and detection: assessing the challenges-finding solutions. washington: national academy of science
- surveillance sans frontières: internet-based emerging infectious disease intelligence and the healthmap project
- world wide wellness: online database keeps tabs on emerging health threats
- technology and public health: healthmap tracks global diseases
- get your daily plague forecast
- public health information fusion for situation awareness
- the threat of communicable diseases following natural disasters: a public health response
- infectious diseases of severe weather-related and flood-related natural disasters
- askmedline: a free-text, natural language query tool for medline/pubmed

key: cord- -m rv i authors: maserat, elham; jafari, fereshteh; mohammadzadeh, zeinab; alizadeh, mahasti; torkamannia, anna title: covid- & an ngo and university developed interactive portal: a perspective from iran date: - - journal: health technol (berl) doi: . /s - - - sha: doc_id: cord_uid: m rv i

on february , iran reported the initial cases of novel coronavirus ( -ncov). as of march , iran had reported , covid- cases, including deaths. one of the best approaches for responding to covid- is rapid detection, early isolation, and quick treatment of the disease. studies have stated that information technology (it) is a powerful tool for detecting, tracking, and responding to pandemic diseases. despite the importance of it, a lack of efficient use of information technology capacity was observed after the emergence of the new cases of covid- in iran. a web-portal can integrate different services and technologies and can support interaction between non-governmental organizations (ngos) and universities. ngos can provide services for public health utilizing technology and its advancements. one of the important duties of these organizations is to inform and provide integrated services to the general public. an interactive portal is one of the advanced technologies that these organizations can use for health management. universities of medical sciences play a vital supervisory role in enhancing the performance quality of ngos. a web-portal can be a collaboration tool between health-related ngos and universities of medical sciences. in this study, an interactive portal was developed by ngos and a university. in the covid- management division of this portal, ngos, under the supervision and with the participation of tabriz university of medical sciences' center for social factors research, separated classified information into two sections, informatics and services. this portal is accessible to the general public, patients, service providers, and, importantly, policymakers and presents educational and medical research information to all users. for patients and the general public in high-risk environments, increasing information security, reducing confusion regarding finding needed information, and facilitating communication are only part of the portal's benefit. it seems that web-portal capacity is needed to control covid- in the digital age. the collaboration of academic and university bodies in the context of health portals can play a key role in coverage of the covid- pandemic.

coronavirus disease has quickly spread globally [ ] . the novel coronavirus outbreak, which began in wuhan, china, in december , has expanded worldwide [ ] . the first confirmed case of coronavirus infection in iran occurred on february [ , ] .
one of the most effective strategies for responding to covid- is rapid identification, early isolation, and quick treatment of the disease [ ] . information technology (it) can be used in various dimensions to manage covid- , and it tools can facilitate prevention, screening, diagnosis, treatment, and follow-up steps. telemedicine, as one of the new technologies being used during epidemic conditions, has high potential to improve disease control without geographic or location restrictions [ ] . expert systems can be employed to predict disease and assess risk based on geographic area [ ] . studies have stated that it was a powerful tool for detecting, tracking, and responding to the pandemic of influenza a (h1n1) virus [ , ] . despite the importance of these systems, a lack of efficient use of information technology capacity was observed after the emergence of the new cases of covid- in iran [ ] . a web-portal can integrate different services and technologies for patients [ ] . one of the critical technologies available is a web-portal that combines technologies into one platform, enabling patients to view electronic health record data, exchange secure messages with physicians, request visits, and request prescription refills if needed [ , ] . web-portals are increasingly a part of modern life. portals receive data from multiple sources seamlessly and help organizations and institutions view and integrate various applications, software, content, and information from databases [ ] . therefore, users are presented with a web page that provides them with information from different servers and/or systems. portal content is accessible from a variety of devices such as pcs and smartphones [ , ] . numerous studies have verified the usefulness of portals in the management of chronic diseases, and it is essential that a portal be designed to meet the needs of its users [ ] . in iran, the lack of efficient use of information technology capacity in public opinion management is considered to have caused fear and other emotional reactions in society [ ] . therefore, the need to integrate these technologies into a comprehensive portal is increasingly felt, given the numerous information management systems related to the disease. thus, considering the benefits of a health portal, its critical role in information interaction, and the lack of an electronic platform connecting the various tools that have been provided to manage and monitor covid- , we offered this platform as an interactive portal of non-governmental organizations (ngos), research institutes, and universities.

for the first step of the study, we performed a comprehensive search for items used to design the core dimensions of an interactive portal. in this step, all relevant books, articles, research projects, theses, manuals, and scientific reports were extracted from medline, ieee, google scholar, web of science, scopus, proquest, and websites and databases related to health-related interactive portals. some of the highest priorities for designing portals extracted from scientific databases were:
- interaction of ngos with each other
- ability to upload information regarding each non-governmental organization separately
- easy content management by users

users' comments regarding the system should be considered before design requirements of the system model are formulated [ ] . prioritization of requirements for designing is based on users' opinions [ , ] . the second step of the study was a qualitative survey.
in-depth and semi-structured interviews and focus group discussions were conducted based on the views of staff of non-governmental organizations who worked on health care plans. participants were informed about the interactive web-portal, and the researcher explained the purpose of the study. in addition, researchers requested consent to audio-record the interviews. researchers explained the study and obtained initial consent for further contact from participants. interviewing involved asking questions of participants and recording their answers. in the next step, focus groups with semi-structured discussions were used to analyze and approve the final conceptual models, content structure, and architecture of the portal. the table illustrates the participants of this study. ngo participants in this study were active in five areas: education, treatment, prevention, empowerment, and health. various studies have been conducted to develop the system based on the needs of users, and open source platforms were selected due to their high availability and security [ ] . in addition, the php language, a mysql database, the joomla content management system, html, and css were used for development. the joomla management system creates a strong security environment and has high flexibility in user management, support, and classification [ ] . another reason for choosing this content management system is its various and comprehensive plugins, which provide the possibility of developing other features in the future. in the designed system, user authentication processes, user approval, group assignments, and user roles are easily managed.

the interactive portal developed by the ngos and university provides a platform for unity and integrity of operation, prevents overlap among the activities of ngos, and enables oversight by scientific bodies. ngos, under the supervision of and in participation with tabriz university of medical sciences' center for social factors research, organized the information in the covid- management division of this portal into two sections, informatics and services. the informatics section provides protocols and guidelines for prevention, diagnostics, and treatment approved by universities and academic institutions. this ngo pathway is formulated as an operational plan to monitor the step-by-step activities of the portal, which are presented to the public in an organized way. the process of ngo work and guidance on this portal regarding covid- provides a valuable facility to the general public, who are able to place their questions and receive answers. this technology has accelerated and facilitated the responsibilities of ngos. in the portal services section, the information content of ngo services was presented in three essential phases: prevention of, treatment of, and recovery from covid- (the last phase included rehabilitation and death separately). the first section provides comprehensive information on the process of public training, the course of the epidemic, and follow-up methods. additionally, step-by-step information is presented regarding the process of tracking and protecting vulnerable, unprotected people, the elderly, and slum dwellers, who can be covered by some ngos. furthermore, the number and type of donation items such as masks, gloves, and disinfectants are available on this portal.
in the treatment section, ngos provided adequate information regarding equipment, consumables, and the budget/costs for this disease separately, based on the evaluation provided by professional academic bodies. additionally, specialist counselling is accessible to the general public and individuals at high risk of infection. also, experiences shared by recovered patients are available. in the recovery area, the details of companies/communities that are ready to offer free-of-charge or discounted services are introduced. furthermore, valuable and evidence-based links regarding the information content of social networks and the national systems, such as self-assessment of suspected patients, covid- disease pathway detection, the urban and rural high-risk system, the self-care system, the online counselling system, and the iranian epidemiological administration system, are visually usable for both the general public and medical staff.

system performance requirements of this portal are identified as follows:
- personalization: the system must allow the administrator to specify the type of user access
- customization: the user must be able to configure how the information is displayed, for example, screen control
- grouping: beneficiary groups have privacy if they have group communication
- easy to use: ability to access the system in different, standard browsers and display information on different devices
- detail management: each group should have its own management and, in addition to one person, may have several content managers

principal features for active (authorized) users of the portal include:
- user discussion: ability to discuss information and interact between users based on a specific topic and focused on a specific environment
- event calendar: record information about future events
- links: links to other web addresses that contain users' favorite information
- subject lists: a hierarchy of sub-lists/folders with documents that can only be accessed by active (authorized) users

the architecture of the web-portal is presented in fig. . the interactive portal developed by the ngos and university is accessible to the general public, patients, service providers, and, importantly, policymakers and presents educational and medical research information to all users. moreover, collecting data from various sources, making information available to all types of user groups, and allowing users to upload data to the portal themselves are some of the portal's features. this ability is governed by the level of access and the definition of personal pages by users or groups, and it helps reduce outpatient appointments and/or in-person visits. identifying patients and the general public in high-risk environments, increasing information security, reducing confusion regarding finding needed information, and facilitating communication are only part of the portal's benefit (fig. ).

an interactive portal elaborated with the continued support of the social determinants of health research center of tabriz university of medical sciences and the collaboration of health-related non-governmental organizations has been developed. in this portal, ngos and the university share everyday experiences and activities in the field of citizen empowerment, government sectors, and citizen advocacy. we describe an interactive portal developed by ngos and a university that provides a platform for unity and integrity to help manage the covid- pandemic and its oversight by scientific bodies.
the covid- response requires new information technology approaches to support clinical needs [ ] . a web-based portal related to health provides patients easier access to their healthcare information and services [ ] . our web-portal divided data into two sections, informatics and services. consideration of informatics needs and useful guidance are vital options for pandemic response [ , ] . in addition, media partnerships can prevent societal fear and help manage covid- in low- and middle-income countries [ ] . the informatics section of the interactive portal presents protocols and guidelines for prevention, diagnosis, and treatment. these protocols are approved by universities and academic institutions. the general public are monitoring the activities of ngos in the field of covid- and are asking for their help. one of the big challenges of the covid- response is the health information required to safely care for a patient. patients may not be able to select the health care provider for follow-up of covid- [ ] . the process of ngo work and guidance on our portal regarding covid- provides a valuable facility to the general public, who are able to place their questions and receive answers. studies have illustrated that public health interventions can improve control of covid- , as in wuhan, china [ ] . a web-based portal facilitates dissemination of public health information [ ] , as is the case in the current study. comprehensive monitoring is needed to notice changes in the pandemic and the effectiveness of public health interventions and their social acceptance [ ] . portals enhance communication and social acceptance. the prevention section of the web-portal addresses detailed information regarding the process of public training, the epidemic of the disease, and the follow-up methods provided. in addition, step-by-step information regarding the process of tracking and protecting vulnerable, unprotected people, the elderly, and slum dwellers, who can be covered by some ngos, is presented. ngo activities in the web-portal framework are monitored by tabriz university of medical sciences. furthermore, the number and type of donation items such as masks, gloves, and disinfectants are available on this portal. information technology can facilitate accurate health care resource allocation in high-, low-, and middle-income countries. electronic data collection of patients at a population level is one of the models that shows the capabilities of it for resource allocation. we can overcome the challenges of traditional data collection using it tools [ ] . in the treatment section of the portal, ngos provided adequate information regarding equipment, consumables, and the budget/costs for this disease separately, based on the evaluation provided by professional academic bodies. furthermore, information technology-based tools can facilitate accountability and transparency in governmental and non-governmental organizations in a country [ ] . the portal developed by the ngos and university can support accountability and transparency in governmental and non-governmental organizations. a web-portal is an essential tool for supporting the general public and the needs of a health system managing the covid- pandemic. the collaboration of academic and university bodies in the context of health portals brings together the most up-to-date technologies and can play a significant role in attracting public sectors and crisis management.
references:
- early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia
- the effect of travel restrictions on the covid- battle during the toughest sanctions against iran
- estimation of covid- burden and potential for international dissemination of infection from iran
- what to do next to control the -ncov epidemic?
- from isolation to coordination: how can telemedicine help combat the covid- outbreak? medrxiv
- an intelligent system for predicting and preventing mers-cov infection outbreak
- information technology and global surveillance of cases of h1n1 influenza
- influenza a (h1n1) virus, -online monitoring
- east asia's strategies for effective response to covid- : lessons learned for iran. management strategies in health system
- implementing web-based e-health portal systems
- use of an electronic patient portal among disadvantaged populations
- development of a smart e-health portal for chronic disease management. international conference on algorithms and architectures for parallel processing
- proven portals: best practices for planning, designing, and developing enterprise portals
- patient portals and patient engagement: a state of the science review
- delivering superior health and wellness management with iot and analytics
- framework for a web based information dissemination and academic support educational portal. proceedings of icomes
- an effective educational portal for visually impaired persons in india
- a framework for the design of an extensible modular academic web site
- some theoretical and practical aspects of educational portal design based on cms system
- responding to covid- : the uw medicine information technology services experience
- the who pandemic influenza preparedness framework: a milestone in global governance for health
- pandemic influenza preparedness and response: a who guidance document. geneva: world health organization
- managing covid- in low- and middle-income countries
- balancing health privacy, health information exchange, and research in the context of the covid- pandemic
- association of public health interventions with the epidemiology of the covid- outbreak in wuhan, china
- an investigation of specifications for migrating to a web portal framework for the dissemination of health information within a public health network
- covid- : what is next for public health?
- global chemotherapy demands: a prelude to equal access
- improving service quality, accountability and transparency of local government: the intervening role of information technology governance

acknowledgements: the authors thank all of the social determinants of health research center staff and professionals involved in this study for their cooperation and support.

conflict of interest: the authors declare that they have no conflict of interest.

key: cord- -mfz q authors: kim, hye kyung; ahn, jisoo; atkinson, lucy; kahlor, lee ann title: effects of covid- misinformation on information seeking, avoidance, and processing: a multicountry comparative study date: - - journal: sci commun doi: . / sha: doc_id: cord_uid: mfz q

we examined the implications of exposure to misinformation about covid- in the united states, south korea, and singapore in the early stages of the global pandemic.
the online survey results showed that misinformation exposure reduced information insufficiency, which subsequently led to greater information avoidance and heuristic processing, as well as less systematic processing of covid- information. indirect effects differed by country and were stronger in the u.s. sample than in the singapore sample. this study highlights negative consequences of misinformation during a global pandemic and addresses possible cultural and situational differences in how people interpret and respond to misinformation.

the pandemic caused by the coronavirus disease (hereafter referred to as covid- ) poses unprecedented threats to global human well-being. because of the high uncertainty associated with the novelty of covid- , many people rely on online health information to learn more about how to protect themselves and their families from the imminent health threat (bento et al., ; garfin et al., ; hernández-garcía & giménez-júlvez, ) . while its prevention and treatment require practices based on scientific evidence, there are myriad sources of incorrect information circulating on the internet about what prevents and cures covid- . this is critical because relying on such misinformation can bring about detrimental health outcomes by encouraging people to engage in ineffective, even harmful, remedies. for example, nearly people have been killed by ingesting methanol based on harmful treatment recommendations that spread across social media in iran (associated press, ) . in south korea, churchgoers were infected with covid- after church leaders sprayed saltwater into their mouths out of a misguided belief that the water would help prevent the spread of covid- ; the spray bottle became contaminated with the virus in the process and spread infection (park, ) . in the united states, rumors spread on social and national media that ingesting bleach might help kill the virus; research suggests that this misinformation contributed to individuals "engaging in non-recommended high-risk practices with the intent of preventing sars-cov- transmission, such as washing food products with bleach, applying household cleaning or disinfectant products to bare skin, and intentionally inhaling or ingesting these products" (gharpure et al., , p. ) . accordingly, the world health organization (who; ) has declared an "infodemic" related to covid- and actively sought to rectify the crisis levels of misinformation spreading online. despite the proliferation of online misinformation, the internet is an important source of information during a disease pandemic as it can be an efficient and expeditious channel for providing necessary information and for correcting misinformation. indeed, risk communication scholars have emphasized the importance of providing timely information in risk contexts to help aid decision making, especially when there is considerable uncertainty about the most effective course of action in a given situation (edgar et al., ; yang, aloe, & feeley, ) . therefore, scholars must find ways to better understand how the power of the internet to misinform is affecting its ability to inform. one important, still unanswered question is whether exposure to misinformation serves to motivate or deter subsequent information seeking, and/or changes the way encountered information is processed.
prior theorizing on risk communication has often focused on immediate outcomes of information exposure; yet there is a lack of understanding about the subsequent information management that follows exposure to risk information (so et al., ) . the current study addresses these important gaps in the extant literature in its examination of exposure to misinformation about covid- . this study builds on previous research by offering two main contributions. first, guided by the risk information seeking and processing (risp) model (griffin et al., ) , we posit that a reduction in the perceived need for additional information (or information insufficiency) is an important mechanism that underlies adverse consequences of misinformation exposure on subsequent information seeking and processing. given that prior research has often focused on the spread of misinformation (guess et al., ; valenzuela et al., ) , addressing the implications of that spread on information seeking and processing helps to enrich our understanding of misinformation effects. second, this current study examines whether the effects of misinformation exposure are universal across cultures or specific to certain cultural contexts. studies on information seeking and processing have been conducted predominantly in western contexts, and cross-cultural studies, especially studies that involve multiple countries, are lacking in the extant literature. for theory building and refinement, it is important to examine the validity of a theoretical prediction across different cultural contexts and populations. misinformation is defined as objectively incorrect information that is not supported by scientific evidence and expert opinion (nyhan & reifler, ) . while misinformation can persist for a long time without contradiction (kata, ) , for scientific issues, what is true or false can be altered with newly emerging evidence and consensus among experts (vraga & bode, ) . researchers also differentiate misinformation from misperception and disinformation: misperception is holding a belief that is incorrect or false (southwell et al., ) , whereas disinformation is driven by the intention to deceive (wardle, ) . while misinformation is inadvertently false, its propagation or sharing can subsequently be either deliberate or accidental (southwell et al., ) . there is ample evidence on the pervasiveness of misinformation in the context of infectious disease outbreaks. for example, a study on the zika virus found that half of the top news stories were based on misinformation or rumors, and those stories were times more likely to be shared on social media than stories based on facts (sommariva et al., ) . studies on covid- similarly found that misinformation was more frequently tweeted than science-based evidence or public health recommendations (bridgman et al., ; pulido et al., ) . researchers have addressed the potential consequences of misinformation that could undermine the adoption of preventive measures (bridgman et al., ; dixon & clarke, ; tan et al., ) , which could exacerbate the spread of the epidemic. researchers have also suggested that exposure to misinformation can trigger individuals' additional information seeking to verify the information that they suspect to be false (tandoc et al., ) . for example, when individuals cannot verify information on social media based on their own judgment and knowledge, they seek out information from their social circle and other sources to authenticate it (tandoc et al., ) .
researchers, however, also point out that the motivation for subsequent information seeking may not always be related to accuracy (southwell, ). as thorson ( ) addressed, misinformation exposure actually may prevent individuals from seeking new information and instead may trigger motivated processing to protect their preexisting attitudes or beliefs. such selective information exposure and motivated reasoning make it difficult to rectify misinformation once false beliefs are deeply held (bode & vraga, ; jerit & barabas, ). as porter and wood ( ) argued, however, individuals can simultaneously pursue accuracy on factual matters as well as goals that serve their pre-existing beliefs. in the context of a novel disease pandemic, it is also important to consider the uncertainty regarding what is true and false about the disease and its prevention, given that the expert consensus and "best available evidence" are subject to change (vraga & bode, , p. ). such information uncertainty may also have implications for the public's information behaviors. in a recent study, perceived exposure to covid- misinformation was positively associated with seeking more information and complying with health advisories (hameleers et al., ). because perceived misinformation exposure takes into account individuals' judgment of the veracity of information, its implications may differ from exposure to misinformation defined by the actual state of scientific evidence (vraga & bode, ). we thus take the latter conceptualization in the current study to understand the effects of misinformation exposure. guided by the risp model (griffin et al., ), the current study examines whether and how exposure to misinformation about covid- prevention motivates or deters effortful seeking and processing of relevant information. the risp model is one of the most comprehensive models that seek to understand the social psychological motivators of seeking and processing risk information (yang, aloe, & feeley, ). in the model, various concepts drawn from the theory of planned behavior (ajzen, ) and other works are depicted as having an indirect impact on information seeking and processing through the model's central concept of information insufficiency. however, the risp model does not theorize individuals' prior exposure to risk information, despite the possible implications of prior exposure for information seeking and processing. in risk contexts, information seeking and processing are driven by the motivation to reduce uncertainty. information insufficiency refers to one's subjective assessment of the gap between their perceived current knowledge about a risk and what they feel is sufficient knowledge for adequately coping with the risk (the sufficiency threshold). information insufficiency is at the heart of the risp model because uncertainty reduction occurs only when individuals are sufficiently motivated to engage in the tasks needed to achieve the desired judgmental confidence (eagly & chaiken, ). while little is known about how exposure to misinformation influences individuals' perceived (needed) knowledge, misinformation on covid- can potentially make individuals feel overwhelmed with different and inconsistent recommendations on what prevents and cures the disease (pentina & tarafdar, ). in turn, this sense of being overloaded with information may manifest as a feeling of having sufficient information on a given issue (i.e., lower information insufficiency).
indeed, pentina and tarafdar ( ) have argued that the amount and variety of unverified information circulating online can make individuals feel overloaded with information, given individuals' limited capacity to process. researchers have also suggested that the ambiguity, low quality, and novelty of information can trigger the feeling of overload as these attributes make it more difficult for individuals to process information (keller & staelin, ; schneider, ) . thus, we posit our first hypothesis on information insufficiency as follows: hypothesis (h ): exposure to misinformation will be negatively associated with information insufficiency. the risp model theorizes information insufficiency as the primary component that predicts subsequent information seeking and avoidance as well as how the information will be processed. information seeking is defined as a volitional process of acquiring desired information from relevant sources, whereas information avoidance refers to deliberately shunning or delaying the acquisition of available information. we treated information seeking and avoidance as two orthogonal constructs, instead of the opposites on a continuum, as they can coexist under some circumstances that involve uncertainty (yang & kahlor, ) . informed by the heuristic-systematic model (eagly & chaiken, ) , the risp model posits a dual system of information processing. the dual system includes one that requires more effortful and deeper processing (systematic processing) and another that involves more superficial processing and poses fewer cognitive demands on individuals (heuristic processing). griffin et al. ( ) predicted that the drive to overcome information insufficiency motivates individuals to seek more riskrelated information and to systematically process the information, while making it less likely for them to heuristically process. some empirical work has supported the insufficiency principle in this role (griffin et al., ; kahlor, ) , while a meta-analysis of the risp model found limited evidence (yang, aloe, & feeley, ) . the conflicting evidence points to the possibility of information insufficiency serving as a mediator between misinformation exposure and information seeking and processing. prior exposure also may have direct associations with information seeking and processing. past information exposure and related attitudes have been related to other information behaviors, including information avoidance and information sharing (kahlor et al., ; yang & kahlor, ; yang, kahlor, & griffin, ) . furthermore, a study by kalichman et al. ( ) suggested a positive relationship between exposure to misinformation about aids/hiv and information avoidance behaviors. thus, we posit the following direct and indirect effects of misinformation exposure on information seeking and avoidance, as well as systematic and heuristic processing. hypothesis (h ): exposure to misinformation will be associated with (a) reduced information seeking, (b) increased information avoidance, (c) reduced systematic processing, and (d) increased heuristic processing. hypothesis (h ): informational insufficiency will mediate the effect of misinformation on (a) information seeking, (b) information avoidance, (c) systematic processing and (d) heuristic processing. the risp model also addresses several psychosocial factors that predict information insufficiency. 
the most powerful predictor of risk information seeking to emerge from the risp research is informational subjective norms, that is, perceived pressure from others to engage in a given information behavior (kahlor, ; yang, aloe, & feeley, ) . these norms also constitute an important predictor of information insufficiency (kahlor, ; yang, aloe, & feeley, ) . another important risp concept is risk perception, which comprises subjective probability and perceived severity of harm and is the most commonly examined cognitive component of how individuals assess a given risk (yang, aloe, & feeley, ). the risp model also takes into account affective responses to risk, such as anxiety and fear, which serve as important heuristic cues in making risk decisions (finucane et al., ) . affective responses, which result from risk perceptions, increase an individual's desire for information (griffin et al., ; so et al., ; yang & kahlor, ) . one interesting question is whether misinformation exposure influences these psychosocial factors, which would subsequently affect information insufficiency as well as information seeking and processing. there are two different possibilities. if misinformation exposure increases perceived risk, affective response to risk, and informational subjective norms, then this also would increase information insufficiency, thus counterbalancing the negative implications of misinformation hypothesized in h . in contrast, if misinformation decreases these psychosocial factors, this would further explain h . to examine these possibilities, we pose the following research question: what is the role of risk perception, affective response, and informational subjective norms in the relationship between misinformation exposure and information insufficiency? there is limited understanding of why certain individuals or societies are more or less vulnerable to misinformation (wang et al., ) . researchers suggest that older adults (mitchell et al., ) , those with lower cognitive ability (de keersmaecker & roets, ), and those who are less educated (kalichman et al., ) are more likely to be misinformed than those who are younger, have higher cognitive ability, or are more educated. prior research also points to ideological asymmetries in sharing and believing misinformation. the research suggests that people who prioritize conformity and tradition (i.e., conservatives) also tend to emphasize uncertainty reduction, and thus exaggerate within-group consensus and maintenance of homogenous social relationships, both of which contribute to the spread of misinformation (jost et al., ) . beyond these individual-level characteristics, we lack data comparing the relative susceptibility to misinformation between populations and societies based on cultural differences. research on cultural differences suggests that uncertainty avoidance, which refers to the "extent to which the members of a culture feel threatened by uncertain or unknown situations" (hofstede, , p. ) , is a cultural dimension related to anxiety, security needs, and rule orientation. high-uncertainty avoidance cultures tend to be less tolerant about ambiguity and diversity than low-uncertainty avoidance cultures. because misinformation on covid- prevention is characterized by scientific uncertainty, we suggest that cultural differences in uncertainty avoidance may moderate the effect of misinformation exposure on information seeking and processing. 
moreover, cultural differences in uncertainty avoidance also may change the relative strength of the relationship between information insufficiency and information seeking and processing. that is, those in high-uncertainty avoidance cultures may be more likely to act on their information insufficiency to seek out and effortfully process relevant information in order to reduce their uncertainty, than those in low-uncertainty avoidance cultures. consistent with this prediction, in the context of climate change, one crosscultural study based on the risp model found the information insufficiencyinformation seeking intention association to be stronger in the u.s. sample (a relatively higher uncertainty avoidance culture) compared to the china sample (a low-uncertainty avoidance culture; yang, kahlor, & li, ) . given the conceptual importance of information insufficiency in the risp model, we extend prior work by comparing the relative strength of the effect of information insufficiency between the u.s. sample and two other countries, one with a higher uncertainty avoidance culture (south korea, index score = ) and the other with a lower uncertainty avoidance culture (singapore, index score = ) compared to the united states (index score = ; hofstede, ; hofstede et al., ) . research question (rq ): do the direct and indirect effects of misinformation exposure on information seeking, avoidance, and processing differ between the united states and south korea or singapore? an online survey was conducted in the early stages of the covid- pandemic in three countries, the united states (march - , ), south korea (february -march , ) and singapore (february -march , ). panel members were recruited from online panel companies: global research in south korea (n = , ) and qualtrics in singapore (n = , ) and the united states (n = ). we employed quota sampling in terms of age, gender, and ethnicity to match with the national profile of singapore, south korea, and the united states. the survey took about minutes to complete and was administered in english in singapore and united states and in korean in south korea. the english survey questionnaire was translated into korean by two bilingual researchers. for the combined samples, respondents ranged in age from to (m = . , sd = . ) and consisted of . % females. the median educational attainment was "some college or an associate's ( -year) degree." the majority of the singapore sample was ethnic chinese ( . %), followed by . % malay. in the u.s. sample, . % self-identified as white and . % identified as black or african american. south korea is a monoethnic country. table presents the sample profile and descriptive statistics by country. information seeking and avoidance. information seeking was measured by five items derived from j.-n. kim et al. ( ) and j.-n. kim and grunig ( ) . sample items, on a -point likert-type scale ( = strongly disagree, = strongly agree), included "i regularly check to see if there is any new information about this problem" and "i spend a lot of time learning about this issue" (m = . , sd = . , α = . ). information avoidance was measured by five items adapted from howell and shepperd ( ) and miles et al. ( ) . on the same likert-type scale, items included "i don't want any more information about covid- " and "i avoid learning about covid- " (m = . , sd = . , α = . ). systematic and heuristic processing. systematic processing was assessed by three items derived from yang et al. ( ) and yang et al. ( ) . 
on a -point scale ( = not at all, = very much), sample items included "after i encounter information about covid- , i stop and think about it" and "for me to understand about covid- , the more viewpoints i get the better" (m = . , sd = . , α = . ). heuristic processing was assessed by three items from yang et al. ( ) and kahlor et al. ( ). using the same -point scale, items included, for example, "when i come across information about covid- , i focus on only a few key points" (m = . , sd = . , α = . ). although cronbach's alphas for these scales are relatively weak, they are comparable to those used by yang et al. ( ), who suggest that these processing scales are still under development and thus have room for improvement (also see deng & chan, , as yang et al., , reported omega rather than alpha). information insufficiency. to calculate information insufficiency, we separately assessed perceived current knowledge and sufficiency threshold (griffin et al., ; yang et al., ). to assess perceived current knowledge, participants rated to what extent they currently know about covid- on a scale of (knowing nothing) to (knowing everything; m = . , sd = . ). for sufficiency threshold, participants estimated how much knowledge they would need in order to deal adequately with the risk of covid- on a scale of (need to know nothing) to (need to know everything you could possibly know; m = . , sd = . ). we employed the analysis of partial variance (cohen & cohen, ) to compute information insufficiency (m = . , sd = . ), which contains the residual variance of the sufficiency threshold after accounting for the variance of perceived current knowledge (rosenthal, ). this approach helps to address the limitations of using a raw difference score (e.g., sensitivity to floor and ceiling effects) and of the regressed change approach (e.g., inflated explained variance in information insufficiency). exposure to covid- information. exposure to misinformation was assessed with five claims on covid- prevention measures that were identified as false at the time of data collection (who, ): (a) gargling with mouthwash, (b) eating garlic, (c) avoiding pets, (d) vaccination against pneumonia, and (e) regularly rinsing the nose with saline. on a -point scale ( = not at all, = a lot of times), participants reported how often they had heard each of the five claims from eight different information sources (e.g., news app or website, social media app or website, medical or health websites, television and radio news; tan et al., ). cronbach's α for the exposure scales across the five claims ranged from . to . , and items were averaged to create composite scores. the composite scores were further averaged into an index of exposure to misinformation (m = . , sd = . , α = . ). for comparison purposes, we also examined exposure to general covid- information without specifying the information content. on a -point scale ( = never, = very often), participants reported how often they learned about covid- using different information sources (rains, ; e.g., websites or social networking site [sns] accounts of governmental health agencies, print or online newspapers, sns of newspapers, individual sns, television). responses were averaged into a score of exposure to general covid- information (m = . , sd = . , α = . ). exposure to misinformation and exposure to general covid- information were moderately correlated (r = . , p < . ). risk perception. two components of risk perception were assessed: perceived susceptibility and severity.
to assess perceived susceptibility (brewer et al., ), participants estimated their chances of contracting covid- in several weeks if they do not take any preventive actions on a slider between % and % at % intervals (m = . , sd = . ). on a -point likert-type scale ( = strongly disagree, = strongly agree), we used three items derived from weinstein ( ) to measure perceived severity (e.g., "i think that covid- is a very dangerous disease"; m = . , sd = . , α = . ). based on convention (griffin et al., ; weinstein, ), we multiplied perceived susceptibility and severity to create an index of risk perception (m = . , sd = . ). affective responses. we assessed negative affective responses experienced during the covid- pandemic, such as fear, anger, sadness, and anxiety, derived from prior work (yang, ; yang et al., ). these emotions have been reported to be frequently experienced in crises and pandemic situations (jin et al., ; h. k. kim & niederdeppe, ). on a -point scale ( = not at all, = very much), participants rated their feelings toward the covid- situation on the following emotions themed under fear (afraid, fearful, scared), anger (angry, mad, irritated), sadness (sad, downhearted, unhappy), and anxiety (anxious, worried, concerned). responses were averaged to create a scale of affective responses (m = . , sd = . , α = . ). informational subjective norms. derived from yang and kahlor ( ), we used four items asking about participants' perceptions of others' expectations about their seeking of covid- -related information on a -point scale ( = not at all, = very much; e.g., "most people who are important to me think that i should seek information about covid- "). responses were averaged to create a scale of informational subjective norms (m = . , sd = . , α = . ). to examine our hypotheses and research questions (summarized in figure ), we used hierarchical ordinary least squares regression, which allowed us to enter variables in separate blocks to test the incremental change in r at each step as well as the relative effects of variables while accounting for those entered together or in earlier steps (cohen et al., ). all the analyses controlled for demographic factors (i.e., age, gender, education, country), having had a respiratory disease in the past few weeks (yes/no), and the presence of local cases of covid- in the subject's city (yes/no). multicollinearity tests showed tolerance values above zero and variance inflation factor values below the conventional cutoff value of for all variables entered in the models (cohen et al., ). we entered exposure to misinformation and general information on covid- in the first block along with other control factors (testing h and h ), and risp model components (risk perception, affective response, informational subjective norms, and information insufficiency) in the second block. addressing h and rq , we tested a serial mediation model (model ) with process macro (hayes, ) to investigate the indirect effects of misinformation on information seeking, avoidance, and processing, separately mediated through risk perception, affective responses, and informational subjective norms, as well as serially via information insufficiency. addressing rq , the conditional indirect effects via information insufficiency by country were analyzed with process macro model . the u.s. sample served as a reference group given its middle position in regard to the level of uncertainty avoidance (hofstede, ). we estimated confidence intervals (cis) with , bootstrap samples.
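as a concrete illustration of the measurement construction and block-wise analysis described above, the following python sketch (not the authors' code; column names such as seek_1, current_knowledge or misinfo_exposure are hypothetical, and categorical controls are assumed to be numerically coded) shows how composite scales with cronbach's alpha, the partial-variance information insufficiency score, and the incremental r-squared of a two-block regression could be computed with pandas and statsmodels.

```python
# illustrative sketch only: scale construction, partial-variance insufficiency,
# and hierarchical (block-wise) ols as described in the text above.
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """cronbach's alpha for a set of likert items (columns = items)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

df = pd.read_csv("survey.csv")  # hypothetical combined three-country data set

# composite score: average the five information-seeking items
seek_items = df[[f"seek_{i}" for i in range(1, 6)]]
df["info_seeking"] = seek_items.mean(axis=1)
print("alpha (seeking):", cronbach_alpha(seek_items))

# information insufficiency via analysis of partial variance:
# regress the sufficiency threshold on perceived current knowledge, keep the residual
apv = sm.OLS(df["sufficiency_threshold"],
             sm.add_constant(df["current_knowledge"])).fit()
df["info_insufficiency"] = apv.resid

# hierarchical ols: block 1 = controls + exposure; block 2 adds risp components
controls = ["age", "gender", "education", "country",
            "respiratory_disease", "local_cases"]  # assumed numeric/dummy coded
block1 = controls + ["misinfo_exposure", "general_exposure"]
block2 = block1 + ["risk_perception", "affective_response",
                   "subjective_norms", "info_insufficiency"]

def fit_block(cols, outcome):
    return sm.OLS(df[outcome], sm.add_constant(df[cols])).fit()

m1, m2 = fit_block(block1, "info_avoidance"), fit_block(block2, "info_avoidance")
print("r2 step 1:", m1.rsquared, "| r2 step 2:", m2.rsquared,
      "| incremental r2:", m2.rsquared - m1.rsquared)
```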
we hypothesized that exposure to misinformation would be negatively associated with information insufficiency (h ). as shown in table , h was supported. information insufficiency was negatively correlated with misinformation exposure (β = - . , p < . ) and positively with general information on covid- (β = . , p < . ). risk perception (β = . ), affective response (β = . ), and information subjective norms (β = . ) were also positively associated with information insufficiency (all p < . ). we also predicted that exposure to misinformation would be negatively associated with information seeking and systematic processing, and positively associated with information avoidance and heuristic processing (h a-d). as shown in table , h a was not supported, as misinformation was positively associated with information seeking (β = . , p = . ). however, h b and h d were supported, as misinformation was positively associated with information avoidance (β = . , p < . ) and with heuristic processing (β = . , p < . ). as predicted, h c also was supported, as systematic processing was negatively associated with misinformation exposure (β = − . , p = . ). at step , controlling for other risp components, information insufficiency was negatively associated with information avoidance (β = − . , p < . ) and heuristic processing (β = − . , p = . ), whereas it was positively associated with systematic processing (β = . , p < . ). no association was found with information seeking (β = . , p = . ). we predicted indirect effects of misinformation on information seeking, avoidance, and processing via informational insufficiency (h ). in light of the predictions of the risp model, we also included risk perception, affective responses, and informational subjective norms in a serial mediation model using process macro (model ; finally, we sought to explore whether the direct and indirect effects of misinformation are further moderated by country (rq ). we used process macro (model ; table ) to examine the moderated mediation via information insufficiency by country (u.s. sample served as a reference group) on information seeking, avoidance, and processing. figure presents the effects of misinformation exposure by country, and figure presents the effects of information insufficiency by country. in a model predicting information seeking, the direct effect of misinformation differed between singapore and united states (p = . ; south korea-united states comparison, p = . ) such that it was significant only in the singapore sample (Β = . , p = . ; Β us = −. , p = . ; Β kr = . , p = . ). in contrast, the effect of information insufficiency on information seeking was significant only in the us sample (Β = . , p = . ), which was significantly different from that of the singapore sample (p < . ; Β sg = −. , p = . ) but not from the south korea sample (p = . ; Β kr = . , p = . ). the conditional indirect effect was significant only in the us sample ( % ci [−. , −. ]) and this effect statistically differed only from the singapore sample (index of moderated mediation = . , ci [. , . ]). in predicting information avoidance, the direct effect of misinformation was significant across all three countries (Β us = . , Β sg = . , Β kr = . , all p < . ), but the effect size significantly differed only between the u.s. and south korea samples (p = . ; united states-singapore comparison, p = . ). the effect of information insufficiency on information avoidance was significant in the u.s. and south korea samples (Β us = −. , Β kr = −. 
, all p < . ) but not in the singapore sample (Β sg = −. , p = . ); thus, only the contrast between the united states and singapore was significant (p = . ). the conditional indirect effect was significant only in the u.s. ( % ci [. , . ]) and south korea samples (ci [. , . ]); there was a significant moderated mediation for the united states-singapore contrast (index = −. , ci [−. , −. ]). as for systematic processing, the direct effect of misinformation was significant only for the u.s. sample (Β us = −. , p < . ; Β sg = −. , p = . ; Β kr = −. , p = . ), and it contrasted significantly with the effect in the singapore sample (p = . ) but not the south korea sample (p = . ). the effect of information insufficiency did not differ by country (united states-south korea comparison, p = . ; united states-singapore comparison, p = . ), and the conditional indirect effects on systematic processing were significant regardless of country ( % cis excluded zero). in predicting heuristic processing, the direct effect of misinformation was significant across all countries (Β us = . , Β sg = . , Β kr = . , all p < . ), with the effect size stronger in the u.s. sample than in the singapore (p = . ) or south korea samples (p = . ). the effect of information insufficiency was significant only in the u.s. sample (Β us = −. , p < . ; Β sg = −. , p = . ; Β kr = −. , p = . ), and its coefficient was significantly different from those of the singapore (p = . ) and south korea samples (p = . ). accordingly, the conditional indirect effect was significant only in the u.s. sample ( % ci [. , . ]), and the moderated mediation was significant for both the united states-singapore and united states-south korea contrasts.
the covid- pandemic represents one of the biggest challenges to global human well-being to date. an epidemic of misinformation makes this formidable challenge even more so by impeding people from getting correct information on how to prevent and curb the spread of the disease. based on a multicountry survey conducted in the early stages of the global pandemic, this study documents that exposure to misinformation demotivates individuals from seeking out and thoughtfully processing information on covid- . our intercountry comparisons suggest, however, that the influence of misinformation exposure may not be equivalent across different populations and cultures. thus, we provide important insights for theory building, as well as for the mitigation of misinformation effects across populations as they all face a common goal: to prevent and curb the spread of disease. this study found that exposure to misinformation was negatively associated with information insufficiency. that is, when people encountered misinformation, they perceived less informational need for adequately preventing and treating covid- . it is noteworthy that exposure to misinformation reduced both the sufficiency threshold and current knowledge, when these variables were analyzed separately. yet the decrease in the sufficiency threshold was greater than that in current knowledge, which resulted in lower information insufficiency. unlike the association of misinformation, exposure to general information was positively associated with information insufficiency. this suggests that the influence of misinformation is distinct from that of general information on covid- . in the early stages of a novel disease pandemic, exposure to general information on the unknown risk at hand may make individuals realize that they need more information, whereas the opposite is true for misinformation.
information insufficiency served as a significant mediator of the relationship between misinformation exposure and information avoidance, systematic processing, and heuristic processing. that is, when individuals perceived that they know enough about covid- as a result of misinformation exposure, they were more likely to avoid information and heuristically process (rather than systematically process) relevant information. this counters the findings from a study that assessed perceived misinformation exposure (hameleers et al., ) . as vraga and bode ( ) addressed, the public may have different perceptions than what is agreed upon among experts (e.g., who) on misinformation, thus having differential implications on information behaviors. on the other hand, when other mediators suggested in the risp model are taken into account, the total indirect effects were not significant on these information outcomes. notably, affective response and informational subjective norms, both of which could be triggered by exposure to misinformation, appear to counterbalance the mediating role of information insufficiency. while information insufficiency was not directly linked to information seeking, misinformation exposure was indirectly associated with information seeking when mediated by affective response and informational subjective norms. across all three countries, exposure to misinformation had a significant direct association with information avoidance and heuristic processing. this relationship was stronger in the south korea sample for information avoidance (vs. united states) and in the u.s. sample for heuristic processing (vs. south korea and singapore). it is also noteworthy that exposure to misinformation had a direct relationship with information seeking only in the singapore sample, and with systematic processing only in the u.s. sample. these results suggest that exposure to misinformation may have different implications on information seeking or processing depending on culture or population. that is, misinformation exposure may have more implications on how americans process information, whereas for south koreans or singaporeans it will be reflected in their information seeking or avoidance. these may reflect cultural differences in how people manage uncertainties or contextual factors such as partisanship and information sources. in particular, the stronger relationship between misinformation and information avoidance in the south korea sample (vs. united states) may reflect the high-uncertainty avoidance culture of south korea (hofstede et al., ) . in light of the direct positive association with both seeking and avoidance in the singapore sample, singaporeans may be uniquely motivated to deal with misinformation having ambivalent responses toward such information. interestingly, only in the u.s. sample, information insufficiency served as a constant predictor across all information outcomes. in the south korea sample, information insufficiency was significantly associated only with information avoidance and systematic processing. in the singapore sample, only systematic processing was associated with information insufficiency. similar to our findings, yang, kahlor, and li ( ) also found a stronger relationship between information insufficiency and information seeking intention in the u.s. sample than the china sample. collectively, western populations may be more likely to be influenced by epistemic motivation than eastern populations, regardless of uncertainty avoidance tendencies. 
instead, cultural differences in perceptions of personal control or ability to seek, process, and retain information may be closely related to the differential effects of information insufficiency. alternatively, given that the risp model was developed in the western context, the model and its measurements may better reflect westerners' seeking and processing tendencies. future work should examine these possibilities in cross-cultural contexts. this study has several limitations to note. first, the sample sizes were not balanced among the three countries examined and these countries were differently affected by covid- at the time when the surveys were conducted. while we controlled for risk characteristics and relevant experience, it was not possible to account for all contextual factors that could confound the results. nonetheless, we believe that it is imperative to document public sentiment and responses during an actual pandemic, and thus this work could have unique value in studying misinformation effects. second, this study cannot confirm causal orderings proposed in the conceptual model due to the cross-sectional nature of the data. future work should consider employing a longitudinal or experimental design to support causal statements about the proposed relationships here. last, while there are multiple types of covid- -related misinformation circulating on the internet, we focused on five false claims relevant to the prevention of covid- that reflect the state of scientific evidence at the time of data collection. it would be beneficial for future studies to investigate additional types of misinformation to better understand misinformation effects. despite these limitations, this study makes important contributions to the extant literature on misinformation and information seeking and processing. first, this is the first study that examined implications of misinformation exposure on information seeking and processing. because information seeking and processing constitute important components in managing uncertainty and risk situations (griffin et al., ) , understanding the mechanisms of how misinformation affects these informational behaviors offers crucial insights into human tendencies under uncertainty. second, this is one of just a few studies to make comparisons across multiple countries that are simultaneously affected by a common risk in studying information seeking and processing. studies in these areas have often focused on one cultural, mostly western, context, and intercultural comparisons have been scarce. comparing the relative predictive utility of a theoretical framework across different cultural contexts and populations is important for theory development. on the practical front, critical assessment of information as well as active seeking of quality information are crucial for mitigating the false beliefs that could be formed based on misinformation. given that misinformation demotivates individuals from these important information activities during a disease pandemic, it is necessary to minimize exposure to such incorrect information and to deliver evidence-based health advisories. to this end, risk communicators and government authorities should continuously monitor and clarify emerging misinformation on various online platforms to prevent the public's misperception and engagement in fake remedies or scientifically unproven measures. 
in light of the counterbalancing role of informational subjective norm we found, it would be beneficial to emphasize the social expectation on keeping up with health advisories to minimize the adverse effects of misinformation exposure. in dealing with global pandemics, like covid- , it would be essential for international and local health agencies to take into account differences in culture in communicating risk. for example, compared to lowuncertainty avoidance cultures (e.g., singapore, sweden), high-uncertainty avoidance cultures (e.g., south korea, japan, germany) may be less tolerant about information uncertainty (misinformation) as well as changes in health advisories, which are inevitable in most pandemic situations. in high-uncertainty avoidance culture, clear and consistent risk communications as well as implementation of formal governing structures (e.g., laws) could be particularly beneficial for mitigating uncertainty. the author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. the author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work was supported by the ministry of education of the republic of korea, the national research foundation of korea (nrf- s a a ), and singapore ministry of education academic research fund tier . hye kyung kim https://orcid.org/ - - - x notes . during data collection, local cases of covid- increased from , to , in the united states (out of . million total population), from to , in south korea (out of . million), and from to in singapore (out of . million). . in keeping with cohen and cohen ( ) , we used perceived knowledge to predict sufficiency threshold, and noted the unstandardized regression slope (Β). then, information insufficiency was computed by subtracting perceived knowledge * Β from sufficiency threshold. . in march , who removed avoiding pets and gargling with mouthwash from myth busters in response to the emerging evidence on these measures. there is no evidence that pets have infected humans with covid- , but an infected dog was found in hong kong. as well, there is no proof that gargling prevents respiratory infections caused by covid- , but health experts note that there is little downside in gargling. the theory of planned behavior evidence from internet search data shows information-seeking responses to news of local covid- cases in related news, that was wrong: the correction of misinformation through related stories functionality in social media risk perceptions and their relation to risk behavior the causes and consequences of covid- misperceptions: understanding the role of news and social media applied multiple regression/correlation analysis for the behavioral sciences appplied multiple regression/correlation analysis for the behavioral sciences fake news": incorrect, but hard to correct. 
the role of cognitive ability on the impact of false information on social impressions testing the difference between reliability coefficients alpha and omega the effect of falsely balanced reporting of the autismvaccine controversy on vaccine safety perceptions and behavioral intentions the psychology of attitudes resource use in women completing treatment for breast cancer the affect heuristic in judgment of risks and benefits the novel coronavirus (covid- ) outbreak: amplification of public health consequences by media exposure knowledge and practices regarding safe household cleaning and disinfection for covid- prevention: united states proposed model of the relationship of risk information seeking and processing to the development of preventive behaviors linking the heuristicsystematic model and depth of processing after the flood: anger, attribution, and the seeking of information less than you think: prevalence and predictors of fake news dissemination on facebook feeling "disinformed" lowers compliance with covid- guidelines: evidence from the us, uk, netherlands and germany introduction to mediation moderation and conditional process analysis: a regression-based approach assessment of health information about covid- prevention on the internet: infodemiological study national cultures in four dimensions: a research-based theory of cultural differences among nations. international studies of management & organization cultures and organizations: software of the mind cultures and organizations: software of the mind establishing an information avoidance scale partisan perceptual bias and the information environment toward a publics-driven, emotion-based conceptualization in crisis communication: unearthing dominant emotions in multi-staged testing of the integrated crisis mapping (icm) model ideological asymmetries in conformity, desire for shared reality, and the spread of misinformation. current opinion in psychology an augmented risk information seeking model: the case of global warming prism: a planned risk information seeking model ethics information seeking and sharing among scientists: the case of nanotechnology studying heuristic-systematic processing of risk communication health information on the internet and people living with hiv/aids: information evaluation and coping styles a postmodern pandora's box: anti-vaccination misinformation on the internet effects of quality and quantity of information on decision effectiveness the role of emotional response during an h n influenza pandemic on a college campus problem solving and communicative action: a situational theory of problem solving what makes people hot? applying the situational theory of problem solving to hot-issue publics psychologic predictors of cancer information avoidance among older adults: the role of cancer fear and fatalism source monitoring and suggestibility to misinformation: adult age-related differences when corrections fail: the persistence of political misperceptions coronavirus: saltwater spray infects church-goers in south korea. 
south china morning post from "information" to "knowing": exploring the role of social media in contemporary news consumption false alarm covid- infodemic: more retweets for science-based information on coronavirus than for false information perceptions of traditional information sources and use of the world wide web to seek health information: findings from the health information national trends survey measuring differentials in communication research: issues with multicollinearity in three methods information overload: causes and consequences information seeking upon exposure to risk messages: predictors, outcomes, and mediating roles of health information seeking spreading the (fake) news: exploring health messages on social media and the implications for health professionals using a case study social networks and popular understanding of science and health: sharing disparities misinformation among mass audiences as a focus for inquiry exposure to health (mis)information: lagged effects on young adults' health behaviors and potential pathways audiences' acts of authentication in the age of fake news: a conceptual framework belief echoes: the persistent effects of corrected misinformation the paradox of participation versus misinformation: social media, political engagement, and the spread of misinformation defining misinformation and understanding its bounded nature: using expertise and evidence for describing misinformation systematic literature review on the spread of health-related misinformation on social media fake news. it's complicated perceived probability, perceived severity, and health-protective behavior coronavirus disease (covid- ) advice for the public: myth busters altruism during ebola: risk perception, issue salience, cultural cognition, and information processing risk information seeking and processing model: a meta-analysis fearful conservatives, angry liberals: information processing related to the presidential election and climate change what, me worry? the role of affect in information seeking and avoidance i share, therefore i am: a us−china comparison of college students' motivations to share information about climate change a united states-china comparison of risk information-seeking intentions from information processing to behavioral intentions: exploring cancer patients' motivations for clinical trial enrollment hye kyung kim (phd, cornell university) is an assistant professor in the wee kim wee school of communication and information at nanyang technological university in singapore. her research applies communication and social psychological theories to understand the processing and effects of communicative interactions in health and risk. her research ultimately seeks to develop theory-driven communication strategies that improve persuasion.jisoo ahn (phd, the university of texas at austin) is a research professor at health and new media research institute at hallym university. her research studies health and environmental and disaster communication and seeks effective ways to deliver information to the public by using interactive media technologies. austin. her research looks at communication in the context of sustainability and the environment. 
she focuses on the ways message components in environmental communication campaigns influence environmental attitudes, beliefs, and behaviors. lee ann kahlor (phd, university of wisconsin-madison) is an associate professor at the stan richards school of advertising & public relations at the university of texas at austin. her primary research interest is in health and environmental risk communication with an emphasis on information seeking, avoiding, and sharing.
key: cord- - uzl jpu authors: li, peisen; wang, guoyin; hu, jun; li, yun title: multi-granularity complex network representation learning date: - - journal: rough sets doi: . / - - - - _ sha: doc_id: cord_uid: uzl jpu network representation learning aims to learn low-dimensional vectors for the nodes in a network while maintaining the inherent properties of the original information. existing algorithms focus on the single coarse-grained topology of nodes or text information alone, which cannot describe complex information networks. however, node structure and attributes are interdependent and indecomposable. therefore, it is essential to learn the representation of a node based on both the topological structure and the node's additional attributes. in this paper, we propose a multi-granularity complex network representation learning model (mnrl), which integrates topological structure and additional information at the same time and learns the fused information into the same granularity semantic space, refining the complex network from fine to coarse. experiments show that our method can not only capture indecomposable multi-granularity information, but also retain various potential similarities of both topology and node attributes. it has achieved effective results in the downstream tasks of node classification and link prediction on real-world datasets. a complex network is the description of the relationships between entities and the carrier of various information in the real world, and it has become an indispensable form of existence, as in medical systems, judicial networks, social networks, and financial networks. mining knowledge in networks has drawn continuous attention in both academia and industry. how to accurately analyze and make decisions on these problems and tasks from different information networks is a vital research problem. for example, in the field of sociology, a large number of interactive social platforms such as weibo, wechat, facebook, and twitter create many social networks comprising relationships between users and a sharply increasing amount of interactive review text. studies have shown that these large, sparse new social networks at different levels of cognition present the same small-world nature and community structure as the real world. data analysis based on these interactive information networks [ ], such as the prediction of criminal associations and sensitive groups, can then be applied directly to the real world.
in recent years, various advanced network representation learning methods based on topological structure have been proposed, such as deepwalk [ ] , node vec [ ] , line [ ] , which has become a classical algorithm for representation learning of complex networks, solves the problem of retaining the local topological structure. a series of deep learning-based network representation methods were then proposed to further solve the problems of global topological structure preservation and high-order nonlinearity of data, and increased efficiency. e.g., sdne [ ] , gcn [ ] and dane [ ] . however, the existing researches has focused on coarser levels of granularity, that is, a single topological structure, without comprehensive consideration of various granular information such as behaviors, attributes, and features. it is not interpretable, which makes many decision-making systems unusable. in addition, the structure of the entity itself and its attributes or behavioral characteristics in a network are indecomposable [ ] . therefore, analyzing a single granularity of information alone will lose a lot of potential information. for example, in a job-related crime relationship network is show in fig. , the anti-reconnaissance of criminal suspects leads to a sparse network than common social networks. the undiscovered edge does not really mean two nodes are not related like p and p or (p and p ), but in case detection, additional information of the suspect needs to be considered. the two without an explicit relationship were involved in the same criminal activity at a certain place (l ), they may have some potential connection. the suspect p and p are related by the attribute a , the topology without attribute cannot recognize why the relation between them is generated. so these location attributes and activity information are inherently indecomposable and interdependence with the suspect, making the two nodes recognize at a finer granularity based on the additional information and relationship structure that the low-dimensional representation vectors learned have certain similarities. we can directly predict the hidden relationship between the two suspects based on these potential similarities. therefore, it is necessary to consider the network topology and additional information of nodes. the cognitive learning mode of information network is exactly in line with the multi-granularity thinking mechanism of human intelligence problem solving, data is taken as knowledge expressed in the lowest granularity level of a multiple granularity space, while knowledge as the abstraction of data in coarse granularity levels [ ] . multi-granularity cognitive computing fuses data at different granularity levels to acquire knowledge [ ] . similarly, network representation learning can represent data into lower-dimensional granularity levels and preserve underlying properties and knowledge. to summarize, complex network representation learning faces the following challenges: information complementarity: the node topology and attributes are essentially two different types of granular information, and the integration of these granular information to enrich the semantic information of the network is a new perspective. but how to deal with the complementarity of its multiple levels and represent it in the same space is an arduous task. in complex networks, the similarity between entities depends not only on the topology structure, but also on the attribute information attached to the nodes. 
they are indecomposable and highly non-linear, so how to represent this potential proximity is still worth studying. in order to address the above challenges, this paper proposes a multi-granularity complex network representation learning method (mnrl) based on the idea of multi-granularity cognitive computing. network representation learning can be traced back to traditional graph embedding, which is regarded as a process of reducing data from high dimensions to low dimensions. the main methods include principal component analysis (pca) [ ] and multidimensional scaling (mds) [ ]. all these methods can be understood as using an n × k matrix to represent the original n × m matrix, where k ≪ m. later, some researchers proposed isomap and lle to maintain the overall structure of the nonlinear manifold [ ]. in general, these methods have shown good performance on small networks. however, their time complexity is extremely high, which makes them unable to work on large-scale networks. another popular class of dimensionality reduction techniques uses the spectral characteristics (e.g., eigenvectors) of a matrix derived from the graph to embed the nodes. laplacian eigenmaps [ ] obtain the low-dimensional vector representation of each node from the eigenvectors of such a matrix associated with its k smallest non-trivial eigenvalues. more recently, deepwalk was inspired by word2vec [ ]: a node is selected as the starting point and a sequence of nodes is obtained by random walk; the obtained sequence is then regarded as a sentence and input to the word2vec model to learn the low-dimensional representation vectors. deepwalk can obtain the local context information of the nodes in the graph through random walks, so the learned representation vector reflects the local structure of the node in the network [ ]. the more neighboring points two nodes share in the network, the shorter the distance between the corresponding two vectors. node2vec uses biased random walks to make a choice between breadth-first (bfs) and depth-first (dfs) graph search, resulting in higher-quality and more informative node representations than deepwalk, and it is therefore more widely used in network representation learning. line [ ] proposes first-order and second-order approximations for network representation learning from a new perspective. harp [ ] obtains a vector representation of the original network through graph coarsening aggregation and node hierarchy propagation. recently, the graph convolutional network (gcn) [ ] has significantly improved the performance of network topological structure analysis; it aggregates each node and its neighbors in the network through a convolutional layer and outputs the weighted average of the aggregation results in place of the original node's representation. through the continuous stacking of convolutional layers, nodes can aggregate high-order neighbor information well. however, when too many convolutional layers are stacked, the newly learned features become over-smoothed, which damages the network representation performance. multi-gs [ ] combines the concept of multi-granularity cognitive computing, divides the network structure according to people's cognitive habits, and then uses gcn to convolve the different granularity layers to obtain low-dimensional feature vector representations. sdne [ ] directly inputs the network adjacency matrix to an autoencoder [ ] to solve the problem of preserving highly nonlinear first-order and second-order similarity.
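since the review above centers on the random-walk-plus-skip-gram idea behind deepwalk and node2vec, a minimal illustrative sketch may help; it assumes networkx and gensim are available, uses the karate-club graph as a stand-in network, and omits node2vec's biased transition probabilities for brevity, so the parameters and graph are placeholders rather than the original implementations.

```python
# minimal deepwalk-style sketch: truncated random walks fed to a skip-gram model
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, num_walks=10, walk_length=40, seed=42):
    rng = random.Random(seed)
    walks, nodes = [], list(graph.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append([str(n) for n in walk])  # skip-gram expects token sequences
    return walks

G = nx.karate_club_graph()                         # stand-in for a real network
model = Word2Vec(sentences=random_walks(G), vector_size=64, window=5,
                 sg=1, negative=5, min_count=1, epochs=5)
vec = model.wv["0"]                                # 64-dimensional vector for node 0
```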
the above network representation learning methods use only network structure information to learn low-dimensional node vectors. but nodes and edges in real-world networks are often associated with additional information, and these features are called attributes. for example, in social networking sites such as weibo, the text content posted by users (nodes) is available. therefore, node representations in the network also need to learn from the rich content of node attributes and edge attributes. tadw studies the case where nodes are associated with text features. the authors of tadw first proved that deepwalk essentially decomposes a transition probability matrix into two low-dimensional matrices. inspired by this result, tadw incorporates the text feature matrix and node features into a matrix decomposition process to obtain low-dimensional representations [ ]. cene treats text content as a special type of node and uses node-node structure and node-content associations for node representation [ ]. more recently, dane [ ] and can [ ] use deep learning methods [ ] to preserve potentially non-linear node topology and node attribute information. these two kinds of information provide different views for each node, but their heterogeneity is not considered. anrl optimizes the network structure and attribute information separately, and uses the skip-gram model to skillfully handle the heterogeneity of the two different types of information [ ]. nevertheless, the consistent and complementary information in the topology and attributes is lost and the sensitivity to noise is increased, resulting in lower robustness. to process different types of information, wang put forward the concepts of "from coarse to fine cognition" and "fine to coarse" fusion learning in the study of multi-granularity cognitive machine learning [ ]. people usually perform cognition at a coarser level first; for example, when we meet a person, we first recognize who the person is from the face, and then refine the features to see the freckles on the face. computers, in contrast, obtain semantic information that humans understand by fusing fine-grained data up to coarse-grained levels. refining the granularity of complex networks and the integration between different granular layers is still an area worthy of deeper research [ , ]. inspired by this, we divide complex networks into different levels of granularity: single nodes and attribute data form the micro structure, role similarity and community similarity form the meso structure, and global network characteristics form the macro structure. the larger the granularity, the wider the range of data covered; the smaller the granularity, the narrower the range of data covered. our model learns the semantic information that humans can understand at the above-mentioned levels from the finest-grained attribute information and topological structure, and finally saves it into low-dimensional vectors.
definition (semantic similarity). given a network g = (v, e, a), semantic similarity indicates that two nodes have similar attributes and neighbor structure, and that the low-dimensional vectors obtained by network representation learning maintain the same similarity as in the original network. e.g., if v_i ∼ v_j, then through the mapping function f_g we obtain the low-dimensional vectors y_i = f_g(v_i) and y_j = f_g(v_j), and y_i and y_j are still similar, y_i ∼ y_j. complex networks are composed of node and attribute granules (elementary granules), which can no longer be decomposed. learning these granules yields different levels of semantic information, including the topological structure (micro), role similarity (meso) and the global structure (macro). the complete low-dimensional representation of a complex network is the aggregation of these granular layers of information. in order to solve the problems mentioned above, inspired by multi-granularity cognitive computing, we propose a multi-granularity network representation learning method (mnrl), which refines complex network representation learning from the topology level down to the nodes' attribute characteristics and various attachments. the model not only fuses finer granular information but also preserves the node topology, which enriches the semantic information of the relational network and addresses the indecomposability and interdependence of the information. the algorithm framework is shown in fig. . firstly, the topology and the additional information are fused through the function h; then the variational encoder is used to learn the network representation from fine to coarse. the output of the embedding layer is a set of low-dimensional vectors that combine the attribute information and the network topology. to better characterize multi-granularity complex networks and to handle nodes with potential associations that cannot be processed through the relationship structure alone, we refine the granularity to the additional attributes and design an information fusion function h, in which n(v_i) is the set of neighbors of node v_i in the network and a_i is the attribute vector associated with node v_i; w_ij > 0 for weighted networks and w_ij = 1 for unweighted networks, and d(v_j) is the degree of node v_j. the fused vector x_i contains multi-granularity information: both the neighbors' attribute information and that of the node itself. to capture the complementarity of the different granularity hierarchies and to avoid the effects of various kinds of noise, our model in fig. is a variational auto-encoder, a powerful unsupervised deep model for feature learning that has been widely used in multi-granularity cognitive computing applications. in multi-granularity complex networks, auto-encoders fuse data of different granularities into a unified granularity space from fine to coarse. the variational auto-encoder contains three parts, namely the input layer, the hidden layers and the output layer. here, k is the number of layers of the encoder and decoder, σ(·) represents a possible activation function such as relu, sigmoid or tanh, and w^k and b^k are the transformation matrix and bias vector of the k-th layer, respectively. y_i^k is the unified vector representation learned by the model; it obeys a distribution perturbed by a noise term ε, which reduces the influence of noise, where ε ∼ n(0, 1) is the standard normal distribution in this paper.
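the fusion step h and the variational encoding described above can be sketched as follows. the exact fusion formula is not reproduced in this excerpt, so the degree-normalized neighbor aggregation below is an assumed form used only for illustration; the reparameterization follows the standard ε ∼ n(0, 1) trick.

```python
# hedged sketch of an attribute-fusion step h and a reparameterized encoding.
# NOTE: the exact form of h in mnrl is not given here; degree-normalized neighbor
# averaging is an assumption used purely for illustration.
import numpy as np

def fuse_attributes(adj, attr):
    """x_i = a_i + sum over neighbors j of (w_ij / d(v_j)) * a_j  (assumed form)."""
    deg = adj.sum(axis=0)                       # degree d(v_j) of each node
    deg[deg == 0] = 1.0                         # guard against isolated nodes
    norm = adj / deg                            # entry (i, j) becomes w_ij / d(v_j)
    return attr + norm @ attr                   # fuse own attributes with neighbor attributes

def reparameterize(mu, log_var, rng=np.random.default_rng(0)):
    """sample y = mu + sigma * eps with eps ~ N(0, 1), as in a variational encoder."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)   # toy adjacency matrix
attr = np.random.rand(3, 4)                                       # toy attribute matrix
x = fuse_attributes(adj, attr)                                     # fused multi-granularity input x_i
```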
in order to make the learned representation as similar as possible to the given distribution, it is necessary to minimize a kullback-leibler loss; to reduce the potential information loss of the original network, our goal is also to minimize an auto-encoder reconstruction loss, where x̂_i is the reconstruction output of the decoder and x_i incorporates the prior knowledge into the model. to formulate the homogeneous network structure information, the skip-gram model has been widely adopted in recent works, and in the field of heterogeneous network research skip-grams suitable for processing different types of nodes have also been proposed [ ]. in our model, the context of a node is its low-dimensional potential information. given the node v_i and the associated reconstruction information y_i, we maximize an objective over randomly walked contexts c ∈ c, where b is the size of the generation window and the conditional probability p(v_{i+j} | y_i) is defined as a softmax function; in this formula, v_i is the node context representation of node v_i and y_i is the result produced by the auto-encoder. directly optimizing this objective is computationally expensive, as it requires a summation over the entire set of nodes when computing the conditional probability p(v_{i+j} | y_i). we therefore adopt the negative sampling approach proposed in metapath2vec++, which samples multiple negative examples according to a noise distribution, where σ(x) = 1/(1 + exp(−x)) is the sigmoid function and s is the number of negative samples. we set p_n(v) ∝ d_v as suggested in word2vec, where d_v is the degree of node v [ , ]. through the above methods, the node's attribute information and the heterogeneity of the node's global structure are processed, and the potential semantic similarity is kept in a unified granularity space. multi-granularity complex network representation learning, through the fusion of multiple kinds of granularity information, learning the basic granules through an auto-encoder and representing the different levels of granularity in a unified low-dimensional vector, solves the problem of potential semantic similarity between nodes without direct edges. the model simultaneously optimizes the objective function of each module to make the final result robust and effective: l_re is the auto-encoder reconstruction loss, l_kl is the kullback-leibler loss stated above, and l_hs is the loss function of the skip-gram model; α, β, ψ, γ are the hyperparameters that balance each module. l_vae is the parameter optimization term, where w^k, ŵ^k are the weight matrices of the encoder and decoder in the k-th layer, respectively, and b^k, b̂^k are the corresponding bias vectors. the complete objective function combines these terms. mnrl preserves multiple types of granular information, including node attributes, local network structure and global network structure, in a unified framework. the model addresses the high non-linearity and complementarity of the various granularity information, and retains the underlying semantics of the topology and the additional information at the same time. finally, we optimize the objective function l through stochastic gradient descent. to ensure the robustness and validity of the results, we iteratively optimize all components at the same time until the model converges.
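before the algorithm listing, the three loss terms just described (the kullback-leibler term, the auto-encoder reconstruction term and the negative-sampling skip-gram term) can be sketched numerically as follows; the tensor shapes, the number of negative samples and the mapping of the hyperparameters onto terms are assumptions made for illustration only.

```python
# hedged numpy sketch of the mnrl loss terms and their weighted combination;
# shapes, negative-sample count and hyperparameter roles are illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def kl_loss(mu, log_var):
    """l_kl: kl divergence between n(mu, sigma^2) and the standard normal n(0, 1)."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

def reconstruction_loss(x, x_hat):
    """l_re: squared error between the fused input x_i and the decoder output x_hat_i."""
    return np.sum((x - x_hat) ** 2)

def skipgram_neg_sampling_loss(y_i, ctx_vec, neg_vecs):
    """l_hs: negative-sampling objective for one (node, context) pair with s negatives."""
    pos = np.log(sigmoid(ctx_vec @ y_i))
    neg = np.sum(np.log(sigmoid(-neg_vecs @ y_i)))
    return -(pos + neg)

# weighted total, loosely mirroring l = alpha*l_hs + beta*l_re + psi*l_kl (+ gamma*l_vae);
# which hyperparameter weights which term is an assumption, not taken from the paper
alpha, beta, psi = 1.0, 0.5, 0.1
mu, log_var = np.zeros(16), np.zeros(16)
x, x_hat = np.random.rand(32), np.random.rand(32)
y_i, ctx, negs = np.random.rand(16), np.random.rand(16), np.random.rand(5, 16)
total = (alpha * skipgram_neg_sampling_loss(y_i, ctx, negs)
         + beta * reconstruction_loss(x, x_hat)
         + psi * kl_loss(mu, log_var))
```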
the learning algorithm is summarized in the following listing.
algorithm (mnrl).
input: graph g = (v, e, a), window size b, number of walks p, walk length u, hyperparameters α, β, ψ, γ, embedding size d.
output: node representations y^k ∈ ℝ^d.
1: generate node contexts by starting p random walks of length u at each node.
2: fuse the multi-granularity information of each node with the function h(·).
3: initialize all parameters.
4: while not converged do
5: sample a mini-batch of nodes with their contexts.
6: compute the gradient ∇l.
7: update the auto-encoder and skip-gram module parameters.
8: end while
9: save the representations y = y^k.
datasets: in our experiments, we employ four benchmark datasets: facebook, cora, citeseer and pubmed. these datasets contain edge relations and various attribute information, which can verify that the social relations of nodes and their individual attributes have a strong dependence and indecomposability, and jointly determine the properties of entities in the social environment. cora, citeseer and pubmed are paper citation networks consisting of bibliographic publication data, in which an edge indicates that one paper cites or is cited by another. the publications are classified into one of six classes in citeseer (agents, ai, db, ir, ml, hci) and into one of three classes (i.e., "diabetes mellitus experimental", "diabetes mellitus type 1", "diabetes mellitus type 2") in pubmed. the cora dataset consists of machine learning papers classified into seven classes. the facebook dataset is a typical social network: nodes represent users and edges represent friendship relations. we summarize the statistics of these benchmark datasets in table . to evaluate the performance of our proposed mnrl, we compare it with baseline methods, which can be divided into two groups. the former category of baselines, which leverage network structure information only and ignore node attributes, contains deepwalk, node2vec, grarep [ ], line and sdne. the other methods try to preserve both node attribute and network structure proximity and are competitive competitors; we consider tadw, gae, vgae and dane as the compared algorithms. for all baselines, we used the implementations released by the original authors, and their parameters are tuned to be optimal. for deepwalk and node2vec, we set the window size as , the walk length as and the number of walks as . for grarep, the maximum transition step is set to . for line, we concatenate the first-order and second-order results together as the final embedding result. for the remaining baseline methods, the parameters are set following the original papers. finally, the dimension of the node representation is set to . for mnrl, the number of layers and the dimensions for each dataset are shown in table . table . detailed network layer structure information for citeseer, pubmed, cora and facebook. to show the performance of our proposed mnrl, we conduct node classification on the learned node representations. specifically, we employ svm as the classifier. to make a comprehensive evaluation, we randomly select %, % and % of the nodes as the training set and the rest as the testing set, respectively. with these randomly chosen training sets, we use five-fold cross-validation to train the classifier and then evaluate it on the testing sets. to measure the classification results, we employ micro-f1 (mi-f1) and macro-f1 (ma-f1) as metrics. the classification results are shown in the corresponding tables.
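the evaluation protocol just described (an svm trained on the learned vectors, five-fold cross-validation and micro-/macro-f1) can be sketched as follows; random arrays stand in for the real embeddings and labels, and the split ratio is an illustrative assumption.

```python
# hedged sketch of the node-classification evaluation: svm on node vectors, micro-/macro-f1.
# random data stands in for the real embeddings and labels; the split ratio is assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
embeddings = rng.random((300, 128))              # stand-in for learned node representations
labels = rng.integers(0, 7, size=300)            # stand-in for class labels (e.g. 7 cora classes)

x_train, x_test, y_train, y_test = train_test_split(embeddings, labels, train_size=0.3, random_state=0)
clf = SVC(kernel="linear")
# five-fold cross-validation on the training portion, as in the evaluation protocol above
cv_scores = cross_val_score(clf, x_train, y_train, cv=5, scoring="f1_micro")
clf.fit(x_train, y_train)
pred = clf.predict(x_test)
print("micro-f1:", f1_score(y_test, pred, average="micro"))
print("macro-f1:", f1_score(y_test, pred, average="macro"))
```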
from these tables, we can find that our proposed mnrl achieves a significant improvement compared with plain network embedding approaches, and beats the other attributed network embedding approaches in most situations. the experimental results show that the representations produced by each comparison algorithm perform well in downstream node classification. in general, a model that considers both node attribute information and node structure information performs better than one based on structure alone. we can also find that mnrl achieves a significant improvement compared with single-granularity network embedding approaches. for joint representation, our model performs more effectively than most algorithms of a similar type, especially in the case of sparse data, because our model's input is the fused information of multiple nodes with extra information. when compared with dane, our experiments did not show a significant improvement, but they achieved the expected results. dane uses two auto-encoders to learn and express the network structure and the attribute information separately; since the increase in parameters enlarges the search for an optimal selection during learning, its performance improves as the training data increases, but the demand for computing resources also increases and the interpretability of the algorithm is weak. mnrl, by contrast, uses a single variational auto-encoder to learn the structure and attribute information at the same time, so the interdependence of the information is preserved, which handles heterogeneous information well and reduces the impact of noise. in this subsection, we evaluate the ability of the node representations to reconstruct the network structure via link prediction, which aims at predicting whether an edge exists between two nodes and is a typical task in network analysis. following what other works do, to evaluate the performance of our model we randomly hold out % of the existing links as positive instances and sample an equal number of non-existing links. then, we use the residual network to train the embedding models. specifically, we rank both positive and negative instances according to the cosine similarity function. to judge the ranking quality, we employ the auc to evaluate the ranking list; a higher value indicates better performance. we perform the link prediction task on the cora dataset and the results are shown in fig. . compared with traditional algorithms that learn representations from single-granularity structure information alone, the algorithms that use both structure and attribute information are more effective. tadw performs well, but a method based on matrix factorization has the disadvantage of high complexity in large networks. gae and vgae perform better in this experiment and are suitable for large networks. mnrl refines the input and retains potential semantic information; link prediction relies on additional information, so mnrl performs better than the other algorithms in this experiment.
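the link-prediction evaluation just described can be sketched as follows: held-out positive pairs and sampled negative pairs are ranked by the cosine similarity of their embeddings and scored with auc. random stand-ins replace the real embeddings and edge lists, so this is an illustration of the protocol rather than a reproduction of the experiment.

```python
# hedged sketch of the link-prediction evaluation: rank positive (held-out) and negative
# (sampled non-existing) node pairs by cosine similarity and compute auc over the ranking.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
emb = rng.random((100, 64))                                   # stand-in node embeddings

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

pos_pairs = [(rng.integers(100), rng.integers(100)) for _ in range(50)]   # held-out "edges"
neg_pairs = [(rng.integers(100), rng.integers(100)) for _ in range(50)]   # sampled "non-edges"

scores = [cosine(emb[i], emb[j]) for i, j in pos_pairs + neg_pairs]
truth = [1] * len(pos_pairs) + [0] * len(neg_pairs)
print("auc:", roc_auc_score(truth, scores))
```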
in this paper, we propose a multi-granularity complex network representation learning model (mnrl), which integrates the topology structure and the additional information and projects this fused information, learned from fine to coarse, into the same granularity semantic space to refine the complex network. its effectiveness has been verified by extensive experiments, which show that the relations of nodes and their additional attributes are indecomposable and complementary, and together jointly determine the properties of entities in the network. in practice, it will have good application prospects in large information networks. although the model saves a lot of computation cost and represents complex networks of various granularities well, it needs different parameter settings in different application scenarios, which is cumbersome and needs to be optimized in the future. multi-granularity complex network representation learning also needs to consider dynamic networks and adapt to changes in network nodes, so as to realize real-time information network analysis.
references:
social structure and network analysis
network representation learning: a survey
virtual network embedding: a survey
the link-prediction problem for social networks
community discovery using nonnegative matrix factorization
node classification in social networks
recommender systems
deepwalk: online learning of social representations
node2vec: scalable feature learning for networks
line: large-scale information network embedding
deep learning
deep attributed network embedding
structural deep network embedding
semi-supervised classification with graph convolutional networks
dgcc: data-driven granular cognitive computing
granular computing
data mining, rough sets and granular computing
structural deep embedding for hypernetworks
principal component analysis
the isomap algorithm and topological stability
laplacian eigenmaps for dimensionality reduction and data representation
network representation learning based on multi-granularity structure
word2vec explained: deriving mikolov et al.'s negative-sampling word-embedding method
harp: hierarchical representation learning for networks
sparse autoencoder
network representation learning with rich text information
a general framework for content-enhanced network representation learning
anrl: attributed network representation learning via deep neural networks
granular computing with multiple granular layers for brain big data processing
an approach for attribute reduction and rule generation based on rough set theory
metapath2vec: scalable representation learning for heterogeneous networks
grarep: learning graph representations with global structural information
co-embedding attributed networks
key: cord- -gi mug p authors: montesi, michela title: understanding fake news during the covid- health crisis from the perspective of information behaviour: the case of spain date: - - journal: nan doi: . / sha: doc_id: cord_uid: gi mug p the health crisis brought about by covid- has generated a heightened need for information as a response to a situation of uncertainty and high emotional load, in which fake news and other informative content have grown dramatically. the aim of this work is to delve into the understanding of fake news from the perspective of information behaviour by analysing a sample of fake news items that were spread in spain during the covid- health crisis. a sample of fake news items was collected from the maldita.es website and analysed according to the criteria of cognitive and affective authority, interactivity, themes and potential danger. the results point to a practical absence of indicators of cognitive authority ( . %), while the affective authority of these news items is built through mechanisms of discrediting people, ideas or movements ( . %) and, secondarily, the use of offensive or coarse language ( . %) and comparison or reference to additional information sources ( . %). interactivity features allow commenting in . % of the cases. the dominant theme is society ( . %), followed by politics ( .
%) and science ( . %). finally, fake news, for the most part, does not seem to pose any danger to the health or safety of people – the harm it causes is intangible and moral. the author concludes by highlighting the importance of a culture of civic values to combat fake news. the covid- pandemic of is leaving a profound wound in our society, and many think that our lives will never be the same again, with implications at all levels, including for library and information services. the avalanche of fake news and hoaxes that has accompanied the health crisis since its very beginning has converted an information issue into a topic of public opinion and debate, with pressure on the library community to give a satisfactory answer to the problem of how to recognize truthful and useful information (xie et al., ) . explicit actions against disinformation have been taken since the electoral campaign for the us presidency, often in the form of guidelines and recommendations, whilst the library community has been debating about possible solutions to a problem that, sullivan ( ) argues, we do not yet fully understand. so far, libraries have responded by reaffirming traditional library values and, as an immediate solution to what has been called an 'infodemic' (marquina, ) , the international federation of library associations and institutions ( ) updated its eight-step 'how to spot fake news' checklist on march , recommending additionally the exercise of critical thinking as an essential competence in media literacy. the novelty of this new avalanche of fake news goes hand in hand with the novelty of the health crisis caused by the covid- pandemic, which has converted fake news and information into a matter of social concern. many social actors have contributed to a heated debate that has paralleled the health crisis, including the spanish national police ( ), which, on march , announced on its website the publication of a 'guide against fake news'. this guide, in the style of the international federation of library associations and institutions' directions, recommends, among other strategies to check the veracity of information, relaunching google searches, comparing the information found, being suspicious, verifying the author and to avoid sharing. apart from the police, since march many spanish professionals from different sectors have intervened to address the issue and encourage the population to break the chain of dissemination of clearly adulterated news. in his blog, the psychologist soler sarrió ( ) recommends googling possible fake news and applying common sense. according to soler sarrió ( ) , fake news aims to provoke fear and panic among the population, while borondo ( ) , from the newspaper el correo, stresses that it cannot only cause internet saturation, but could even put lives at risk. from the university of barcelona, vincent ( ) highlights the manipulative purposes of fake news, which seeks to scare, confuse and fuel divisions among the population, encouraging distrust of information from the government and other official sources. emotional manipulation has been highlighted by newtral ( ) as well, a journalistic website that is devoted to selecting and filtering information. however, the real social implications of the problem emerged from a survey by the centro de investigaciones sociológicas published on april. 
a sample of spanish citizens was asked whether fake news should be prohibited and only official sources on the pandemic be permitted, and % of the respondents agreed that 'it would be necessary to limit and control the information, establishing only one official source of information'. this caused protests against an alleged attack on the freedom of the press (marcos, ) , though it also threatens the library principle of unfettered access to information (sullivan, ) . although it is questionable whether fake news alone can generate division and social unrest, since, according to some sources, they would rather thrive on it and proliferate it in times of difficulties (tandoc et al., ) , it clearly introduces manipulative intentions in the consumption of information. its purposes are both financial, seeking to increase the number of visits and clicks and consequently advertising revenues, and ideological, usually discrediting certain ideas and people in favour of others (bakir and mcstay, ; tandoc, ) . lazer et al. ( lazer et al. ( : define 'fake news' as 'fabricated information that mimics news media content in form but not in organizational process or intent', which differs from both 'misinformation' -that is, false or misleading information -and 'disinformation' -that is, false information that is disseminated intentionally to deceive people. bernal-triviño and clares-gavilán ( ), citing the european commission, indicate that it would be more appropriate to speak of 'disinformation' because the term 'fake news' has been used to discredit the critical stance of certain information media that published truthful information. according to bakir and mcstay ( ) , disinformation consists in deliberately creating and disseminating false information, while misinformation is the practice of those who, without being aware, disseminate false information -a phenomenon that has been little studied, the authors explain. according to tandoc ( ) , fake news can be considered a type of disinformation, whose main features include falsity, the intention to deceive and the attempt to look like real news. rubin ( ) reiterates that the difference between misinformation and disinformation is intentionality, with both behaviours being supported by the highly technological affordances of our society. social networks and online communication, together with the financial reasons mentioned above, are the basic foundations for the dissemination of false news (blanco-herrero and arcila-calderón, ). according to rubin ( ) , who applies an epidemiology-based model to the spread of fake news, social networks act as a means of transmission of the pathogen -the false news -whereas information-overloaded readers, with little time and without the appropriate digital skills, are the carriers. the warnings of the world health organization ( : ) go along the same lines, and this institution has been speaking of epidemics of rumours or an 'infodemic' in reference to 'the rapid spread of information of all kinds, including rumours, gossip and unreliable information', as a new threat to public health. among the other motivations for spreading disinformation, bakir and mcstay ( ) underscore the affective dimensions of fake news, which rouses strong emotions, such as outrage, and takes advantage, among the other characteristics of online communication, of anonymity. 
tandoc ( ) , in order to explain people's reasons for believing in fake news, discusses 'confirmation bias', or the inclination to believe in information confirming pre-existing beliefs, and 'selective exposure', or being exposed to content and information sources that are more attuned to one's preexisting attitudes and interests. however, according to pennycook and rand ( ) , who measured the propensity to engage in analytical reasoning in a sample of participants who had been exposed to a set of fake and real news items, it was the participants' willingness to engage in analytical thinking rather than confirmation bias that may have explained the difference in their ability to discern fake news from real news. in this study, analytical thinking allowed the participants to reject or disbelieve even politically concordant fake news articles. a lot has been written during the covid- crisis of in an attempt to fight against disinformation. an important part of the research has focused on the analysis of all kinds of information spread via social media (cinelli et al., ; ferrara, ; singh et al., ) , whilst others have suggested interventions for improving news and science literacy as empowering tools for users to identify, consume and share high-quality information (vraga et al., b) . the present contribution aims to understand the phenomenon of fake news from the perspective of information behaviour, pointing to uncertainty as a notable emotion in the context produced by the covid- health crisis. all models of human behaviour in the consumption of information emphasize uncertainty as the factor that triggers the search for information itself, although traditionally it has been conceptualized more as a cognitive than an emotional trigger, at least in certain literature that has underscored the attributes of individuals above context and sociocultural frameworks in the study of information behaviour (pettigrew et al., ) . since the s, information behaviour has been studied in the framework of communicative processes and in connection with contextual factors of a social, cultural and ideological order, among others, including values and meanings (pettigrew et al., ) . the evolutionary perspective of spink and cole ( ) also refers to the environment or context when they point to the ability to obtain and exchange information as intimately linked to human survival. in the theory of spink and cole ( ) , a behaviour of a constant searching for and collection of information from the environment, together with the architecture of the brain, has allowed the adaptation and survival of human beings. human beings have been collecting and seeking information constantly, and not always consciously, in order to adapt to their environment and survive. from this perspective, informationrelated behaviour appears as an instinct, not always conscious, and a basic need of all human beings. applied to the situation produced by the covid- crisis, it can be said that the great uncertainty and the strong emotional charge regarding health, economic and social issues have created a heightened need for information as a strategy to cope with and adapt to an unusual and unexpected situation. uncertainty as well as other emotions have been given attention in the study of information behaviour as influencing factors that interact with cognitive factors. 
in the kuhlthau ( kuhlthau ( , model, emotions such as uncertainty, anxiety, optimism or worry fluctuate according to the different stages of the information-search process, accompanying the respective cognitive and decision-making processes. nahl ( ) describes the synergy between cognition, emotions and the sensorimotor system in interactions with information technologies, explaining that adapting to environments with high information density implies a 'load' in all three dimensions. in nahl's ( a nahl's ( , b theory of affective load in human information behaviour, affective processes interact with cognitive processes, providing the energy and motivation necessary to adapt to information technologies, for example, or regulating certain decisions, such as those regarding whether to use the information or not. even in these models where emotions and other non-cognitive factors are assigned a role in information behaviour, decisions are made at the level of thinking and cognition. however, information decisions can also be made based on non-rational criteria and guided by emotions, corporeality and affect (montesi and Álvarez bornstein, ) . these non-rational factors guide people's judgement about the information they consume on a day-to-day basis and in situations of a lack of information and knowledge, which occur either because people enter specialized fields or because science and experts cannot always provide all the answers, as in situations of conflict between different sources of information -a phenomenon that has been studied in health information (montesi, ) . in the initial stages of the covid- crisis, and even later, experts and science were not able to provide all the answers that society expected. at the same time, uncertainty and the need for information were great, creating an important information gap in which other sources of knowledge came into play. research on the search for health information teaches us that the information of health professionals, endorsed by health authorities, is usually complemented by what is called 'experiential knowledge' -that is, knowledge acquired as a consequence of experience (either personal experience or other people's experience) in relevant situations (montesi, ) . this type of knowledge is usually exchanged when interacting with people (also on social media) and is closely related to social support, as it contributes to explaining and attributing meaning to the experiences that are being lived (barbarin et al., ; rubenstein, ) . experiential knowledge is especially valuable when facing situations of uncertainty and adaptation, not only for individuals but also for communities. baillergeau and duyvendak ( ) argue that 'experiential knowledge', as an alternative to expert knowledge, can guide policy responses in situations of high levels of uncertainty, specifically in the field of mental health policies. an important role is also recognized for experiential knowledge in climate change adaptation policies, where 'local knowledge' or 'indigenous knowledge', as it is referred to in this area, covers all the knowledge developed over a considerable period of time and shared by a community with respect to a specific locality. by its nature, local knowledge concerns adaptation mechanisms to changing environments, for both climatic and other factors, at the household and community levels (naess, ) . 
experiential knowledge, as an alternative to official and authoritative knowledge from health systems, contributes to people being capable of making decisions about their health, and health literacy is, according to samerski ( ), a social practice based on different sources and forms of knowledge, co-produced within the framework of social relations. despite the fact that it can guide in situations of uncertainty and adaptation, and that it empowers people to manage their health, experiential knowledge is still not recognized as evidence, and expert knowledge continues to condition the discourse and definitions of health and social problems (popay, ) . in short, during the covid- health crisis, fake news has been spread in a context of great uncertainty and emotional load that has generated a heightened need for information as a mechanism for understanding and adapting to an unprecedented and threatening event. the urgency of the situation has pushed us to look for quick solutions as a response to fake news, misinformation and disinformation -such as guidelines in bulleted points or automatic checks via google or other information 'authorities' such as factcheck.org (bernal-triviño and clares-gavilán, ) -and a significant proportion of the spanish population surveyed by the centro de investigaciones sociológicas ( ) supported the idea of a single official information source. in the end, the wide spread of false news calls into question all the knowledge that is produced outside official communication channels, as well as the rights of citizens to exercise their judgement on the information they consume. delegating decisions on information to authorities, whether health, scientific or others, is a common choice to assess trustworthiness and credibility, but during the covid- crisis it has been more pronounced and potentially harmful, as it might suppress all other sources of information, not only news media. saunders and budd ( ) remind us that, in library education, the credibility of information sources is assessed by looking at the credentials of who has produced them and their publication track record, reinforcing existing biases in the production of knowledge, including gender bias, in favour of institutionalized knowledge, and underestimating the need to train future library and information professionals on the evaluation of scientific information and contents. in other words, rather than checklists or predefined recipes for fighting fake news, misinformation and disinformation, it is necessary to develop critical thinking skills to apply to the content and information to which we are exposed on a daily basis. vraga et al. ( a) propose an initiative of news literacy combined with expert corrections of misinformation. however, fact-checking initiatives intended to debunk fake news might also fail as a strategy, as most people might ignore evidence or even continue to hold onto their pre-existing ideas, even after exposure to it (tandoc, ) . in addition, monopolizing control over information might have undesirable consequences and pave the way for censorship (sullivan, ) . it is also important to defend the legitimacy of information and knowledge that is acquired and shared outside institutionalized settings and as a result of experience, as it might be a meaningful complement to scientific knowledge, according to the vast research on health information behaviour. 
on the basis of these assumptions, better knowledge of disinformation is needed not only to improve research into the automatic detection of anomalous information (zhang and ghorbani, ) , but also to avoid fast and potentially harmful solutions. such research addresses the following questions in particular: does fake news rely on experiential knowledge? how does it manage to appear 'authoritative' to people who contribute to its dissemination? to what degree is it harmful? characterizing and understanding false news can help us to recognize and reject it based on the exercise of critical thinking. with this objective, in this work a set of false news items spread during the covid- crisis in spain is analysed. the sample of fake news analysed was obtained from the maldita.es website, a project that is part of the international fact-checking network initiative, which has been collecting fake news since (bernal-triviño and clares-gavilán, ). the methodology, on the basis of which it is established whether a news item is considered false, is described on the website (maldita.es, ) ; it focuses mainly on the verification process while omitting details regarding the news selection process. all the fake news is discussed thoroughly on maldita.es and refuted on the basis of additional public sources. in total, fake news items were classified. as of april , when the analysis of fake news was initiated, the site had collected news items that had been produced during the covid- health crisis alone. by the end of april, when the classification reached its end, the collection numbered almost items and was continuing to grow. the fake news on maldita.es does not follow a chronological order, and consecutive chunks of news were classified at the beginning, at the end and in the middle of the series during the month of april. the fake news collected from maldita.es on april included all false news reports about covid- , with the exception of three, which the intelligence centre against terrorism and organized crime ( ) of the ministry of internal affairs collected in a report that was published on march , providing a certain guarantee of coverage of the main hoaxes that were spread in the course of the health crisis. in order to classify the news against a set of quality criteria, i first looked at the literature on health-information seeking and the criteria that come into play when evaluating health information. among the elements that influence the quality of health information on the web, sbaffi and rowley ( ) highlight the website's design, the authority of the person/institution responsible for the site and the possibility to make contact, as well as the availability of other channels of interaction. similarly, zhang et al. ( ) emphasize the importance of factors related to web design -in particular, interactivity and the possibility of exchanging information with other people, expansion through social media, the presence of an internal search engine, multimedia documents and the availability of explicit disclaimers. at an operational level, the work of sun et al. ( ) was also taken into account. they define quality as 'fitness for use' -that is, quality information must serve the user's needs -and, in order to 'measure' it, sun et al. 
( ) differentiate 'criteria', or rules, that people apply to information objects to determine their value -reliability, experience, objectivity, transparency, popularity or understandability, among others -from 'indicators' -that is, perceptible elements of the information objects that allow their quality to be determined. the set of indicators that sun et al. ( ) propose is deployed in three broad sections. indicators related to content cover both the information and the presentation, and include aspects such as themes and concepts, writing, presentation, references, authorship, audience, current events and the presence of advertisements. design-related indicators refer to the appearance and structure of the website or application, and the possibilities of interaction it provides. finally, the indicators related to the source include who creates, hosts and distributes the content, and the site typology, as well as its popularity and other systems' recommendations. unfortunately, many of these indicators are not applicable in this research. usually, fake news is spread outside of a website's context and as direct falsifications of official documents or informal communication devices such as tweets or social media accounts, among others. for this reason, many information literacy programmes addressing fake news may be ineffective (sullivan, ) , whilst artificial intelligence can be used to digitally manipulate video and audio files to deliver what has been called 'deep fakes' (tandoc, ) . from this literature on the evaluation of health information, i have retained two concepts -authority and interactivity -which i have measured as explained below. the concept of cognitive authority is one of the most studied in information-related behaviour (rieh, ) . as neal and mckenzie ( ) explain, currently the 'cognitive authority' of an information source is conceived as the result of social practices that allow a certain community to negotiate what counts as an authorized source of information. citing the framework for information literacy for higher education of the association of college and research libraries, saunders and budd ( ) add that cognitive authority is not only constructed, but also contextual, depending on the information needs of the situation, and that it covers not only traditional indicators of authority, such as subject expertise and societal position, but also lived experiences, such as those shared on blogs or social media. with reference to experiential knowledge, a second affective dimension of authority comes into play, which builds on the subjective properties of the information being shared, such as appropriateness, empathy, emotional supportiveness and aesthetic pleasure (neal and mckenzie, ) . as lynch and hunter ( ) point out, cognitive authority alone might be insufficient to deal with misinformation, and reflection on each individual's social and emotional factors might cast light on the dynamics of affective authority. according to montesi and Álvarez bornstein ( ) , from an affective point of view, decisions about information also rely on non-rational and not always conscious indicators originating from senses, emotions and intrapersonal knowledge, especially when decisions need to be made in situations of conflict among different information sources and points of view. 
following neal and mckenzie ( ) , the affective authoritativeness of experiential information sources rests on the account of the experience itself and its details, the similarity of the experience narrated with the reader's experience and, finally, the ability to comfort or inspire that personal experience provides over mere information. although they do not explicitly mention the affective dimension of authority, hirvonen et al. ( ) , who analyse a health forum for young women, add that the reliability of experiential knowledge is judged on the grounds of an array of elements, ranging from data related to the author to the way of arguing and tone (including language and style), the veracity or coherence with the reader's prior knowledge, and verification through comparison of various sources. it is important to differentiate this 'affective authority' of the content being disseminated from the affective authority of those who disseminate information, including false news, since, as montero-liberona and halpern ( ) point out, much false health news comes precisely from acquaintances and trusted people. in the classification of fake news, i have taken into account aspects that were relatively easy to detect which could allow a classification out of context, pointing to properties of the news that might have convinced the reader. specifically, and after a first informal browsing of the set of news items being classified, i have tried to operationalize the above concepts of cognitive and affective authority in the following way. regarding cognitive authority, it has been determined whether the information provided in the fake news ( ) derived from direct and firsthand experience, justified in the way the studies mentioned above describe (experiential knowledge); ( ) relied on subject expertise without institutional endorsement or other types of endorsement (the name, surname and professional qualification were provided and no more); or ( ) derived from subject expertise endorsed by an institution or a publication track record. additionally, i coded ( ) direct falsifications and ( ) the total absence of indicators of cognitive authority. capturing the affective component of authority based on the news itself and out of context is more difficult. in order to be able to locate cues of affective authority, i exploited, on the one hand, the concepts of fake news being used to discredit opponents (bakir and mcstay, ; tandoc, ) and of conflict among information sources as a condition for making 'affective' decisions about information (montesi and Álvarez bornstein, ) . on the other hand, i used some of the strategies described in hirvonen et al. ( ) to weigh experiential knowledge, in particular those pertaining to language and the comparison of sources. as a result, the following elements have been recorded: ( ) whether the news discredited people, ideas or movements in favour of others that were supposedly common to the recipients of the hoax; ( ) if coarse or offensive language was used; and ( ) if additional sources were mentioned or opportunities for further study were offered. i understood that the comparison of sources denotes a legitimate and genuine intention to transfer the knowledge acquired through personal experience. these were considered the easiest and most objective elements to detect, bearing in mind that it was not always possible to access the primary source. 
as mentioned previously, the literature on health-information seeking on the web points to interactivity as an important element to consider when evaluating information. according to sun et al. ( ) , interactivity is all the possibilities within a site to communicate with the system or other users, and to adjust content to consumer needs. this broad definition covers a varied range of features, such as internal search functions, devices for commenting on content and allowing user input and information exchange, multimedia content or personalization tools. the breath of the concept makes it difficult to use interactivity in fake news classification, especially if we consider that fake news is often disseminated outside of a website and that it is often ephemeral in nature. indeed, the literature dealing with the topic of interactivity supports a complex conception of it. oh and sundar ( ) differentiate 'modality interactivity', or 'tools or modalities available on the interface for accessing and interacting with information' ( ), from 'message interactivity', or 'the degree to which the system affords users the ability to reciprocally communicate with the system' ( ). yang and shen ( ) employ a meta-analysis to determine the effects of interactivity on cognition (as knowledge elaboration, information processing and message retrieval), enjoyment, attitude and behavioural intentions. the inconsistent conclusions they reach, pointing to a positive effect in all dimensions except cognition, suggest that interactivity might influence users' experience via two different routes: a cognitive route and an affective route. yang and shen ( ) conclude that, even if web interactivity does not support user cognition, it might raise affective responses, such as enjoyment, developing as a consequence favourable attitudes and behavioural intentions. according to oh and sundar ( ) , actions such as clicking, swiping and dragging allow users to exert greater control over the content and to feel absorbed and immersed cognitively and emotionally in it. without systematically processing the website's message, users may express a more positive attitude towards its content by feeling absorbed in interactive devices. although it might be difficult to identify interactivity clearly, it appears to be related to the affective dimension of authority that i discussed earlier, and it is therefore pertinent to devote some attention to it in this research, even if with limitations. basing the analysis exclusively on the news does not allow us to understand user perspectives on interactivity (sohn and choi, ) , and it is impossible to measure for fake news features such as search capabilities or personalization tools. all fake news is, to a certain extent, interactive, as it is precisely thanks to certain interactivity features that it goes viral. in this classification, i adopted a simple conception of interactivity and recorded whether the interactivity supported was simply one-click interactivity (forward, share, hashtag or like) or interactivity that at least allowed a certain degree of interaction and dialogue through the comments option. one-click interactivity covers all audio or text messages, images and videos sent via whatsapp, television programmes, and certain news published in web magazines and media that did not allow commenting. when it was not possible to determine whether the news allowed commenting, interactivity was coded as 'impossible to determine'. 
i have classified fake news into three themes: politics, science and society. although it tends to be predominant in politics, health fake news is also common (montero-liberona and halpern, ) and it was expected that, in the covid- crisis, it was being widely disseminated. in this study, health news was classified under science. a previous analysis of fake news collected from maldita.es revealed that, out of news items, most had politics as the main theme ( %), whilst the rest were distributed among people ( %), immigration and racism ( %), gender ( %) and science ( %) (bernal-triviño and clares-gavilán, ). i decided to remove 'people' as a category because many fake news pieces specifically attack people in the world of politics and thus have a political intention. according to tandoc et al. ( ) , in most cases, fake news is ignored and does not lead readers to further action, except for in some anecdotal cases, although concerns have been expressed about its ability to influence election results and confuse readers. however, in some exceptional cases, it can lead to extreme episodes of violence (tandoc, ) . in addition, much fake news has a certain sense of humour -something that can convince us of its harmless nature. however, the real danger it can result in is unknown. therefore, based on the information provided by maldita. es and complementary information searches, i attempted to determine whether fake news could result in potential danger to people's health or safety. all of the news that maldita.es had collected as fake was treated as such, with a few exceptions. phishing emails were excluded, as usually they are not intentionally spread in the same way as fake news, and so were some clear mistakes, such as an audio message from a doctor recorded for her family when leaving a meeting, which had been spread virally. i understood that these cases were not false information. maldita.es has also collected as fake news interpretations of information that are not always clear. this was the case, for instance, for the controversy about whether children were allowed a walk in italy or not between the end of march and the beginning of april . all of these cases were included in the analysed cases but they were not classified. in what follows, descriptive statistics are presented for authority, interactivity, themes and potential danger. in some cases, the possible association of some news features with others was tested by applying the chi-square test, such as the association between the use of offensive and coarse language and the theme of the news item. when the null hypothesis could be rejected, it is indicated in the text. the data was processed using excel and ibm spss statistics. table shows that the sample of classified fake news presents, in most cases, no cues of cognitive authority. in more than half of the cases ( . %), the information provided is not based on personal or professional experience. in . % of the news items, the authors introduce themselves by giving their name, surname and professional qualification, but do not mention any institutional affiliation or other type of endorsement. clear falsifications account for . % of all cases. the falsified news included all the alleged declarations of well-known people -such as bill gates, noam chomsky or pope francis -counterfeit tweets or other social media content published by major news media outlets, the spanish national police, ministries, or other departments of local and national government. in only instances ( . 
%) did i find some type of endorsement. this was the case with journalists publishing incorrect information, which was often rectified in news media outlets by members of parliament or experts such as thomas cowan, the author of several books. regarding affective authority, . % of the news discredits people, ideas or movements, whilst . % does so using coarse or offensive language. although in . % of the cases other sources are mentioned or referred to, often these sources do not exist as a result of falsifications or their removal. even so, this strategy may be enough to confer a certain affective authority on the news. in . % of the cases, the hoaxes use one-click 'interactivity', such as forward, share or hashtags. in . % of the cases, comments are also allowed, especially from twitter accounts or for youtube videos. many of the comments had been disabled on the date i accessed the news (especially on youtube). where the comments had not be disabled, often it was mentioned that the news was false or incorrect (see table ). one-click interactivity occurs more frequently when the hoax is a direct falsification (pearson's chi-squared = . , df = , p < . ) and when it does not compare or refer to additional information sources (pearson's chisquared = . , df = , p < . ). in the classification by theme, society accounts for . % of all cases, followed by politics ( . %) and science ( . %). all the fake news published in the category of science concerned health topics and, despite having all been published during the covid- health crisis, which should have emphasized health over the other categories, science accounted for less hoaxes than politics and society. the most common topics of the health fake news classified in the category of science included home remedies for treating, preventing or diagnosing covid- ; explanations about the origin of the virus, including the names of scientists allegedly responsible for the pandemic; vaccines; or advice regarding masks and hygiene procedures to avoid infection. popular news in politics often targeted members of the government and, secondarily, other politicians, who were accused of having preferential access to the health system's resources, breaching the lockdown or underestimating the impact of the pandemic based on the alleged evidence. all measures that limited the freedom of citizens were often misinterpreted and inflated. finally, society fake news was concerned with well-known people and companies, especially supermarket chains and social media companies; often had a racist background; showed images of animals in deserted urban scenes; and sometimes had an ironic tone (see table ). more frequently than society or science fake news, politics fake news used coarse or offensive language (pearson's chi-squared = . , df = , p < . ) and discredited people, movements or ideas (pearson's chisquared = . , df = , p < . ). as can be seen in table , it is clear that the vast majority of the news does not imply any danger to people's health or safety, since only of the news items that could be classified according to this criterion represent certain types of danger either for public safety or people's health. among the cases that were classified as potentially dangerous for health, meaningful examples include a supposed vaccine against covid- that could be used to manipulate the population, advertisements showing people offering to be infected with the virus, or the alleged minor vulnerability of smokers to covid- . 
on the side of fake news that was potentially dangerous for the security of people, examples include all news dealing with the impulsive behaviour of people rushing to supermarkets and stockpiling food or other commodities, which could invite people to reproduce similar behaviour. in this research, a sample of fake news items collected by the maldita.es project during the covid- health crisis in spain was classified according to the criteria of authority, interactivity, theme and potential danger. with regard to authority, no single news item was based on personal firsthand experience and only . % of the pieces were based on professional expertise supported by an affiliation or a publication track record. more than half of the sample ( . %) did not present any elements whatsoever that permitted mention of cognitive authority. in the rest of the cases, the information provided was either a clear falsification ( . %) or came from alleged professionals who, with their name, surname and professional qualification but no other endorsement, intended to contribute their knowledge ( . %). from the perspective of affective authority, hoaxes created 'complicity' with their recipients through strategies of discrediting people, ideas or movements ( . %), often using coarse or offensive language ( . %), pointing to the connection of affective responses with situations of polarization or conflict among information sources. both strategies were related to fake news whose main theme was politics. additionally, in more than a quarter of the cases ( . %), the fake news used a strategy of apparent transparency by comparing or referring to additional information sources, which probably helped to gain the trust of the recipients. as for interactivity, . % of the fake news items allowed comments and, in theory, an exchange of information with the author of the news or other people, while . % only allowed some type of one-click interactivity, such as like, share or forward. for . % of the news items, it was impossible to determine whether they supported commenting. one-click interactivity was related to falsifications more often than expected, whilst commenting was related to comparison or reference to additional information sources more often than expected, which means that interactivity features appeared to be related to different strategies of constructing authority. when authority rests on falsified author credentials, interactivity tends to be minimal -just enough to allow the spread of the news. when authority is built through a strategy of comparison and references to additional information sources, as usually happens when experiential knowledge is shared, it might support comments and, with these, a certain degree of participation. it is important to stress that most often the additional or referenced sources are also false or do not exist. what i am counting here is the act of referencing and supporting the news. research into interactivity has not be conclusive on the cognitive effects of physical and click-based interactivity (yang and shen, ), though apparently it can create significant changes in cognitive and emotional processing, as well as in attitudes and behaviours related to the information processed (oh and sundar, ) . 
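the chi-square associations reported in this study (for example, between one-click interactivity and direct falsification, or between offensive language and a political theme) can be illustrated with a small scipy sketch; the contingency counts below are hypothetical and only stand in for the tabulations actually produced with spss.

```python
# hedged sketch of the chi-square test of association used in the analysis
# (scipy stands in for the spss procedure; the counts are hypothetical, not the study's data).
from scipy.stats import chi2_contingency

# rows: one-click interactivity vs. commenting allowed; columns: falsification vs. not falsification
table = [[80, 40],
         [20, 60]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("the null hypothesis of independence can be rejected at the 5% level")
```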
social media per se and their interactive features do not always support real dialogue and communication, especially when they are used for political purposes (pérez curiel and garcía-gordillo, ), and even if likes or shares are often taken as indicators not only of interactivity and engagement but even of bi-directional and participative communication (sáez-martín and caba-pérez, ). it is important to remember that, in the context of the covid- crisis, the need to consume information might have been much higher than usual, and even simple one-click actions might have allowed some kind of engagement and participation in information exchanges. cinelli et al. ( ), who looked at million comments and posts over a time span of days on five social media platforms during the covid- crisis, meaningfully observe that the spread patterns of questionable information do not differ from those of reliable information, concluding that 'information spreading is driven by the interaction paradigm imposed by the specific social media or/and by the specific interaction patterns of groups of users engaged with the topic'. future research should pursue a clearer definition of all these concepts and investigate how interactivity cooperates in supporting authority, on the one hand, and communication and participation, on the other. fake news items with society ( . %) as the theme outnumbered those on both politics ( . %) and science ( . %). it was surprising that science, which covered health, was the least popular subject in the middle of an unprecedented health crisis. health and politics discussions during the crisis might have followed different patterns, as ferrara ( ) explains on the basis of . million english tweets about covid- , concluding that tweets generated by bots were different from those generated by human users in that the former presented political connotations whereas the latter were concerned mainly with health and welfare issues. it might also be some feature of scientific information itself that explains this difference, such as the availability of valuable health information or the high level of specialization required to access and make use of scientific information, even in a manipulative way. scientific information is also based on peer review, which is, to a certain extent, a participative process, leading to agreement on what counts as evidence and reducing conflict and polarization. finally, the vast majority of fake news does not result in any danger to the health or safety of people, which can lead us to consider it harmless. indeed, some fake news is quite inoffensive. it does not cause any harm to claim that deer are trotting around in a spanish village when the video was actually shot in italy, because the images remain astonishing and worth sharing for their aesthetic value. however, taking as evidence the affective authority mechanisms mentioned above, there is some damage that disinformation might cause, which is not only intangible but also of a moral nature. sullivan ( ) insists that the problem is not the existence of disinformation itself but what it might do to our minds. the literature on the subject emphasizes that consumers of information tend to prefer information that confirms their preexisting attitudes and visions of the world, and to give preference to information that is gratifying over that which calls into question their expectations (lazer et al., ; montero-liberona and halpern, ). this phenomenon has been called 'confirmation bias' (tandoc, ).
however, i contend that this inclination towards the familiar can be conditioned by previous or prior knowledge, that is, all the information we have stored as a result of our experiences and as members of a certain society, and that we need in order to process and make sense of new information (renkema and schubert, ). in a certain sense, it is to be expected that we prefer what is coherent with our prior knowledge and can be made sense of, even if our mental frameworks can sometimes distort facts according to socially and culturally shaped schemas of the world. perhaps, instead of correcting this natural inclination of human beings, we should correct the very concept we have of knowledge and start to include, apart from facts, values and meanings, as research into climate adaptation suggests (bremer and meisch, ). if fake news is an indicator of social tension and divisions, as mentioned above following tandoc et al. ( ), what it reveals, by discrediting people, ideas and movements without foundation, by falsifying, and by using coarse and offensive language, is a failure of civic values in contemporary society. and this affects not only those creating the fake news but also all those contributing to its dissemination. it is not enough to combat fake news from a purely cognitive angle, recommending checklists and honest expert control (rodríguez-ferrándiz, ), or rectifying and correcting misinformation through news literacy interventions (vraga et al., b). it is necessary to look for a solution within the complexity of human beings and our society that not only promotes critical thinking but also encompasses values and beliefs. libraries are proposing to broaden the ideological spectrum of their collections, highlighting the pluralism of the society they serve (lópez-borrull et al., ). however, sullivan ( ) points to a tension in traditional library values that, on the one hand, aim to provide unrestricted access to information and, on the other, offer 'epistemological protection' by selecting information according to an unquestionable concept of quality. the solution to the apparently unsolvable problem of fake news probably requires a much deeper redefinition of values than simply making room for more pluralism and, according to the results of this study, affective nuances of knowledge and authority should be thoroughly explored and understood in order to take further steps in the fight against fake news.
the author received no financial support for the research, authorship and/or publication of this article.
michela montesi https://orcid.org/ - - -
experiential knowledge as a resource for coping with uncertainty: evidence and examples from the netherlands
health fake news and the economy of emotions: problems, causes, solutions
good or bad, ups and downs, and getting better: use of personal health data for temporal reflection in chronic illness
uso del móvil y las redes sociales como canales de verificación de fake news: el caso de maldita.es
deontología y noticias falsas: estudio de las percepciones de periodistas españoles
los bulos sobre el coronavirus más extendidos (y cómo detectarlos) en whatsapp y redes sociales
co-production in climate change research: reviewing different perspectives
barómetro especial de abril
the covid- social media infodemic. arxiv.org. epub ahead of print
what types of covid- conspiracies are populated by twitter bots? first monday
the cognitive authority of user-generated health information in an online forum for girls and young women
international federation of library associations and institutions ( ) how to spot fake news
inside the search process: information seeking from the user's perspective
kuhlthau's information search process
the science of fake news
fake news: ¿amenaza u oportunidad para los profesionales de la información y la documentación? el profesional de la
using the trump administration's responses to the epa climate assessment report to teach information literacy
el cis pregunta si hay que mantener la 'libertad total' de información sobre el coronavirus
¿qué es la infodemia de la que habla la oms? available at
factores que influyen en compartir noticias falsas de salud online
comportamiento informacional en la búsqueda de información sobre salud
defining a theoretical framework for information seeking and parenting: concepts and themes from a study with mothers supportive of attachment parenting
the role of local knowledge in adaptation to climate change
affective and cognitive information behavior: interaction effects in internet use
theories of information behavior. medford, nj: information today
social-biological information technology: an integrated conceptual framework
putting the pieces together: endometriosis blogs, cognitive authority, and collaborative information behavior
ocho claves para detectar noticias falsas
how does interactivity persuade? an experimental test of interactivity on cognitive absorption, elaboration, and attitudes
what happens when you click and drag: unpacking the relationship between on-screen interaction and user engagement with an anti-smoking website
lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning
política de influencia y tendencia fake en twitter: efectos postelectorales ( d) en el marco del procés en cataluña
conceptual frameworks in information behavior
what will it take to get the evidential value of lay knowledge recognised?
introduction to discourse studies
judgment of information quality and cognitive authority in the web
posverdad y fake news en comunicación política: breve genealogía
'they are always there for me': the convergence of social support and information in an online breast cancer community
disinformation and misinformation triangle: a conceptual model for 'fake news' epidemic, causal factors and interventions
using social media to enhance citizen engagement with local government: twitter or facebook?
health literacy as a social practice: social and empirical dimensions of knowledge on health and healthcare
examining authority and reclaiming expertise
trust and credibility in web-based health information: a review and agenda for future research
a first look at covid- information and misinformation sharing on twitter. arxiv.org. epub ahead of print
measuring expected interactivity: scale development and validation
avalancha de bulos y fake news durante la crisis del covid-
spanish national police ( ) guía contra las fake news
a human information behavior approach to a philosophy of information
why librarians can't fight fake news
consumer evaluation of the quality of online health information: systematic literature review of relevant criteria and indicators
the facts of fake news: a research review
defining 'fake news': a typology of scholarly definitions
cómo combatir las fake news en la era del covid- . youtube, march
creating news literacy messages to enhance expert corrections of misinformation on twitter. communication research. epub ahead of print
empowering users to respond to misinformation about covid-
managing epidemics: key facts about major deadly diseases
journal of the association for information science and technology. epub ahead of print
effects of web interactivity: a meta-analysis
an overview of online fake news: characterization, detection, and discussion
quality of health information for consumers on the web: a systematic review of indicators, criteria, tools, and evaluation results
michela montesi is associate professor at the complutense university of madrid. her area of expertise covers information behaviour, health information, and scientific communication.
key: cord- - sww qnj authors: franke, günter title: management nicht-finanzieller risiken: eine forschungsagenda date: - - journal: schmalenbachs z betriebswirtsch forsch doi: . /s - - -z sha: doc_id: cord_uid: sww qnj the management of non-financial risks such as esg-, sustainability- and compliance risks poses a great challenge for companies. in contrast to financial risks the information on non-financial risks is very limited. this renders management quite difficult. companies incurred big losses due to non-financial risks in recent years. corporate governance of these risks raises many unresolved questions. this paper delineates potential answers and hypotheses about the impact of information quality. practitioners complain about the lack of support from academia. a cooperation of practitioners and academics to resolve these questions presents attractive research fields for academia. thus, this paper also presents a research agenda for academia.
The governance of non-financial risks raises numerous questions that research has so far barely addressed but that are discussed intensively in practice. They can be divided into planning/decision questions and steering/control questions, even though a sharp separation of the two sets of questions is hardly possible. Planning/decision includes risk analysis and risk measurement, setting priorities among non-financial risks, managing the risk appetite of decision makers, and concepts of decision making. Steering and control include the structural and process organisation. This comprises: training employees to perceive and assess risks, creating an adequate risk culture including incentive systems, coordinated implementation of decisions by various employees, monitoring of implementation success and of the managed risk, continuous observation of the environment in order to improve the information about the risks and to adapt decisions, and behavioural control across several lines of defence. The information deficits differ in size across the various non-financial risks, as will be shown, and corporate governance should take these differences into account. This leads to the first thesis of this paper, which is explained later. This thesis, like the following theses, is to be understood "ceteris paribus": other risk factors to which the company is exposed may overlay the postulated relationship, so that other relationships may dominate the observations. This paper discusses some possible approaches to dealing with non-financial risks and identifies research questions.
In addition, theses are formulated about the influence of the available information on corporate governance. Conversely, other theses postulate differences in the information requirements for effectively solving different sub-tasks of management. These theses require thorough theoretical and empirical examination. Important guidance is provided by Richard Friberg in his book Managing Risk and Uncertainty: A Strategic Approach, in which he discusses various approaches taken by companies when little information is available. The combination of financially conceived planning/decision models and organisational concepts was already studied in the nineteen-fifties and -sixties, e.g. by Marschak, Radner and Hax, and later, e.g., by Baldenius/Reichelstein. Agency theory proved very fruitful; starting from information asymmetry, it developed optimal incentive and control systems. Laux summarises numerous models, including the delegation of decisions to several decision makers. These are always well-defined problems. Meanwhile there is also an extensive empirical literature on corporate governance, mostly examining individual aspects; see also the survey by Shleifer and Vishny. The paper is structured as follows: the next section defines tasks of corporate planning and compares them with respect to the information they require; organisational questions are touched on in passing. The third section illustrates results of the second section with static decision models, including organisational questions; it also presents concepts for proceeding when little information is available and attempts to state minimal information requirements for a reliable solution. The fourth section discusses models of flexible planning and questions of the design of corporate strategy. The fifth section describes individual non-financial risks in more detail and sketches questions of corporate governance that are intensively discussed in practice. The final section summarises challenges for practice and academia and presents some thoughts on possible research approaches.
The task of corporate management is to assemble a portfolio of corporate activities, to monitor its implementation continuously, and to adapt and develop the portfolio in line with new insights. In doing so, management may follow a narrower focus such as the shareholder value approach or a broader stakeholder approach, as is increasingly demanded by policy makers, for instance with the ESG directive. Sustainability requirements are likewise gaining in importance, e.g. in line with the UN sustainability goals. Optimising a portfolio of corporate activities according to risk and return follows a portfolio approach. The portfolio risk can be managed the more effectively, the more reliably the stochastic properties of financial and non-financial risks, as well as the interdependencies between them, can be estimated. Given the meagre information about non-financial risks, this is a major challenge.
Evidence-based management of non-financial risks starts with identifying and classifying these risks: in which business areas of the company can which events occur that endanger the achievement of the company's objectives or even its existence? What kind of events can these be? Which risk factors favour the occurrence of such events and the damage they cause? Which damages have been observed? What evidence is available from other companies? After this bottom-up approach to founding a taxonomy of non-financial risks and their risk drivers, a top-down approach is needed that attempts to develop overarching concepts for dealing appropriately with non-financial risks and to test them empirically. (An isolated valuation of individual business activities would be admissible only under value additivity; under value additivity the value of the firm is the sum of the values of the individual activities, but then risk management would also be superfluous.) Whereas the experience gathered within companies is indispensable for the bottom-up approach, the top-down approach is also a task for academic research. Close cooperation between companies and universities is required to secure the necessary grounding of academic research. Managing non-financial risks presupposes estimating not only the potential damages from corporate activities but also their earnings potential. Risks are accepted in order to generate returns. There may be non-financial risks that are ruled out beyond any risk-return calculus, such as criminal activities (zero tolerance). Other activities of the company, however, are planned so as to generate returns that are considered advantageous given the risk taken. Here a problem emerges that makes decisions very difficult: operational precautionary measures are meant to reduce or avoid potential damages from non-financial risks, but avoided damages, like damage reductions, are generally not observable. This complicates not only the decision on precautionary measures but also the later verification of their effects. A company's activities can be divided into operating activities implementing the business model and hedging measures. The latter comprise specific measures intended to (partially) neutralise fluctuations in the results of operating activities, as well as the unspecific protection of liquidity and solvency through liquidity and capital reserves. Specific hedging measures include operational and financial hedging, including the conclusion of insurance contracts. Unspecific hedging potential can be built up through liquidity and capital reserves. It is meant to help cushion potential financial strains and to avoid a potential insolvency with its associated costs. In addition, the reserves are meant to ensure the undisturbed continuation of successful operating activities, as well as the implementation of longer-term operating measures already planned, even under unfavourable developments.
Unspecific hedging provides protection against (1) damages from unhedged financial risks, (2) damages from non-financial risks, (3) damages from risks that occur so rarely that they are not explicitly captured in planning, and (4) basis risks from specific hedging measures. This company-internal perspective contrasts with that of the regulator/supervisor, whose task is above all to avert negative external effects of the company's activities. Whereas a company chooses liquidity and capital reserves according to its own (internal) costs and benefits, the regulator/supervisor will primarily take into account external (in particular systemic) costs and benefits that affect the general public, starting from a "public welfare function". He will size capital and liquidity reserves accordingly and insist on their implementation. In addition, the hedging policy includes further ex ante and ex post measures that are planned before the decisions on operating and hedging measures are taken. This sequence is often justified by the different expertise of the persons involved, who moreover work in different departments/companies. The measures planned in advance are meant to reduce potential damages and to create framework conditions for the later planning of operating and hedging measures; independently of the latter, the measures planned in advance should be advantageous. Ex ante measures are precautionary measures implemented before a damaging event occurs, in order to make its occurrence more difficult or to prevent it altogether and/or to reduce the potential damage. Ex post measures comprise all measures planned in advance that are implemented after a damaging event has occurred, in order to contain the damage. Frequently, contingency plans are developed in advance and then implemented very quickly in the event of damage. Banks, for example, must draw up contingency plans in advance so that, in the event of a shock, as many of the bank's operations as possible can be continued and bank customers suffer as little harm as possible. At the same time, contingency plans are meant to clarify which employees of the company and which external persons take on which tasks in the event of damage and how their effective and rapid cooperation is secured. Since detailed procedures cannot be planned in advance for all possible damaging events, they are worked out concretely only after a damaging event has been observed. At the same time, the later implementation of contingency plans often requires changes in corporate governance that must be carried out before damaging events occur. How do the information requirements of the various planning steps differ? If decisions on activities for the operational implementation of the business model and on hedging measures are taken simultaneously, then, owing to portfolio effects, it is often not clear which operating measures serve the implementation of the business model and which serve hedging. This is different with sequential planning. Not infrequently, operating activities implementing the business model are decided in a first step and hedging measures only afterwards. Even if, in theory, simultaneous planning may be preferable to sequential planning, simultaneous planning is often not carried out in practice because, first, simultaneous planning is more complex than sequential planning and can therefore cause higher administrative costs, and, second,
because in numerous companies different departments are responsible for operating decisions and for hedging.
Damages from risks that occur so rarely that they are not explicitly captured in planning do not affect this thesis: they are not estimated numerically, so that only a lump-sum provision in line with the company's risk culture can be planned for them. How do non-financial risks affect the use of specific and unspecific hedging measures? An insurer can try to pool such risks from many companies and thereby achieve effective diversification. If it offers companies insurance against such risks at a low price, this enables an effective hedge. Other specific hedging instruments hardly exist, because these risks are hardly "calculable" and third parties will therefore hardly take them on without pooling. If the earnings risk is to be reduced in the presence of substantial non-financial risks, often the only option is to restrict operating measures. Liquidity and capital reserves can be a substitute for specific hedging: although they do not offset the stochastic results of operating measures, they can effectively stabilise corporate policy and the company's existence. Liquidity and capital reserves therefore gain in importance when non-financial risks increase. This motivates the following thesis: as non-financial risks increase, the company's liquidity and capital reserves grow, while specific hedging measures, with the exception of insurance contracts, lose importance. This thesis should already be taken into account when planning the operating portfolio. Every portfolio should be checked at an early stage as to whether the corresponding reserve formation is feasible and desired. This requires coordination between the experts for non-financial risks in the operating units and the experts in the finance department responsible for building reserves.
Decision theory proposes numerous models, ranging from comprehensive information to various models under restricted information. The aim is to derive "rational" guidance for decisions under poorer information from the models under comprehensive information, including for the management of non-financial risks. This appears important in order to guard against personal idiosyncrasies or misunderstandings of decision makers dealing with non-financial risks. In this section, the preceding considerations are illustrated with a static model (two-date model), also touching on organisational questions. The example is an exporter facing exchange rate and sales risks. At first it is assumed that the exporter has comprehensive information; then this premise is relaxed and it is asked which models come into question under poorer information. Finally, the question is raised which minimal information requirements must be met so that a decision is better founded than reading tea leaves. A risk-averse exporter who produces a single good in the home country and exports it to another country faces, in the simplest case, only one risk, exchange rate risk. Sales (quantity) risk is ignored at first. The exporter knows the probability distribution of the exchange rate; hence the exchange rate risk is a financial risk.
In the base case he chooses only the export quantity x in order to maximise his expected utility; production and export quantity coincide. There is an extensive literature on this (e.g. Benninga, Eldor and Zilcha; Adam-Müller). Let w be the exchange rate at time 1, defined as units of home currency per unit of foreign currency, kf the fixed costs in home currency, k(x) the variable unit cost in home currency with k'(x) > 0, p(x) the export price in foreign currency with p'(x) < 0, and c the cashflow in home currency accruing at time 1. Let u(c) be the concave utility function of the cashflow c(x). The decision problem at time 0, if hedging is excluded, then reads: maximise over x the expected utility E[u(c(x))], with c(x) = p(x) x w - k(x) x - kf. Hadar and Seo have shown that the optimal risky exposure generally grows after an improvement in the sense of first-degree stochastic dominance only if the decision maker's relative risk aversion is smaller than one, that is, if he is only mildly risk averse. Even stricter conditions apply to an improvement in the sense of second-degree stochastic dominance, since the set of possible distribution improvements is larger (see also Gollier). These results should be a warning to managers of financial and non-financial risks not to be guided blindly by intuition: for risks with little information, new information can suggest diverse changes of the distribution and hence questionable changes of decisions. Now the decision model is extended by hedging. If a forward contract on the exchange rate exists, the exporter optimises at time 0 his export quantity x and the amount of foreign currency y sold in the forward market. If f is the forward rate at time 0, the cashflow c(x,y) at time 1 is c(x,y) = p(x) x w - k(x) x - kf + y (f - w) = p(x) x f - k(x) x - kf + (y - p(x) x)(f - w). The last line of the equation shows that the decision problem can also be understood differently: the exporter sells his export revenue at the fixed forward rate in the forward market, so that his export revenue in home currency is deterministic; in addition, he can speculate in the exchange rate by selling yn = y - p(x) x units of foreign currency. The exporter now chooses his optimal export quantity x* such that his marginal cost equals the marginal revenue converted into home currency at the forward rate, k(x*) + k'(x*) x* = [p(x*) + p'(x*) x*] f. This property is referred to as Fisher separation.
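To make the two-date model concrete, the following minimal numerical sketch uses hypothetical functional forms (linear price and unit-cost functions), a lognormal spot rate and CARA utility; none of these specifications comes from the original beyond the general structure described above. It illustrates that, under a full hedge, the optimal export quantity maximises the certain profit valued at the forward rate, independently of the distribution of w.

import numpy as np

rng = np.random.default_rng(0)

# assumed (hypothetical) functional forms and parameters
kf = 50.0                        # fixed costs, home currency
k = lambda x: 1.0 + 0.01 * x     # variable unit cost, k'(x) > 0
p = lambda x: 10.0 - 0.02 * x    # export price in foreign currency, p'(x) < 0
f = 1.10                         # forward rate at time 0
w = rng.lognormal(mean=np.log(f), sigma=0.10, size=100_000)  # spot rate at time 1

a = 0.01
u = lambda c: -np.exp(-a * c)    # CARA utility, handles negative cashflows

def cashflow(x, y):
    # home-currency cashflow at time 1: export revenue at the spot rate,
    # costs, and the payoff of selling y units of foreign currency forward
    return p(x) * x * w - k(x) * x - kf + y * (f - w)

xs = np.linspace(1.0, 400.0, 400)

# full hedge: y = p(x) * x, so the cashflow is deterministic
eu_full_hedge = [u(cashflow(x, p(x) * x)).mean() for x in xs]
x_star = xs[int(np.argmax(eu_full_hedge))]

# Fisher separation: the same x maximises the certain profit valued at f
profit_at_f = [p(x) * x * f - k(x) * x - kf for x in xs]
print(x_star, xs[int(np.argmax(profit_at_f))])   # the two coincide

Under the full hedge y = p(x) x the spot rate cancels out of the cashflow, so the ranking of export quantities no longer depends on the distribution of w; only the forward rate is needed for the operating decision.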
hedging-und spekulationsentscheidung sind zwei seiten derselben medaille. während die exportentscheidung den erwarteten nutzen des exporteurs (und den marktwert des unternehmens) erheblich beeinflusst, ändert die spekulative position im wechselkurs zwar auch den erwarteten nutzen, kaum aber den marktwert des unternehmens, da ihr marktwert abgesehen von transaktionskosten gleich ist. das spekulationsergebnis beläuft sich auf die spekulative position yn, multipliziert mit der differenz von terminkurs im zeitpunkt und kassakurs im zeitpunkt . das risiko daraus ist im allgemeinen erheblich kleiner als das aus dem ungehedgten exporterlös p w x*. dieser sachverhalt eröffnet dem exporteur unterschiedliche verhaltensweisen hinsichtlich informationsbeschaffung, risikoübernahme und gestaltung der organisation. bei einem full hedge sind die organisatorischen erfordernisse bescheiden, weil das wechselkursrisiko keine rolle spielt und die optimale exportmenge leicht zu errechnen ist. ist es für den exporteur billig, sich verlässlich über die wahrscheinlichkeitsverteilung des wechselkurses zu informieren, dann kann er dies tun und seine spekulative position im wechselkurs optimieren. allerdings muss er organisatorische vorkehrungen treffen, um die entscheidung sorgfältig vorzubereiten und später das risiko zu überwachen. sind informationsbeschaffung und organisation allerdings teuer, dann kann er darauf weitgehend verzichten und seine offene position im wechselkurs stark einschränken, ohne seine operative politik zu ändern. selbst wenn er sich aus dem wechselkursrisiko vollständig durch einen full hedge zurückzieht, ist der ihm dadurch entgehende erwartete nutzen eher gering. daher lohnt sich für den exporteur eine spekulative position im wechselkurs nur bei geringen informati-ons-und organisationskosten. folglich wird er im vergleich zur exportentscheidung ohne hedging weniger information beschaffen, wenn es lediglich um die spekulationsentscheidung geht, das stützt these . zahlreiche deutsche exporteure haben sich nach den währungsturbulenzen in den er jahren aus der wechselkursspekulation zurückgezogen, weil sie diese nicht als ihr kerngeschäft betrachten und der devisenmarkt außerordentlich kompetitiv geworden ist. dementsprechend werden personal-und andere organisationskosten eingespart. aufgrund der fisher separation ist die produktionsentscheidung in diesem beispiel einfach, so dass in diesem beispiel simultan-und sukzessivplanung ähnlich sind, damit auch die informationserfordernisse, abweichend von folgethese . das entscheidungsproblem des exporteurs wird erheblich komplizierter, wenn er sich auch einem absatzerlösrisiko und ggf. noch weiteren risiken gegenübersieht (adam-müller , kap. ). das erlösrisiko lässt sich im allgemeinen nicht finanziell hedgen, da keine kontrakte darauf gehandelt werden. dann gilt die fisher separation nicht mehr. im newsboy problem (eeckhoudt et al. ) produziert der exporteur die menge x, kann aber damit gegebenenfalls die tatsächliche nachfrage x nicht decken oder er bleibt auf einem teil der produktion sitzen. im ersten fall entgehen ihm gewinne aus entgangener nachfrage (x -x), im zweiten fall entgehen ihm erlöse aus nicht verkauften einheiten (x -x), abgesehen von möglichen entsorgungskosten . die stochastik der nachfrage und ggf. andere risiken "stochastifizieren" das exportergebnis. 
In the example, for a given exchange rate, the level of the (conditional) cashflow c(x,y|w) can thus be subjected to an additive disturbance term ã and/or a multiplicative disturbance term b̃, i.e. c(x,y|w) + ã or c(x,y|w) b̃; the conditional expected value of the cashflow would be left unchanged by the disturbance terms if the expected value of ã were zero and that of b̃ were one. These disturbance terms create so-called "background risks". If the utility function has the plausible property of decreasing and convex absolute risk aversion (standard risk aversion), additive disturbance terms with non-positive expected value increase the decision maker's risk aversion (Eeckhoudt et al.). Multiplicative disturbance terms can, depending on the utility function, increase or decrease his risk aversion (Franke et al.). For certain combinations of additive and multiplicative background risk, risk aversion can also remain unchanged (Franke et al.). For multi-period decision problems there is an extensive literature on optimal production and inventory management, e.g. Sasieni et al. If one assumes a HARA utility function with decreasing absolute risk aversion and relative risk aversion greater than one, the risk aversion of the indirect utility function is lower (higher) than that of the original utility function if the relative risk aversion of the original utility function increases (decreases) in wealth. Contrary to intuition, it is thus possible that risk aversion declines under multiplicative background risk. Appendix A shows how, under comprehensive information, the sales quantity risk can be modelled as a multiplicative background risk. Even though the unsaleability of products in bad sales states lowers the optimal production quantity, the joint optimisation of production quantity and exchange rate speculation with forward contracts raises open research questions. The optimisation becomes even more complicated if, in addition to linear hedging instruments, non-linear ones such as options exist (Brown and Toft). What conclusions follow for the choice of the liquidity and the capital reserve? Equity at time 1 results from equity at time 0, adjusted for profit/loss less profit distributions of the company and for capital injections/withdrawals. The company's liquidity at time 1, which can be measured, for example, by the company's monetary assets, results from liquidity at time 0, adjusted for the company's cashflow and for monetary capital injections/withdrawals less profit distributions. The size of the liquidity reserve is determined by the probability distribution of the monetary assets at time 1, the capital reserve by that of equity at time 1. Starting from the probability distribution of monetary assets/equity at time 1, a reserve can be set according to the value at risk, i.e. a quantile of the probability distribution. If falling below this quantile leads to insolvency, the additional costs hit third parties rather than the exporter. The supervisor may therefore size the reserve requirement according to the expected shortfall, i.e. according to the expected costs arising beyond the monetary-asset/equity threshold underlying the expected shortfall and hitting third parties. If the exporter does not hedge the export risks, they feed fully through to future monetary assets/equity; the probability distribution of the result then shows a pronounced negative tail.
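As a rough sketch of how the two reserve measures differ, the following simulation uses an invented profit distribution with an occasional non-financial loss event that fattens the negative tail; the value-at-risk reserve is a quantile of end-of-period equity, while the supervisor's expected shortfall looks at the average loss beyond that threshold.

import numpy as np

rng = np.random.default_rng(2)

equity_0 = 100.0
profit = rng.normal(10.0, 40.0, 1_000_000)        # hypothetical operating profit/loss
# an occasional loss from a non-financial risk event fattens the negative tail
loss_event = rng.binomial(1, 0.02, profit.size) * rng.exponential(150.0, profit.size)
equity_1 = equity_0 + profit - loss_event

alpha = 0.01
var_threshold = np.quantile(equity_1, alpha)       # value-at-risk threshold (1% quantile of equity)
# average shortfall below the threshold, conditional on falling below it
expected_shortfall = (var_threshold - equity_1[equity_1 < var_threshold]).mean()
print(var_threshold, expected_shortfall)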
The more the exporter hedges his risks and the less he speculates in the exchange rate, the smaller are the required liquidity and capital reserves, and the less important, and hence the less worthwhile, is estimating the negative tail for setting the reserves. Since only the negative tail matters for sizing reserves, information acquisition and organisational requirements are, in this respect, less demanding than for decisions in which the entire probability distribution of the result matters. Decision rules can be tied to the quality of the available decision-relevant information. So far, comprehensive information has been assumed. How does the decision maker react to a deterioration of information quality? The models proposed in the literature may offer orientation for dealing with non-financial risks. Four models for proceeding under low information quality are sketched in the following. To explain these models, we start from a usual outcome matrix with finitely many states and alternatives, which shows the associated outcome for each state of nature and each alternative. Under comprehensive information the probabilities of the states are known. A deterioration of information quality may induce the decision maker, with the outcome matrix and the probabilities unchanged, to adapt his risk attitude to the quality (lump-sum model); or he adapts the utility function of the outcome in the individual states to the quality (background-risk model); or he adapts the state probabilities (ambiguity model); or he assumes only qualitative probabilities. Which approach is most sensible under which conditions remains to be clarified. For qualitative probabilities there always exist consistent quantitative probabilities. As Appendix B illustrates, even with qualitative probabilities there exist, for each event, a minimal and a maximal quantitative probability and hence an interval. The interval is larger for an event A than for an event B if event A is qualitatively more probable; it also grows if the number of the other, predefined events decreases. Starting from axioms, Bühler examines this model; in the end he arrives, like Gilboa and Schmeidler, at a pessimistic maxmin decision rule (see also Izhakian and Brenner). Which of the sketched approaches is most sensible under which conditions remains open; further research is needed. Independently of this, more extensive information acquisition pays off under poorer information, provided it succeeds in improving the quality of the information and thus of the decision. (a) Arbitrariness in specifying the outcome matrix? The discussion so far assumes that the decision maker knows the outcome matrix even under poor information quality and can estimate probability distributions at least in rough form. An important research question is which minimal requirements the information quality must meet so that the decision is better founded than reading tea leaves. The basis for the decision is a carefully constructed outcome matrix. It is itself the result of prior decisions by the decision maker.
These prior decisions should not prejudge the optimal alternative but provide an unbiased basis for its choice. Two questions are taken up here: (1) which states are included in the outcome matrix and which are not? (2) how finely are states/events differentiated? On (1): the outcome matrix includes only states whose occurrence is regarded as possible. Here, too, there are materiality thresholds: states that are observed only extremely rarely are often excluded. In these states, high or low outcomes with correspondingly high or low outcome utilities may occur. As long as reliable probabilities exist, extremely rare states nevertheless play a minor role for the expected utility of an alternative, since their probability of occurrence tends to zero. The influence of these states on the optimisation is therefore small, and so is the influence of the prior decision whether or not to include states in the outcome matrix. It is different when no probabilities are known. Then all states in the outcome matrix are "of equal rank", which gives the extremely rare states considerable weight. The application of classical decision criteria under uncertainty makes this clear (Friberg). According to the maxmin rule, the alternative whose worst outcome is highest is to be chosen. This means that alternatives with a small range of outcomes make the shortlist, while those with a large range (i.e. highly risky alternatives) tend to be crowded out. This crowding-out effect is the stronger, the more rarely occurring states with very low outcomes of highly risky alternatives are included. The specification of the outcome matrix thus prejudges the extent to which low-risk alternatives crowd out high-risk ones; this makes little sense. According to the Hurwicz rule, a weighted mean of the worst and the best outcome is computed for each alternative and the alternative with the highest mean is chosen. Even though this rule limits the crowding-out effect, it remains unsatisfactory that only extreme outcomes, which may occur extremely rarely, determine the decision. Again, which states are included in the matrix proves decisive; to the extent that this choice is poorly founded, the same holds for the optimal alternative. On (2): specifying an outcome matrix presupposes further more or less arbitrary prior decisions, for instance about the number of risk factors to be considered, about the interval of values within which the realisation of a risk factor may lie, and about the fineness of the partition of this interval. Often the outcomes are driven by a large number of risk factors. Even if their domains can be narrowed down, the number of states explodes quickly: with n risk factors and k different realisations each, which can be combined arbitrarily, there are k^n states. The problem of state selection becomes apparent when the decision maker computes a (weighted) mean of the outcomes over all states and decides on that basis. Here, too, state selection proves decisive, for it determines the mean of the outcomes and hence the ranking of the alternatives.
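The crowding-out effect of the maxmin rule and the dependence of the Hurwicz rule on extreme outcomes can be illustrated with a small, entirely hypothetical payoff matrix; for comparison, the last rule below evaluates the worst expected value over a small set of admissible priors, in the spirit of the ambiguity/maxmin models mentioned above.

import numpy as np

# hypothetical payoff matrix: rows = alternatives, columns = states of nature
payoffs = np.array([[ 5.0,  5.0,  5.0],     # low-risk alternative
                    [12.0,  7.0, -9.0]])    # high-risk alternative incl. a rare bad state

wald    = payoffs.min(axis=1)                               # maxmin rule: best worst case
alpha   = 0.4                                               # Hurwicz optimism weight (assumed)
hurwicz = alpha * payoffs.max(axis=1) + (1 - alpha) * payoffs.min(axis=1)

# maxmin expected value over a set of admissible priors (ambiguity-style rule)
priors = np.array([[0.5, 0.4, 0.1],
                   [0.4, 0.4, 0.2],
                   [0.6, 0.3, 0.1]])
worst_case_ev = (payoffs @ priors.T).min(axis=1)

for name, scores in [("maxmin", wald), ("hurwicz", hurwicz), ("maxmin-EV", worst_case_ev)]:
    print(name, scores, "-> choose alternative", int(np.argmax(scores)))

In this invented example, even crude probability information reverses the ranking produced by the probability-free rules, which illustrates why rough probability judgements matter.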
If the prior decisions about the outcome matrix are not to be arbitrary, (rough) probability judgements appear indispensable, and they should themselves be founded on evidence as far as possible. These decisions should also be taken on the basis of a cost-benefit analysis. If, for example, a non-financial risk or a risk factor presumably has only a small influence on the outcomes of the alternatives, it may be sensible, in view of the information and transaction costs of the decision and implementation process, to neglect this risk or risk factor. (b) Arbitrariness in scenario analysis? The problem of specifying the outcome matrix also affects the widely used scenario technique. To gain insight into the return and risk of alternatives in the absence of probabilities, events are aggregated into "representative scenarios" that are meant to map the spectrum of possible states, e.g. scenarios with very bad, medium and very good outcomes. The number of scenarios selected from a subset of states should grow with the "representation weight" of this subset. In this way one can try to characterise the outcomes of operating alternatives roughly and to decide on this "evidence base". The absence of probabilities is "cured" by plausibility constructs that remain vague, however. Are there better approaches? The problem can be illustrated by the maximum probable loss, which plays an important role in the insurance industry. If an insurer wants to insure a customer against cyber risks, it can try, for its own calculation, to estimate the customer's maximum probable loss, which is conceptually related to the value at risk. So far there is little reliable information about possible cyber damages and their frequencies. This lack of information also becomes visible in the "maximum probable loss". The term is contradictory in itself, since what is meant is a "worst-case scenario" of the loss, yet a kind of "realistic worst case", i.e. a "worst case" with a non-negligible probability of occurrence. Hence not the worst possible outcome of the outcome matrix is used but a pseudo-quantile whose determination, given the poor information quality, rests on questionable foundations. The maximum probable loss is therefore shaped by subjective judgements and remains a vague quantity that different insurers assess differently. A second example is the expected shortfall. To estimate it, regulators/supervisors use several bad-case scenarios, on the basis of which they compute losses exceeding a given threshold and aggregate them into an expected shortfall. The result naturally depends strongly on the underlying bad-case scenarios and their weighting. This opens up scope for manipulation and therefore requires organisational safeguards. Who, for example, should be involved in specifying the scenarios for a decision problem? On the one hand, this may include persons who, thanks to their work, are particularly familiar with the decision problem, as well as higher-level decision makers; on the other hand, also persons who have no "skin in the game" and can therefore take a neutral look at possible scenarios. This resembles a lines-of-defence model.
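A small sketch with invented scenario losses and weights shows how strongly an expected-shortfall estimate built from hand-picked bad-case scenarios depends on the scenario choice and weighting, which is exactly the room for manipulation discussed above.

import numpy as np

threshold = 100.0   # loss level beyond which costs fall on third parties (hypothetical)

# two assessors aggregate different hand-picked bad-case scenarios into an expected shortfall
scenarios_a = np.array([120.0, 150.0, 300.0]);  weights_a = np.array([0.5, 0.3, 0.2])
scenarios_b = np.array([110.0, 130.0, 800.0]);  weights_b = np.array([0.6, 0.3, 0.1])

es_a = np.sum(weights_a * np.maximum(scenarios_a - threshold, 0.0))
es_b = np.sum(weights_b * np.maximum(scenarios_b - threshold, 0.0))
print(es_a, es_b)   # the estimate is driven almost entirely by scenario choice and weighting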
The application of the scenario technique is less problematic when specific hedging is at stake and cheap hedging instruments are available to offset fluctuations of results to a large extent. This holds in particular for financial hedging instruments with a market value close to zero; then the selection of scenarios plays a subordinate role for the hedging decision. These considerations tie in with the thesis stated above. The lower the hedge quality, the larger the basis risk, measured by the possible deviations between the operating result to be hedged and the hedge result; then the selection of scenarios gains in importance. These considerations motivate a corollary to the thesis above. This corollary also holds ceteris paribus, i.e. for a given information acquisition strategy, in short: information strategy. The information strategy denotes the decision maker's approach to not only updating his information over time but, where appropriate, improving it beyond that; it is discussed in detail in the next section. It varies with the quality of the available information. For young start-ups, for example, it may make little sense to forecast cashflows; once the company has existed for a few years, the accumulated experience allows cashflows to be forecast more reliably, so that this forecast becomes part of the information strategy. The preceding considerations suggest that without rough subjective probabilities it is not possible to take decisions sensibly. This becomes even clearer for alternatives whose outcome distribution is, for mathematical convenience, defined on the interval (-∞, +∞); then a worst or best outcome is no longer defined. If it does not succeed in characterising the potential outcomes more closely by rough subjective probability judgements, targeted information acquisition is indispensable. Its acquisition and processing, however, presuppose considerably more complex decision and organisational processes; they, and in particular the open questions connected with them, are discussed in the following section. The optimal decision time tends to shift further into the future, the lower the waiting costs per unit of time and/or the higher the benefit of waiting per unit of time. The benefit of waiting tends to be larger when less information is available, because more information then flows in per unit of time, which strengthens the better founding of decisions. This motivates the following thesis: the less information is currently available, the more valuable is the option to wait. Models of flexible planning are suited to representing decision problems complicated by a lack of information (Friberg). In the classical model of flexible, discrete-time planning there are the dates 0, 1, 2, ..., T (Hespos and Strassmann; Laux). Decisions can be taken at all dates except the last. The state nodes at these dates are at the same time decision nodes; only the nodes at date T (the planning horizon) serve merely the final determination of outcomes. Each decision node is fully characterised by the information then available, which may also include observable outcomes of earlier decisions, by the actions available at that moment, and by the conditional transition probabilities to the successor nodes.
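The option-to-wait thesis can be illustrated with a deliberately simple two-period sketch using invented payoffs: committing now locks in the expected value, whereas waiting one period, at a cost, allows the project to be skipped in the bad state. Equating a wider spread of outcomes with poorer current information is an assumption made only for this illustration.

import numpy as np

def value_of_waiting(p_good, payoff_good, payoff_bad, waiting_cost):
    # two-period sketch: commit at t=0, or wait, observe the state at t=1,
    # and invest only if the state turns out to be good
    invest_now = p_good * payoff_good + (1 - p_good) * payoff_bad
    invest_later = p_good * payoff_good - waiting_cost      # bad-state project is skipped
    return invest_later - max(invest_now, 0.0)

# the wider the spread of outcomes, the more waiting is worth
print(value_of_waiting(0.5, 200.0, -120.0, 10.0))   # high uncertainty
print(value_of_waiting(0.5, 200.0,  -20.0, 10.0))   # low uncertainty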
The optimal decision times are selected automatically, provided that the possibilities of taking decisions at different times are represented correctly in the model. The information available at a node is often modelled exogenously, i.e. it is driven by exogenous risk factors; these also determine the outcomes of earlier actions observable at a node. The decision maker's learning process is thus given exogenously. The decision maker can, however, additionally plan information acquisition through separate measures and thereby expand and refine his information strategy. This suggests itself in particular for non-financial risks, owing to the meagre information. The exogenous information process is then complemented by an endogenous one: information acquisition measures are inserted into the flexible model. These measures can serve, for example, to collect information about further risk factors. Their importance for the company may also depend on decisions taken earlier and thus be subject to an endogenous process, so that the intensity of optimal information acquisition is subject to this process as well. The decision maker can likewise acquire information in order to improve the quality of the transition probabilities. Particularly important, however, are research and development, whose insights open up new business models. It suggests itself to choose operating measures such that they hopefully achieve an operating success and at the same time yield valuable information. This is typical of pilot projects: for example, a new product is offered in a test region, thereby simultaneously gaining information about the product's likely acceptance in other regions. A classic example is the sequential international expansion strategy of companies. Usually a national company starts selling its products in only one foreign market, in order to test their attractiveness to customers and possible reactions of competitors. Only a smaller amount is invested in order to limit the risk. If the move proves successful, the activities in this test market are expanded and extended to further foreign markets, and so on. The positive feedback also lowers the risk of further investments and therefore allows the investment volume to be increased. If a setback occurs in later expansion steps, the international activities are adapted and, if necessary, scaled back again. With active information acquisition the set of nodes in the flexible planning model grows considerably. One may object that all such conceivable nodes should be contained in the initial model from the outset; this, however, would inflate it enormously and make its handling much more difficult and costly. How complex should the flexible planning model be? Herbert Simon already pointed to limits of complexity. The first part of the thesis rests on the fact that, under poorer information quality, more information flows in per unit of time, which can be used more successfully because a quicker reaction is possible over more periods. On the second part: under poorer quality it becomes more difficult to plan well-foundedly over the longer term, so that such plans are overturned more often.
Longer-term planning therefore pays off less because of its additional costs; more frequent re-planning appears advantageous. Some years ago Ulrich Weiß, a member of the management board of Deutsche Bank, said in a talk on the bank's strategy in the emerging European single market that deciding on a strategy in principle takes up only a small share of working time; the remainder is then spent on continuously measuring the results of the approach initially chosen and adapting the approach accordingly. The remark that the fundamental decision on a strategy takes up only a small share of working time suggests that the decision model used at the outset is rather simple; instead, much working time is invested in designing and running a trial-and-error process. This suggests that, in a poor information situation, it makes little sense to invest much time in the initial decision; it is better to design carefully the information process and the process of subsequently adapting operating decisions while securing adaptability. These considerations appear plausible when it comes to building up new business areas starting from little information. Are similar concepts also suited to managing non-financial risks? The preceding considerations are now illustrated with examples of non-financial risks of a credit institution: compliance risk, cyber risk, money laundering risk and sustainability risk. Compliance risks in the narrow sense are risks from employee behaviour that violates legal rules; here the benchmark for compliance risks is imposed on the company from outside. In the broader sense, compliance risks also comprise risks from employee behaviour that violates internal rules of conduct and codes. To what extent the company should invest in reputation management is difficult to answer. The costs of additional reputation management may be estimable, but the achievable reputation benefits are hardly measurable; the same holds for avoided reputation costs, with the exception of measurable losses in market value. For the size of reputation costs can be influenced by the company's management after the damaging event has occurred, e.g. by seeking a settlement with harmed customers. Estimating reputation costs as a present value of future reductions in earnings is difficult, because the rate at which stakeholders forget is hard to predict. It depends not only on the company's management but also on the extent to which competitors are affected by similar damaging events and on whether other events, e.g. in politics, absorb the stakeholders' attention. It is therefore also problematic to measure reputation costs by immediate losses in the company's market value as expressed in the share price; these presumably overstate the medium- and long-term effects. This may also explain the high reputation costs that Kamiya et al. find. Macey uses numerous examples to illustrate the death of corporate reputation, i.e. the fading of reputation effects; at the same time he points to the increasing density of regulation.
it appears as if the increase in regulation goes hand in hand with a decrease in reputation costs. these considerations make clear that compliance management raises numerous open questions that require further research. in recent years, money laundering has come increasingly into the focus of politicians and regulators. money laundering can serve to launder money acquired from criminal activities such as tax evasion, drug trafficking, extortion, corruption, prostitution and human trafficking. not infrequently, an attempt is made to deposit dirty money at a bank and to instruct the bank to transfer the money to a final payee via a chain of payment transactions. every bank is therefore obliged, in the case of a larger deposit into an account held with it, to verify the source of the money and the reputation of the depositor, who is not infrequently a straw man. only after a thorough check may the bank accept the deposit. likewise, banks are required to check the recipients of payments and whether the money is to be used for criminal activities. the check is complicated by the fact that banks must verify not only the direct recipient of a payment but also the ultimate recipient, who may receive the money via a chain of intermediaries and intermediary banks. for a payment to be considered unobjectionable, it is not sufficient that it is forwarded to a known correspondent bank. banks are also not allowed to do business with companies and persons on whom the european union or the usa have imposed corresponding sanctions. the problem becomes even more complex through the close connection between corruption and money laundering postulated by the us department of justice (hardy ). even if a company cannot be punished for corruption, it may nevertheless be convicted of money laundering for payments it has made. for a bank, checking that its business partners are unobjectionable is costly (know your customer). legally mandated transparency registers, in which numerous details about companies are recorded, provide some relief. closely connected with these questions of control is the design of risk reports. in banks, the management board is expected to form a picture of the bank's risk on a daily basis. the board frequently wishes to summarize a bank's risk in a few figures in order to be able to recognize quickly whether prescribed risk limits are being observed. this approach may be justified for financial risks, but hardly for non-financial risks. hunt ( ), who was himself a regulator, supervisor and risk manager, speaks of the dashboard illusion. for individual non-financial risks, a measurability in numbers is assumed that does not exist in this form. even if measurability were given, readers would be needed who understand the numbers. what should a report look like? this problem also arises for non-financial firms. in view of the low information quality, it seems natural to intensify information acquisition considerably. the growing abundance of electronically available information suggests using big data methods to exploit all these information channels. two fallacies, however, must be avoided here. first, fake news, which is ever better disguised, increasingly creeps into big data.
this raises the difficult task of separating reliable from unreliable information. second, the statistical analysis of big data often shows significant relationships whose predictive content is low. the longer observation periods required for out-of-sample tests are frequently lacking. to what extent artificial intelligence can remedy both problems remains a matter for further research. the variety of questions raised by the management of non-financial risks confronts firms with great challenges. they therefore expect support from universities. this offers researchers new research fields whose results, contrary to some assessments, also find their way into first-class journals. what could a research approach look like? in view of the scarce information on non-financial corporate risks, research on this topic itself creates a non-financial risk for the university. this also suggests a research design other than the usual one. since the task is to link financial planning and organizational concepts, a cooperation between researchers from both areas suggests itself in order to achieve synergy effects. in a first step, this research team will reduce its information deficit on non-financial risks through intensive discussions with relevant practitioners. they can best explain to the academics what kinds of non-financial risks are relevant and which questions their management raises. this information is indispensable for research that does not stay in the ivory tower. then the actual research work begins. since there can be "research dead ends", lines of defence need to be established. a first line of defence could be created within the research team itself by regularly and systematically scrutinizing the strengths and weaknesses of its own approach. a second line of defence could be the presentation of the results to researchers who conduct planning/decision research or organizational research in the "classical" way. a third line of defence would be the critical feedback of practitioners. a fourth line of defence, finally, is offered by the review processes provided by relevant conferences and journals. it appears important to secure the flexibility of the research strategy so that "research dead ends" can be corrected early at tolerable "costs". it is to be hoped that enough researchers at universities will take up this challenge. acknowledgements: i owe the inspiration for this paper to marliese uhrig-homburg. i thank her, an anonymous reviewer, martin ruckes and alfred wagenhofer very much for helpful comments. i have benefited in particular from extensive constructive and critical discussions with wolfgang bühler. i am greatly indebted to him. any shortcomings are exclusively my responsibility. funding: open access funding provided by projekt deal. open access: this article is published under the creative commons attribution international license, which permits use, reproduction, adaptation, distribution and reproduction in any medium and format, provided the original author(s) and the source are properly credited, a link to the creative commons license is included, and any changes made are indicated.
the images and other third-party material contained in this article are also subject to the above-mentioned creative commons license, unless stated otherwise in the figure legend. if the material in question is not covered by the above-mentioned creative commons license and the action in question is not permitted by statutory provisions, the consent of the respective rights holder must be obtained for the re-uses of the material listed above. for further details on the license, please refer to the license information at http://creativecommons.org/licenses/by/ . /deed.de. conflict of interest: g. franke declares that there is no conflict of interest. since the sales quantity x̃ is stochastic, a constant sales price p is assumed for simplicity. if forward contracts on the exchange rate exist, the exporter can, for example, hedge the expected export revenue in foreign currency through a forward transaction. then, however, an exchange rate risk remains from the difference between the actual and the hedged foreign currency revenue. what does this mean for the optimal production quantity of the export good? with costless disposal of unsold units, the cash flow is c(x, y) = -k_f + p w x - k(x) x - (w - f) y - p w (x - x̃)^+ = -k_f + (p f - k(x)) x + (p x - y)(w - f) - p w (x - x̃)^+, where (x - x̃)^+ = 0 if x < x̃, and (x - x̃)^+ = x - x̃ if x ≥ x̃. the cash flow c(x, y) equals the cash flow with known sales quantity minus the revenue p w (x - x̃)^+ lost on produced units that are not sold. this lost revenue is a product of two stochastic variables. with comprehensive information, the joint probability distribution of x̃ and w is known. for simplicity, let the exchange rate and the sales quantity be independently distributed. it is then natural to model the sales risk as a multiplicative background risk with negative expected value (see also adam-müller , ch. ). each combination of sales quantity and exchange rate defines a state of nature z(x̃, w) with the associated conditional cash flow c(x, y|x̃, w) and the utility u[c(x, y|x̃, w)]. taking, for a given exchange rate, the expected value over the sales quantity x̃ defines the indirect utility function u*[c(x, y|w)]. the production quantity x and the hedge volume y are then optimized by maximizing the expected value of the indirect utility function taken over the exchange rate. how does the background risk affect the decisions? compared with the situation without sales risk, the "penalty" p w (x - x̃)^+ lowers the optimal production quantity for a given hedge volume, because this reduces the penalty. if, as usual, a utility function with decreasing absolute risk aversion is assumed, the penalty also increases risk aversion in the penalty states. this, too, reduces the production quantity. depending on the indirect utility function, the sales risk can have various effects on the optimal hedge volume. in addition, cross-hedging effects can arise between the hedge result (p x - y)(w - f) and the penalty p w (x - x̃)^+, with repercussions on the optimal production quantity. already in the newsboy problem, in which there is no exchange rate risk, the answers of comparative statics depend on the utility function (eeckhoudt et al. ). this holds all the more with multiplicative background risk.
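the following sketch shows, under purely illustrative assumptions, how the production quantity x and the hedge volume y described above could be optimized numerically; the two-point distributions, the parameter values and the shifted power utility (chosen only because it exhibits decreasing absolute risk aversion) are assumptions and not part of the original model.

```python
# minimal numerical sketch of the exporter's problem: choose production x and hedge volume y
# to maximize expected utility of the cash flow
#   c = -K_f + p*w*x - k*x - (w - f)*y - p*w*max(x - demand, 0)
# all parameter values and distributions below are illustrative assumptions.
import itertools
import numpy as np

K_f, p, k, f = 10.0, 1.0, 0.6, 1.0           # fixed cost, price, unit cost, forward rate
w_states = [(0.8, 0.5), (1.2, 0.5)]          # exchange rate scenarios (value, probability)
d_states = [(60.0, 0.5), (100.0, 0.5)]       # stochastic demand scenarios (value, probability)

def utility(c: float, shift: float = 100.0, gamma: float = 0.5) -> float:
    """shifted power utility with decreasing absolute risk aversion."""
    return ((c + shift) ** gamma) / gamma

def expected_utility(x: float, y: float) -> float:
    eu = 0.0
    for (w, pw), (d, pd) in itertools.product(w_states, d_states):
        cash = -K_f + p * w * x - k * x - (w - f) * y - p * w * max(x - d, 0.0)
        eu += pw * pd * utility(cash)
    return eu

grid_x = np.linspace(40, 110, 71)            # candidate production quantities
grid_y = np.linspace(0, 110, 111)            # candidate hedge volumes
best = max(((x, y) for x in grid_x for y in grid_y), key=lambda xy: expected_utility(*xy))
print(f"optimal production x* = {best[0]:.1f}, optimal hedge volume y* = {best[1]:.1f}")
```

varying the demand scenarios in this sketch reproduces the qualitative effect described above: a larger penalty from unsold units pushes the optimal production quantity down and shifts the preferred hedge volume.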
consider events i = 1,..,n, where event i is at least as probable as event i-1. if Φi denotes the qualitative probability of event i, then Φi ≥ Φi-1 for i > 1. consistent with these qualitative probabilities, there exist quantitative probabilities πi with πi ≥ πi-1, i > 1. the minimal quantitative probability of an event equals 0 for i = 1,..,n-1, so that the maximal probability of event n equals 1. the maximal quantitative probability of an event i equals 1/(n - i + 1) for i = 1,..,n-1, namely when all events j ≥ i are equally probable and all events j < i have a probability of 0. for the interval Δi in which the quantitative probability of event i lies, it follows that Δi = 1/(n - i + 1) for i = 1,...,n-1 and Δn = 1 - 1/n. the probability of event n is minimal when all events are equally probable. the probability interval grows with i, i.e. with higher qualitative probability, starting from a very low value when n is large; it reaches the size 1/2 for the second most probable event and almost the size 1 for the most probable event. the interval of an event, except for the most probable one, grows when the number of events is reduced. as long as only events of low probability determine the value at risk or the expected shortfall and thus the formation of reserves, the intervals play only a minor role for these measures. the return and risk of operating alternatives, however, depend on all events, so that the intervals can play a large role.
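a small sketch of the probability bounds just derived, assuming n = 5 ranked events for illustration; it merely evaluates the intervals Δi = 1/(n - i + 1) for i < n and Δn = 1 - 1/n.

```python
# minimal sketch of the probability bounds implied by a purely ordinal (qualitative) ranking
# of n events: event i is at least as probable as event i-1. the bounds follow the reasoning
# in the text; the choice n = 5 is an illustrative assumption.
n = 5

for i in range(1, n + 1):
    if i < n:
        lower, upper = 0.0, 1.0 / (n - i + 1)   # event i can be impossible, or tie with all higher-ranked events
    else:
        lower, upper = 1.0 / n, 1.0             # top-ranked event: from the uniform case up to certainty
    width = upper - lower
    print(f"event {i}: probability in [{lower:.3f}, {upper:.3f}], interval width {width:.3f}")
```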
references
internationale unternehmensaktivität, wechselkursrisiko und hedging mit finanzinstrumenten
nachhaltigkeit, merkblatt zum umgang mit nachhaltigkeitsrisiken
external and internal pricing in multidivisional firms
the corporate compliance function: effects on equity and credit risk. working paper
optimal international hedging in commodity and currency forward markets
internationale unternehmenspolitik
how firms should hedge
characterization of the extreme points of a class of special polyhedra. a comment to kofler's paper "entscheidungen bei teilweise bekannter verteilung der zustände"
flexible investitions- und finanzplanung bei unvollkommen bekannten Übergangswahrscheinlichkeiten
competitive strategy: options and games
cyber security report
investment under uncertainty
guidelines on stress testing
eba action plan on sustainable finance
reputation and its risks
the risk averse (and prudent) newsboy
economic and financial decisions under risk
esma sets out its strategy on sustainable finance
systematic scenario selection: stress testing and the nature of uncertainty
so drastisch hoch sind die strafen für banken seit der finanzkrise
multiplicative background risk
risk-taking-neutral background risks
managing risk and uncertainty: a strategic approach
demand-based option pricing
managing reputational risk: from theory to practice. in reputation capital: building and maintaining trust in the 21st century
maxmin expected utility with non-unique prior
stress scenario selection by empirical likelihood
lessons learned: years of financial services litigation since the financial crisis: financial services disputes
the economics of risk and time
the shift of return distributions on optimal portfolios
alle artikel zum thema libor-skandal
high-profile fcpa prosecution reflects: government can lose on lead corruption charges but still win on related money laundering charges
die koordination von entscheidungen
stochastic decision trees for the analysis of investment decisions
a practitioner's view on managing conduct and ethics
dollar: das zahlten die banken im steuerstreit
a theoretical foundation of ambiguity measurement
asset prices and ambiguity: empirical evidence
recovering risk aversion from option prices and realized returns
risk management, firm reputation, and the impact of successful cyberattacks on target firms
smart companies match their approach to the nature of the threats they face
risk, uncertainty and profit
flexible investitionsplanung
unternehmensrechnung, anreiz und kontrolle, . aufl.
upper saddle river (n.j.): ft press
marschak, j. elements for a theory of teams
optimum consumption and portfolio rules in a continuous-time model
morningstar sustainability rating methodology
geldwäsche kostete europas banken schon mrd. dollar strafe, handelsblatt
managing non-financial risk
the application of linear programming to team decision problems
fines on banks' misconduct to top $ billion by : report
operations research: methods and problems
a survey of corporate governance
bank of america to pay $ . billion in historic settlement for financial fraud leading up to and during the financial crisis
boeing max "grundlegend fehlerhaft und unsicher"
markets and hierarchies

key: cord- -x chc authors: chi, oscar hengxuan; denton, gregory; gursoy, dogan title: interactive effects of message framing and information content on carbon offsetting behaviors date: - - journal: tour manag doi: . /j.tourman. . sha: doc_id: cord_uid: x chc this study examines the effects of message framing and information presentation on tourists' carbon offsetting behaviors within the theoretical framework of heuristic-systematic processing. the interactive effects of message framing and information presentation are assessed on both static and dynamic outcome variables, employing a mixed between-within group methodology utilizing two sets of data through a longitudinal × × experimental design. the results reveal that gain-framed messaging combined with objective climate change information and objective carbon offsetting information results in significantly more positive impacts on changes in purchase intention of carbon offsetting products and increases willingness to pay for carbon offsetting. conversely, the combination of loss-framed messages and subjective information presentation is shown not only to be ineffective in increasing carbon offsetting behavior but to result in declines in tourists' purchase intention of carbon offsetting products and willingness to pay for carbon offsetting. concern over excess carbon emissions generated by tourism continues to grow (gössling & peeters, ). the tourism industry risks becoming labeled as a "polluting industry" (becken, ; kim et al., ) and is predicted to generate up to % of global carbon emissions by if effective measures are not taken (sonnenschein & smedby, ).
even though the worldwide outbreak of covid- has drastically decreased the global carbon emissions (le quere et al., ) , efforts to decrease tourism industry's carbon emissions are still a priority for the vitality and sustainability of the tourism industry (unwto, ) considering the fact that governments and researchers have been increasingly considering taxing or regulating travel (dwyer et al., ; gössling & scott, ) . carbon offsetting has been identified as a potentially effective alternative to regulations or other interventions (babakhani et al., ; denton et al., ) . despite high levels of expressed concerns by travelers over the environment (gleim and lawson, ; goldstein et al., ) , voluntary participation in carbon offsetting programs has been extremely low to-date, and efforts to improve participation levels have been met with limited success, suggesting the existence of an attitude-behavior gap in carbon offsetting attitudes and behaviors (cheng et al., ; gleim & lawson, ) . previous research that has explored approaches to promote travelers' carbon offsetting behavior focuses on promoting environmental values (gössling et al., ) , designing appropriate carbon offsetting programs (choi & ritchie, ) , and developing government interventions (mckercher et al., ) . nevertheless, unlike other pro-environmental activities (e.g., water-saving, recycling), in which users are more likely to see the immediate outcomes of their pro-environmental behaviors and have direct control over those environmental outcomes, the effectiveness of carbon offsetting behavior heavily relies on the program providers. furthermore, consumers who participate in carbon offsetting programs are not likely to see the outcomes of their participation immediately due to the intangible and the long-term nature of climate change. moreover, the lack of knowledge regarding the climate science and the concept of carbon offsetting may lead tourists to distrust the consequences of carbon emissions and the benefits of carbon offsetting programs (denton et al., ) . the lack of direct control, sufficient knowledge and immediate outcomes may induce perceived uncertainty toward carbon offsetting products. therefore, compared to other pro-environmental behaviors, effective communication between program providers and tourists is more likely to mitigate the attitude-behavior gap in carbon offsetting. recent studies have suggested the importance of appropriately framed messages to promote emotional arousal toward the carbon offsetting programs (babakhani et al., ) and to increase customers' knowledge levels (denton et al., ) . however, previous studies in the field of tourism have reported contradictory findings about the factors that can influence pro-environmental behaviors, especially carbon offsetting behaviors. first, previous studies have identified inconsistent impacts of knowledge on pro-environmental carbon offsetting behaviors. in general, the lack of knowledge regarding climate change or mitigation strategies is a key factor contributing to the attitude-behavior gap (denton et al., ; gossling et al., ; mckercher et al., ) . however, simply increasing knowledge levels does not necessarily correspond with a higher environmental predisposition (pothitou, hann, & chalvatzis, ) , leading to the need to explore other behavioral and attitudinal antecedents to encourage conservation behaviors (lee & oh, ) . 
researchers now consider that improving consumer behavior requires further examination into not only the availability of information but the manner in which the information is communicated, highlighting the importance of message framing (cheng et al., ) . however, message framing studies encouraging sustainable tourism behaviors have also produced mixed results and the need exists for further research on how to get tourists more engaged in sustainable behaviors through effective message framing . existing research and theories indicate two major causes that may lead to mixed results. one cause is the research context. although message framing has been identified by social marketing and tourism scholars as an effective method of influencing consumer behaviors (cheng et al., ; kim & kim, ; zhang et al., ) , some studies have revealed an asymmetrical effect of message framing in different research settings (lee & oh, ) even though most studies argue that people exhibit more sensitivity to losses (e.g., o'keefe & jensen, ) in risky situations and gain-framed messages may work the best in promoting positive outcomes as suggested by the regulatory fit theory (e.g., lee & aaker, ) . considering the fact that global warming and climate change poses significant risks, loss-framed messages should be more effective than gain-framed messages in motivating tourists to participate in carbon offsetting programs. however, the hedonic nature of tourism can have significant impacts on the effectiveness of message framing strategies. since most tourists travel to satisfy their hedonic needs and seek positive experiences, negative message framing may not be appropriate in the tourism context despite the objective of preventing climate change because it conflicts with tourists' primary goals. gain-framed messaging in this situation might be more effective with tourists and more acceptable to tourism companies. even though studies argue that the effect of message framing is context-based, the role of hedonic nature of tourism on the effectiveness of carbon offsetting message framing strategies is not clear. studies conducted to-date applying message framing techniques to tourism have focused largely on recycling (e.g. grazzini et al., ) and water usage behaviors (e.g., liang et al., ) . this study aims to address this research gap by assessing the effectiveness of both loss-framed and gain-framed messages in encouraging tourists' carbon offsetting behaviors in order to identify the most effective message framing strategies in carbon offsetting and tourism contexts. another cause is the potential interaction between message framing and information types. studies agree that the way a message is framed can play critical roles in how much persuasion a message can produce (smith and petty, ) . however, since the type of information presented to individuals in a framed message can influence the information processing strategies they utilize, type of information included in a framed message, thus, can result in differential persuasiveness of loss or gain-framed messages. as suggested by the heuristic-systematic processing model (hsm), individuals who receive objective information about a subject or a topic are more likely to utilize a more systematic approach to process the information included in a message compared to individuals who receive subjective information. 
considering the fact that knowledge about climate change and carbon offsetting programs are two critical determinants of individuals' willingness to participate in carbon offsetting programs, there is a need for examining the effects of information type (objective/subjective) included in framed messages about the climate change and carbon offsetting programs on framed message persuasiveness. the third research gap is methodology related. existing studies have focused on absolute levels of behavioral intention using cross-sectional data sets. despite the critical contributions made by these studies, most of them assume that participants in treatment and control groups are homogeneous in terms of pro-environmental attitudes before participating in the study (e.g., dolnicar et al., ; zhang et al., ) . this assumption weakens the evidence of the causal relationship between message framing and its outcomes since the time-order of the cause and the effect is blurred (shadish, cook, & campbell, ) . in addition, although purchase intention is commonly used to predict actual behavior (ajzen & fishbein, ) , in the context of measuring pro-environmental behavior, the purchase intention reported is always inflated due to the normative nature of the study context (follows & jobber, ) . these arguments highlight the importance of examining actual pro-environmental behavior utilizing longitudinal data. however, no studies to date have evaluated relative changes in actual pro-environmental behaviors such as the changes in the amount that travelers are willing to pay for carbon offsetting programs due to the persuasiveness of messages using a longitudinal dataset, leaving the attitude-behavior gap still underexplored. this study is motivated to fill the three research gaps discussed above. through the theoretical framework of prospect theory, regulatory fit theory, and the heuristic-systematic model (hsm) of information processing, this study explores the interaction effects of message framing and the type of information (objective/subjective) presented about climate change and carbon offsetting programs on consumers' carbon offsetting behaviors. furthermore, this study employs a mixed between-within group methodology utilizing data collected before and after the message intervention, in order to measure not only the impact of the manipulations on purchase intention but to also assess the change in travelers' actual pro-environmental behavior. tourism generates greenhouse gas emissions and is a significant contributor to potential climate change (becken, ; kim et al., ) , contributing an estimated % of total carbon emissions in (world economic forum) with projected contributions of up to % by (sonnenschein & smedby, ) . one widely recognized strategy for reducing net carbon emissions is that of carbon sequestration or offsetting (gössling et al., ) . carbon offsetting has been criticized by some as a form of "greenwashing" (segerstedt & grote, ) but carbon sequestration has been shown to be acceptable to consumers (scott et al., ) and capable of capturing a significant proportion of excess atmospheric carbon (chazdon & brancalion, ) . it thus is in the tourism industry's best interests to explore ways to increase participation in voluntary carbon offsetting programs as a way to reduce tourism's contribution to excess atmospheric carbon. willingness to pay refers to the highest product price or service fee that a consumer is willing to admit (didier & lucie, ) . 
willingness to pay has been found to be closely related to purchase intention of pro-environmental products (barber et al., ) and the gap between tourists' intention and their behaviors has been widely documented (becken, ; juvan & dolnicar, ) . studies find that between % and % of travelers express intention to reduce or offset their carbon emissions, but only - % of them engage in carbon-reducing or purchase carbon offsetting products (gössling et al., ; segerstedt & grote, ) . these numbers clearly point to a significant attitude-behavior gap. existing studies suggest that the amount consumers are willing to pay for a green product is driven by the product-related information they receive (kang et al., ) . insufficient knowledge leads consumers to feel uncertainty toward the outcome of purchasing (vermeir & verbeke, ) . even though consumers may have purchase intentions of pro-environmental products, the deficient knowledge regarding climate change, tourism's carbon emissions, and/or carbon reduction options significantly reduce travelers' actual participation in carbon reduction hares et al., ; juvan & dolnicar, ; kim & kim, . scholars report that when consumers are evaluating a pro-environmental product, a message that can provide consumers with required information not only evokes purchase intention but also motivates them to pay more since the message reduces perceived uncertainty (kang et al., ) . therefore, an adequate message of carbon offsetting products will promote travelers' purchase intention, and more importantly, encourage them to pay a higher amount to reduce carbon emissions. message framing involves presenting information to recipients in a format intended to induce a desired interpretation or behavior (kapuściński & richards, ; tversky & kahneman, ) . a substantial stream of research has explored the effects of message framing on consumer choices (maheswaran & meyers-levy, ) and the cognitive processes that underlie framing effects (zhang et al., ) . message framing has been found to be a complex psychological process (maheswaran & meyers-levy, ) whereby individuals interpret and react to messages differently depending on how the information is presented (zhang et al., ) . attribute, or valence, framing is the most commonly studied form of framing, and consists of tailoring a message to either focus on the benefits (gains) of engaging in a desired behavior or on the costs or consequences (losses) of not engaging in the desired behavior. a number of studies have explored the effects of message framing on hotel guests' recycling ( (blose, mack, & pitts, ) ; grazzini et al., ; kim & kim, ; yoon et al., ) , booking intentions (sparks & browning, ) , in-room green communication strategies (lee & oh, ) , destination selection and image perception (amar et al., ; zhang et al., ) , and towel re-use (goldstein et al., ) . however, studies evaluating the relative benefits of gain-or loss-framed messages have shown mixed results. in general, loss-framed messages are considered to be more effective in changing behaviors that are considered risky, while gain-framed messages are considered more effective when behaviors are considered safe (cheng et al., ; kim & kim, ) . some studies have found effects to be asymmetrical, with consumers showing more sensitivity to losses than to equivalent gains (o'keefe & jensen, ), while others found that gain-framed messages result in greater engagement (lee & aaker, ; lee & oh, ) . 
other research found no significant effect of message framing (van 't riet, ). global warming and climate change are clearly risky with significant negative outcomes. past studies of message framing have therefore suggested that loss-framed messages would be more effective than gain-framed messages. however, the nature of tourism and the intangible and long-term nature of climate change are hypothesized to greatly impact the nature and effectiveness of message framing strategies. tourism is hedonic in nature, which could reduce tourists' engagement in pro-environmental behaviors (grazzini et al., ). in addition, as a practical matter, negative framing of environmental messages may be undesirable from the perspective of tourism companies regardless of relative effectiveness, as negative messages are contrary to the hospitality industry's objective of having satisfied and happy guests (blose et al., ). this poses a potential paradox in the context of tourism carbon emissions. regulatory fit theory suggests that people's engagement in tasks is promoted by the perception of "fit" between their primary goals and the tasks used to achieve those goals (higgins, ). since most tourists travel to satisfy their hedonic needs and seek positive experiences, negative message framing is likely to be disregarded by consumers despite the objective of preventing climate change because it is not congruent with their primary goals, as suggested by regulatory fit theory. gain-framed messaging in this situation is likely to be both more effective with consumers and more acceptable to tourism companies. based on the preceding discussion, we hypothesize that, since tourists travel for the purpose of hedonic tourism experiences, gain-framed messages would be more effective than loss-framed messages in encouraging carbon offsetting behavior. h a. travelers who receive gain-framed messages that include information regarding climate change and carbon offsetting programs will exhibit significantly higher carbon offsetting product purchase intention than travelers who receive loss-framed messages. h b. travelers who receive gain-framed messages that include information regarding climate change and carbon offsetting programs will be willing to pay a significantly higher amount of money for carbon offsetting programs than travelers who receive loss-framed messages. studies argue that levels and types of knowledge about climate change and carbon offsetting programs are critical determinants of travelers' attitudes and behaviors towards those programs (denton et al., ). although gaps in information among tourists regarding climate change and carbon offsetting practices have been clearly identified (choi & ritchie, ; hares et al., ; segerstedt & grote, ), how receiving a message that includes information about both climate change and carbon offsetting programs may influence consumers' carbon offsetting behaviors and the amount they are willing to pay for those carbon offsetting programs is not clear. a thorough understanding of the impact of information presented in messages on individuals' participation rates in carbon offsetting programs is particularly important in situations involving high levels of uncertainty and/or incomplete knowledge (dietz et al., ; ouyang et al., ). this is especially true in the climate change context due to its long-term orientation and the intangible nature of climate change.
for this reason, a message that contains information about climate change and carbon offsetting programs can provide useful information to tourists about climate change, carbon offsetting programs and their attributes (ai et al., ) , which can evoke people's perception of the importance of climate change and the relevance of carbon offsetting efforts. as suggested by hsm, people who perceive that the information is relevant and important (chaiken et al., ) are likely to process the information they receive intentionally and systematically, which involves a comprehensive analysis of the information and careful deductions, resulting in a high-level of persuasion. in contrast, when people are asked whether they would like to participate in a carbon offsetting program without adequate information may view it as being irrelevant, which can trigger heuristic processing. when consumers evaluate carbon offsetting products, information regarding climate change can lead tourists to have a higher level of perceived importance of the carbon offsetting, resulting in an increasing effort on processing the message. based on the discussion above, travelers who receive appropriate information regarding climate change and carbon offsetting programs are more likely to exhibit positive behavior toward carbon offsetting products. h a. travelers who receive messages that include information regarding climate change and carbon offsetting programs will exhibit significantly different carbon offsetting product purchase intention compared to travelers who do not receive framed messages that include information regarding climate change and carbon offsetting programs. h b. travelers who receive messages that include information regarding climate change and carbon offsetting programs will exhibit significantly different willingness to pay for carbon offsetting programs compared to travelers who do not receive framed messages that include information regarding climate change and carbon offsetting programs. consumer skepticism due to past instances of "greenwashing" by companies that misrepresented their levels of environmentalism (ponnapureddy et al., ; rahman et al., ) increased the importance of the type of information consumers receive regarding carbon offsetting programs and their format as the stakeholders of the potential carbon offsetting exchange. previous information processing research distinguished between objective (factual) and subjective (evaluative) information (holbrook, ) . objective information (e.g., the nicotine level of a cigarette brand) includes a specific statement regarding the fact, whereas subjective information (e.g., the taste of this cigarette is rich) often contains judgments from other people (edell & staelin, ) . studies have shown that objective information is more credible and has more persuasive power than subjective information because it is processed more systematically (chaiken et al., ) . therefore, the effect of a gain or loss-framed message may differ depending on whether the message contains objective and/or subjective information as suggested by the information persuasion framework. thus, there is likely to be an interaction effect between information persuasion and gain-loss framing as presented in the conceptual model ( fig. ) developed based on the preceding discussion. 
as presented in figure one, interactive effects of gain or loss-framed messages using objective/subjective information about climate change and carbon offsetting programs will result in significant changes in consumers' willingness to participate in carbon offsetting programs and willingness to pay to offset carbon emissions. as proposed by hsm theory, people's perceived self-relevance critically drives the outcome of information processing (chaiken et al., ) . besides the information content, the perception of self-relevance can be also affected by people's pre-existing mindset. scholars find that customers' pre-existing mindset significantly alters the processing of product-related information during shopping (büttner et al., ) . in the current research context, if tourists are in a hedonic, experience-oriented mindset, then gain-framed messages will be congruent with their mindset, leading to a higher level of perceived selfrelevance. in contrast, the presentation of a negative-framed message will lead to a conflicted mindset, leading to a self-regulation behavior that prevents people from conducting any further actions that expand the conflict (kleiman & enisman, ) . for this reason, loss framing may prevent tourists from processing the information systematically and decrease the persuasive effectiveness of the message, leading to a smaller change in purchase intention. based on these discussions, the following hypotheses were developed, and a conceptual model was displayed in fig. . h . gain-framed messages with objective information about climate change and carbon offsetting programs will result in greater change in purchase intentions than any other combination of gain or loss framed messages containing objective/subjective information about climate change and carbon offsetting programs. the interaction seems to significantly influence willingness to pay. according to the previous discussion, the perceived uncertainty caused by the lack of available information is one of the major factors that cause the intention-behavior gap (vermeir & verbeke, ) . the presentation of objective information (vs. subjective) regarding climate change leads to a higher level of perceived importance. in addition, objective information regarding carbon offsetting programs is more effective than subjective information as a mean of increasing consumers' knowledge. based on hsm, the combination of this objective information will lead to greater changes in the amount that consumers are willing to pay. conversely, if a message is loss-framed consumers are less likely to spend effort on processing this objective information due to the incongruence with their mindset (kleiman & enisman, ) . therefore, the following hypotheses were developed. h . gain-framed messages with objective information about climate change and carbon offsetting programs will result in greater change in willingness to pay for carbon offsetting programs than any other combination of gain or loss framed messages containing objective/subjective information about climate change and carbon offsetting programs. this study adopted a scenario-based experimental design and a mixed between-within group methodology utilizing data collected before consumers received manipulated messages and after they received the manipulated messages. a × × scenario-based experiment enabled this study to examine the impacts of different types of information and the interactions among these interventions. 
in addition, the use of before and after data captured changes in tourists' purchase intentions and willingness to pay in order to assess the causal impact of the message on tourists' pro-environmental behaviors. this study utilized a student data and an on-line tourist panel to examine the manipulation and the hypotheses, respectively. the pilot offsetting program study data were collected from college students from a public university located in the northwest of the united states. each student was required to fill out a paper survey which included a randomly assigned message and they were asked to indicate whether the message was gain or loss framed and whether the information was objective or subjective. additionally, they rated the understandability and perceived credibility of the messages they received. the main study recruited participants via amazon mechanical turk (mturk). mturk is an online survey platform that is considered to provide a more representative sample of the general population compared to traditional data collection method (buhrmester, kwang, & gosling, ) and is reported to be appropriate when testing general principles that do not require samples from a specific tourism context (viglia & dolnicar, ) . mturk is commonly used by existing environmental studies (e.g., denton et al., ; wang & lyu, ) and tourism studies (e.g., ert et al., ; kim et al., ) . in the current study, a small amount of monetary incentive was offered to participants who completed the online survey and passed the attention checks (e.g., "this is an attention check question. please choose 'disagree'"). attention check questions have been widely used in social science studies to filter out careless respondents without affecting scale validity (kung at al., ) . participant first completed part of the survey and then received a randomly generated identification number. after two weeks, participants received one out of eight ( × × ) messages followed by part of the survey. at the end of part , participants entered the identification numbers received in part to create a longitudinal data format. eight different combinations of framed messages were used in this study to examine the impact on changes in tourists' purchase intentions and the amount that they are willing to pay for a hypothetical carbon offsetting company. each message consisted of three interventions: gain (loss) framed persuasive message, objective (subjective) information of climate change and objective (subjective) information about a carbon offsetting program. in gain framed messages, the text highlighted the benefits of purchasing carbon offset products (e.g., "if you choose to offset your carbon emissions, you will be removing carbon from the atmosphere and helping to preserve our environment"), whereas, the loss framed messages emphasized the cost of not purchasing carbon offset products (e.g., "if you do not offset your carbon emissions, you will not be removing carbon from the atmosphere and helping to prevent deterioration of our environment"). 
the objective and subjective information were manipulated by providing a message with either more facts (e.g., "carbon emission levels now exceed parts per million, which has never occurred in the , years of recorded history", "our offsetting is done through zero footprint, a private non-profit organization that helps over companies like ours to offset their carbon emissions") or more evaluations (e.g., "carbon emission levels have been climbing and many people believe the current levels of emissions are unsustainable", "our offsetting is done through zero footprint, a private non-profit organization that many companies use to offset their carbon emissions"). in the pilot study, which examined the manipulation effects of the messages, a seven-point bipolar likert scale was used (1: loss framed, 7: gain framed) to measure the manipulation of gain-loss framed messages. similar bipolar scales were used to measure the objectivity/subjectivity (1: subjective, 7: objective) of the climate change and carbon offsetting information. two other bipolar scales were used to examine the perceived credibility (1: not credible at all; 7: highly credible) and understandability (1: not understandable at all; 7: highly understandable) of the information. the main study assessed the changes in purchase intention and the amount that tourists are willing to pay for carbon offsetting products after the presentation of different messages. for this reason, in part one (time 1) of the study, participants were asked to express their purchase intention and the amount that they are willing to pay for general carbon offsetting products. in part two (time 2), participants read a randomly assigned message regarding a specific product and were asked to express their purchase intention and the amount that they are willing to pay for this product. this study adopted measurement items from existing studies (lu & gursoy, ) to measure purchase intention (pi) using a -level likert scale. the change in purchase intention was calculated as the difference between the two waves: Δpi = pi(time 2) - pi(time 1). the amount that tourists are willing to pay ($wtp) was measured by asking participants to select the amount ($ , $ , $ , or $ ) they would spend on the carbon offsetting product. this direct measure approach was appropriate in this study for the following reasons. first, the direct method has been found to have less hypothetical bias than the indirect method (schmidt & bijmolt, ). second, since this study focused on examining the changes in the amount they are willing to pay after reading the messages, the direct method allowed this study to accurately calculate and compare the changes, which were computed analogously: Δ$wtp = $wtp(time 2) - $wtp(time 1). independent sample t-tests were used for manipulation checks in the pilot study. the main study utilized correlation and anova with post-hoc comparisons to examine whether the participants who completed different versions of the survey were homogeneous in terms of purchase intention and willingness to pay before reading the message. to examine the main effect of gain-loss message framing (h a and h b) and the main effect of messages (h a and h b), this study performed a battery of paired sample t-tests, in which tourists' initial purchase intention and the amount that they are willing to pay were compared with the subsequent scores after processing gain- or loss-framed climate change and carbon offset product messages.
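a minimal sketch of the before/after analysis described above, assuming a matched data set with hypothetical column names and illustrative values; the change scores are simple time-2 minus time-1 differences, and paired-sample t-tests check whether the mean change differs from zero.

```python
# minimal sketch of the change-score computation and paired-sample t-tests described above.
# the data frame layout, column names and values are assumptions, not the study's data.
import pandas as pd
from scipy import stats

# one row per respondent, matched across the two waves via the identification number
df = pd.DataFrame({
    "pi_t1":  [3.0, 4.0, 2.5, 5.0, 3.5],    # purchase intention before the message (illustrative)
    "pi_t2":  [3.5, 4.5, 2.5, 6.0, 4.0],    # purchase intention after the message
    "wtp_t1": [0, 10, 0, 20, 10],            # willingness to pay (usd) before the message
    "wtp_t2": [10, 10, 5, 30, 10],           # willingness to pay (usd) after the message
})

df["delta_pi"] = df["pi_t2"] - df["pi_t1"]       # change in purchase intention
df["delta_wtp"] = df["wtp_t2"] - df["wtp_t1"]    # change in willingness to pay

t_pi, p_pi = stats.ttest_rel(df["pi_t2"], df["pi_t1"])
t_wtp, p_wtp = stats.ttest_rel(df["wtp_t2"], df["wtp_t1"])
print(f"purchase intention: mean change = {df['delta_pi'].mean():.2f}, t = {t_pi:.2f}, p = {p_pi:.3f}")
print(f"willingness to pay: mean change = {df['delta_wtp'].mean():.2f}, t = {t_wtp:.2f}, p = {p_wtp:.3f}")
```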
finally, the current study conducted an anova with post-hoc tests (tukey hsd) to compare the effects of different messages on the changes in purchase intention and willingness to pay (h and h ). post-hoc tests enabled this study to investigate whether one message has significantly better persuasion power than other messages. the pilot study validated the manipulation of messages. sixty (n = ) students were hired, and each student received a randomly assigned scenario. thus, the cell size for each type of manipulation (e.g., gain-loss message) was . the results of independent sample t-tests suggested that the gain framed messages (mean = . ) was significantly more gain focused (t ( ) = . , p = . ) than the loss framed messages (mean = . ). the objective-framed information (mean = . ) of climate change was perceived to be more objective (t ( ) = . , p = . ) than the subjective-framed information (mean = . ). perceived credibility (t ( ) = − . , p = . ) and understandability (t ( ) = − . , p = . ) were not significantly different across types. in addition, the objective-framed information (mean = . ) of carbon offsetting product was perceived to be more objective (t ( ) = . , p = . ) than the subjective-framed information (mean = . ), while the differences in perceived credibility (t ( ) = − . , p = . ) and understandability (t ( ) = − . , p = . ) were not significant. these findings indicated that the scenarios used by this study were appropriately designed. five hundred and eighty-five (n = ) participants were engaged in the main study, which tested the proposed hypotheses. most participants were female ( . %), aged between and ( . %), and married ( . %). a largest proportion of respondents indicated professional occupations ( . %), undergraduate degrees ( . %), and annual household above $ , ( . %). this demographic profile is similar to the demographic profiles reported by the previous studies that utilized mturk (denton et al., ) , email survey (e.g., han & hyun, ) or random sampling approach (mclean et al., ) . moreover, the result of anova and post hoc tests demonstrated that the cell means of purchase intention (f ( , ) = . , p = . ) and the amount of willingness to pay (f ( , ) = . , p = . ) across eight groups were not significantly different. these results indicated that the means were homogeneous before the experimental stimuli were administered, suggesting that the findings of this study were unlikely to be influenced by the differences in participants' initial behavioral intentions. paired sample t-tests were conducted to evaluate the main effects of message framing and message content. the results of these comparisons are summarized in table . results indicated that gain-framed messages produced greater positive effects on purchase intentions (. vs . , t ( ) = . , p < . ) and the amount that subjects are willing to pay ( . increase vs . decrease, t ( ) = . , p < . ), providing support for hypotheses h a and h b. the results of the main effect of messages indicated that after receiving framed messages that included information regarding climate change and carbon offsetting programs, subjects' purchase intention (t ( ) = . , p < . ) and willingness to pay (t ( ) = . , p < . ) significantly increased compared to time data collection, supporting h a and h b. 
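a minimal sketch of the between-condition comparison described above, assuming a long-format data set with hypothetical condition labels and simulated change scores; it runs a one-way anova across the eight message conditions and tukey hsd post-hoc tests.

```python
# minimal sketch of comparing eight message conditions on a change score with a one-way anova
# followed by tukey hsd post-hoc tests; condition labels and values are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
conditions = ["goo", "gos", "gso", "gss", "loo", "los", "lso", "lss"]   # frame x climate info x program info
df = pd.DataFrame({
    "condition": np.repeat(conditions, 30),
    "delta_wtp": np.concatenate([rng.normal(loc, 5, 30) for loc in (8, 4, 3, 2, -1, -1, -2, -2)]),
})

groups = [g["delta_wtp"].to_numpy() for _, g in df.groupby("condition")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"one-way anova: f = {f_stat:.2f}, p = {p_val:.4f}")

tukey = pairwise_tukeyhsd(endog=df["delta_wtp"], groups=df["condition"], alpha=0.05)
print(tukey.summary())
```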
the study also found that objective information generally created stronger impacts on attitude and behavior changes than did subjective information, pointing to a discrepancy in the impact of information types (objective vs subjective). to investigate the effects across different messages, anova and post-hoc (tukey hsd) tests were performed to compare the eight messages on the changes in consumer behaviors. the results suggested that different messages lead to significantly different changes in purchase intention (f ( , ) = . , p < . ), and the amount that travelers are willing to pay (f ( , ) = . , p < . ). the post-hoc analysis (table ) suggested that the gain-framed objective information about climate change and carbon offsetting programs (goo) lead to significantly higher changes in purchase intention than other messages. in addition, there were no significant differences in the effects on purchase intention across the loss-framed messages. these results supported h . similar results were found in the effects on the change in willingness to pay. the gain-framed message with objective knowledge of climate change and carbon offsetting programs (goo) was the most effective message in increasing the amount tourists are willing to pay for carbon offsetting programs. in contrast, no difference was found across lossframed messages. these findings supported h . this study utilized a longitudinal dataset to examine the effects of product messages and message framings on individuals' behaviors toward carbon offsetting products. using experimental designs, this study reveals a number of valuable and interesting insights into tourists' environmental behaviors. first, this study illustrates that gain-framed messages show a consistently greater influence on consumer behavior in the context of tourism carbon emissions than loss-framed messages. moreover, lossframed messages may reduce the willingness to pay for carbon offsetting products. these findings are consistent with the central tenets of regulatory fit theory and show the importance of devising messages that are congruent with consumers' construal levels in order to avoid the diminished responses that can be triggered by cognitive dissonance and mindset conflicts. furthermore, a magnification effect occurs when gain-framed messages are combined with objective information regarding both climate change and carbon offsetting programs. by comparing eight different messages, this study identified the gain-framed objective message of climate change and carbon offsetting programs (goo) as the most effective message, which increases customers' average willingness to pay by . standardized units (equivalent to approximately $ . us per traveler). the estimated cell means (table ) present several additional insights into the interrelationships between message framing and changes in consumer behavior. gain-framed messages are shown to have stronger influences on purchase intentions and willingness to pay across all conditions and within each of the simple effects. furthermore, objective information regarding climate change produces significantly greater increases in purchase intentions and willingness to pay within gainframed messages. however, within loss-framed messages, the difference between objective and subjective information regarding climate change is not significant. 
these paired comparisons within simple effects and between individual conditions provide additional evidence that objective information regarding climate change and objective information regarding carbon offsetting programs combined produce synergistically higher increases in carbon offsetting behavior only when combined with gain-framed messaging. interestingly, the loss-framed messages slightly decrease $wtp by . units on average, which equals around $ . us. the estimated cell means also reveal discrepant functions of carbon offsetting and climate change information. more specifically, when the information about carbon offsetting is subjective, whether the information about climate change is objective or subjective does not seem to be relevant in changing the purchase intention or the willingness to pay. however, objective information about carbon offsetting appears to be more important because, if it is combined with an objective climate change message, the purchase intention is higher than when the climate change message is subjective. the same applies to the willingness to pay when combining objective carbon offsetting information with objective climate change information, since the intention to pay is higher than when combining an objective carbon offsetting message with a subjective climate change message. these findings suggest that, compared with using climate change information to increase travelers' perceived importance of carbon offsetting, providing objective information about a carbon offsetting program is more effective in enabling travelers to evaluate purchase options and leads to more positive behavioral outcomes. moreover, this study found that a combination of subjective information about carbon offsetting programs and objective information about carbon emissions results in a lower willingness to pay compared to the combination of subjective information about both. as discussed previously (sections . . and . . ), the presentation of an objective (vs. subjective) message regarding climate change leads to a higher level of perceived importance of the carbon offsetting program. to make a deliberate purchase evaluation, travelers need detailed information regarding the program, which is likely to be included in an objective message. however, when a less effective (subjective) message is used to evoke travelers' perceived importance of carbon offsetting, travelers tend to engage in more heuristic processing. in this processing route, compared to objective information, peripheral cues about the program (subjective message) are more likely to be used, resulting in an impulsive or unplanned purchase in which travelers exhibit a low change in purchase intention but a higher level of willingness to pay. this study further compares the impacts of gain-framed messages, loss-framed messages, and the most effective message identified above (gain-framed objective information about climate change and carbon offsetting programs (goo)) in customer groups with different initial levels of purchase intention and willingness to pay (see fig. ). for purchase intention, customers with initial self-reported scores of - (low to very low), . to (neutral), . to (high), and . to (very high) were categorized into groups , , and , respectively. similarly, for willingness to pay, customers were categorized by using their responses in time ($ = group , $ = group , $ = group , $ = group ).
the results reveal that the gain-framed message with objective information about climate change and carbon offsetting programs was more effective among respondents who had low initial levels of purchase intentions or willingness to pay, which is arguably the most important target market for carbon offsetting pleas. fig. summarizes the changes in willingness to pay caused by messages among the customers (n = ) who did not want to pay for carbon offsetting products at time . those customers who subsequently received a loss-framed message (n = ) expressed very little willingness to pay after processing the message, and % of them remained at the level with no willingness to pay ($ ), whereas all the low initial-intention participants who received gain-framed messages (n = ) showed an increase in willingness to pay. the majority of them ( %) chose to spend $ after reading the messages. in contrast, customers who received the gain-framed objective information about climate change and carbon offsetting programs (goo) (n = ) exhibited the greatest change: % of them decided to spend the maximum amount ($ ) after processing the message. these numbers indicate that an appropriate message can not only cause an increase in willingness to pay but also motivate travelers with no initial behavioral intention to participate in pro-environmental activities. this study contributes to the literature by furthering our understanding of the drivers behind the attitude-behavior gap that inhibits tourism carbon offsetting participation. this study advances existing theories from three perspectives. first, this study confirms the important role of knowledge in people's pro-environmental behaviors. it suggests that messages that provide travelers with carbon offsetting program and climate change information result in increases in purchase intention and willingness to pay. second, the results of this study explain the previously mixed results regarding the impact of knowledge on pro-environmental behavior, suggesting that the information is effective only when it is properly framed. more specifically, this study reveals a connection between hsm and prospect theory. the interaction between message framing and information types demonstrates that people process information and make relevant decisions as both rational and irrational agents simultaneously. that is, people make rational decisions based on the quality and perceived relevance of the information; however, this information processing is also influenced by people's irrational cognition, which is driven by their mindset match. in the context of the current study, a gain-framed message evokes a positive mindset, which is congruent with tourists' promotion-oriented mindset. as a result, this irrational cognition boosts people's rational evaluation of the information regarding climate change and carbon offsetting, maximizing the positive outcome of information processing. third, an interaction between regulatory fit theory and prospect theory is found by this study, suggesting that the asymmetrical effect of message framing identified by previous sustainable tourism research is likely to be caused by an underestimation of the effects of travelers' primary travel goals. this study finds that loss-framed messages conflict with the pursuit of pleasure inherent in tourism and that avoiding dissonance by tailoring message frames to be gain-oriented will contribute to closing the attitude-behavior gap.
the tourism industry is considered a major contributor to global carbon emissions and potential climate change (kim et al., ). world tourism activities currently cause % of total carbon emissions, and this number keeps increasing (sonnenschein & smedby, ). great effort has been made by the tourism industry to reduce its environmental impact by promoting different pro-environmental products such as green hotels or carbon offsetting programs (babakhani et al., ). however, ineffective promotion of these products has frequently been found to cause negative outcomes, leading customers to perceive "greenwashing" (rahman et al., ). furthermore, the outbreak of covid- in early led to a pause in tourism activities worldwide. due to the stay-at-home orders enforced by governments, the reduction of transportation and energy demand drastically decreased the global carbon emission level by % in april compared to the average emission in (le quere et al., ), giving tourism destinations a break. as a result, the emerging effects of the pandemic are considered a turning point for sustainable tourism. in the unwto global guidelines to restart tourism from covid- , sustainability is a core element and becomes a norm for each dimension of the tourism sector (unwto, ). motivated by the above observations, this study has explored how messages alter customers' pro-environmental behavior in the tourism context, with a heavy concentration on addressing the attitude-behavior gap. findings suggest that message framing can significantly affect customer behavior and reveal the importance of congruency between message framing and the cognitive mindset of the consumers. based on the results, managers are recommended to develop messages that are consistent with customers' existing mindset, such as developing gain-framed messages for customers who are in a hedonic experience mindset. this study finds that gain-framed messages show salient and positive impacts on customers' purchase intention and willingness to pay and, interestingly, that loss-framed messages lead to a negative outcome since tourists are primarily promotion-focused. in addition, findings suggest that a combination of objective information about carbon offsetting and climate change in a gain-framed message is more effective in generating higher positive behavioral changes than a combination of mixed types or subjective information only. for this reason, when promoting pro-environmental products, managers will benefit from providing customers with objective informative messages that include, for instance, the current situation of global warming, detailed information about green practices, and the predicted outcomes of purchasing the green product. this study examined the effect of message framing on carbon offsetting behaviors by utilizing three interventions: gain vs loss framing, objective vs subjective information regarding climate change, and objective vs subjective information regarding carbon offsetting products. a × × repeated measures design was used. the results reveal that gain-framed messages can cause significant positive impacts on purchase intention and willingness to pay for carbon offsetting.
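as a companion to the design summarized above, the following sketch shows how the eight message conditions arise from the three binary interventions and how the three-way interaction among them could be tested on change scores. it is a hedged illustration with hypothetical variable names, not the authors' analysis code, and it uses a between-subjects anova on change scores as a stand-in for the full repeated measures model.

from itertools import product
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# the eight messages are combinations of three binary factors
factors = {"framing": ["gain", "loss"],
           "climate_info": ["objective", "subjective"],
           "offset_info": ["objective", "subjective"]}
conditions = list(product(*factors.values()))
print(conditions)   # eight combinations, e.g. ('gain', 'objective', 'objective') corresponds to goo

df = pd.read_csv("responses.csv")   # hypothetical: one row per participant with change scores
model = ols("wtp_change ~ framing * climate_info * offset_info", data=df).fit()
# the framing:climate_info:offset_info row of the anova table tests the three-way interaction
print(sm.stats.anova_lm(model, typ=2))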
in addition, a three-way interaction was found among the three interventions, suggesting that the combination of a gain-framed message and the conveyance of objective information regarding both climate change and carbon offsetting had an amplification effect, which further increases willingness to pay. this study also identified that the gain-framed objective knowledge of climate change and carbon offsetting programs had the greatest persuasion power. this study is not free of limitations. first, although this study found that gain framing is more effective in encouraging pro-environmental behaviors, several factors have been identified by researchers as potential moderators or confounds influencing the relationship between message framing and consumer outcomes. the most widely recognized factors include the closeness (distance) of the association that the consumer has with the topic, and the relative importance that the consumer ascribes to the behavior or the potential outcomes. as carbon emissions cannot be touched and seen like towels and recycling bins, and the effects of excessive carbon emissions are uncertain both in terms of scope and timing, the challenge of making carbon emissions important to consumers may be amplified by these factors. second, risk framing studies suggest that the negative value of a potential loss is typically greater than the positive value associated with potential gains (blose et al., ; kahneman & tversky, ), particularly when the perceived risks are high (lee & aaker, ). risk is a function of the combination of uncertainty and consequences (kapuściński & richards, ; moutinho, ), and perceived risk on a personal level has been shown to influence the salience of risk-framed messages (lee & aaker, ). since this study did not show the effectiveness of loss-framed messages, future studies can investigate whether different consequences, such as immediate or delayed consequences, make a difference in the effectiveness of a loss-framed message via the mediating effect of perceived risk.
oscar hengxuan chi: writing - original draft; conceptualization; methodology; formal analysis; discussion. gregory denton: introduction; literature review; validation; writing - review & editing. dogan gursoy: project administration; writing - review & editing; resources. none.
categorizing peer-to-peer review site features and examining their impacts on room sales
understanding attitudes and predicting social behavior
typography in destination advertising: an exploratory study and research perspectives
improving carbon offsetting appeals in online airplane ticket purchasing: testing new messages and using new test methods
measuring psychographics to assess purchase intention and willingness to pay
how tourists and tourism experts perceive climate change and carbon-offsetting schemes
the influence of message framing on hotel guests' linen-reuse intentions
post-message wtp levels among customers with no initial wtp
shopping orientation and mindsets: how motivation influences consumer information processing during shopping
heuristic and systematic information processing within and beyond the persuasion context (in: unintended thought)
restoring forests as a means to many ends
the use of message framing in the promotion of environmentally sustainable behaviors
willingness to pay for flying carbon neutral in australia: an exploratory study of offsetter profiles
an examination of the gap between carbon offsetting attitudes and behaviors: role of knowledge, credibility and trust
measuring consumer's willingness to pay for organic and fair trade products
support for climate change policy: social psychological and social structural influences
do pro-environmental appeals trigger pro-environmental behavior in hotel guests
economic impacts of a carbon tax on the australian tourism industry
the information processing of pictures in print advertisements
trust and reputation in the sharing economy: the role of personal photos in airbnb
environmentally responsible purchase behaviour: a test of a consumer model
spanning the gap: an examination of the factors leading to the green gap
a room with a viewpoint: using social norms to motivate environmental conservation in hotels
swedish air travelers and voluntary carbon offsets: towards the co-creation of environmental value?
assessing tourism's global environmental impact
the decarbonization impasse: global tourism leaders' views on climate change mitigation
loss or gain? the role of message framing in hotel guests' recycling behavior
what influences water conservation and towel reuse practices of hotel guests? tourism management
climate change and the air travel decisions of uk tourists
making a good decision: value from fit
beyond attitude structure: toward the informational determinants of attitude
the attitude-behaviour gap in sustainable tourism
prospect theory: an analysis of decision under risk
consumers' willingness to pay for green initiatives of the hotel industry
news framing effects on destination risk perception
sustainability research in the hotel industry: past, present, and future
the influence of preciseness of price information on the travel option choice
the effects of message framing and source credibility on green messages in hotels
the conflict mindset: how internal conflicts affect self-regulation
are attention check questions a threat to scale validity?
temporary reduction in daily global co emissions during the covid- forced confinement
bringing the frame into focus: the influence of regulatory fit on processing fluency and persuasion
effective communication strategies for hotel guests' green behavior
running out of water! developing a message typology and evaluating message effects on attitude toward water conservation
would consumers pay more for nongenetically modified menu items? an examination of factors influencing diners' behavioral intentions
the influence of message framing and issue involvement
achieving voluntary reductions in the carbon footprint of tourism and climate change
developing a mobile applications customer experience model (mace) - implications for retailers
role of trust, emotions and event attachment on residents' attitudes toward tourism
environment management in the hotel industry: does institutional environment matter?
do loss-framed persuasive messages engender greater message processing than do gain-framed messages? a meta-analytic review
the influence of trust perceptions on german tourists' intentions to book a sustainable hotel: a new approach to analyzing marketing information
environmental knowledge, pro-environmental behaviour and energy savings in households: an empirical study
consequences of "greenwashing": consumers' reactions to hotels' green initiatives
accurately measuring willingness to pay for consumer goods: a meta-analysis of the hypothetical bias
can tourism be part of the decarbonized global economy? the costs and risks of alternate carbon reduction policy pathways
increasing adoption of voluntary carbon offsets among tourists
experimental and quasi-experimental designs for generalized causal inference
designing air ticket taxes for climate change mitigation: insights from a swedish valuation study
the impact of online reviews on hotel booking intentions and perception of trust
the frame of decisions and the psychology of choice
"sustainability as the new normal": a vision for the future of tourism
does perceived risk influence the effects of message framing? a new investigation of a widely held notion
sustainable food consumption: exploring the consumer "attitude-behavioral intention" gap
a review of experiments in tourism and hospitality
inspiring awe through tourism and its consequence
a study of consumers' intentions to participate in responsible tourism using message framing and appeals
message framing and regulatory focus effects on destination image formation
none.
key: cord- -ode yi authors: naeem, salman bin; bhatti, rubina title: the covid‐ 'infodemic': a new front for information professionals date: - - journal: health info libr j doi: . /hir. sha: doc_id: cord_uid: ode yi
the virus, commonly known as covid‐ , which emerged in wuhan, china, in december , has spread in countries, areas or territories around the globe, with nearly deaths worldwide on april . in the wake of this pandemic, we have witnessed a massive infodemic, with the public being bombarded with vast quantities of information, much of which is not scientifically correct. fighting fake news is now the new front in the covid‐ battle. this regular feature comments on the role of health sciences librarians and information professionals in combating the covid‐ infodemic. to support their work, it draws attention to the myth busters, fact‐checkers and credible sources relating to covid‐ . it also documents the guides that libraries have put together to help the general public, students and faculty recognise fake news. a lie can run round the world before the truth has got its boots on (pratchett, ). an infodemic may be defined as an excessive amount of information concerning a problem such that the solution is made more difficult. the end result is that an anxious public finds it difficult to distinguish between evidence-based information and a broad range of unreliable misinformation. as the sars-cov- virus (commonly known as covid- ) spreads, it has been accompanied by a vast amount of medical misinformation, rumours and half-baked conspiracy theories from unfiltered channels, often disseminated through social media and other outlets. this infodemic now poses a serious problem for public health. in such a rapidly changing situation, with millions on lockdown, social media outlets such as twitter, facebook, whatsapp, instagram and wechat have become major sources of information about the crisis.
research by the bruno kessler foundation in italy showed that every day in march there was an average of new posts on twitter linked to misleading information about the pandemic (hollowood & mostrous, ). a recent ofcom survey ( ) in the uk indicated that % of uk adults reported that they had been exposed to misleading information online about the crisis, and % of adults in the uk were 'finding it hard to know what is true or false about the virus'. similarly, a study in the united states reported that % of us adults faced a great deal of confusion about the basic facts of current events due to the spread of fake news (barthel et al., ). most of the misinformation relates to findings of studies that, although empirical, were either preliminary or inconclusive (lai, shih, ko, tang, & hsueh, ). table summarises some of the commonly spread myths (government of pakistan, ; world health organization, a). the abundance of information on social media, frequently without any check on its authenticity, makes it difficult for an individual to distinguish between what are facts and what are opinions, propaganda or biases. there is a huge increase in stories on social media that may initially appear credible but later prove false or fabricated; however, by the time they are proven to be false, the damage may be irreversible. we know that every outbreak will be accompanied by a kind of tsunami of information, but also within this information you always have misinformation, rumours, etc. we know that even in the middle ages there was this phenomenon. but the difference now with social media is that this phenomenon is amplified, it goes faster and further, like the viruses that travel with people and go faster and further. so it is a new challenge, and the challenge is the [timing] because you need to be faster if you want to fill the void. . . what is at stake during an outbreak is making sure people will do the right thing to control the disease or to mitigate its impact. so it is not only information to make sure people are informed; it is also making sure people are informed to act appropriately (zarocostas, ).
efforts to combat the infodemic
fighting this infodemic is the new front in the covid- battle (child, ). in the 'post-truth' era, audiences are likely to believe information that appeals to their emotions and personal beliefs, as opposed to information that is regarded as factual and/or objective (maoret, ). this poses a major global risk and a threat to public health. thus, it becomes vital to educate people generally, and youth in particular, about the nature of fake news and the negative outcomes of sharing such news. unesco is making efforts to counter misinformation and promote the facts about the covid- disease. the agency is using the hashtags #thinkbeforeclicking, #thinkbeforesharing and #shareknowledge, and promoting the view that the rights to freedom of expression and access to information are the best ways of combating the dangers of disinformation (un news, ). the massachusetts governor, charlie baker, asserted that: 'everybody needs to get their news from legitimate places, not from their friend's friend's friend's friend'. the world economic forum ( ) published a three-step guideline on 'how to read the news like a scientist and avoid the covid- infodemic'. it includes (i) embracing uncertainty responsibly, (ii) asking where the information is coming from, and (iii) determining who is backing up the claim.
wardle and derakhshan ( ) presented a useful framework to understand the difference between the types of mis- and dis-information (table ). health sciences librarians (hsls) have the knowledge, skills and experience to play an important role in the fight against fake news. it is worth bearing in mind that since the s they have played a leading role in educating people (through information literacy programmes) about how to evaluate facts and how to check the authenticity of information (banks, ; dempsey, ). there is a need now for hsls to promote dialogue amongst themselves about how best to develop mechanisms to prevent and counteract the spread of fake news. the main weapon must be training and education, drawing on the many information literacy programmes to alert the public on how to identify fake news. the next section of this article catalogues some of the tools hsls can draw upon. libraries have put together guides to help students, staff, faculty and the general public recognise what is fake news (hernandez, ; stein-smith, ). the international federation of library associations and institutions (ifla, ) developed an eight-step guideline to identify fake news. these steps include (i) consider the source, (ii) check the author, (iii) check the date, (iv) check your biases, (v) read beyond, (vi) seek supporting sources, (vii) ask 'is it a joke?' and (viii) ask the experts (see figure ). another useful checklist for determining the reliability of an information source is craap (currency, relevance, authority, accuracy and purpose), created by the meriam library, california state university, chico, https://library.csuchico.edu/help/source-or-informationgood. there are many other information literacy guidelines that can help the general public to recognise and avoid fake news. hsls should be knowledgeable about these resources and publicise them to their users. the who has recently launched a myth buster to respond to the misinformation and myths relating to the covid- disease (figure ). several countries have also developed similar types of websites. these websites help people to determine the authenticity of the facts presented by any news or information sites, pinpointing any misinformation or myths which are indigenously induced and viral within a country through social networks. there is a range of fact-checking agencies and websites that can help verify the reality of news or information. several of these fact-checking sites continually update details of the news, myths or information that is fake. the following are lists of widely used fact-checkers. these can be useful to determine the authenticity of news or information during the pandemic. health sciences librarians have the knowledge and skills to provide guidance to the general public on how to find credible and reliable information in the age of post-truth, especially during the current covid- pandemic. hsls should share resources and collaborate to help people become more critical of what is being presented to them as facts through social media and other outlets. using the many tools at their disposal, the goal of information professionals must be to enable the public to distinguish between facts and fake information.
a website seeking truth & exposing fiction since
addressing the challenge of fake news through artificial intelligence
boom: covid- news
colombia fact check for ifcn
fact checking organizations on whatsapp
official twitter handle of government of pakistan for exposing fake news https://twitter.com/fakenews_buster?lang=en
snopes is the internet's definitive fact-checking resource
fighting fake news: how libraries can lead the way on media literacy
many americans believe fake news is sowing confusion
fighting fake news: the new front in the coronavirus battle: bogus stories and half-baked conspiracy theories are surging through the internet
what's behind fake news and what you can do about it? information today
national command and control centre for covid- in pakistan
fake news and academic librarians: a hook for introducing undergraduate students to information literacy. information literacy and libraries in the age of fake news
fake news in the time of c-
how to spot fake news
severe acute respiratory syndrome coronavirus (sars-cov- ) and corona virus disease- (covid- ): the epidemic and the challenges
the social construction of facts: surviving a post-truth world [video file]
evaluating information: applying the craap test
covid- news and information: consumption and attitudes. results from week one of ofcom's online survey
the truth
librarians, information literacy, and fake news
during this coronavirus pandemic, 'fake news' is putting lives at risk: unesco
information disorder: toward an interdisciplinary framework for research and policy making
how to read the news like a scientist and avoid the covid- 'infodemic'
coronavirus disease (covid- ) advice for the public: myth busters
munich security conference. director general
how to fight an infodemic. the lancet
we would like to acknowledge prof. khalid n haq for his support in completing the research work. we would also like to acknowledge jeannette murphy for her valuable comments, timely response and the effort she put into getting the work processed and published. there is no conflict of interest.
key: cord- -t zubl p authors: daubenschuetz, tim; kulyk, oksana; neumann, stephan; hinterleitner, isabella; delgado, paula ramos; hoffmann, carmen; scheible, florian title: sars-cov- , a threat to privacy? date: - - journal: nan doi: nan sha: doc_id: cord_uid: t zubl p
the global sars-cov- pandemic is currently putting a massive strain on the world's critical infrastructures. with healthcare systems and internet service providers already struggling to provide reliable service, some operators may, intentionally or unintentionally, override privacy-protecting measures to increase their system's efficiency in fighting the virus. moreover, though it may seem encouraging to see the effectiveness of authoritarian states in battling the crisis, we, the authors of this paper, would like to raise the community's awareness towards developing more effective means of battling the crisis without the need to limit fundamental human rights. to analyze the current situation, we discuss and evaluate the steps corporations and governments are taking to contain the virus by applying established privacy research. due to its fast spread throughout the world, the outbreak of sars-cov- has become a global crisis, putting stress on the current infrastructure in some areas in unprecedented ways and making shortcomings visible.
since there is no vaccination against sars-cov- , the only way to deal with the current situation is non-pharmaceutical interventions (npis) that reduce the number of new infections and flatten the curve of total patients. looking at european states like italy, spain, france or austria, which are in lockdown as of march and keep people away from seeing each other, citizens' right to live a self-determined life is no longer in their own hands. as shown by hatchett et al., this method had a positive effect in st. louis during the 1918 influenza pandemic [ ]; nevertheless, its long-term effects on the economy and day-to-day life, including psychological effects on people forced to self-isolate, are often seen as a cause of concern [ ]. furthermore, some models show the possibility of a massive rise in new infections after the lockdown is ended [ ]. hence, to handle the situation, measures are being discussed, some of which may invade a citizen's privacy. we can see an example of this approach in asian countries, e.g., south korea [ ] and singapore [ ], where, besides extensive testing, methods such as tracing mobile phone location data are used in order to identify possible contact with infected persons [ ]. other countries have taken similar measures. for instance, netanyahu, israel's prime minister, ordered shin bet, israel's internal security service, to start surveilling citizens' cellphones [ ], [ ], [ ]. persons who have been closer than two meters to an infected person receive text messages telling them to go into immediate home isolation for days. as shin bet's mandate is to observe and fight middle eastern terrorism, israel's citizens are naturally concerned that it is now helping in a medical situation [ ], [ ], [ ]. within the eu, in particular in germany and austria, telecommunications providers are already providing health organizations and the government with anonymised mobile phone location data [ ]. although nobody has evaluated the effectiveness of these measures, they raise concerns from privacy experts, as the massive collection of data can easily lead to harming the population and violating their human rights if the collected data is misused. in this paper, we discuss the privacy issues that can arise in times of crisis and take a closer look into the case of the german robert koch institute receiving data from telekom. we conclude by providing some recommendations about ways to minimize privacy harms while combating the pandemic. in this section we outline the general definitions of privacy, including describing the contextual integrity framework for reasoning about privacy, and discuss privacy harms that can occur from misuse of personal data. we furthermore discuss the issues with privacy that can occur during a crisis such as this global pandemic and what can be done to ensure information security and hence appropriate data protection. privacy is a broad concept which has been studied from the point of view of different disciplines, including social sciences and humanities, legal studies and computer science. the definitions of privacy are commonly centered around seeing privacy as confidentiality (preventing disclosure of information), control (providing people with means to control how their personal data is collected and used) and transparency (ensuring that users are aware of how their data is collected and used, as well as ensuring that the data collection and processing occurs in a lawful manner) [ ].
hannah arendt, a jewish philosopher who grew up in germany at the beginning of the twentieth century, defined privacy within the context of public and private space. her claim was that if there exists public space, there is also private space. arendt considers the privacy concept as a distinction between things that should be shown and things that should be hidden [ ], and holds that private spaces exist in opposition to public spaces: while the public square is dedicated to appearances, the private space is devoted to the opposite, namely hiding and privacy. she associated privacy with the home. because we have become used to a "digital private space", such as our own email inbox or personal data on the phone, people are concerned and offended when this private, hidden space is violated. however, in times of crisis, the notion of what is hidden or private takes on a new meaning. helen nissenbaum, a professor of information science, proposed the concept of contextual integrity as a framework to reason about privacy. according to her framework, privacy is defined as adhering to the norms of information flow [ ]. these norms are highly contextual: for example, it is appropriate for doctors to have access to the medical data of their patients, but in most cases it is inappropriate for employers to have access to medical data of their workers. nissenbaum distinguishes between the following five parameters of information flow [ ]: the sender, the subject, the receiver, the information type and the transmission principle (e.g. whether confidentiality has to be preserved, whether the data exchange is reciprocal or whether consent is necessary and/or sufficient for the appropriateness of the data exchange). the norms governing these parameters are furthermore evaluated against a specific context, including whether the information flow is necessary for achieving the purpose of the context. data misuse can lead to different kinds of harms that jeopardise the physical and psychological well-being of people as well as the overall society (see e.g. solove, ). one of them is persecution by the government - this might not be a big concern in democratic societies, but democratic societies can move into more authoritarian governance styles, especially in crisis situations. even if this does not happen, there are other harms, e.g. a so-called "chilling effect", where people are afraid to speak up against the accepted norms when they feel that they are being watched. furthermore, harms can result from data leaks, like unintentional errors or cyberattacks. in these cases, information about individuals may become known to unintended targets. this can result in physical harm, stalking and damage to the data subject's personal relationships. knowledge about one's medical data can lead to job discrimination. leaked details about one's lifestyle can lead to raised insurance rates. leakage of location data, in particular, can reveal a lot of sensitive information about an individual, such as the places they visit, which might in turn result in dramatic effects when revealed: just think of closeted homosexuals visiting a gay club or marginalized religious minorities visiting their place of worship. even beyond these concerns, access to large amounts of personal data can be used for more effective opinion and behavior manipulation, as evidenced by the cambridge analytica scandal [ ].
in summary, absence of privacy has a dramatic effect on our freedom of expression as individuals and on the well-functioning of society as a whole. it is therefore important to ensure that the damage to privacy is minimized even in times of crisis. when considering the example of doctors treating their patients, we can use the framework of contextual integrity to reason about the appropriate information flow as follows: the patient is both the sender and the subject of the data exchange, the doctor is the receiver, the information type is the patient's medical information, and the transmission principle includes, most importantly, doctor-patient confidentiality aside from public health issues. the overall context is health care, and the purpose of the context is both healing the patient and protecting the health of the population. it can therefore be argued that in case of a global pandemic, one should allow the exchange of patients' data, especially when it comes to data about infected patients and their contacts, to the extent that it is necessary to manage the pandemic. there is, however, a danger of misusing the collected data outside of the defined context - the so-called "mission creep", which experts argue was the case with the nsa collecting data from both us and foreign citizens on an unprecedented scale in the aftermath of the 9/11 terrorist attacks [ ]. furthermore, aside from the danger of data collection by the government, the crisis situation leads to an increase in data collection by private companies, as people all over the world switch from face-to-face communication to remote communication and remote collaboration tools. the data collection and processing practices of these tools, however, are often opaque to their users: as known from research in related fields, privacy policies are often too long, obscure, and complicated to figure out, and shorter notices such as cookie disclaimers tend to be perceived as too vague and not providing useful information [ ], [ ]. this leads to users often ignoring the privacy policies and disclaimers, hence being unaware of important information about their data sharing. moreover, even among privacy-concerned users, the adoption of more privacy-friendly tools can be hindered by social pressure and network effects, if everyone else prefers to use more popular tools that are less inclined to protect the privacy of their users (as seen in studies on security and privacy adoption in other domains, see e.g. [ ], [ ]). this data collection further strengthens the effects of so-called surveillance capitalism [ ], which leads to corporations having even more power over people than before the crisis. this access to personal data by corporations is furthermore aggravated by an increased usage of social media platforms, increases in users sharing their location data, and users giving applications increased access to their phone's operating system. lowered barriers and increased online activity that can be directly linked to an individual or an email address are a treasure trove for for-profit corporations that monetize consumer data. many corporations are now getting free or low-cost leads for months to come. a question that is often open for discussion is to what extent people themselves would be ready to share their data, even if it results in a privacy loss.
as such, data sharing habits in general have been the topic of research, leading to discussions on the so-called privacy paradox: people claiming that privacy is important to them, yet not behaving in a privacy-preserving way. the privacy paradox can be explained by different factors [ ]. one of them is the lack of awareness about the extent of data collection as well as about the possible harms that can result from unrestricted data sharing. a further factor stems from decision biases, such as people's tendency to underestimate risks that may happen in the future compared against an immediate benefit. another noteworthy factor are manipulations by service providers (so-called dark patterns) nudging users into sharing more of their data contrary to their actual preferences. rational decisions in times of crisis are even more difficult. given the state of stress and anxiety many are in, people might be more likely to accept privacy-problematic practices if they are told that these practices are absolutely necessary for managing the crisis - even if this is not actually the case. the problem that people are more likely to surrender their privacy rights if they have already had to surrender other fundamental rights (such as freedom of movement due to lockdown restrictions) is reminiscent of the psychological mechanism of the door-in-the-face technique. the door-in-the-face technique is a method of social influence where we first ask a person to do something that requires more than they would accept, and afterward we ask for a second, smaller favor. research has shown that the person is then more likely to accept this smaller favor [ ]. in the case of the sars-cov- pandemic, governments first asked their citizens to self-isolate (limiting a significant fundamental freedom) before following up with the smaller favor of handing over some private data to fight the outbreak. however, according to cantarero et al., the level of acceptance differs from individual to individual [ ], which makes it even more critical to raise awareness in the population. at the same time, timely access to data voluntarily shared by people (in addition to the data collected by hospitals and authorities) can indeed help combat the epidemic. in this, we support informed consent of data subjects, because it ensures that people will only share data with institutions that keep their data safe against privacy harms. in an increasingly digital world, establishing proper information security safeguards is critical in preventing data leaks, and hence, in preserving the privacy of data subjects. however, the situation of such a global pandemic places significant challenges on established workflows, information technology, and security as well, resulting in various issues. these problems arose when people stopped traveling and going into the office, and started working from home. while some companies and institutions had provided the possibility of remote work before the crisis, or are at least infrastructurally and organizationally prepared, many are unprepared for such a dramatic increase in home office work. they face significant technical and organizational challenges, such as ensuring the security of their systems given the need for opening the network to remote access, e.g., via a so-called demilitarized zone (dmz) or perimeter control, an extension of technical monitoring of the system, and an overall extension of system hardening in "hostile" (home) environments.
a recent poll revealed that the security teams of % of companies did not have "emergency plans in place to shift an on-premise workforce to one that is remote" [ ]. even worse, these challenges are more present in regulated (and therefore often critical) industries, as sumir karayi, ceo and founder of e, states in a threatpost interview: "government, legal, insurance, banking and healthcare are all great examples of industries that are not prepared for this massive influx of remote workers [...] many companies and organizations in these industries are working on legacy systems and are using software that is not patched. not only does this mean remote work is a security concern, but it makes working a negative, unproductive experience for the employee. [...] regulated industries pose a significant challenge because they use systems, devices or people not yet approved for remote work [...] proprietary or specific software is usually also legacy software. it's hard to patch and maintain, and rarely able to be accessed remotely." [ ] in consequence, the urgent need to enable remote collaboration, combined with the lack of preparation and preparation time, may lead to hurried and immature remote work strategies. at the same time, ensuring proper security behavior of employees - something that was a challenge in many companies also before the crisis - is becoming an even more difficult task. we can currently see employees trying to circumvent corporate restrictions by sending or sharing data and documents over private accounts (shadow it). additionally, there is a surge of social engineering attacks, among them phishing email campaigns, business email compromise, malware, and ransomware strains, as sherrod degrippo, senior director of threat research and detection at proofpoint, states [ ]. similar findings are provided by atlas vpn research, which shows that several industries broadly use unpatched or no longer supported hardware or software systems, including the healthcare sector [ ]. together with immature remote strategies, information security and privacy risks may significantly increase and undermine the standardized risk management process. the european data protection board (edpb) has formulated a statement on the processing of personal data in the context of the sars-cov- outbreak [ ]. according to the edpb, data protection rules do not hinder measures taken in the fight against the coronavirus pandemic. even so, the edpb underlines that, even in these exceptional times, the data controller and processor must ensure the protection of the personal data of the data subjects. therefore, several considerations should be taken into account to guarantee the lawful processing of personal data, and in this context, one must respect the general principles of law. as such, the gdpr allows competent public health authorities like hospitals and laboratories, as well as employers, to process personal data in the context of an epidemic, in accordance with national law and within the conditions set therein. concerning the processing of telecommunication data, such as location data, the national laws implementing the eprivacy directive must also be respected. the national laws implementing the eprivacy directive provide that location data can only be used by the operator when it is made anonymous, or with the consent of the individuals. if it is not possible to only process anonymous data, art.
of the eprivacy directive enables the member states to introduce legislative measures pursuing national security and public security. this emergency legislation is possible under the condition that it constitutes a necessary, appropriate, and proportionate measure within a democratic society. if a member state introduces such measures, it is obliged to put in place adequate safeguards, such as granting individuals the right to a judicial remedy. in this section, we conduct a preliminary analysis of german disease control receiving movement data from a telecommunication provider. in germany, the authority for disease control and prevention, the robert koch institute (rki), made headlines on march , , as it became public that telecommunication provider telekom had shared an anonymized set of mobile phone movement data to monitor citizens' mobility in the fight against sars-cov- . in total, telekom sent million customers' data to the rki for further analysis. the german federal commissioner for data protection and freedom of information, ulrich kelber, overseeing the transfer, commented on the incident that he is not concerned about violating any data protection rules, as the data had been anonymized upfront [ ]. however, researchers have shown that seemingly anonymized data sets can indeed be "deanonymized" [ ]. constanze kurz, an activist and expert on the subject matter, commented that she was skeptical about the anonymization. she urged telekom to publicize the anonymization methods that were being used and asked the robert koch institute to explain how it will protect this data from unauthorized third-party access. several research studies have shown the deanonymization of data sets to extract personal information, including a case from , when a journalist and a data scientist acquired an anonymized dataset with the browsing habits of more than three million german citizens [ ], [ ]. as, at the moment, it is hard to tell whether disclosure of personal data is possible from the shared set (even more so given the development of new re-identification methods, including possible future developments), we look at the worst-case scenario, namely, that personal data is deanonymizable. given this scenario, we use nissenbaum's contextual integrity thesis to understand if telekom has violated its customers' privacy [ ]. we do so by stating the context of the case, the norm - what everyone expects should happen - plus five contextual parameters to further analyze the situation. table i summarizes the contextual integrity framework as applied to the german data sharing situation. a parameter that is perhaps most interesting for further elaboration is the transmission principle. given the context and urgency of the situation, one might agree that having the german federal commissioner for data protection and freedom of information oversee the transaction and taking some measures to anonymize the data set appropriately serves as a practical solution towards limiting the spread of sars-cov- , also without explicitly obtaining consent from data subjects. we do, however, assume that appropriate use of data means limiting it to the specific purpose of combating the pandemic, and not reusing it for other purposes without further assessment. note, however, that there is space for discussion, in which the community should be engaged, about the norms that apply in this situation, especially given the extraordinary situation and the severity of the crisis.
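to make the mapping in table i concrete, the sketch below (our own illustration, not code or material from the paper) encodes the five contextual integrity parameters for the reported telekom-to-rki flow and applies a deliberately naive norm check; the class, the allowed-receiver and allowed-purpose sets, and the rule itself are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class InformationFlow:
    sender: str
    subject: str
    receiver: str
    info_type: str
    transmission_principle: str   # e.g. "anonymised, pandemic response only, dpa oversight"

# the flow as reported: telekom -> rki, anonymised movement data, crisis-limited use
reported_flow = InformationFlow(
    sender="telekom",
    subject="telekom customers",
    receiver="robert koch institute",
    info_type="anonymised movement data (recent)",
    transmission_principle="anonymised, pandemic response only, dpa oversight",
)

ALLOWED_RECEIVERS = {"robert koch institute"}
ALLOWED_PURPOSES = {"pandemic response only"}

def within_norms(flow: InformationFlow, purpose: str) -> bool:
    # a deliberately simple rule: agreed receiver, anonymised data, crisis-limited purpose
    return (flow.receiver in ALLOWED_RECEIVERS
            and "anonymised" in flow.transmission_principle
            and purpose in ALLOWED_PURPOSES)

print(within_norms(reported_flow, "pandemic response only"))   # True
print(within_norms(reported_flow, "commercial analytics"))     # False: a "mission creep" flow

the same check can be pointed at the hypothetical flows discussed next, for instance a flow whose receiver is a third party or whose purpose is no longer pandemic response, which the rule flags as falling outside the agreed norms.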
a further step, which is also part of nissenbaum's five-parameter contextual integrity framework, is to create hypothetical scenarios that could threaten the decision's future integrity. we therefore consider the following hypotheticals, which we believe would violate contextual integrity: hypothetical scenario : "the robert koch institute does not delete the data after the sars-cov- crisis"; hypothetical scenario : "the robert koch institute forwards data to other state organs or to third parties". these hypothetical scenarios would violate the transmission principle that the data is only going to be used to handle the crisis (and, in the second hypothetical, also the receiver of the data). we believe a future assessment is necessary to determine if the data transfer was indeed necessary to fight the pandemic; alternatively, customer permissions should have been required upfront. hypothetical scenario : "the robert koch institute requests data about telekom customer movements from the last ten years". these scenarios change the type of information. we want to argue that the newly exchanged data no longer serves the purpose of fighting the pandemic. this point was also made by the electronic frontier foundation [ ], noting that since the incubation period of the virus is estimated to last days, getting access to data that is much older than that would be a privacy violation. we think that, similar to the first three scenarios, a further assessment, based on transparent information, is necessary. as with the first three scenarios, the transmission principle of confidentiality is violated in this scenario, albeit unintentionally, and, in case of improper anonymization, personal information might still leak. hence, a privacy violation has taken place. referring to the outlined information security concerns, an increase in cyber attacks related to improper information security management in a time of crisis significantly increases this risk. given the above-outlined hypotheticals, we recommend implementing appropriate protection measures. countries around the world have already taken numerous initiatives to slow down the spread of sars-cov- , such as remote working, telemedicine, and online learning and shopping. that has required a legion of changes in our lives. however, as mentioned in previous sections, these activities come with associated security and privacy risks. various organizations are raising concerns regarding these risks (see e.g. the statement and proposed principles from the electronic frontier foundation [ ]). of particular interest is the case of healthcare systems, which must be transparent with the information related to patients, but cautious with the disclosed information. equally, hospitals might also decide to withhold information in order to try to minimize liability. that is a slippery slope: both cases - no information or too much information - might lead to a state of fear in the population and a false sense of security (i.e., no information means there is no problem) or a loss of privacy when we decide to disclose too much information. in the current situation and others that might arise, principles and best practices developed before the crisis are still applicable, namely privacy by design principles and, most importantly, data minimization. only strictly necessary data needed to manage the crisis should be collected, and it should be deleted once humanity has overcome the crisis.
in this context, patient data should be collected, stored, analyzed, and processed under strict data protection rules (such as the general data protection regulation, gdpr) by competent public health authorities, as mentioned in the previous section [ ]. an example of addressing the issues of data protection during a crisis can also be seen in the austrian project vkt-goepl [ ]. it was the project's goal to generate a dynamic situational map for ministries overseeing a crisis. events such as terrorist attacks, flooding, fire, and pandemic scenarios were selected. already ten years ago, the need for geographical movement data provided by telecommunication providers was treated as a use case. furthermore, the project initiators prohibited the linking of personal data from different databases in cases where this data was not anonymized. they recommended that ministries transparently inform all individuals about the policies which apply to the processing of their data. regarding data analysis, we recommend that citizens only disclose their data to authorized parties once these have put adequate security measures and confidentiality policies in place. moreover, only data that is strictly necessary should be shared. we think that proper data storage should make use of advanced technology such as cryptography. patient data - including personal information such as contact data, sexual preferences or religion, amongst others - should not be revealed. as anonymizing data has been shown to be a non-trivial task that is hard to achieve properly, advanced solutions such as cryptographic techniques for secure multiparty computation or differential privacy algorithms for privacy-preserving data releases should be used. besides, to ensure privacy from the collection stage, consistent training of the medical personnel, volunteers, and administrative staff should be provided. the current lack of training (due to limited resources, shortage of specialists, and general time pressure) leads to human errors and neglect of proper security and privacy protection measures. a further concern, which we did not investigate in this paper, is to ensure fairness when it comes to algorithmic decision making. as such, automated data systems ("big data" or "machine learning") are known to have issues with bias based, e.g., on race or gender, which can lead to discrimination [ ]. in order to prevent such adverse effects during the crisis, these systems should furthermore be constrained so as to limit bias based on nationality, sexual preferences, religion, or other factors that are not related to handling the pandemic. finally, we recognize that having access to timely and accurate data can play a critical role in combating the epidemic. nevertheless, as discussed in previous sections, ignoring issues around the collection and handling of personal data might cause serious harm that will be hard to repair in the long run. therefore, as big corporations and nation-states are collecting data from the world's population, it is of crucial importance that this data is handled responsibly, keeping the privacy of the data subjects in mind.
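as one concrete illustration of the privacy-preserving release techniques mentioned above, the sketch below adds laplace noise to a simple count query, the textbook mechanism for epsilon-differential privacy. the district count, the epsilon values and the function name are illustrative assumptions and not part of any specific health authority's pipeline.

import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float) -> float:
    # a counting query has sensitivity 1, so the laplace noise scale is 1 / epsilon
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

true_cases_in_district = 137            # hypothetical raw count
for eps in (0.1, 0.5, 1.0):             # smaller epsilon = stronger privacy, noisier answer
    print(eps, round(dp_count(true_cases_in_district, eps), 1))

smaller values of epsilon give stronger formal privacy guarantees at the cost of noisier released statistics, which is exactly the trade-off authorities would have to weigh when publishing case counts or mobility aggregates.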
cdc grand rounds: modeling and public health decision-making
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
impact of non-pharmaceutical interventions (npis) to reduce covid- mortality and healthcare demand
coronavirus: south korea's success in controlling disease is due to its acceptance of surveillance
interrupting transmission of covid- : lessons from containment efforts in singapore
effect of non-pharmaceutical interventions for containing the covid- outbreak in china
israel begins tracking and texting those possibly exposed to the coronavirus
israel starts surveilling virus carriers, sends those who were nearby to isolation
mobilfunker a liefert bewegungsströme von handynutzern an regierung [mobile operator a supplies movement flows of mobile phone users to the government]
the cyber security body of knowledge
the human condition
privacy as contextual integrity
contextual integrity up and down the data food chain. theoretical inquiries in law
fresh cambridge analytica leak 'shows global manipulation is out of control'
mission creep: when everything is terrorism
a design space for effective privacy notices
"this website uses cookies": users' perceptions and reactions to the cookie disclaimer
obstacles to the adoption of secure communication tools
a socio-technical investigation into smartphone security
big other: surveillance capitalism and the prospects of an information civilization
the myth of the privacy paradox
door-in-the-face-technik [door-in-the-face technique]
being inconsistent and compliant: the moderating role of the preference for consistency in the door-in-the-face technique
coronavirus poll results: cyberattacks ramp up, wfh prep uneven
working from home: covid- 's constellation of security challenges
us is fighting covid- with outdated software
statement on the processing of personal data in the context of the covid- outbreak
warum die telekom bewegungsdaten von handynutzern weitergibt [why telekom shares mobile phone users' movement data]
broken promises of privacy: responding to the surprising failure of anonymization
estimating the success of re-identifications in incomplete datasets using generative models
big data deidentification, reidentification and anonymization
protecting civil liberties during a public health crisis
crisis and disaster management as a network activity
how big data increases inequality and threatens democracy
key: cord- -j q pcfa authors: zhan, xiu-xiu; liu, chuang; zhou, ge; zhang, zi-ke; sun, gui-quan; zhu, jonathan j.h.; jin, zhen title: coupling dynamics of epidemic spreading and information diffusion on complex networks date: - - journal: appl math comput doi: . /j.amc. . . sha: doc_id: cord_uid: j q pcfa
the interaction between disease and disease information on complex networks has facilitated an interdisciplinary research area. when a disease begins to spread in the population, the corresponding information would also be transmitted among individuals, which in turn influences the spreading pattern of the disease. in this paper, firstly, we analyze the propagation of two representative diseases (h n and dengue fever) in the real-world population and their corresponding information on the internet, suggesting the high correlation of the two types of dynamical processes. secondly, inspired by empirical analyses, we propose a nonlinear model to further interpret the coupling effect based on the sis (susceptible-infected-susceptible) model. both simulation results and theoretical analysis show that a high prevalence of epidemic will lead to a slow information decay, consequently resulting in a high infected level, which shall in turn prevent the epidemic spreading. finally, further theoretical analysis demonstrates that a multi-outbreak phenomenon emerges via the effect of coupling dynamics, which finds good agreement with empirical results. this work may shed light on the in-depth understanding of the interplay between the dynamics of epidemic spreading and information diffusion.
finally, further theoretical analysis demonstrates that a multi-outbreak phenomenon emerges via the effect of the coupling dynamics, which finds good agreement with empirical results. this work may shed light on an in-depth understanding of the interplay between the dynamics of epidemic spreading and information diffusion. recently, understanding how diseases spread among individuals has become an increasingly hot research area of nonlinear studies [ ] . generally, epidemic spreading is considered to be a dynamic process in which the disease is transmitted from one individual to another via physical contact in peer-to-peer networks. to date, a vast amount of research has tried to understand the epidemic spreading phenomenon, and it can be categorized into three main types: (i) epidemic spreading on various types of networks [ ] , such as scale-free networks [ , ] , small-world networks [ , ] and interdependent networks [ , ] ; (ii) propagation mechanisms that describe the dynamic spreading process, such as the susceptible-infected-recovered (sir) model for influenza [ , ] , the susceptible-infected-susceptible (sis) model for sexually transmitted diseases [ , ] and the susceptible-exposed-infected-recovered (seir) model for rabies [ , ] ; and (iii) data-driven modeling approaches that tackle epidemic transmission [ ] by analyzing available real datasets, such as scaling laws in human mobility [ , ] , individual interactions [ , ] and contact patterns [ , ] . the majority of the aforementioned studies focus on epidemic spreading in isolation, ignoring the fact that the diffusion of information about the diseases themselves may also have a significant impact on epidemic outbreaks [ ] . for example, the outbreak of a contagious disease may lead to rapid spreading of disease information, through either the media or friends. conversely, that information will also drive people to take corresponding protective measures, such as staying at home, wearing face masks and getting vaccinated [ ] . such behavioral responses may further impact epidemic outbreaks in large populations [ ] . 
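as a point of reference for the network epidemic models listed above, the sketch below runs a plain discrete-time sis process on a scale-free (barabási-albert) contact network. the network size and the infection and recovery probabilities are arbitrary illustrative choices, and the code is not taken from the paper.

```python
import numpy as np
import networkx as nx

def sis_run(G, beta, gamma, steps=200, seed=7):
    """Discrete-time SIS process on a contact network (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    infected = {int(rng.integers(G.number_of_nodes()))}   # a single index case
    prevalence = []
    for _ in range(steps):
        new_infected = set(infected)
        for u in infected:
            for v in G.neighbors(u):                      # S -> I along contact edges
                if v not in infected and rng.random() < beta:
                    new_infected.add(v)
            if rng.random() < gamma:                      # I -> S, no lasting immunity
                new_infected.discard(u)
        infected = new_infected
        prevalence.append(len(infected) / G.number_of_nodes())
    return prevalence

G = nx.barabasi_albert_graph(2000, 3, seed=1)             # scale-free contact network
prev = sis_run(G, beta=0.05, gamma=0.2)
print(f"endemic prevalence after {len(prev)} steps: {prev[-1]:.3f}")
```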
therefore, studies on the coupling effect between epidemic spreading and information diffusion have attracted much attention from various disciplines. theoretical models have been proposed to explain how both disease and information simultaneously spread in the same population [ ] [ ] [ ] [ ] [ ] . in particular, the nonlinear influence of coupling parameters on the basic reproductive number ( r ) is studied to show the interplay between the two spreading processes [ ] . theoretical results indicate that the coupling interaction could decrease epidemic outbreak size in a well-mixed population [ ] . in some cases, enough behavioral changes would emerge in response to the diffusion of a great deal of disease information so that the severe epidemic would vanish completely, even the epidemic transmission rate was higher than the classical threshold initially [ ] [ ] [ ] [ ] [ ] . in addition, the interplay between information diffusion and epidemic spreading is elucidated on multiplex networks, where each type of dynamics diffuses on respective layers (e.g., information diffusion on communication layer versus epidemic spreading on physical layer) [ ] [ ] [ ] . as a consequence, the epidemic threshold, as related to the physical contact layer, can be increased by enhancing the diffusion rate of information on the communication layer. therefore, the effect of behavioral changes arises in three aspects [ ] : (i) disease state of the individuals, e.g., vaccination [ ] [ ] [ ] [ ] [ ] ; (ii) epidemic transmission and recovery rate [ , ] ; (iii) topological structure of contact network, e.g., the adaptive process [ ] [ ] [ ] [ ] . besides researches from physical discipline, scholars from mass communication share similar views on the causal linkages of the two diffusion processes. the outbreak of severe diseases usually attracts heavy media coverage, subsequently resulting in massive responses from the public: (i) cognitive responses, such as the attention to the information and increased awareness of the situation [ ] ; (ii) affective responses, such as anxiety, fear, or even panic [ ] ; (iii) behavioral responses, such as the adoption of new practices in order to replace undesirable habits [ ] . however, those assumptions are just theoretical hypotheses rather than empirical facts as it is difficult to find relevant data of one-to-one relationship in the spreading process. even when the data is available, it is also difficult to separate the unique effect of information on the control of epidemics from interference factors, such as variation of virus, seasonal factors and improved medical treatments, etc. present studies on the coupling dynamics mainly focus on the suppression effect of epidemic spreading by information diffusion. the occurrence of a disease prompts the sharing of corresponding information, leading to preventive measures that inhibit further epidemic spreading [ , ] . researchers have also pointed out that when the epidemic outbreak is under control, people shall not be very vigilant in discussing or sharing relevant information. it will lead to a consequent decrease in protection actions and may result in a recurrence of epidemics in future. for example, the spread of sars (severe acute respiratory syndromes) is alleviated in early march , however, a sudden increase appear later that month (as indicated in the evolution curve of the probable cases of sars, see fig. in ref. [ ] ). 
in this work, firstly, we demonstrate a similar outbreak pattern using data on the spread of two representative diseases, i.e., avian influenza a ( h n ) [ ] [ ] [ ] and dengue fever [ , ] , along with the diffusion of respective disease information. secondly, a nonlinear mathematical model is proposed to describe the coupled spreading dynamics as an sis spreading model. results show that information diffusion can significantly inhibit epidemic spreading. finally, both empirical analysis and the proposed model find good agreements in revealing a multi-outbreak phenomenon in the coupled spreading dynamics. to better illustrate this work, we collected data of two representative diseases, h n and dengue fever. each disease has two time series datasets: (i) daily number of individuals infected by the corresponding disease in china, which are collected from the chinese center for disease control and prevention ; (ii) online diffusion messages discussing or forwarding the information of the corresponding disease during the same period of epidemic spreading. the message diffusion data was crawled from the largest micro-blogging system in china [ ] , sina weibo ( http://www.weibo.com/ ). we have essentially obtained one-year data for the disease h n from the year to , and two-year data for dengue from the year to . we assume that individuals who post or retweet messages about the observed diseases are considered to be aware of the disease. empirical analysis of h n : fig. (a) shows the spreading processes of both disease and disease information of h n . it can be seen that the evolutionary trend of two processes are highly correlated, with pearson correlation coefficient of . . when the epidemic broke out in apr. and feb. ( fig. (a) ), it shows that many people were discussing it online simultaneously. actually, public responses to h n , such as staying at home or wearing face masks, can also affect the spread of the epidemic. the peaks of the disease spreading and the information diffusion shown in fig. (a) suggest that the mutual influence of these two spreading processes could be significant. interestingly, the size of the first epidemic peak (apr. ) is smaller than the second one (feb. ), which is inversely correlated with the information amount. that is to say, the number of individuals discussing the disease during the first outbreak is much greater than that of the second one. this might imply that the awareness of epidemics and the physical epidemics could influence each other. empirical analysis of dengue fever: fig. (b) describes the spreading processes of both disease and disease information of dengue. similar to the analysis of h n , the evolution trend of the two processes is also consistent with each other, with even much higher correlation coefficient of . . according to the two largest peaks (in sept. and sept. , respectively) of disease spreading, we find that the first epidemic peak is also smaller than the second one, while the corresponding information peaks show a contrary trend. considering the two small peaks of information in fig. (b ) and (b ), we can also find the same relationship between the the two dynamic processes as that of two largest peaks, suggesting also the possible coupling effect of the awareness of epidemics and the infected cases of dengue. in the aforementioned section, we empirically showed that the spread of disease and disease information has a coupling effect with each other by analyzing the data from two contagious diseases. 
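the correlation figures reported above can be reproduced in a few lines once the two daily series (reported cases and disease-related posts) are available. since the cdc and weibo data are not reproduced here, the sketch below uses synthetic placeholder series; only the computation itself (pearson correlation, plus a crude lagged variant) is the point.

```python
import numpy as np

# Placeholder daily series standing in for reported-case counts and online
# message counts; the shapes and magnitudes are synthetic, not real data.
rng = np.random.default_rng(42)
days = np.arange(120)
cases = 50 * np.exp(-((days - 30) / 10.0) ** 2) + 80 * np.exp(-((days - 90) / 12.0) ** 2)
posts = 400 * np.exp(-((days - 28) / 9.0) ** 2) + 250 * np.exp(-((days - 88) / 11.0) ** 2)
cases = cases + rng.poisson(3, size=days.size)
posts = posts + rng.poisson(20, size=days.size)

# Pearson correlation between the two trends
r = np.corrcoef(cases, posts)[0, 1]
print(f"pearson correlation: {r:.2f}")

# A crude lagged correlation indicates whether online discussion tends to
# lead or lag the reported cases.
for lag in (-3, 0, 3):
    r_lag = np.corrcoef(cases[5:-5], np.roll(posts, lag)[5:-5])[0, 1]
    print(f"lag {lag:+d} days: r = {r_lag:.2f}")
```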
inspired by the empirical results, we propose a network based nonlinear model to describe the interaction between epidemic spreading and information diffusion in this section. in this model, we assume there are two states for disease spreading: susceptible ( s ) and infected ( i ), and two states of information diffusion: aware (+) and unaware (-). as a consequence, each individual will be at one of the four states during the model evolution: • initially, one arbitrary individual is randomly picked from the given network as the spreading seed ( i + state). the rest individuals are set to be s − state. • at each time step, the infected individuals ( i + and i − states) will spread epidemics to their susceptible network neighbors ( s + and s − states) with given spreading probability. the infected individuals ( i + and i − states) could recover to the susceptible state with given recovery probability. • at each time step, individuals that are aware of the disease ( i + and s + states) will transmit the information to their unaware neighbors ( i − and s − states) with probability α. in addition, the informed individuals ( i + and s + ) could become unaware of the disease with the probabilities of λ and δλ, respectively. beyond the parameters given in table , we define σ as the probability of individuals taking protective measures. thus, σ s < is defined as that a susceptible aware individual ( s + ) will take protective measures to avoid becoming infected, and σ i < is defined as infected aware individuals ( i + state) will reduce contact with their susceptible neighbors or adopt medical treatments. in addition, we assume the infected probabilities for these two different populations are independent with each other, hence σ si = σ s σ i is defined as the probability of the i + state individuals infecting the s + state ones. when an i + individual is aware of the epidemic, s/he will take positive measures, leading to an increased recovery rate, which is represented by the factor ε > . furthermore, i + state individuals, which could be assumed to better understand the seriousness of epidemics, would be less likely to neglect relevant information, leading to δ < . in this work, since the spreading processes of information and disease are primarily determined by the corresponding transmission probabilities, we fix other parameters and mainly investigate the effects of α and β. in the following analysis, we set σ s = . , σ i = . , δ = . , ε = . , λ = . and γ = . . table . subsequently, the proposed model is performed on an er network with a total population n = , and average degree k = . to measure the spreading effects, we denote the infected level ( i ) as the fraction of infected individuals (both i + and i − ), and the informed level ( info ) as the fraction of individuals who are aware of the disease (both s + and i + ). fig. shows the simulation results by fixing the infection probability β = . . in this model, the parameter α can be considered as the information diffusion capability, hence larger α indicates that information diffuses much easier, resulting in a monotonically increase in the number of informed individuals (see the inset of fig. ). in fig. , it also shows that the increase in α will inversely hamper the speed of epidemic spreading, hence diminish the overall epidemic outbreak size. as a consequence, appropriate publicity might be an effective strategy to inhibit further spreading of epidemics, which is also consistent with the empirical analysis shown in fig. . in fig. 
, the model also indicates that there is mutual influence between information diffusion and epidemic spreading. a high prevalence of epidemic would lead to a small information fading probability δ, consequently resulting in a high infected level i . it in turn inhibits the epidemic spreading ( σ { i , s , si } < ). this coupling effect can be clearly described by the full set of differential equations (see appendix ). in addition, the equations are solved by mean-field and pairwise approaches, respectively. fig. shows the results of simulation, theoretical analysis of both mean-field and pairwise analysis. we find that the pairwise approach can better fit the model than the mean-field method. therefore, we use pairwise approach to perform further studies in the following analysis. in order to investigate the effect of the mutual interaction between α and β on the spreading process, we explore the phase diagram showing the fraction of infected individuals caused by combination of such coupling effects (see fig. ). the . that is to say, epidemic outbreak will occur if the parameter combination is larger than the critical value, otherwise the epidemic will die out. the results also clearly show that more individuals will be infected with large β and small α, suggesting that the information diffusion can impede the disease spreading. it is noted that the process degenerates to the standard sis model if α = , where there is no information diffusion in the system. thus, the epidemic outbreak threshold is β c = γ k = . [ ] , which is also consistent with the results of pairwise analysis and simulation shown in fig. . in addition, fig. (c) shows a detailed view of pairwise analysis for α, β ∈ [ , . ] in order to better observe the threshold changes. the threshold value of β is around . when α → , as the epidemic information cannot spread out in this case according to the inset of fig. . when α > , the epidemic threshold can be significantly increased because of the effect of information diffusion. on the contrary, fig. shows that the informed level only slightly ascends when α is large enough (e.g., α > . ), which leads to an obscure change in the epidemic threshold. this result additionally indicates that abundant information would not always work for obstructing epidemic spreading. for example, in the case that a disease with a strong infectiveness (corresponds to large β in fig. ), enhancing the public awareness alone is insufficient to control the large outbreak of epidemics. in order to obtain better understanding of dynamics of the critical phenomenon, we observe the evolution of infection density for various values of β in fig. . from the differential equation, di dt we can obtain i ∝ t − at the critical point, which shows a power-law decay. in addition, the inset of fig. also presents a power-law decay of the infection density when β ≈ . . by contrast, the infection turns to break out as an endemic, namely steady state, for β > . ( β = . in fig. ), otherwise the epidemic will be eliminated, so-called healthy state for β < . ( β = . in fig. ). therefore, it can be inferred that β c is approximately . in this case, which is consistent with the results in fig. , where β c is around . for α = . . interestingly, the empirical analysis also demonstrates that a multi-outbreak phenomenon emerges for both epidemic spreading [ , [ ] [ ] [ ] and information diffusion [ ] , in which there are several outbreaks during the dynamic process of epidemic spreading. 
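a minimal monte-carlo implementation of the four-state model described above (s−, s+, i−, i+ on an erdős-rényi network) is sketched below. it follows the verbal rules given in the text, but the parameter values, the synchronous update scheme and the forgetting-rate convention (s+ forgets at rate λ, i+ at δλ with δ < 1, i.e. infected individuals are less likely to neglect the information) are our assumptions, since the paper's exact settings are not legible here.

```python
import numpy as np
import networkx as nx

def coupled_run(G, beta, alpha, gamma=0.1, lam=0.15, sigma_s=0.5, sigma_i=0.5,
                delta=0.5, eps=1.5, steps=200, seed=0):
    """One Monte-Carlo run of the coupled infection/awareness process (sketch)."""
    rng = np.random.default_rng(seed)
    n = G.number_of_nodes()
    infected = np.zeros(n, dtype=bool)
    aware = np.zeros(n, dtype=bool)
    start = int(rng.integers(n))
    infected[start] = aware[start] = True                 # single I+ seed
    i_level, info_level = [], []
    for _ in range(steps):
        new_inf, new_aw = infected.copy(), aware.copy()
        for u in G.nodes():
            neigh = list(G.neighbors(u))
            if infected[u]:
                rec = gamma * (eps if aware[u] else 1.0)  # aware infected recover faster
                if rng.random() < rec:
                    new_inf[u] = False
            else:
                p_escape = 1.0
                for v in neigh:
                    if infected[v]:
                        p = beta * (sigma_i if aware[v] else 1.0)
                        if aware[u]:
                            p *= sigma_s                  # aware susceptibles protect themselves
                        p_escape *= 1.0 - p
                if rng.random() > p_escape:
                    new_inf[u] = True
            if aware[u]:
                forget = delta * lam if infected[u] else lam
                if rng.random() < forget:
                    new_aw[u] = False
            else:
                q_escape = 1.0
                for v in neigh:
                    if aware[v]:
                        q_escape *= 1.0 - alpha           # information passes along edges
                if rng.random() > q_escape:
                    new_aw[u] = True
        infected, aware = new_inf, new_aw
        i_level.append(infected.mean())
        info_level.append(aware.mean())
    return np.array(i_level), np.array(info_level)

G = nx.erdos_renyi_graph(500, 10 / 499, seed=1)           # ER network with <k> ~ 10
for a in (0.05, 0.4):
    i_lev, info_lev = coupled_run(G, beta=0.05, alpha=a)
    print(f"alpha={a}: late-time infected {i_lev[-50:].mean():.3f}, "
          f"informed {info_lev[-50:].mean():.3f}")
```

comparing the two runs illustrates the qualitative claim above: a larger information transmission probability α raises the informed level and lowers the late-time infected level.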
generally, there are many complicated factors that might contribute to this phenomenon, including seasonal influence, climate change, and incubation period, etc. in this model, the periodic outbreaks can be interpreted by the influence of information diffusion. as discussed above, there is a mutual interaction as the two dynamics are coupled with each other during the process. on one hand, a larger proportion of infected individuals should result in an increase in preventive behavioral responses [ ] due to the increased awareness of the disease, consequently leading to a steady decrease of further infected cases. on the other hand, when the spread of epidemic tends to be under control, people shall become less sensitive to discuss or share relevant messages, which leads to dissemination of information and simultaneously raises the possibility of a second outbreak. notably, there are also some cases where the size of the second outbreak is smaller than the first one. for example, the eight dengue outbreaks in thailand over years from to [ ] , and there are also some cases that the second outbreak is larger than the previous one, as in the case of sars in [ ] and dengue in taiwan in - [ ] . in order to better understand the underlying mechanism that drives the multi-outbreak phenomenon of the coupled dynamics, we set two thresholds, i high and i low , to represent different infected levels. that is to say, when the fraction of infected individuals is larger than i high , the information diffusion parameter α will be set as high as α = . so that the information will diffuse even more quickly. accordingly, when it is smaller than i low , the parameter will directly decay to α = . to represent the corresponding response to abatement effect of information. fig. shows the simulation results. it can be seen that the epidemic spreads very quickly at the beginning as there are very few people aware of it, and soon reaches the threshold i high and triggers the designed high information transmission probability α = . . as a consequence, as the information bursts out, the high informed level has a significant impact on inhibiting epidemic spreading (the decay period of the epidemic), which will be completely suppressed if the high informed level remains. however, when the epidemic spreading is notably controlled from the first outbreak (i.e. the infected density is smaller than i low ), people are less likely to consider the epidemic as a threat, hence ignore relevant information and no longer actively engage in taking protective measures, which will in turn lead to a subsequent epidemic outbreak in the future. two representative outbreak patterns are shown in fig. , where the first outbreak is smaller than the second one ( fig. (a) ) and vice versa ( fig. (b) ). moreover, fig. , where the size of the first epidemic outbreak is smaller than that of the second one, while the informed level shows to the contrary. it should be noted that, due to the difficulty in collecting data of patient-to-fans to precisely quantify the informed level in the empirical analysis, the number of messages that discuss the epidemic is alternatively used in fig. . different from the trend shown in fig. , a high informed level( info > . ) must be maintained during the period when the infected level decreases shown in fig. . 
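the threshold mechanism just described (switching the information transmission rate when the infected level crosses i_high or i_low) can be illustrated with a deliberately simplified two-variable caricature rather than the full network model. in the sketch below, awareness relaxes towards a high or low target depending on which threshold was crossed last, and awareness in turn lowers transmission and speeds recovery; all parameter values are illustrative and chosen only to make the hysteretic switching visible, not fitted to anything in the paper.

```python
import numpy as np

def switched_outbreaks(beta=0.06, gamma=0.1, k=10, i_high=0.2, i_low=0.02,
                       a_hi=0.9, a_lo=0.05, relax=0.3, protect=0.95,
                       eps_rec=0.8, dt=0.05, t_max=400.0):
    """Two-variable caricature of the threshold-switched coupling.

    i = infected fraction, a = informed fraction. The informed fraction
    relaxes towards a high or low target depending on which infection
    threshold was crossed last (hysteresis); awareness lowers transmission
    and speeds up recovery.
    """
    i, a, target = 1e-3, a_lo, a_lo
    times, i_traj = [], []
    for step in range(int(t_max / dt)):
        if i > i_high:        # outbreak is noticed: information spreads widely
            target = a_hi
        elif i < i_low:       # epidemic looks under control: interest fades
            target = a_lo
        a += dt * relax * (target - a)
        beta_eff = beta * (1.0 - protect * a)
        gamma_eff = gamma * (1.0 + eps_rec * a)
        i += dt * (beta_eff * k * i * (1.0 - i) - gamma_eff * i)
        i = min(max(i, 1e-6), 1.0)
        times.append(step * dt)
        i_traj.append(i)
    return np.array(times), np.array(i_traj)

t, i = switched_outbreaks()
# local maxima above 5% prevalence, taken as outbreak peaks
is_peak = (i[1:-1] > i[:-2]) & (i[1:-1] > i[2:]) & (i[1:-1] > 0.05)
print("outbreak peaks at t =", np.round(t[1:-1][is_peak], 1))
```

because the effective reproduction number in this toy system is above one at low awareness and below one at high awareness, the hysteretic switching alone is enough to produce a train of repeated outbreaks.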
based on the model analysis, it could be concluded that it is important to raise public awareness of epidemic occurrence, especially during when the epidemic seems to be under control, otherwise, there is a likelihood of subsequent outbreak in the foreseeable future. furthermore, we explore the evolution of the informed and infected density with different values of β in fig. . in fig. (a) , it shows that the infected density firstly achieves a small peak and then rapidly vanishes, resulting in a evolution pattern known as healthy , which means there is approximately no disease. in fig. (b) , an oscillatory pattern is revealed for . < beta ≤ . . similarly, for large β ∈ ( . , ], the infected density firstly achieves a large peak (almost close to one), then rapidly decrease to a low level (nearly zero) and gradually raised to a steady state, showing a unimodal pattern [ ] . in this paper, we have studied the coupling dynamics between epidemic spreading and relevant information diffusion. empirical analyses from representative diseases ( h n and dengue fever ) show that the two kinds of dynamics could significantly influence each other. in addition, we propose a nonlinear model to describe such coupling dynamics based on the sis (susceptible-infected-susceptible) process. both simulation results and theoretical analyses show the underlying coupling phenomenon. that is to say, a high prevalence of epidemic will lead to a slow information decay, consequently resulting in a high infected level, which shall in turn prevent the epidemic spreading. further theoretical analysis demonstrates that a multi-outbreak phenomenon emerges via the effect of coupling dynamics, which finds good agreement with empirical results. the findings of this work may have various applications of network dynamics. for example, as it has been proved that preventive behaviors introduced by disease information can significantly inhibit the epidemic spreading, and information diffusion can be utilized as a complementary measure to efficiently control epidemics. therefore, the government should make an effort to maintain the public awareness, especially during the harmonious periods when the epidemic seems to be under control. in addition, in this work, we only consider the general preventive behavioral response of crowd. however, the dynamics of an epidemic may be very different due to the behavioral responses of people, such as adaptive process [ ] , migration [ ] , vaccination [ ] , and immunity [ ] . this work just provides a starting point to understand the coupling effect between the two spreading processes, a more comprehensive and in-depth study of personalized preventive behavioral responses shall need further effort s to discover. mean-field analysis: according to fig. , we adopt mean-field analysis for the spread of epidemic and information in a homogeneous network as follows: where n is the number of individuals in the system, k is the average degree of the network and the other parameters are illustrated in table . pairwise analysis: pairwise models have recently been widely used to illustrate the dynamic process of epidemics on networks, as those models take into account of the edges of the networks [ ] [ ] [ ] . in this study, we consider a set of evolution equations which are comprised of four types of individuals and types of edges. 
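the mean-field equations referred to in the appendix above are not reproduced legibly here, so the sketch below is a plausible homogeneous-mixing reconstruction of the four-compartment system rather than the authors' exact equations (and it is distinct from the pairwise system that follows); the rate constants are the same illustrative values assumed earlier.

```python
import numpy as np
from scipy.integrate import odeint

def mean_field(y, t, beta, alpha, gamma=0.1, lam=0.15, sigma_s=0.5,
               sigma_i=0.5, delta=0.5, eps=1.5, k=10):
    """Plausible homogeneous mean-field version of the four-state model.

    y = (S-, S+, I-, I+) as population fractions; this is a reconstruction,
    not the authors' published system.
    """
    s_m, s_p, i_m, i_p = y
    f_inf = beta * k * (i_m + sigma_i * i_p)    # force of infection
    f_info = alpha * k * (s_p + i_p)            # force of information
    ds_m = -f_inf * s_m - f_info * s_m + gamma * i_m + lam * s_p
    ds_p = -sigma_s * f_inf * s_p + f_info * s_m + eps * gamma * i_p - lam * s_p
    di_m = f_inf * s_m - f_info * i_m - gamma * i_m + delta * lam * i_p
    di_p = sigma_s * f_inf * s_p + f_info * i_m - eps * gamma * i_p - delta * lam * i_p
    return [ds_m, ds_p, di_m, di_p]

t = np.linspace(0, 200, 2001)
y0 = [0.999, 0.0, 0.0, 0.001]                   # a small aware-infected seed
for alpha in (0.05, 0.4):
    sol = odeint(mean_field, y0, t, args=(0.05, alpha))
    infected = sol[:, 2] + sol[:, 3]
    informed = sol[:, 1] + sol[:, 3]
    print(f"alpha={alpha}: steady infected {infected[-1]:.3f}, informed {informed[-1]:.3f}")
```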
using the well-known closure, (assuming the neighbors of each individual obey poisson distribution) [ ] , we can get a set of differential equations as follows: ( ) epidemic processes in complex networks some features of the spread of epidemics and information on a random graph epidemic spreading in scale-free networks velocity and hierarchical spread of epidemic outbreaks in scale-free networks small world effect in an epidemiological model searching for the most cost-effective strategy for controlling epidemics spreading on regular and small-world networks epidemics on interconnected networks epidemics on interconnected lattices forecast and control of epidemics in a globalized world mitigation strategies for pandemic influenza in the united states spreading of sexually transmitted diseases in heterosexual populations influence of network dynamics on the spread of sexually transmitted diseases predicting the local dynamics of epizootic rabies among raccoons in the united states analysis of rabies in china: transmission dynamics and control modelling dynamical processes in complex socio-technical systems impact of non-poissonian activity patterns on spreading processes modeling human mobility responses to the large-scale spreading of infectious diseases modeling human dynamics of face-to-face interaction networks small but slow world: how network topology and burstiness slow down spreading temporal networks how human location-specific contact patterns impact spatial transmission between populations? dynamics of information diffusion and its applications on complex networks epidemic spreading with information-driven vaccination world health organization. consensus document on the epidemiology of severe acute respiratory syndrome (sars) the spread of awareness and its impact on epidemic outbreaks endemic disease, awareness, and local behavioural response modelling the influence of human behaviour on the spread of infectious diseases: a review interacting epidemics on overlay networks epidemic dynamics on information-driven adaptive networks on the existence of a threshold for preventive behavioral responses to suppress epidemic spreading the impact of awareness on epidemic spreading in networks imitation dynamics predict vaccinating behaviour social contact networks and disease eradicability under voluntary vaccination imitation dynamics of vaccination behaviour on social networks dynamical interplay between awareness and epidemic spreading in multiplex networks competing spreading processes on multiplex networks: awareness and epidemics asymmetrically interacting spreading dynamics on complex layered networks vaccination and the theory of games effects of behavioral response and vaccination policy on epidemic spreading-an approach based on evolutionary-game dynamics vaccination and epidemics in networked populationsan introduction information cascades in complex networks statistical physics of vaccination the impact of information transmission on epidemic outbreaks epidemic dynamics on an adaptive network adaptive human behavior in epidemiological models information spreading on dynamic social networks roles of edge weights on epidemic spreading dynamics issue competition and attention distraction: a zero-sum theory of agenda-setting moral threats and dangerous desires: aids in the news media diffusion of innovations h n is a virus worth worrying about determination of original infection source of h n avian influenza by dynamical model global concerns regarding novel influenza a (h n ) 
virus infections controlling dengue with vaccines in thailand impact of human mobility on the emergence of dengue epidemics in pakistan how events determine spreading patterns: information transmission via internal and external influences on social networks mapping spread and risk of avian influenza a (h n ) in china transmission characteristics of different students during a school outbreak of (h n ) pdm influenza in china the effect of antibody-dependent enhancement, cross immunity, and vector population on the dynamics of dengue fever spatial behavior of an epidemic model with migration epidemic enhancement in partially immune populations representing spatial interactions in simple ecological models the effects of local spatial structure on epidemiological invasions pair approximation of the stochastic susceptible-infected-recovered-susceptible epidemic model on the hypercubic lattice key: cord- - kxhav q authors: kearsley, r.; duffy, c. c. title: the covid‐ information pandemic: how have we managed the surge? date: - - journal: anaesthesia doi: . /anae. sha: doc_id: cord_uid: kxhav q the severe acute respiratory syndrome coronavirus- (sars-cov- ) or coronavirus (covid- ) pandemic has permanently impacted our everyday normality. since the outbreak of this pandemic, our e-mail inboxes, social media feeds and even general news outlets have become saturated with new guidelines, revisions of guidelines, new protocols and updated protocols, all subject to constant amendments. this constant stream of information has added uncertainty and cognitive fatigue to a workforce that is under pressure. while we adapt our practice and learn how to best manage our covid- patients, a second pandemic - information overload - has become our achilles' heel. anaesthetists, by the nature of our work, are exposed to covid- , and we have been at the helm of creating pathways and guidelines to support staff and ensure safety. as leaders in patient safety [ ] , we have learnt and adapted process and safety improvements from other industries, most notably aviation [ ] . simple, clear and structured guidelines such as the difficult airway society guidelines are important cognitive tools that help aid our decision-making processes especially in emergencies [ ] . it is recognised that presenting multiple differing techniques introduces cognitive overload, confusion and increases the chance of error [ ] . as we adapt our established clinical practices to deal with covid- , we must be cognisant to the fact that these changes potentially expose us to an increased risk of error. during this period, we do not have the luxury of time; to reflect on previous practice; to rely on large scale randomised controlled trials; or to review guidelines before publication. this is a pandemic in action, where well-intentioned guidelines, which present accurate and understood practices in one moment, are liable to frequent and drastic change. lessons learnt from wuhan, china and northern italy gave other global healthcare systems a vital time advantage. this allowed them to start creating guidelines for the impending surge with the important caveat that they would require near daily revision [ ] . we have seen an explosion of guidelines, released by multiple organisations, in good faith and often only differing in their visual presentation; as illustrated by infographics from hong kong and italy [ , ] . 
at times, guidelines from reputable organisations have also provided contrasting clinical opinions, such as on the use of high-flow nasal oxygen in patients with covid-19 [ ] . we are invariably playing 'spot the difference' between newly published guidelines, which is to be expected as we react in action rather than reflect on action. frequent revisions, though often necessary, have the potential to create confusion, miscommunication and fear. the sars outbreak demonstrated that strict hierarchical structures are required during a crisis [ ] , and the same concept should be applied to our search for guidance regarding covid-19. when organisations join forces (e.g. the association of anaesthetists, the royal college of anaesthetists, the faculty of intensive care medicine and the intensive care society) to produce a strong, united platform with one message, there is a greater sense of trust and security for their members. in times of a pandemic, clear, simple guidelines abate fear and anxiety [ ] . we have witnessed a race to publish articles on covid-19, with unedited proofs, pre-prints and rapid review articles appearing during this news drought. we saw the use of hydroxychloroquine heavily promoted in the media despite the absence of positive evidence for its use [ ] . there is an enormous amount of information in the ether, and unfortunately not all of it is reliable, as the growing number of retracted papers shows [ ] . with clinical information coming from multiple sources, it is important to ensure that the most important, accurate information filters through. information chaos leading to alert fatigue is well recognised in the healthcare environment [ ] . when an increased volume of communications is sent through an increasing number of platforms, alert fatigue may impair individuals' ability to recall specific messages, owing to the 'noise' created by the greater frequency. information delivered too frequently and/or repetitively through numerous communication channels may have a negative effect on the ability of healthcare providers to recall emergency information effectively [ ] . we live in a technological age in which we can be reached easily by e-mail, text messaging and social media alerts; the magnitude of the potential for alert fatigue should be acknowledged. keeping healthcare workers informed during a pandemic is critical, and the way in which we do so needs to be co-ordinated and measured to avoid the risk of alert fatigue and the potential for important information to be lost in the 'noise'. the covid-19 pandemic is demonstrating that we are utilising social media as one of our main channels for the dissemination of medical information [ ] . knowledge of, and debate surrounding, personal protective equipment (ppe) has been one of the most prominent covid-19 discussion points, owing to the high risk of contagion via droplet spread [ , ] , and has featured heavily on social media [ ] . as part of our response to managing stress and minimising burnout, it is important to appreciate the impact that information overload and cognitive load have had on us. modifying our social media use and consumption of general news is important to support our mental well-being. we have witnessed an increase in public interest, awareness and knowledge of the role of the anaesthetist in healthcare due to this pandemic. we know from previous research that the public's knowledge of the role of the anaesthetist can be limited [ ] . worldwide google trends data show a surge in searches for the word 'ventilator' and the term 'ppe' since the beginning of march 2020. 
for the first time ever, an anaesthetist featured on the front cover of time magazine [ ] . we find our specialty in the spotlight. although longterm effects of increased public knowledge about our healthcare role may prove positive, we must also recognise with added exposure comes added pressure. there has been much debate publicly surrounding the allocation of resources such as intensive care beds and ventilators and the limitations of treatment for some patients, which has served to highlight the difficult ethical decisions which we face on a daily basis. this increased focus within mainstream media makes it difficult to escape the day job. we need to utilise the well-being and psychological supports on offer to give ourselves some time away from intensity of the day job. this growing interest in who we are and what we do is another example of the surge in information associated with covid- . as we learn to live with this virus, it is important for us to be cognisant that we are all at risk of error; we need to work to reduce information overload and focus on unifying our approach to both information dissemination and presentation. we must go back to basics and apply the well- anaesthesiology as a model for patient safety in health care beyond the borders: lessons from various industries adopted in anesthesiology the effects of a displayed cognitive aid on non-technical skills in a simulated 'can't intubate, can't oxygenate' crisis a systems analysis approach to medical error hospital surge capacity in a tertiary emergency referral centre during the covid- outbreak in italy infographic for principles of airway management in covid- the use of high-flow nasal oxygen in covid- applying the lessons of sars to pandemic influenza: an evidence-based approach to mitigating the stress experienced by healthcare workers over three days this week, fox news promoted an antimalarial drug treatment for coronavirus over times retracted coronavirus (covid- ) papers information chaos in primary care: implications for physician performance and patient safety public health communications and alert fatigue social media and emergency preparedness in response to novel coronavirus social media and online communities of practice in anaesthesia education social media for rapid knowledge dissemination: early experience from the covid- pandemic. anaesthesia . epub selak t dissemination of medical publications on social media -is it the new standard? airborne transmission of severe acute respiratory syndrome coronavirus- to healthcare workers: a narrative review personal protective equipment during the coronavirus disease (covid) pandemic -a narrative review association of anaesthetists. fatigue and anaesthetists faculty of intensive care medicine. staff wellbeing resources mental health problems and social media exposure during covid- outbreak irish patients knowledge and perception of anaesthesia front line workers tell their own stories in the new issue of time no competing interests declared. key: cord- -xh z v m authors: khatiwada, asmita priyadarshini; shakya, sujyoti; shrestha, sunil title: paradigm shift of drug information centers during the covid- pandemic date: - - journal: drugs ther perspect doi: . /s - - - sha: doc_id: cord_uid: xh z v m the novel coronavirus disease (covid- ) pandemic is a major global threat affecting millions of lives throughout the world physically and psychologically. 
with the asymptomatic presentation of covid- in many patients and the similarity of its symptoms with the common cold and influenza, the need for accurate information on the disease is very important for its identification and proper management. accurate information on the disease, its prevention and treatment can be disseminated through drug information centers (dics). dics are usually staffed by pharmacists and/or clinical pharmacists/pharmacologists. dics are a reliable source of current and unbiased information on covid- and its associated complications, including management options for healthcare professionals and the public. in addition to health and drug information, pharmacists working in the dics can be involved in the management of the patients’ health by providing information on home care and safety, medication management of patients with chronic comorbid illnesses, and psychological advice. this article explores the possible additional roles dics can play, besides providing drug information within the hospital or in the community. with the initiation of the outbreak in wuhan, china in december , the pandemic of novel coronavirus disease (covid- ) has become a worldwide threat. psychologically and/or physically, covid- has deep effects on people throughout the globe. covid- is caused by severe acute respiratory syndrome coronavirus (sars-cov- ), and people may present as asymptomatic or with symptoms such as fever, cough, tiredness, headache, and difficulty breathing [ ] . as the clinical presentation of covid- symptoms is similar to that of the common cold and influenza, complete and reliable information on the disease, testing methods, and prevention is of utmost importance for the public and healthcare professionals/providers (hcps) [ ] . such information can be gathered, validated and provided to the public by drug information centers (dics). dics are usually found within hospitals or in the community setting, are staffed by pharmacists, and are accountable for the communication of recent, unbiased, critically reviewed information on medications [ ] . not all dics serve the public; however, during the current pandemic, dics can serve as information hubs for providing current and reliable information to hcps and patients/consumers regarding covid- and its related complications, symptom management, interactions with co-morbid conditions, and drug-drug interactions (ddis). however, providing drug information and other dic services to the public may not be easy in many lower-income countries (lics), as hcps are not evenly distributed across geographical regions [ ] , and resources and manpower are limited [ , ] . in lics, managing covid- seems difficult, as they lack resources to hire or train new hcps, leaving them with the alternative of maximizing the use of existing human resources, including pharmacists. existing pharmacist manpower could serve the public by providing information on covid- and its transmission, prevention, and management within hospitals and communities. with the global spread of covid- , providing accurate information about the disease is an issue for everyone, including hcps, patients, and the public. continuing medication education programs and training on the effective management of patients with covid- can be arranged for hcps. additionally, training focused on patients presenting with covid- symptoms along with various comorbid conditions may be highly effective. 
to prevent the complications arising from covid- and existing comorbidities, appropriate knowledge of management strategies is crucial for patient safety. as the infection transfers from human to human through respiratory droplets and contacts [ ] , the public needs to be aware of covid- , with current information on the spread and prevention of the disease. to deal with unforeseen situations such as covid- , in addition to government preparedness, health literacy among the general population and hcps regarding the disease is important [ ] . during outbreaks or pandemics, pharmacists are known to effectively work as frontline hcps [ ] [ ] [ ] [ ] . pharmacists played a crucial role in direct patient care, medication information, and proper drug distribution with proactive communication among themselves and with other hcps during the severe acute respiratory syndrome (sars) pandemic. pharmacists were among the major stakeholders responsible for developing guidelines for the administration of oral and intravenous antivirals, updating hospital formularies, providing information on investigational drugs, and providing information on dosage adjustments in hepatic and renal failure patients [ ] . during an outbreak of h n influenza, pharmacists were also involved in improving patient awareness. they provided information about the disease and preventive measures to the patients through community pharmacies, and acted as immunization providers in collaboration with public health associates [ , ] . the pharmacists' role during the ebola virus epidemic in africa was minimal, with no additional activities being undertaken. however, new roles including immunization, contraception, public awareness, identifying infected patients and redirecting them to hospitals and isolation centers, logistics, supplies and clinical management, and being an information hub for patients and hcps regarding the disease, transmission, preventive measures, management approaches, and investigational medications were identified [ , ] . even though the various services provided by the pharmacists during different pandemics were not directly through dics, pharmacists were actively involved in the dissemination of information on the disease and investigational world health organization (who)-approved/non-approved medications to the patients, hcps and the public. dics can serve as a focal point for the dissemination of unbiased information to the patients/public regarding covid- and its treatment and management [ ] . this will upgrade the role of dics in patient care by acting as a 'disease information center', in addition to providing drug information. detailed information on covid- and its effects in patients with various health conditions, such as cardiovascular disease (cvd), diabetes mellitus, neurological issues, and respiratory illnesses, can be disseminated to hcps and patients/ public via dics, thereby promoting the role of pharmacists in patient management. the following sections cover the broad role of dics as disease information centers. patients with a chronic illness, such as diabetes, cvd, and kidney disease, usually have many questions regarding medications and lifestyle modifications [ ] . such questions may be more frequent during the covid- outbreak, as patients may consider themselves to be at high risk for the infection. the management of chronic conditions is challenging because of their various complications, multiple and complicated drug regimens, and fluctuating disease states and patterns [ ] . 
polypharmacy is associated with comorbidities and negative clinical outcomes [ ] . epidemiological data show that older patients with several comorbidities are more susceptible to covid- -related health issues than those without [ ] . proper communication regarding the prevention and spread of infection in this vulnerable patient group can be done by dic pharmacists in collaboration with other public hcps, which may allow frontline hcps more time to care for their infected patients. with the help of their documentation, dics can contact and follow patients with chronic conditions to whom dic services had been provided earlier, which may improve their health outcomes. in many lics, there is a lack of e-prescribing software, making patient follow-up difficult. where available, however, pharmacy databases used mainly to record sales and purchases of medicines, are valuable in identifying patients with chronic illnesses, as their prescribed drugs suggest their comorbidities [ ] . this may represent an additional role for dics in hospitals and communities, but relies on dics having staff available to perform such duties, the preparation and utilization of standard operating procedures, and the ability to access patient information in both the hospital and community settings. dics can also greatly aid hcps by providing current, complete, and accurate information on drug therapy and management approaches for treating covid- patients with various comorbid conditions based on the scientific literature, and national and international guidelines. information regarding adverse drug reactions (adrs) and ddis is often sought by patients, especially those with diabetes, cvd, or respiratory diseases [ ] . although online information can be easy to access, the reliability and validity of such information is often questionable. patients with multiple comorbidities deal with psychological distress, and are often poorly compliant concerning medication administration and lifestyle modifications, leading to reduced quality of life [ ] . this seems to be more likely in the absence of appropriate knowledge on the disease conditions, medications, ddis, drug-food interactions, drug-disease interactions, and adrs [ ] [ ] [ ] . this highlights the important role that dics can play as a disease information center, particularly in this health crisis, and may contribute to better patient outcomes in collaboration with other hcps [ ] . if direct face-to-face communication of information is not possible, remote communication via teleconferencing can be used [ ] . monitoring of adrs may be difficult during covid- lockdowns. proactive communication is very important for proper assessment of adrs, with pharmacists able to be actively involved in assessing, monitoring, and detecting adrs. this may also be a role for dics and, to ensure smooth functioning of their diverse functions, requires dics to be adequately staffed and resourced. patients should be advised to be alert to the development of unwanted adrs, and to provide the dic with the details via telephone, email, or other means. according to the who, staying home and maintaining social distancing can help to prevent the spread of covid- , as no vaccine or specific treatment is currently available [ ] . dics can play an important role in providing information regarding home care and safety during the covid- pandemic. 
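the patient-identification idea described above - inferring chronic conditions from dispensing records so that dic pharmacists can prioritise proactive follow-up - can be sketched as a simple grouping-and-flagging step. the atc-prefix mapping, the record format and the contact details below are illustrative examples only, not a recommendation of any particular coding scheme or workflow.

```python
from collections import defaultdict

# Illustrative mapping from ATC code prefixes to chronic conditions.
CHRONIC_ATC_PREFIXES = {
    "A10": "diabetes mellitus",        # drugs used in diabetes
    "C09": "hypertension / CVD",       # agents acting on the renin-angiotensin system
    "R03": "asthma / COPD",            # drugs for obstructive airway diseases
}

# Hypothetical dispensing records as they might come out of a pharmacy database.
dispensing_records = [
    {"patient_id": "P001", "atc": "A10BA02", "phone": "555-0101"},  # metformin
    {"patient_id": "P001", "atc": "C09AA05", "phone": "555-0101"},  # ramipril
    {"patient_id": "P002", "atc": "N02BE01", "phone": "555-0102"},  # paracetamol
    {"patient_id": "P003", "atc": "R03AC02", "phone": "555-0103"},  # salbutamol
]

def flag_follow_up(records):
    """Group dispensing records by patient and flag those whose prescriptions
    suggest a chronic condition, so the DIC can prioritise proactive follow-up."""
    flagged = defaultdict(set)
    for rec in records:
        for prefix, condition in CHRONIC_ATC_PREFIXES.items():
            if rec["atc"].startswith(prefix):
                flagged[rec["patient_id"]].add(condition)
    return dict(flagged)

for patient, conditions in flag_follow_up(dispensing_records).items():
    print(f"{patient}: follow up regarding {', '.join(sorted(conditions))}")
```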
dics can encourage hcps to evaluate the basic recommendations given by reliable organizations, such as the who, mayo health centres, us centers for disease control and prevention, and others, for care at home during this pandemic. recommendations may include evaluating whether the patient is stable enough to receive care at home, and the availability of caregivers at home and the isolation area; and suggestions on maintaining required distances with infected individuals, sanitization, and cleaning. dics could also provide information on the accessibility of food and other necessities, including personal protective equipment (ppe; at the very least, gloves and facemasks) [ , ] . hcps should educate patients/public on measures such as self-isolation; not sharing personal items; avoiding touching the eyes, nose, and mouth; washing hands frequently for at least s or using sanitizers; cleaning and sterilizing surfaces; and ignoring unwanted visitors while providing home care for mildly symptomatic individuals [ ] . furthermore, dics should also be involved in the evaluation of the presence of any vulnerable individuals who are at increased risk of covid- (e.g. those aged > years, young children, pregnant women, immunocompromised individuals, and those with chronic heart, lung, or kidney conditions), and provide information on the measures to be taken if any unusual symptoms or symptoms of severe covid- develop [ , ] . the public often use social media when seeking information and solutions for health issues. social media can also be an effective aid in providing healthcare services to patients (e.g., online consultation with pharmacists regarding medication-related issues), as well as for sharing and acquiring information among hcps [ ] . dic pharmacists can use the official social media pages of dics for healthcare communication, sharing health and medicine information, locating medicines for patients, and providing medicines in case of shortages. this will limit consumer visits to hcps during the pandemic. for example, when a patient approaches the dic for a medication, they can contact the pharmacies in their network regarding drug availability, then provide the patient with the details. such communication can be done through social media sites such as whatsapp, viber, facebook, twitter, etc. information on the availability of medications could also be gathered and, with permission from the pharmacies, provided on the social media pages of the dic, which will be of great help for patients. this will help develop public trust regarding the other services provided by the dic and limit unnecessary travel of people outside the home, preventing them from being exposed to the virus. new and updated information on medications approved for covid- treatment or prevention can be posted on social media sites of dics. initially, hydroxychloroquine, a drug used to treat malaria and inflammation related to autoimmune disorders, was authorized for use in covid- by the us food and drug administration (fda) through an emergency use authorization (eua) [ , ] . however, cases of hospitalization and death due to self-medication of this drug indicate that dissemination of correct information regarding its use is necessary [ , ] . 
given data from ongoing trials and emerging studies suggesting that hydroxychloroquine and chloroquine do not help to decrease the mortality or speed the recovery of covid- patients, and the potential for life-threatening heart rhythm problems, the fda withdrew the eua for their use in covid- [ ] . nevertheless, trials exploring the use of hydroxychloroquine and chloroquine in the prevention of covid- are to be restarted, as there are issues regarding the results of the study depicting the harmful effect of these drugs, based on which the who had earlier suspended the use of these drugs in covid- [ ] . an eua was issued also for remdesivir, an investigational antiviral agent in the treatment of hospitalized covid- patients. however, it is not yet approved by the fda as the drug is still undergoing clinical trials [ , ] . recently, dexamethasone, a well-known corticosteroid, was shown to decrease covid- mortality in patients with serious respiratory problems, but do not have any beneficial effect in patients with mild symptoms, according to preliminary results of the recovery trial at oxford university [ , ] . there are many ongoing clinical trials in the quest for treatments for covid- . among the plethora of information and frequently changing notions regarding treatment, appropriate information must be delivered to patients and hcps. dics can play a significant role in providing information to hcps and the public regarding the use and effects of new medications being authorized and other medications being investigated for covid- . patients currently taking hydroxychloroquine/chloroquine to treat approved indications, such as malaria and autoimmune conditions, should continue to take the drug, and be advised not to stop taking their medicine without consulting a hcp [ ] . dics should inform new users not to consume these medicines without knowledge of their hcps, due to the risk of severe poisoning and death with their non-prescribed use [ ] . they should be counseled to seek immediate medical attention if they experience irregular heartbeat, dizziness, or fainting while receiving hydroxychloroquine/chloroquine, and to store the drugs beyond the reach of children. the same advice also pertains to dexamethasone in the context of regular use, new users and approaching hcps for help if adrs occur. dics should ensure hcps are aware of the current position of investigational drugs hydroxychloroquine/chloroquine, remdesivir, and dexamethasone (i.e., they are not yet approved to prevent or treat covid- , but their use should be continued in patients taking them for other conditions). as some individuals may still take hydroxychloroquine/chloroquine for covid- , hcps should be aware of the potential adrs associated with these drugs, such as qt prolongation (increased risk in patients with renal insufficiency or failure), increased insulin levels and insulin action (increasing the risk of severe hypoglycemia), hemolysis in patients with glucose- -phosphate dehydrogenase deficiency, and ddis with drugs that cause qt prolongation (e.g., azithromycin) [ ] . dics should alert hcps regarding the possible adrs of dexamethasone (e.g., hyperglycemia, secondary infections, hypertension, fluid retention, and exacerbation of systemic fungal infections) [ ] , as well as its appropriate dosage and patient profile. hcps should ensure that patients receiving the drug do not have any existing conditions that might be exaggerated by dexamethasone. 
based on the scientific evidence and expert opinion, dics can provide up-to-date information on the management of covid- . people should be informed that immediate hospitalization is not necessary in cases of mild clinical presentation (lack of viral pneumonia and hypoxia), and that the initial symptoms can be easily managed at home [ ] . dics should encourage patients to seek immediate medical attention if they experience breathing problems, persistent chest pain and chest tightness, dizziness or inability to wake up, or their lips or face turn blue [ ] . some patients with covid- need to be hospitalized for serious illness and associated comorbid conditions. inpatient management revolves around adjuvant management of common complications of severe covid- , such as pneumonia, hypoxemic respiratory failure/acute respiratory distress syndrome (ards), sepsis and septic shock, cardiomyopathy and arrhythmia, acute kidney injury, and chronic hospital complications, secondary bacteriostoma, thromboembolism, and serious disease polyneuropathy/ myopathy [ ] [ ] [ ] [ ] . patients with severe acute respiratory infections (sari) and respiratory problems, hypoxemia, or shock require immediate complementary oxygen therapy. patients presenting with severe covid- symptoms should be closely monitored for clinical decline (e.g., rapid progressive respiratory failure and sepsis) and immediate treatment with adjuvant interventions should be provided. empiric antimicrobial therapy should be given as soon as possible to treat sari. advanced oxygen/ventilator support should be considered for severe hypoxemic respiratory failure when standard oxygen therapy fails in patients with respiratory problems. corticosteroids are not routinely recommended for viral pneumonia or ards, and should be avoided unless indicated for another reason (e.g., chronic obstructive pulmonary disease exacerbations, refractory septic shock) [ ] . with the global chaos of the covid- pandemic, mental health is an issue of concern. pharmacists can play a significant role in managing the psychological trauma, insecurity, and depression in individuals during this threatening period. although a psychologist is a more suitable hcp, pharmacists can also provide psychological support by communicating with patients, giving information on mental health issues along with the contact details of trained counselors, if needed [ , ] . pharmacists can counsel patients in the one-to-one session via telephone or social media, without the need for a physical visit. there is a scope for dics to liaise with different stakeholders to develop online support groups especially focused on patients with chronic illnesses, as group discussions may help them manage their conditions and prevent any mental effects amid quarantine and lockdowns. • during the global threat of covid- , all hcps have significant and crucial roles to play. • dics, staffed with pharmacists, can be involved in achieving better health outcomes for patients/consumers by enhancing their role across the disease management process. • to enhance their role, dics can provide information on covid- , including management approaches, psychological advice, home care and safety, ddis and adrs, and medical management of chronic comorbid illnesses, in addition to their usual role in providing drug information. • further, the liaison of dics with the different national health authorities and organizations might ensure the better utilization of dic services by the people of the country. 
world health organization (who). coronavirus a comprehensive literature review on the clinical presentation, and management of the pandemic coronavirus disease (covid- ) overview, challenges and future prospects of drug information services in nepal: a reflective commentary workforce description the labour market for human resources for health in low-and middle-income countries. human resources for health observer -issue no workforce resources for health in developing countries modes of transmission of virus causing covid- : implications for ipc precaution recommendations covid- : health literacy is an underestimated problem role of hospital pharmacists during pandemic flu updated public health and pharmacy collaboration in an influenza pandemic: summary of findings from an exploratory interview project severe acute respiratory syndrome (sars): the pharmacist's role ebola virus disease defining the pharmacist role in the pandemic outbreak of novel h n influenza ebola virus disease: how can african pharmacists respond to future outbreaks? the role of pharmacists in the evd outbreak review on benefits of drug information center services: a new transpiring practice to health care professionals in hospitals the characteristics of drug information inquiries in an ethiopian university hospital: a two-year observational study good pharmacy practice in chronic disease management identifying patients with chronic conditions using pharmacy data in switzerland: an updated mapping approach to the classification of medications comorbidity and its impact on patients with covid- . sn compr clin med comorbidity is a better predictor of impaired immunity than chronological age in older adults how to meet patients' individual needs for drug information: a scoping review the correlations between the presence of comorbidities, psychological distress and health-related quality of life. handbook of disease burdens and quality of life measures relationship between patients' knowledge and medication adherence among patients with hypertension low medication knowledge and adherence to oral chronic medications among patients attending community pharmacies: a cross-sectional study in a low income country medication knowledge as a determinant of medication adherence in geriatric patients, serse elian city, menoufia governorate, egypt assessment of the use and status of new drug information centers in a developing country, ethiopia: the case of public university hospital drug information centers pharmacists at the frontline beating the covid- pandemic coronavirus disease (covid- ) advice for the public coronavirus disease : caring for patients at home american red cross. covid- : safety tips for you severe acute respiratory infections treatment centre: practical manual to set up and manage a sari treatment centre and sari screening facility in health care facilities. geneva: world health organization professional use of social media by pharmacists: a qualitative study mechanisms of action of hydroxychloroquine and chloroquine: implications for rheumatology hydroxychloroquine or chloroquine for covid- : drug safety communication-fda cautions against use outside of the hospital setting or a clinical trial due to risk of heart rhythm problems the cdc just changed key info about hydroxychloroquine on its coronavirus site president trump called hydroxychloroquine a 'game changer,' but experts warn against self-medicating with the drug. 
here's what you need to know memorandum explaining basis for revocation of emergency use authorization for emergency use of chloroquine phosphate and hydroxychloroquine sulfate coronavirus: hydroxychloroquine trial to restart letter of authorisation for emergency use of remdesivir for treatment of covid- covid- ) update: fda issues emergency use authorisation for potential covid- treatment dexamethasone proves first life-saving drug. bbc news online who welcomes preliminary results about dexamethasone use in treating critically ill covid- patients hydroxychloroquine or chloroquine for covid- : drug safety communication -fda cautions against use outside of the hospital setting or a clinical trial due to risk of heart rhythm problems medwatch: the fda safety information and adverse event reporting program silver spring: us food and drug administration clinical features of patients infected with novel coronavirus in wuhan clinical course and risk factors for mortality of adult inpatients with covid- in wuhan, china: a retrospective cohort study clinical course and outcomes of critically ill patients with sars-cov- pneumonia in wuhan, china: a single-centered, retrospective, observational study epidemiological and clinical characteristics of cases of novel coronavirus pneumonia in wuhan, china: a descriptive study clinical characteristics of hospitalized patients with novel coronavirus-infected pneumonia in wuhan, china recommendations and guidance for providing pharmaceutical care services during covid- pandemic: a china perspective beyond prescriptions: what patients need during the covid- pandemic. us pharm acknowledgements the authors would like to acknowledge dr. saval khanal, research fellow in health economics and behavioral sciences at warwick university, uk, for the suggestions and modifications during revision of the manuscript. author contributions ap khatiwada developed the study concept, conducted the literature review, and wrote the first draft of the manuscript. s shakya made substantial changes to the study concept, conducted a literature review, and provided support during the writing of the first draft. s shrestha provided feedback on the study concept, and critically revised and reviewed the manuscript. all authors approved the final version of the manuscript. funding this research did not receive any funding from any agency in the public, commercial, or not-for-profit sectors. the authors declare that they have no conflicts of interest. key: cord- -gvezk vp authors: ahonen, pasi; alahuhta, petteri; daskala, barbara; delaitre, sabine; hert, paul de; lindner, ralf; maghiros, ioannis; moscibroda, anna; schreurs, wim; verlinden, michiel title: safeguards date: journal: safeguards in a world of ambient intelligence doi: . / - - - - _ sha: doc_id: cord_uid: gvezk vp the multiplicity of threats and vulnerabilities associated with ami will require a multiplicity of safeguards to respond to the risks and problems posed by the emerging technological systems and their applications. in some instances, a single safeguard might be sufficient to address a specified threat or vulnerability. more typically, however, a combination of safeguards will be necessary to address each threat and vulnerability. in still other instances, one safeguard might apply to numerous treats and vulnerabilities. one could depict these combinations in a matrix or on a spreadsheet, but the spreadsheet would quickly become rather large and, perhaps, would be slightly misleading. 
just as the ami world will be dynamic, constantly changing, the applicability of safeguards should also be regarded as subject to a dynamic, i.e., different and new safeguards may need to be introduced in order to cope with changes in the threats and vulnerabilities. for the purpose of this chapter, we have grouped safeguards into three main categories: technological, socio-economic and legal safeguards. the main privacy-protecting principles in network applications are: • anonymity (which is the possibility to use a resource or service without disclosure of user identity) • pseudonymity (the possibility to use a resource or service without disclosure of user identity, but to be still accountable for that use) • unlinkability (the possibility to use multiple resources or services without others being able to discover that these resources are being used by the same user) the prime project, which studied the state of the art in privacy protection in network applications, pointed out many performance problems and security weaknesses. the challenges in privacy-enhancing technologies for networked applications include developing methods for users to express their wishes regarding the processing of their data in machine-readable form ("privacy policies") and developing methods to ensure that the data are indeed processed according to users' wishes and legal regulations ("licensing languages" and "privacy audits": the former check the correctness of data processing during processing, while the latter check afterwards and should allow checking even after the data are deleted). privacy protection research is still new, and research on privacy protection in such emerging domains as personal devices, smart environments and smart cars is especially still in its infancy. privacy protection for personal mobile devices is particularly challenging due to the devices' limited capabilities and battery life. for these domains, only generic guidelines have been developed (see lahlou et al. ). langheinrich et al. show how difficult it might be to apply fair information practices (as contained in current data protection laws) to ami applications. most of the research on privacy protection is concerned with dangers of information disclosure. other privacy aspects have not received much attention from researchers. for example, the privacy aspect known as "the right to be let alone" is rarely discussed by technology researchers, despite its importance. research is needed with regard to overcoming the digital divide in the context of ami.
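as a concrete illustration of the pseudonymity and unlinkability principles listed above, the following minimal sketch (in python, with hypothetical names) derives a stable but different pseudonym for each service from a master secret held only on the user's personal device, so that records kept by two services cannot be linked through the identifier alone. it illustrates the principle only; it is not the prime architecture.

```python
# Minimal sketch: per-service pseudonyms derived from a locally held master
# secret. Each service sees a stable but different identifier, so records held
# by two services cannot be linked through the identifier alone.
import hashlib
import hmac
import secrets

class PseudonymWallet:
    def __init__(self, master_secret=None):
        # The master secret never leaves the user's personal device.
        self.master_secret = master_secret or secrets.token_bytes(32)

    def pseudonym_for(self, service_id: str) -> str:
        # HMAC keyed with the master secret: deterministic per service, but
        # infeasible to invert or to correlate across services.
        digest = hmac.new(self.master_secret, service_id.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

wallet = PseudonymWallet()
print(wallet.pseudonym_for("health-monitor.example"))
print(wallet.pseudonym_for("smart-home.example"))   # different, unlinkable value
```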
the european commission has already been sponsoring some research projects which form a foundation for needed future initiatives. such research is indeed going on: an example is the ec-supported connect project, which aims to implement a privacy management platform within pervasive mobile services, coupling research on semantic technologies and intelligent agents with wireless communications (including umts, wifi and wimax) and context-sensitive paradigms and multimodal (voice/graphics) interfaces to provide a strong and secure framework to ensure that privacy is a feasible and desirable component of future ambient intelligence applications (the two-year project started in june ; http://cordis.europa.eu/search/index.cfm?fuseaction=proj.simpledocument&pj_rcn= ). projects dealing with accessibility for all and e-inclusion (such as cost : "accessibility for all to services and terminals for next generation mobile networks" and ask-it: "ambient intelligence system of agents for knowledge-based and integrated services for mobility impaired users") are concerned with standardisation, intuitive user interfaces, personalisation, interfaces to all everyday tools (e.g., domotics, home health care, computer accessibility for people with disabilities and elderly people), adaptation of contents to the channel capacity and the user terminal and so on. (references cited above: camenisch, j. (ed.), first annual research report, prime deliverable d . , http://www.prime-project.eu.org/public/prime_products/deliverables/rsch/pub_del_d . .a_ec_wp . _v _final.pdf; lahlou, s., and f. jegou, "european disappearing computer privacy design guidelines v ", ambient agora deliverable d . , electricité de france, clamart.) standardisation in the field of information technology (including, e.g., biometrics) is important in order to achieve interoperability between different products. however, interoperability even in fairly old technologies (such as fingerprint-based identification) has not yet been achieved. minimising personal data should be factored into all stages of collection, transmission and storage. the goal of the minimal data transmission principle is that data should reveal little about the user even in the event of successful eavesdropping and decryption of transmitted data. similarly, the principle of minimal data storage requires that thieves do not benefit from stolen databases and decryption of their data. implementation of anonymity, pseudonymity and unobservability methods helps to minimise system knowledge about users at the stages of data transmission and storage in remote databases, but not in cases involving data collection by and storage in personal devices (which collect and store mainly the device owner's data) or storage of videos. the main goals of privacy protection during data collection are, first, to prevent linkability between diverse types of data collected about the same user and, second, to prevent surveillance by means of spyware or plugging in additional pieces of hardware transmitting raw data (as occurs in wiretapping). these goals can be achieved by: • careful selection of hardware (so that data are collected and transmitted only in the minimally required quality and quantity to satisfy an application's goals, and there are no easy ways to spy on raw and processed data) • an increase of software capabilities and intelligence (so that data can be processed in real time) • deleting data as soon as the application allows.
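the last two of these measures can be illustrated with a minimal sketch: raw samples are reduced to a few pre-selected features in real time, time stamps are kept only relative to the session rather than absolute (a point taken up below), and entries expire after an application-dependent retention period. the field names, feature choices and retention period are illustrative assumptions, not values from any existing ami platform.

```python
# Minimal sketch of data minimisation: keep only coarse features, relative
# time stamps and a bounded retention period; raw samples are never stored.
import statistics
import time

RETENTION_SECONDS = 3600  # application-dependent retention period (assumed)

class MinimalLogger:
    def __init__(self):
        self.session_start = time.monotonic()
        self.entries = []

    def log(self, raw_samples):
        features = {
            "mean": statistics.fmean(raw_samples),
            "stdev": statistics.pstdev(raw_samples),
            # Relative, not absolute, time stamp.
            "t_rel": round(time.monotonic() - self.session_start, 1),
        }
        self.entries.append(features)

    def purge_expired(self):
        now_rel = time.monotonic() - self.session_start
        self.entries = [e for e in self.entries
                        if now_rel - e["t_rel"] < RETENTION_SECONDS]

logger = MinimalLogger()
logger.log([0.8, 1.1, 0.9, 1.0])   # raw samples are discarded after this call
logger.purge_expired()
```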
in practice, it is difficult to determine what "minimally needed application data" means. moreover, those data can be acquired by different means. thus, we suggest that data collection technologies less capable of violating personal privacy expectations be chosen over those more privacy-threatening technologies even if the accuracy of collected data decreases. software capabilities need to be maximised in order to minimise storage of raw data and avoid storage of data with absolute time and location stamps. we suggest this safeguard in order to prevent accidental logging of sensitive data, because correlation of different kinds of data by time stamps is fairly straightforward. these safeguards are presented below in more detail: • in our opinion, the most privacy-threatening technologies are physiological sensors and video cameras. physiological sensors are privacy-threatening because they reveal what's going on in the human body and, accordingly, reveal health data and even feelings. video cameras, especially those storing raw video data, are privacy-threatening because they violate people's expectations that "nobody can see me if i am hidden behind the wall" and because playback of video data can reveal more details than most people pay attention to in normal life. we suggest that usage of these two groups of devices should be restricted to safety applications until proper artificial intelligence safeguards (see below) are implemented. • instead of logging raw data, only data features (i.e., a limited set of pre-selected characteristics of data, e.g., frequency and amplitude of oscillations) should be logged. this can be achieved by using either hardware filters or real-time preprocessing of data or a combination of both. • time and location stamping of logged data should be limited by making it relative to other application-related information or by averaging and generalising the logged data. • data should be deleted after an application-dependent time, e.g., when a user buys clothes, all information about the textile, price, designer, etc., should be deleted from the clothes' rfid tag. for applications that require active rfid tags (such as for finding lost objects ), the rfid identifier tag should be changed, so that links between the shop database and the clothes are severed. • applications that do not require constant monitoring should switch off automatically after a certain period of user inactivity (e.g., video cameras should automatically switch off at the end of a game). • anonymous identities, partial identities and pseudonyms should be used wherever possible. using different identities with the absolute minimum of personal data for each application helps to prevent discovery of links between user identity and personal data and between different actions by the same user. orr, r.j., r. raymond, j. berman and f. seay, "a system for finding frequently lost objects in the home", technical report - , graphics, visualization, and usability center, georgia tech, . data and software protection from malicious actions should be implemented by intrusion prevention and by recovery from its consequences. intrusion prevention can be active (such as antivirus software, which removes viruses) or passive (such as encryption, which makes it more difficult to understand the contents of stolen data). 
at all stages of data collection, storage and transmission, malicious actions should be hindered by countermeasures such as the following: privacy protection in networking includes providing anonymity, pseudonymity and unobservability whenever possible. when data are transferred over long distances, anonymity, pseudonymity and unobservability can be provided by the following methods: first, methods to prove user authorisation locally and to transmit over the network only a confirmation of authorisation; second, methods of hiding relations between user identity and actions by, for example, distributing this knowledge over many network nodes. for providing anonymity, it is also necessary to use special communication protocols which do not use device ids or which hide them. it is also necessary to implement authorisation for accessing the device id: currently, most rfid tags and bluetooth devices provide their ids upon any request, no matter who actually asked for the id. another problem to solve is that devices can be distinguished by their analogue radio signals, and this can hinder achieving anonymity. additionally, by analysing radio signals and communication protocols of a personal object, one can estimate the capabilities of embedded hardware and guess whether this is a new and expensive thing or old and inexpensive, which is an undesirable feature. unobservability can be implemented to some extent in smart spaces and personal area networks (pans) by limiting the communication range so that signals do not penetrate the walls of a smart space or a car, unlike the current situation when two owners of bluetooth-enabled phones are aware of each other's presence in neighbouring apartments. methods of privacy protection in network applications (mainly long-distance applications) include the following: • anonymous credentials (methods to hide user identity while proving the user's authorisation). • a trusted third party: to preserve the relationships between the user's true identity and his or her pseudonym. • zero-knowledge techniques that allow one to prove the knowledge of something without actually providing the secret. • secret-sharing schemes: that allow any subset of participants to reconstruct the message provided that the subset size is larger than a predefined threshold. • special communication protocols and networks such as: -onion routing: messages are sent from one node to another so that each node removes one encryption layer, gets the address of the next node and sends the message there. the next node does the same, and so on until some node decrypts the real user address. -mix networks and crowds that hide the relationship between senders and receivers by having many intermediate nodes between them. • communication protocols that do not use permanent ids of a personal device or object; instead, ids are assigned only for the current communication session. communication protocols that provide anonymity at the network layer, as stated in the prime deliverable, are not suitable for large-scale applications: there is no evaluation on the desired security level, and performance is a hard problem. strong access control methods are needed in ami applications. physical access control is required in applications such as border control, airport check-ins and office access. reliable user authentication is required for logging on to computers and personal devices as well as network applications such as mobile commerce, mobile voting and so on. 
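before turning to authentication in more detail, the layered-encryption ("onion routing") technique listed above can be made concrete with a minimal sketch. for readability it uses symmetric fernet keys rather than the public-key cryptography and route-selection machinery of a real onion-routing network; node names are purely illustrative.

```python
# Minimal sketch of onion routing: each node peels exactly one encryption
# layer and learns only the next hop, never the full sender-receiver relation.
from cryptography.fernet import Fernet

nodes = {name: Fernet.generate_key() for name in ["node_a", "node_b", "node_c"]}
route = ["node_a", "node_b", "node_c"]

def wrap(message: bytes, destination: str) -> bytes:
    # Encrypt from the exit node inwards: the outermost layer belongs to the
    # first node on the route.
    payload = destination.encode() + b"|" + message
    for name in reversed(route):
        payload = Fernet(nodes[name]).encrypt(payload)
    return payload

def peel(name: str, payload: bytes) -> bytes:
    # Each node removes one encryption layer with its own key.
    return Fernet(nodes[name]).decrypt(payload)

onion = wrap(b"hello", "receiver.example")
for name in route:
    onion = peel(name, onion)
print(onion)  # b'receiver.example|hello' - only the exit node sees this
```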
reliable authentication should have low error rates and strong anti-spoofing protection. work on anti-spoofing protection of iris and fingerprint recognition is going on, but spoofing is still possible. we suggest that really reliable authentication should be unobtrusive, continuous (i.e., performed several times during an application-dependent time period) and multimodal. so far, there has been limited research on continuous multimodal access control systems. authentication methods include, among others, biometrics, tokens, implants and multimodal combinations of these (see camenisch, cited above). some experts don't believe that biometrics should be the focus of the security approach in an ami world, since the identification and authentication of individuals by biometrics will always be approximate, is akin to publishing one's passwords, can be spoofed and cannot be revoked after an incident. tokens are portable physical devices given to users who keep them in their possession. implants are small physical devices, embedded into a human body (nowadays they are inserted with a syringe under the skin). implants are used for identification by a unique id number, and some research aims to add a gps positioning module in order to detect the user's location at any time. with multimodal fusion, identification or authentication is performed using information from several sources, which usually helps to improve recognition rates and anti-spoofing capabilities. multimodal identification and/or authentication can also be performed by combining biometric and non-biometric data. methods for reliable, unobtrusive authentication (especially for privacy-safe, unobtrusive authentication) should be developed. unobtrusive authentication should enable greater security because it is more user-friendly. people are not willing to use explicit authentication frequently, which reduces the overall security level, while unobtrusive authentication can be used frequently. methods for context-dependent user authentication, which would allow user control over the strength and method of authentication, should be developed, in contrast to the current annoying situation in which users have to go through the same authentication procedure for viewing weather forecasts as for viewing personal calendar data. recently, the meaning of the term "access control" has broadened to include checking which software is accessing personal data and how the personal data are processed. access control to software (data processing methods) is needed for enforcing legal privacy requirements and personal privacy preferences. user-friendly interfaces are needed for providing awareness and configuring privacy policies. maintaining privacy is not the user's main focus, so privacy information should not be a burden for the user. however, the user should easily be able to know and configure the following important settings: • the purpose of the application (e.g., recording a meeting and storing the record for several years) • how much autonomy the application has • information flow from the user • information flow to the user (e.g., when and how the application initiates interactions with the user). additionally, user-friendly methods are needed for fast and easy control over the environment, which would allow a person (e.g., a home owner but not a thief) to override previous settings, and especially those settings learned by ami technologies.
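the combination of multimodal fusion and context-dependent authentication discussed above might look roughly like the following sketch; the modalities, weights and thresholds are illustrative assumptions rather than values from any deployed system.

```python
# Minimal sketch: fuse scores from several modalities and let the required
# confidence depend on the sensitivity of the requested action.
REQUIRED_CONFIDENCE = {
    "weather_forecast": 0.2,   # low-sensitivity action
    "personal_calendar": 0.6,
    "mobile_payment": 0.8,     # high-sensitivity action
}

MODALITY_WEIGHTS = {"gait": 0.2, "voice": 0.2, "fingerprint": 0.6}

def fused_confidence(scores: dict) -> float:
    # Weighted sum of per-modality match scores in [0, 1].
    return sum(MODALITY_WEIGHTS[m] * s for m, s in scores.items())

def authorise(action: str, scores: dict) -> bool:
    return fused_confidence(scores) >= REQUIRED_CONFIDENCE[action]

# True: unobtrusive modalities alone suffice for a low-risk action.
print(authorise("weather_forecast", {"gait": 0.7, "voice": 0.6, "fingerprint": 0.0}))
# True: but only because an explicit fingerprint match was added for a payment.
print(authorise("mobile_payment", {"gait": 0.7, "voice": 0.6, "fingerprint": 0.95}))
```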
standard concise methods of initial warnings should be used to indicate whether privacy-violating technologies (such as those that record video and audio data, log personal identity data and physiological and health data) are used by ambient applications. licensing languages or ways to express legal requirements and user-defined privacy policies should be attached to personal data for the lifetime of their transmission, storage and processing. these would describe what can be done with the data in different contexts (e.g., in cases involving the merging of databases), and ensure that the data are really treated according to the attached licence. these methods should also facilitate privacy audits (checking that data processing has been carried out correctly and according to prescribed policies), including instances when the data are already deleted. these methods should be tamper-resistant, similar to watermarking. high-level application design to provide an appropriate level of safeguards for the estimated level of threats can be achieved by data protection methods such as encryption and by avoiding usage of inexpensive rfid tags that do not have access control to their id and by minimising the need for active data protection on the part of the user. high-level application design should also consider what level of technology control is acceptable and should provide easy ways to override automatic actions. when communication capabilities move closer to the human body (e.g., embedded in clothes, jewellery or watches), and battery life is longer, it will be much more difficult to avoid being captured by ubiquitous sensors. it is an open question how society will adapt to such increasing transparency, but it would be beneficial if the individual were able to make a graceful exit from ami technologies at his or her discretion. to summarise, the main points to consider in system design are: • data filtering on personal devices is preferred to data filtering in an untrustworthy environment. services (e.g., location-based services) should be designed so that personal devices do not have to send queries; instead, services could simply broadcast all available information to devices within a certain range. such an implementation can require more bandwidth and computing resources, but is safer because it is unknown how many devices are present in a given location. thus, it is more difficult for terrorists to plan an attack in a location where people have gathered. • authorisation should be required for accessing not only personal data stored in the device, but also for accessing device id and other characteristics. • good design should enable detection of problems with hardware (e.g., checking whether the replacement of certain components was made by an authorised person or not). currently, mobile devices and smart dust nodes do not check anything if the battery is removed, and do not check whether hardware changes were made by an authorised person, which makes copying data from external memory and replacement of external memory or sensors relatively easy, which is certainly inappropriate in some applications, such as those involved in health monitoring. • personal data should be stored not only encrypted, but also split according to application requirements in such a way that different data parts are not accessible at the same time. 
• an increase in the capabilities of personal devices is needed to allow some redundancy (and, consequently, higher reliability) in implementation and to allow powerful multitasking: simultaneous encryption of new data and detection of unusual patterns of device behaviour (e.g., delays due to virus activity). an increase in processing power should also allow more real-time processing of data and reduce the need to store data in raw form. • software should be tested by trusted third parties. currently, there are many kinds of platforms for mobile devices, and business requires rapid software development, which inhibits thorough testing of the security and privacy-protecting capabilities of personal devices. moreover, privacy protection requires extra resources and costs. • good design should provide the user with easy ways to override any automatic action, and to return to a stable initial state. for example, if a personalisation application has learned (by coincidence) that the user buys beer every week, and includes beer on every shopping list, it should be easy to return to a previous state in which the system did not know that the user likes beer. another way to solve this problem might be to wait until the system learns that the user does not like beer. however, this would take longer and be more annoying. • good design should avoid implementations with high control levels in applications such as recording audio and images as well as physiological data unless it is strictly necessary for security reasons. • means of disconnecting should be provided in such a way that it is not taken as a desire by the user to hide. to some extent, all software algorithms are examples of artificial intelligence (ai) methods. machine-learning and data-mining are traditionally considered to belong to this field. however, safeguarding against ami threats requires ai methods with very advanced reasoning capabilities. currently, ai safeguards are not mature, but the results of current research may change that assessment. many privacy threats arise because the reasoning capabilities and intelligence of software have not been growing as fast as hardware capabilities (storage and transmission capabilities). consequently, the development of ai safeguards should be supported as much as possible, especially because they are expected to help protect people from accidental, unintentional privacy violation, such as disturbing a person when he does not want to be disturbed, or from recording some private activity. for example, a memory aid application could automatically record some background scene revealing personal secrets, or a health monitor could accidentally send data to "data hunters" if there are no advanced reasoning and anti-spyware algorithms running on the user's device. advanced ai safeguards could also serve as access control and antivirus protection by catching unusual patterns of data copying or delays in program execution. we recommend that ami applications, especially if they have a high control level, should be intelligent enough to: • detect sensitive data in order to avoid recording or publishing such data • provide an automatic privacy audit, checking traces of data processing, data- or code-altering, etc. these requirements are not easy to fulfil at full scale in the near future; however, we suggest that it is important to fulfil them as far as possible and as soon as possible. data losses and identity theft will continue into the future.
however, losses of personal data will be more noticeable in the future because of the growing dependence on ami applications. thus, methods must be developed to inform all concerned people and organisations about data losses and to advise and/or help them to replace compromised data quickly (e.g., if somebody's fingerprint data are compromised, a switch should be made to another authentication method in all places where the compromised fingerprint was used). another problem, which should be solved by technology means, is recovery from loss of or damage to a personal device. if a device is lost, personal data contained in it can be protected from strangers by diverse security measures, such as data encryption and strict access control. however, it is important that the user does not need to spend time customising and training a new device (so that denial of service does not happen). instead, the new device should itself load user preferences, contacts, favourite music, etc, from some back-up service, like a home server. we suggest that ways be developed to synchronise data in personal devices with a back-up server in a way that is secure and requires minimal effort by the user. we suggest that the most important, but not yet mature technological safeguards are the following: • communication protocols that either do not require a unique device identifier at all or that require authorisation for accessing the device identifier • network configurations that can hide the links between senders and receivers of data • improving access control methods by multimodal fusion, context-aware authentication and unobtrusive biometric modalities (especially behavioural biometrics, because they pose a smaller risk of identity theft) and by aliveness detection in biometric sensors • enforcing legal requirements and personal privacy policies by representing them in machine-readable form and attaching these special expressions to personal data, so that they specify how data processing should be performed, allow a privacy audit and prevent any other way of processing • developing fast and intuitive means of detecting privacy threats, informing the user and configuring privacy policies • increasing hardware and software capabilities for real-time data processing in order to minimise the lifetime and amount of raw data in a system • developing user-friendly means to override any automatic settings in a fast and intuitive way • providing ways of disconnecting in such a way that nobody can be sure why a user is not connected • increasing security by making software updates easier (automatically or semiautomatically, and at a convenient time for the user), detection of unusual patterns, improved encryption • increasing software intelligence by developing methods to detect and to hide sensitive data; to understand the ethics and etiquette of different cultures; to speak different languages and to understand and translate human speech in many languages, including a capability to communicate with the blind and deaf • developing user-friendly means for recovery when security or privacy has been compromised. the technological safeguards require actions by industry. we recommend that industry undertake such technological safeguards. industry may resist doing so because it will increase development costs, but safer, more secure technology should be seen as a good investment in future market growth and protection against possible liabilities. 
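returning to the recovery mechanism described earlier in this section, synchronising a personal device's profile with a back-up server without exposing the profile to that server might look roughly like the following sketch; the profile fields and key-derivation parameters are assumptions for illustration. the server only ever stores ciphertext, so a stolen back-up does not benefit a thief.

```python
# Minimal sketch: encrypted back-up and restore of a personal device profile,
# keyed by a passphrase the user (not the server) knows.
import base64, json, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

def backup(profile: dict, passphrase: str) -> dict:
    salt = os.urandom(16)
    token = Fernet(key_from_passphrase(passphrase, salt)).encrypt(
        json.dumps(profile).encode())
    return {"salt": salt, "ciphertext": token}   # what the back-up server stores

def restore(stored: dict, passphrase: str) -> dict:
    key = key_from_passphrase(passphrase, stored["salt"])
    return json.loads(Fernet(key).decrypt(stored["ciphertext"]))

stored = backup({"contacts": ["alice"], "favourite_music": ["jazz"]}, "correct horse")
print(restore(stored, "correct horse"))   # a replacement device recovers the profile
```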
it is obvious that consumers will be more inclined to use technology if they believe it is secure and will shield, not erode their privacy. we recommend that industry undertake such safeguards voluntarily. it is better to do so than to be forced by bad publicity that might arise in the media or from action by policy-makers and regulators. security guru bruce schneier got it right when he said that "the problem is … bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. … the only way to fix this problem is for vendors to fix their software, and they won't do it until it's in their financial best interests to do so. … liability law is a way to make it in those organizations' best interests." if development costs go up, industry will, of course, pass on those costs to consumers, but since consumers already pay, in one way or another, the only difference is who they pay. admittedly, this is not a simple problem because hardware manufacturers, software vendors and network operators all face competition and raising the cost of development and lengthening the duration of the design phase could have competitive implications, but if all industry players face the same exacting liability standards, then the competitive implications may not be so severe as some might fear. co-operation between producers and users of ami technology in all phases from r&d to deployment is essential to address some of the threats and vulnerabilities posed by ami. the integration of or at least striking a fair balance between the interests of the public and private sectors will ensure more equity, interoperability and efficiency. governments, industry associations, civil rights groups and other civil society organisations can play an important role in balancing these interests for the benefit of all affected groups. standards form an important safeguard in many domains, not least of which are those relating to privacy and information security. organisations should be expected to comply with standards, and standards-setting initiatives are generally worthy of support. while there have been many definitions and analyses of the dimensions of privacy, few of them have become officially accepted at the international level, especially by the international organization for standardization. the iso has at least achieved consensus on four components of privacy, i.e., anonymity, pseudonymity, unlinkability and unobservability. (see section . , p. , above for the definitions.) among the iso standards relevant to privacy and, in particular, information security are iso/iec on evaluation criteria for it security and iso , the code of practice for information security management. the iso has also established a privacy technology study group (ptsg) under joint technical committee (jtc ) to examine the need for developing a privacy technology standard. this is an important initiative and merits support. its work and progress should be tracked closely by the ec, member states, industry and so on. the iso published its standard iso in , which was updated in july . since then, an increasing number of organisations worldwide formulate their security management systems according to this standard. it provides a set of recommendations for information security management, focusing on the protection of information as an asset. it adopts a broad perspective that covers most aspects of information systems security. 
among its recommendations for organisational security, iso states that "the use of personal or privately owned information processing facilities … for processing business information, may introduce new vulnerabilities and necessary controls should be identified and implemented." by implementing such controls, organisations can, at the same time, achieve a measure of both organisational security and personal data protection. (the evaluation criteria referred to above are set out in iso/iec , information technology - security techniques - evaluation criteria for it security, first edition, international organization for standardization, geneva; the standard is also known as the common criteria. similar standards and guidelines have also been published by other eu member states: the british standard bs was the basis for the iso standard, and another prominent example is the german it security handbook published by the bsi.) iso acknowledges the importance of legislative requirements, such as legislation on data protection and privacy of personal information and on intellectual property rights, for providing a "good starting point for implementing information security". iso is an important standard, but it could be described more as a framework than as a standard addressing the specificities of appropriate technologies or how those technologies should function or be used. also, iso was constructed against the backdrop of today's technologies, rather than with ami in mind. hence, the adequacy of this standard in an ami world needs to be considered. nevertheless, organisations should state to what extent they are compliant with iso and/or how they have implemented the standard. audit logs may not protect privacy since they are aimed at determining whether a security breach has occurred and, if so, who might have been responsible or, at least, what went wrong. audit logs could have a deterrent value in protecting privacy and certainly they could be useful in prosecuting those who break into systems without authorisation. in the highly networked environment of our ami future, maintaining audit logs will be a much bigger task than it is now, when discrete systems can be audited. nevertheless, those designing ami networks should ensure that the networks have features that enable effective audits. the oecd has been working on privacy and security issues for many years. it produced its first guidelines more than years ago. its guidelines on the protection of privacy and transborder flows of personal data were (are) intended to harmonise national privacy legislation. the guidelines were produced in the form of a recommendation by the council of the oecd and became applicable in september . the guidelines are still relevant today and may be relevant in an ami world too, although it has been argued that they may no longer be feasible in an ami world. the oecd's more recent guidelines for the security of information systems and networks are also an important reference in the context of developing privacy and security safeguards. these guidelines were adopted as a recommendation of the oecd council (in july ). in december , the oecd published a report on "the promotion of a culture of security for information systems and networks", which it describes as a major information resource on governments' effective efforts to date to foster a shift in culture as called for in the aforementioned guidelines for the security of information systems and networks.
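the audit-enabling network features called for above could, for instance, build on a hash-chained log in which each entry commits to its predecessor, so that later alteration or deletion of records is detectable when the log is audited. the sketch below illustrates tamper evidence only; it is not a substitute for access control on the log itself, and the entry fields are illustrative assumptions.

```python
# Minimal sketch of a tamper-evident (hash-chained) audit log.
import hashlib, json, time

class ChainedAuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, actor: str, action: str):
        entry = {"actor": actor, "action": action,
                 "ts": int(time.time()), "prev": self._last_hash}
        entry_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((entry, entry_hash))
        self._last_hash = entry_hash

    def verify(self) -> bool:
        prev = "0" * 64
        for entry, stored_hash in self.entries:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if recomputed != stored_hash:
                return False
            prev = stored_hash
        return True

log = ChainedAuditLog()
log.append("service_x", "read location history")
log.append("service_y", "merged profile databases")
assert log.verify()
log.entries[0][0]["action"] = "nothing to see here"   # retroactive edit
assert not log.verify()                                # detected on audit
```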
in november , the oecd published a -page volume entitled privacy online: oecd guidance on policy and practice, which contains specific policy and practical guidance to assist governments, businesses and individuals in promoting privacy protection online at national and international levels. in addition to these, the oecd has produced reports on other privacy-related issues including rfids, biometrics, spam and authentication. sensible advice can also be found in a report published by the us national academies press in , which said that to best protect privacy, identifiable information should be collected only when critical to the relationship or transaction that is being authenticated. the individual should consent to the collection, and the minimum amount of identifiable information should be collected and retained. the relevance, accuracy and timeliness of the identifier should be maintained and, when necessary, updated. restrictions on secondary uses of the identifier are important in order to safeguard the privacy of the individual and to preserve the security of the authentication system. the individual should have clear rights to access information about how data are protected and used by the authentication system and the individual should have the right to challenge, correct and amend any information related to the identifier or its uses. among privacy projects, prime has identified a set of privacy principles in the design of identity management architecture: principle : design must start from maximum privacy. principle : explicit privacy rules govern system usage. principle : privacy rules must be enforced, not just stated. principle : privacy enforcement must be trustworthy. principle : users need easy and intuitive abstractions of privacy. principle : privacy needs an integrated approach. principle : privacy must be integrated with applications. trust marks and trust seals can also be useful safeguards because the creation of public credibility is a good way for organisations to alert consumers and other individuals to an organisation's practices and procedures through participation in a programme that has an easy-to-recognise symbol or seal. trust marks and seals are a form of guarantee provided by an independent organisation that maintains a list of trustworthy companies that have been audited and certified for compliance with some industry-wide accepted or standardised best practice in collecting personal or sensitive data. once these conditions are met, they are allowed to display a trust seal logo or label that customers can easily recognise. a trust mark must be supported by mechanisms necessary to maintain objectivity and build legitimacy with consumers. trust seals and trust marks are, however, voluntary efforts that are not legally binding and an effective enforcement needs carefully designed procedures and the backing of an independent and powerful organisation that has the confidence of all affected parties. trust seals and trust marks are often promoted by industry, as opposed to consumer-interest groups. as a result, concerns exist that consumers' desires for stringent privacy protections may be compromised in the interest of industry's desire for the new currency of information. moreover, empirical evidence indicates that even some eight years after the introduction of the first trust marks and trust seals in internet commerce, citizens know little about them and none of the existing seals has reached a high degree of familiarity among customers. 
this does not necessarily mean that trust marks are not an adequate safeguard for improving security and privacy in the ambient intelligence world, it suggests that voluntary activities like self-regulation have -apart from being well designed -to be complemented by other legally enforceable measures. in addition to the general influence of cultural factors and socialisation, trust results from context-specific interaction experiences. as is well documented, computermediated interactions are different from conventional face-to-face exchanges due to anonymity, lack of social and cultural clues, "thin" information, and the uncertainty about the credibility and reliability of the provided information that commonly characterise mediated relationships. in an attempt to reduce some of the uncertainties associated with online commerce, many websites acting as intermediaries between transaction partners are operating so-called reputation systems. these institutionalised feedback mechanisms are usually based on the disclosure of past transactions rated by the respective partners involved. giving participants the opportunity to rank their counterparts creates an incentive for rule-abiding behaviour. thus, reputation systems seek to imitate some of the real-life trust-building and social constraint mechanisms in the context of mediated interactions. so far, reputation systems have not been developed for ami services. and it seems clear that institutionalised feedback mechanisms will only be applicable to a subset of future ami services and systems. implementing reputation systems only makes sense in those cases in which users have real choices between different suppliers (for instance, with regard to ami-assisted commercial transactions or information brokers). ami infrastructures that normally cannot be avoided if one wants to take advantage of a service need to be safeguarded by other means, such as trust seals, iso guidelines and regulatory action. despite quite encouraging experiences in numerous online arenas, reputation systems are far from perfect. many reputation systems tend to shift the burden of quality control and assessment from professionals to the -not necessarily entirely informed -individual user. in consequence, particularly sensitive services should not exclusively be controlled by voluntary and market-style feedbacks from customers. furthermore, reputation systems are vulnerable to manipulation. pseudonyms can be changed, effectively erasing previous feedback. and the feedback itself need not necessarily be sincere, either due to co-ordinated accumulation of positive feedback, due to negotiations between parties prior to the actual feedback process, because of blackmailing or the fear of retaliation. last but not least, reputation systems can become the target of malicious attacks, just like any netbased system. an alternative to peer-rating systems are credibility-rating systems based on the assessment of trusted and independent institutions, such as library associations, consumer groups or other professional associations with widely acknowledged expertise within their respective domains. ratings would be based on systematic assessments along clearly defined quality standards. in effect, these variants of reputation-and credibility-enhancing systems are quite similar to trust marks and trust seals. the main difference is that professional rating systems enjoy a greater degree of independence from vested interests. 
and, other than in the case of peer-rating systems, which operate literally for free, the independent professional organisations need to be equipped with adequate resources. on balance, reputation systems can contribute to trust-building between strangers in mediated short-term relations or between users and suppliers, but they should not be viewed as a universal remedy for the ubiquitous problem of uncertainty and the lack of trust. a possible safeguard is a contract between the service provider and the user that has provisions about privacy rights and the protection of personal data and notification of the user of any processing or transfer of such data to third parties. while this is a possible safeguard, there must be some serious doubt about the negotiating position of the user. it's quite possible the service provider would simply say: "here are the terms under which i'm willing to provide the service, take it or leave it." also, from the service provider's point of view, it is unlikely that he would want to conclude separate contracts with every single user. in a world of ambient intelligence, such a prospect becomes even more unlikely in view of the fact that the "user", the consumer-citizen, will be moving through different spaces where there is likely to be a multiplicity of different service providers. it may be that the consumer-citizen would have a digital assistant that would inform him of the terms, including the privacy implications, of using a particular service in a particular environment. if the consumer-citizen did not like the terms, he would not have to use the service. consumer associations and other civil society organisations (csos) could, however, play a useful role as a mediator between service providers and individual consumers and, more particularly, in forcing the development of service contracts (whether real or implicit) between the service provider and the individual consumer. consumer organisations could leverage their negotiating position through the use of the media or other means of communication with their members. csos could position themselves closer to the industry vanguard represented in platforms such as artemis by becoming members of such platforms themselves. within these platforms, csos could encourage industry to develop "best practices" in terms of provision of services to consumers. government support for new technologies should be linked more closely to an assessment of technological consequences. on the basis of the far-reaching social effects that ambient intelligence is supposed to have and the high dynamics of the development, there is a clear deficit in this area. research and development (at least publicly supported r&d) must highlight future opportunities and possible risks to society and introduce them into public discourse. every research project should commit itself to exploring possible risks in terms of privacy, security and trust, to developing a strategy to cover problematic issues and to involving users in this process as early as possible.
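the "digital assistant" idea mentioned above, software that reads a service's terms and compares them with the user's privacy preferences before the service is used, might be sketched as follows; the policy fields and preference values are illustrative assumptions, not an existing negotiation standard.

```python
# Minimal sketch: a digital assistant comparing a service's machine-readable
# terms against the user's privacy preferences and advising whether to use it.
USER_PREFERENCES = {
    "max_retention_days": 90,
    "allow_third_party_sharing": False,
    "allow_location_tracking": True,
}

def acceptable(service_terms: dict) -> tuple:
    objections = []
    if service_terms.get("retention_days", 0) > USER_PREFERENCES["max_retention_days"]:
        objections.append("retains data longer than preferred")
    if service_terms.get("third_party_sharing") and not USER_PREFERENCES["allow_third_party_sharing"]:
        objections.append("shares data with third parties")
    if service_terms.get("location_tracking") and not USER_PREFERENCES["allow_location_tracking"]:
        objections.append("tracks location")
    return (not objections, objections)

terms = {"retention_days": 365, "third_party_sharing": True, "location_tracking": True}
ok, reasons = acceptable(terms)
print(ok)        # False - the assistant advises against using this service
print(reasons)   # ['retains data longer than preferred', 'shares data with third parties']
```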
a template for "design guidelines" that are specifically addressing issues of privacy has been developed by the "ambient agora" project which has taken into account the fundamental rules by the oecd, notably its guidelines on the protection of privacy and transborder flows of personal data, adopted on september , and the more recent guidelines for the security of information systems and networks. if the state acts as a buyer of strategically important innovative products and services, it contributes to the creation of the critical demand that enables suppliers to reduce their business risk and realise spillover effects. thus, public procurement programmes can be used to support the demand for and use of improved products and services in terms of security and privacy or identity protection. in the procurement of ict products, emphasis should therefore be given to critical issues such as security and trustworthiness. as in other advanced fields, it will be a major challenge to develop a sustainable procurement policy that can cope with ever-decreasing innovation cycles. the focus should not be on the characteristics of an individual product or component, but on the systems into which components are integrated. moreover, it is important to pay attention to the secondary and tertiary impacts resulting from deployment of large technical systems such as ambient intelligence. an evaluation of the indirect impacts is especially recommended for larger (infrastructure) investments and public services. while public procurement of products and services that are compliant with the eu legal framework and other important guidelines for security, privacy and identity protection is no safeguard on its own, it can be an effective means for the establishment and deployment of standards and improved technological solutions. accessibility is a key concept in helping to promote the social inclusion of all citizens in the information society embedded with ami technologies. accessibility is needed to ensure user control, acceptance, enforceability of policy in a user-friendly manner and the provision of citizens with equal rights and opportunities in a world of ambient intelligence. all citizens should have equal rights to benefit from the new opportunities that ami technologies will offer. this principle promotes the removal of direct and indirect discrimination, fosters access to services and encourages targeted actions in favour of under-represented groups. this principle promotes system design according to a user-centric approach (i.e., the concept of "design for all"). the design-for-all concept enables all to use applications (speech technology for the blind, pictures for the deaf). it means designing in a way to make sure applications are user-friendly and can be used intuitively. in short, industry has to make an effort to simplify the usage of icts, rather than forcing prospective users to learn how to use otherwise complex icts. better usability will then support easy learning (i.e., learning by observation), user control and efficiency, thus increasing satisfaction and, consequently, user acceptance. this principle aims to overcome user dependency and more particularly user isolation and stress due to the complexity of new technology, which leads to loss of control. education programmes on how to use new technologies will increase user awareness about the different possibilities and choices offered by ami technologies and devices. 
training and education help to overcome user dependency and social disruptions. user awareness is important to reduce the voluntary exclusion caused by a misunderstanding on how the technology works. this safeguard is essential in order to prevent almost all facets of dependency, system dependency as well as user dependency. consumers need to be educated about the privacy ramifications arising from virtually any transaction in which they are engaged. an education campaign should be targeted at different segments of the population. school-age children should be included in any such campaign. any networked device, particularly those used by consumer-citizens, should come with a privacy warning much like the warnings on tobacco products. when the uk department of trade and industry (dti) released its information security review, the uk e-commerce minister emphasised that everyone has a role to play in protecting information: "risks are not well managed. we need to dispel the illusion the information security issues are somebody else's problem. it's time to roll up our sleeves." the oecd shares this point of view. it has said that "all participants in the new information society … need … a greater awareness and understanding of security issues and the need to develop a 'culture of security'." the oecd uses the word "participants", which equates to "stakeholders", and virtually everyone is a participant or stakeholder -governments, businesses, other organisations and individual users. oecd guidelines are aimed at promoting a culture of security, raising awareness and fostering greater confidence (i.e., trust) among all participants. there are various ways of raising awareness, and one of those ways would be to have some contest or competition for the best security or privacy-enhancing product or service of the year. the us government's department of homeland security is sponsoring such competitions, and europe could usefully draw on their experience to hold similar competitions in europe. in the same way as the principle that "not everything that you read in the newspapers is true" has long been part of general education, in the ict age, awareness should generally be raised by organisations that are trustworthy and as close to the citizen as possible (i.e., on the local or regional level. questions of privacy, identity and security are, or should be, an integral part of the professional education of computer scientists. we agree with and support the commission's "invitation" to member states to "stimulate the development of network and information security programmes as part of higher education curricula". perhaps one of the best safeguards is public opinion, stoked by stories in the press and the consequent bad publicity given to perceived invasions of privacy by industry and government. new technologies often raise policy issues, and this is certainly true of ambient intelligence. ami offers great benefits, but the risk of not adequately addressing public concerns could mean delays in implementing the technologies, a lack of public support for taxpayer-funded research and vociferous protests by privacy protection advocates. cultural artefacts, such as films and novels, may serve as safeguards against the threats and vulnerabilities posed by advanced technologies, including ambient intelligence. 
science fiction in particular often presents a dystopian view of the future in which technology is used to manipulate or control people; in so doing, such artefacts raise our awareness and serve as warnings against the abuse of technology. a new york times film critic put it this way: "it has long been axiomatic that speculative science-fiction visions of the future must reflect the anxieties of the present: fears of technology gone awry, of repressive political authority and of the erosion of individuality and human freedom." an example of such a cultural artefact is steven spielberg's film minority report, which depicts a future embedded with ambient intelligence and serves to convey messages or warnings from the director to his audience. minority report is by no means unique as a cultural artefact warning that future technologies are a double-edged sword that cuts both ways.

to implement socio-economic safeguards will require action by many different players. unfortunately, the very pervasiveness of ami means that no single action by itself will be sufficient as a safeguard. a wide variety of socio-economic safeguards, probably even wider than those we have highlighted in the preceding sections, will be necessary. as implementation of ami has already begun (with rfids, surveillance systems, biometrics, etc.), it is clearly not too soon to begin implementation of safeguards. we recommend, therefore, that all stakeholders, including the public, contribute to this effort.

the fast emergence of information and communication technologies and the growth of online communication, e-commerce and electronic services that go beyond the territorial borders of the member states have led the european union to adopt numerous legal instruments such as directives, regulations and conventions on e-commerce, consumer protection, electronic signatures, cyber crime, liability, data protection, privacy and electronic communication … and many others. even the european charter of fundamental rights will play an important role in relation to the networked information society. our analysis of the dark scenarios shows that we may encounter serious legal problems when applying the existing legal framework to address the intricacies of an ami environment. our proposed legal safeguards should be considered as general policy options, aimed at stimulating discussion between stakeholders and, especially, policymakers.

law is only one of the available sets of tools for regulating behaviour, next to social norms, market rules and "code" - the architecture of the technology (e.g., of cyberspace, wireless and wired networks, security design, encryption levels, rights management systems, mobile telephony systems, user interfaces, biometric features, handheld devices and accessibility criteria) - and many other tools. the regulator of ambient intelligence can, for instance, achieve certain aims directly by imposing laws, but also indirectly by, for example, influencing the rules of the market. regulatory effect can also be achieved by influencing the architecture of a certain environment. the architecture of ami might well make certain legal rules difficult to enforce (for example, the enforcement of data protection obligations on the internet or the enforcement of copyright in peer-to-peer networks), and might cause new problems particularly related to the new environment (spam, dataveillance).
on the other hand, the "code" has the potential to regulate by enabling or disabling certain behaviour, while law regulates via the threat of sanction. in other words, software and hardware constituting the "code", and architecture of the digital world, causing particular problems, can be at the same time the instrument to solve them. regulating through code may have some specific advantages: law traditionally regulates ex post, by imposing a sanction on those who did not comply with its rules (e.g., in the form of civil damages or criminal prosecution). architecture regulates by putting conditions on one's behaviour, allowing or disallowing something, not allowing the possibility to disobey. it regulates ex ante. ambient intelligence is particularly built on software code. this code influences how ambient intelligence works, e.g., how the data are processed, but this code itself can be influenced and accompanied by regulation. thus, the architecture can be a tool of law. this finding is more than elementary. it shows that there is a choice: should the law change because of the "code"? or should the law change "code" and thus ensure that certain values are protected? the development of technology represents an enormous challenge for privacy, enabling increasing surveillance and invisible collection of data. a technology that threatens privacy may be balanced by the use of a privacy-enhancing technology: the "code", as lessig claims, can be the privacy saviour. other technologies aim to limit the amount of data actually collected to the necessary minimum. however, most of the current technologies simply ignore the privacy implications and collect personal data when there is no such need. a shift of the paradigm to privacy-bydesign is necessary to effectively protect privacy. indeed, technology can facilitate privacy-friendly verification of individuals via, for example, anonymous and pseudonymous credentials. leenes and koops recognise the potential of these privacyenhancing technologies (pets) to enforce data protection law and privacy rules. but they also point at problems regarding the use of such technologies, which are often troublesome in installation and use for most consumers. moreover, industry is not really interested in implementing privacy-enhancing technology. they see no (economic) reason to do it. the analysis of leenes and koops shows that neither useful technology, nor law is sufficient in itself. equally important is raising stakeholder awareness, social norms and market rules. all regulatory means should be used and have to be used to respond to problems of the new environment to tackle it effectively. for the full effectiveness of any regulation, one should always look for the optimal mixture of all accessible means. as the impact and effects of the large-scale introduction of ami in societies spawn a lot of uncertainties, the careful demarche implied by the precautionary principle, with its information, consultation and participation constraints, might be appropriate. the application of this principle might inspire us in devising legal policy options when, as regards ami, fundamental choices between opacity tools and transparency tools must be made. opacity tools proscribe the interference by powerful actors into the individual's autonomy, while transparency tools accept such interfering practices, though under certain conditions which guarantee the control, transparency and accountability of the interfering activity and actors. 
legal scholars do not discuss law in general terms. their way of thinking always involves an application of the law in concrete or exemplified situations. the legislator will compare concrete examples and situations with the law and will not try to formulate general positions or policies. thus, the proposed legal framework will not deal with the ami problems in a general way, but will focus on concrete issues and apply opacity and transparency solutions accordingly.

another particularity of legal regulation in cyberspace is the absence of a central legislator. though our legal analysis is based mostly on european law, we emphasise that not everything is regulated at a european level. regulation of (electronic) identity cards, for instance, concerns a crucial element in the construction of an ami environment, but is within the powers of the individual member states. both at european and national level, some decision-making competences have been delegated to independent advisory organs (children's rights commissioners, data protection authorities). hence, there exist many of what we might call "little legislators" that in some way adjust the often executive origin of legislation: the article data protection working party, national children's rights commissioners and international standardisation bodies can and do, for example, draft codes of conduct that often (but not always) constitute the basis for new legislation. we do not suggest the centralisation of the law-making process. on the contrary, we recommend respect for the diversity and plurality of lawmakers. the solutions produced by the different actors should be taken into consideration, and these actors should be actively involved in policy discussions. development of case law should also be closely observed. consulting concerned citizens and those who represent citizens (including legislators) at the stage of development would increase the legitimacy of new technologies.

privacy aims to ensure non-interference in private and individual matters. it offers an instrument to safeguard the opacity of the individual and puts limits on the interference by powerful actors into the individual's autonomy. normative in nature, regulatory opacity tools should be distinguished from regulatory transparency tools, whose goal is to control the exercise of power rather than to restrict it. we observe today that the reasonable expectation of privacy is eroding due to emerging new technologies and possibilities for surveillance: it is developing into an expectation of being monitored. should this, however, lead to diminishing the right to privacy? ambient intelligence may seriously threaten this value, but the need for privacy (e.g., the right to be let alone) will probably remain, be it in another form adapted to new infrastructures (e.g., the right to be left offline). the right to privacy in a networked environment could be enforced by any means of protecting the individual against any form of dataveillance. such means are in line with the data minimisation principle of data protection law, which is a complementary tool to privacy. however, in ambient intelligence, where collecting and processing personal data is almost a prerequisite, new tools of opacity such as the right to be left offline (in time, e.g., during certain minutes at work, or in space, e.g., in public bathrooms) could be recognised. several instruments of opacity can be identified. we list several examples below, and there may be others.
additional opacity recommendations are made in subsequent sections, for example, with regard to biometrics. we observe that there is not necessarily an internal coherence between the examples listed below. the list should be understood as a wish list or a set of suggestions to be consulted freely. opacity designates a zone of non-interference, which should not be confused with a zone of invisibility: privacy, for instance, does not imply secrecy; it implies the possibility of being oneself openly without interference. another word might have been "impermeability", but that term is too strong and does not contrast as nicely with "transparency" as "opacity" does (see hildebrandt, m., and s. gutwirth).

the concept of a digital territory represents a vision that introduces the notions of space and borders into future digitised everyday life. it could be visualised as a bubble, the boundaries and transparency of which depend on the will of its owner. the notion of a digital territory aims for a "better clarification of all kinds of interactions in the future information society. without digital boundaries, the fundamental notion of privacy or the feeling of being at home will not take place in the future information society." the concept of digital territories encompasses the notion of a virtual residence, which can be seen as a virtual representation of the smart home. the concept of digital territories could provide the individual with a possibility to access - and stay in - a private digital territory of his own at (any) chosen time and place. this private, digital space could be considered as an extension of the private home. already today, people store their personal pictures on distant servers, read their private correspondence online, provide content providers with their viewing and consuming behaviour for the purpose of digital rights management, and communicate with friends and relatives through instant messengers and internet telephony services. the prognosis is that "the physical home will evolve to 'node' in the network society, implying that it will become intimately interconnected to the virtual world."

the law guarantees neither the establishment nor the protection of an online private space in the same way as the private space in the physical world is protected. currently, adequate protection is lacking. for example, the new data retention law requires that telecommunication service providers keep communication data at the disposal of law enforcement agencies. the retention of communication data relates to mobile and fixed phone data, internet access, e-mail and e-telephony. data to be retained include the place, time, duration and destination of communications. what are the conditions for accessing such data? is the individual informed when such data are accessed? does he have the right to be present when such data are examined? does the inviolability of the home extend to the data that are stored on a distant server? another example of inadequate protection concerns the increasing access to home activities from a distance, for example, as a result of the communication data generated by domestic applications that are connected to the internet. in both examples, there is no physical entry into the private place. to ensure that these virtual private territories become a private domain for the individual, a regulatory framework could be established to prevent unwanted and unnoticed interventions, similar to that which currently applies to the inviolability of the home.
a set of rules needs to be envisaged to guarantee such protection, amongst them procedural safeguards similar to those currently applicable to the protection of our homes against state intervention (e.g., requiring a search warrant). technical solutions aimed at defending private digital territories against intrusion should be encouraged and, if possible, legally enforced. the individual should be empowered with the means to decide freely what kinds of information he or she is willing to disclose, and that aspect should be included in the digital territory concept. similarly, vulnerable home networks should be granted privacy protection. such protection could be extended to the digital movement of the person: just as the privacy protection afforded the home has been or can be extended to the individual's car, so the protection could be extended to home networks, which might contact external networks.

privacy at the workplace has already been extensively discussed. most of the legal challenges that may arise can, we believe, be answered with legal transparency rules. more drastic, prohibitive measures may be necessary in certain situations involving too far-reaching or unnecessary surveillance, which a society considers as infringing upon the dignity of the employee. in addition, transparency rules are needed to regulate other, less intrusive problems. we recall here the specific role of law-making institutions in the area of labour law. companies must discuss their surveillance system and its usage in collective negotiations with labour organisations and organisations representing employees before its implementation in a company or a sector, taking into account the specific needs and risks involved (e.g., workers in a bank vs. workers in public administration). all employees should always be clearly and a priori informed about the employee surveillance policy of the employer (when and where surveillance is taking place, what its finality is, what information is collected, how long it will be stored, what the (procedural) rights of the employees are when personal data are to be used as evidence, etc.).

specific cyber territories for children have to be devised along the same lines. the united nations convention on the rights of the child ( ) contains a specific privacy right for children, and sets up monitoring instruments such as national children's rights commissioners. opinions of such advisory bodies should be carefully taken into account in policy discussion. national children's rights commissioners could take up problems relating to the permanent digital monitoring of children.

as concluded in the legal analysis of the dark scenarios above, courts are willing to protect one's privacy but, at the same time, they tend to admit evidence obtained through a violation of privacy or data protection. there is a lack of clarity and uniformity regarding the consequences of privacy violations. in the case that gave rise to the khan doctrine (discussed below), the evidence was secured by the police in a manner incompatible with the requirements of article of the european convention on human rights (echr). the court accepted that the admission of evidence obtained in breach of the privacy right is not necessarily a breach of the required fairness under article of the echr (the right to a fair trial), since the process taken as a whole was fair in the sense of that article. the evidence against the accused was admitted and led to his conviction. the khan doctrine (followed in cases such as doerga v. the netherlands and p.g. and j.h. v.
the united kingdom) has been followed by at least some national courts. the fact that there is no general acceptance of an exclusionary rule creates legal uncertainty. its general acceptance is, however, necessary to protect the opacity of the individual in a more effective way. a departure by the courts from their current position (namely, towards the non-admission of evidence obtained through privacy and/or data protection law infringements) could be considered, and a legislative prohibition on the admissibility of such evidence (that is, general acceptance of the exclusionary rule) envisaged.

in ambient intelligence, the use of implants can no longer be considered as a kind of futuristic or extraordinary exception. whereas it is clear that people may not be forced to use such implants, people may easily become willing to equip themselves with such implants on a (quasi) voluntary basis, be it, for example, to enhance their bodily functions or to obtain a feeling of security through always-on connections to anticipate possible emergencies. such a trend requires a careful assessment of the opacity and transparency principles at a national, european and international level. currently, in europe, the issue of medical implants has already been addressed. in ami, however, implants might be used for non-medical purposes. one of our dark scenarios suggests that organisations could force people to have an implant so they could be located anywhere at any time. it has been stated that implants should be used only when the aim pursued cannot be achieved by less body-intrusive means, and that informed consent is necessary to legitimise the use of implants. we agree with those findings. the european group on ethics in science and new technologies goes further, stating that non-medical (profit-related) applications of implants constitute a potential threat to human dignity. applications of implantable surveillance technologies are only permitted when there is an urgent and justified necessity in a democratic society, and must be specified in legislation. we agree that such applications should be diligently scrutinised. we propose that the appropriate authorities (e.g., the data protection officer) control and authorise applications of implants after an assessment of the particular circumstances in each case. when an implant enables the tracking of people, people should have the possibility to disconnect the implant at any given moment, and they should have the possibility to be informed when a (distant) communication (e.g., through rfid) is taking place. we agree with the european group on ethics in science and new technologies that irreversible ict implants should not be used, except for medical purposes. further research on the long-term impact of ict implants is also recommended.

another safeguard to guarantee the opacity of the individual is the possibility to act under anonymity (or at least under pseudonymity or "revocable anonymity"). the article working party has considered anonymity an important safeguard for the right to privacy. we repeat here its recommendations:
(a) the ability to choose to remain anonymous is essential if individuals are to preserve the same protection for their privacy online as they currently enjoy offline.
(b) anonymity is not appropriate in all circumstances.
(c) legal restrictions which may be imposed by governments on the right to remain anonymous, or on the technical means of doing so (e.g., availability of encryption products), should always be proportionate and limited to what is necessary to protect a specific public interest in a democratic society.
(e) some controls over individuals contributing content to online public fora are needed, but a requirement for individuals to identify themselves is in many cases disproportionate and impractical; other solutions are to be preferred.
(f) anonymous means to access the internet (e.g., public internet kiosks, prepaid access cards) and anonymous means of payment are two essential elements for true online anonymity.

according to the common criteria for information technology security evaluation document (iso ), anonymity is only one of the requirements for the protection of privacy, next to pseudonymity, unlinkability, unobservability, user control/information management and security protection. all these criteria should be considered as safeguards for privacy. the e-signature directive promotes the use of pseudonyms and, at the same time, aims to provide security for transactions. currently, the directive on electronic signatures states that only advanced electronic signatures (those based on a qualified certificate and created by a secure signature-creation device) satisfy the legal requirements of a signature in relation to data in electronic form in the same manner as a handwritten signature satisfies those requirements in relation to paper-based data, and are admissible as evidence in legal proceedings. member states must ensure that an electronic signature (advanced or not) is not denied legal effectiveness and admissibility as evidence in legal proceedings solely on the grounds that it is: (a) in electronic form, (b) not based upon a qualified certificate, (c) not based upon a qualified certificate issued by an accredited certification service-provider, or (d) not created by a secure signature-creation device.

in ambient intelligence, the concept of unlinkability can become as important as the concepts of anonymity or pseudonymity. unlinkability "ensures that a user may make multiple uses of resources or services without others being able to link these uses together. … unlinkability requires that users and/or subjects are unable to determine whether the same user caused certain specific operations in the system." when people act pseudonymously or anonymously, their behaviour at different times and places in the ambient intelligence network could still be linked and consequently be subject to control, profiling and automated decision-making: linking data relating to the same non-identifiable person may result in privacy threats similar to those of linking data that relate to an identified or identifiable person. thus, in addition to and in line with the right to remain anonymous goes the use of anonymous and pseudonymous credentials, accompanied by unlinkability in certain situations (e.g., e-commerce), thereby reconciling privacy requirements with accountability requirements. (a minimal illustrative sketch of such per-service pseudonyms is given below.) in fact, such mechanisms should always be foreseen whenever disclosing someone's identity or linking the information is not necessary. such necessity should not be easily assumed, and in every circumstance more privacy-friendly technological solutions should be sought. however, the use of anonymity should be well balanced: to avoid its misuse, digital anonymity could be further legally regulated, especially by stating when it is not appropriate. a relevant instrument in this respect is the cybercrime convention, which provides a definition of several criminal offences related to cybercrime and general principles concerning international co-operation. the cybercrime convention, however, allows for different standards of protection.
the convention obliges its signatories to criminalise certain offences under national law, but member states are free to narrow the scope of the definitions. the most important weakness of this convention is the slow progress in its ratification by signatory states. council framework decision / /jha also provides for criminal sanctions against cybercrimes. the framework decision is limited, however, both in scope and territory, since it only defines a limited number of crimes and is only applicable to the member states of the european union. international co-operation in preventing, combating and prosecuting criminals is needed and may be facilitated by a wide range of technological means, but these new technological possibilities should not erode the privacy of innocent citizens who are deemed to be not guilty until proven otherwise. cybercrime prosecution, and more importantly crime prevention, might be facilitated by a wide range of technological means, among them, those that provide for the security of computer systems and data against attacks. almost all human activity in ami can be reduced to personal data processing: opening doors, sleeping, walking, eating, putting lights on, shopping, walking in a street, driving a car, purchasing, watching television and even breathing. in short, all physical actions become digital information that relates to an identified or identifiable individual. often, the ambient intelligence environment will need to adapt to individuals and will therefore use profiles applicable to particular individuals or to individuals within a group profile. ami will change not only the amount, but also the quality of data collected so that we can be increasingly supported in our daily life (a goal of ambient intelligence). ami will collect data not only about what we are doing, when we do it and where we are, but also data on how we have experienced things. one can assume that the accuracy of the profiles, on which the personalisation of services depends, will improve as the amount of data collected grows. but as others hold more of our data, so grow the privacy risks. thus arises the fundamental question: do we want to minimise personal data collection? instead of focusing on reducing the amount of data collected alone, should we admit that they are indispensable for the operation of ami, and focus rather on empowering the user with a means to control such processing of personal data? data protection is a tool for empowering the individual in relation to the collection and processing of his or her personal data. the european data protection directive imposes obligations on the data controller and supports the rights of the data subject with regard to the transparency and control over the collection and processing of data. it does not provide for prohibitive rules on data processing (except for the processing of sensitive data and the transfer of personal data to third countries that do not ensure an adequate level of protection). instead, the eu data protection law focuses on a regulatory approach and on channelling, controlling and organising the processing of personal data. as the title of directive / /ec indicates, the directive concerns both the protection of the individual with regard to the processing of personal data and the free movement of such data. the combination of these two goals in directive / /ec reflects the difficulties we encounter in the relations between ambient intelligence and data protection law. 
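the anonymity, pseudonymity and unlinkability safeguards discussed above can be illustrated with a small, purely illustrative sketch (in python; the key handling, names and service identifiers are our own assumptions rather than a prescribed mechanism): a pseudonym is derived separately for each service provider from a secret held only on the user's side, so that two providers cannot link the same person across their records, while each provider still sees an identifier that is stable enough for accountability (e.g., billing) within its own service:

```python
# illustrative sketch only: per-service pseudonyms derived from a user-held secret
import hmac
import hashlib

def service_pseudonym(user_secret: bytes, service_id: str) -> str:
    """derive a stable pseudonym for one service; different services yield unlinkable values."""
    return hmac.new(user_secret, service_id.encode(), hashlib.sha256).hexdigest()

secret = b"key held only on the user's own device"
print(service_pseudonym(secret, "transport-operator"))
print(service_pseudonym(secret, "online-pharmacy"))
# without the secret, the two identifiers cannot be linked to each other,
# yet each remains stable within its own service
```

such per-service pseudonyms are, of course, only one building block: anonymous credentials, unlinkability at the network layer and the legal safeguards discussed in this section would still be needed for the stronger guarantees referred to above.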
there is no doubt that some checks and balances in the use of data should be put in place in the overall architecture of the ami environment. civil movements and organisations dealing with human rights, privacy or consumer rights, observing and reacting to the acts of states and undertakings, might provide such guarantees. it is also important to provide incentives for all actors to adhere to legal rules. education, media attention, and the development of good practices and codes of conduct are of crucial importance. liability rules and rules aimed at the enforcement of data protection obligations will become increasingly important.

data protection law provides for the right to information on data processing and to access or rectification of data, which constitute important guarantees of individual rights. however, its practical application in an ami era could easily lead to an administrative nightmare, as information overload would make it unworkable. we should try to remedy such a situation in a way that does not diminish this right. the individual's right to information is a prerequisite for protecting his interests. such a right corresponds to a decentralised system of identity (data) management, but it seems useful to tackle it separately to emphasise the importance of the individual's having access to information about the processing of his data. because of the large amounts of data to be processed in an ami world, the help or support of intelligent agents to manage such information streams seems indispensable. information about what knowledge has been derived from the data could help the individual in proving causality in case of damage. further research on how to reconcile access to the knowledge in profiles (which might be construed as a trade secret in some circumstances) with intellectual property rights would be desirable. the right to be informed could be facilitated by providing information in a machine-readable language, enabling the data subject to manage the information flow through or with the help of (semi-)autonomous intelligent agents. of course, this will be more difficult in situations of passive authentication, where no active involvement of the user takes place (e.g., through biometrics and rfids). thus, information on the identity of the data controller and the purposes of processing could exist both in human-readable and machine-readable language. the way such information is presented to the user is of crucial importance, i.e., it must be presented in an easily comprehensible, user-friendly way. in that respect, the article working party has provided useful guidelines and proposed multilayer eu information notices essentially consisting of three layers:
layer 1 - the short notice contains core information required under article of the data protection directive (identity of the controller, purpose of processing, or any additional information which, in the view of the particular circumstances of the case, must be provided to ensure fair processing). a clear indication must be given as to how the individual can access additional information.
layer 2 - the condensed notice contains all relevant information required under the data protection directive.
this includes the name of the company, the purpose of the data processing, the recipients or categories of recipients of the data, whether replies to the questions are obligatory or voluntary as well as the possible consequences of failure to reply, the possibility of transfer to third parties, the rights of access, rectification and opposition, and the choices available to the individual. in addition, a point of contact must be given for questions and information on redress mechanisms, either within the company itself or details of the nearest data protection agency.
layer 3 - the full notice includes all national legal requirements and specificities. it could contain a full privacy statement with possible additional links to national contact information.
we recommend that industry and law enforcement agencies consider an approach for ami environments similar to that recommended by the article working party. electronic versions of such notices should be sufficient in most circumstances.

our dark scenarios indicate a new kind of practice that has emerged in recent years in the sector of personal data trading: while some companies collect personal data in an illegal way (not informing the data subjects, transfer of data to third parties without prior consent, usage for different purposes, installing spyware, etc.), these personal data are shared, sold and otherwise transferred throughout a chain of existing and disappearing companies to the extent that the origin of the data and the original data collector cannot be traced back. this practice has been described as "data laundering", by analogy with money laundering: it refers to a set of activities aiming to cover the illegitimate origin of data. in our ami future, we should assume that the value of personal data, and therefore the (illegal) trading in these data, will only grow. a means to prevent data laundering could be to oblige those who buy or otherwise acquire databases, profiles and vast amounts of personal data to check diligently the legal origin of the data. without checking the origin and/or legality of the databases and profiles, the buyer could be considered the equivalent of a receiver of stolen goods and thus be held liable for illegal data processing. buyers could be obliged to notify the national data protection officers when personal data(bases) are acquired. those involved or assisting in data laundering could be subject to criminal sanctions.

ami requires efficient, faultless exchanges of relevant data and information throughout the ami network. the need for efficiency requires interoperable data formats and interoperable hardware and software for data processing. dark scenario (about the bus accident) has shown the need for interoperability in ambient intelligence, but it must be recognised that, at the same time, interoperable data and data processing technologies in all sectors and all applications could threaten trust, privacy, anonymity and security. full interoperability and free flow of personal data are not always desirable, and should not be considered as unquestionable. interoperability can entail an unlimited availability of personal data for any purpose. interoperability may infringe upon the finality and purpose specification principles and erode the rights and guarantees offered by privacy and data protection law. moreover, the purposes for which the data are available are often too broadly described (what is "state security", "terrorism", "a serious crime"?). data can become available afterwards for any purpose.
interoperability of data and data processing mechanisms facilitates possible function creep (use of data for purposes other than originally envisaged). interoperability could also contribute to the criminal use of ambient intelligence, for example, by sending viruses to objects in the network (interoperability opens the door for fast transmission and reproduction of a virus) or by abusing data (interoperable data formats make data practical for any usage). interoperability is thus not only a technological issue. awareness - already today - of the possible negative sides of interoperability should bring about a serious assessment of both law and technology before the market comes up with tools for interoperability. legal initiatives in france (e.g., requiring interoperability of the itunes music platform) and sanctions imposed by the european commission (imposing interoperability of the microsoft work group server operating system) indicate clearly that interoperability is desired at a political and societal level.

in the communication from the commission to the council and the european parliament on improved effectiveness, enhanced interoperability and synergies among european databases in the area of justice and home affairs of , interoperability is defined as the "ability of it systems and of the business processes they support to exchange data and to enable the sharing of information and knowledge". this is, however, a more technological definition: it "explicitly disconnects the technical and the legal/political dimensions from interoperability, assuming that the former are neutral and the latter can come into play later or elsewhere. … indeed, technological developments are not inevitable or neutral, which is mutatis mutandis also the case for technical interoperability. the sociology of sciences has shown that any technological artefact has gone through many small and major decisions that have moulded it and given it its actual form. hence, the development of information technology is the result of micro politics in action. technologies are thus interwoven with organisation, cultural values, institutions, legal regulation, social imagination, decisions and controversies, and, of course, also the other way round. any denial of this hybrid nature of technology and society blocks the road toward a serious political, democratic, collective and legal assessment of technology. this means that technologies cannot be considered as faits accomplis or extrapolitical matters of fact." this way of proceeding has also been criticised by the european data protection supervisor, according to whom it leads to justifying the ends by the means.

taking into account the need for interoperability, restrictions in the use and implementation of interoperability are required, based on the purpose specification and proportionality principles. to this extent, a distinction should be drawn between the processing of data for public (enforcement) purposes and for other purposes. moreover, to achieve certain purposes for which access to data has been granted, access to the medium carrying the information (e.g., a chip) may be sufficient, for example, when verifying one's identity. there should always be clarity as to which authorities are being granted access. in the case of deployment of centralised databases, a list of authorities that have access to the data should be promulgated in an adequate, official, freely and easily accessible publication.
such clarity and transparency would contribute to security and trust, and protect against abuses in the use of databases. the proportionality and purpose limitation principles are already binding under existing data protection laws. the collection and exchange of data (including interoperability) should be proportional to the goals for which the data have been collected. it will not be easy to elaborate the principles of proportionality and purpose limitation in ambient intelligence; previously collected data may serve for later developed applications or discovered purposes. creation and utilisation of databases may offer additional benefits (which are thus additional purposes), e.g., in the case of profiling. those other (derived) purposes should, as has been indicated in the opinion of the european data protection supervisor, be treated as independent purposes for which all legal requirements must be fulfilled. technical aspects of system operation can have a great impact on the way a system works, and how the proportionality principles and purpose limitation principles are implemented since they can determine, for example, if access to the central database is necessary, or whether access to the chip or part of the data is possible and sufficient. biometric technology can be a useful tool for authentication and verification, and may even be a privacy-enhancing technology. however, it can also constitute a threat to fundamental rights and freedoms. thus, specific safeguards should be put in place. biometric safeguards have already been subject of reflection by european data protection authorities: the article working party has stated that biometric data are in most cases personal data, so that data protection principles apply to processing of such data. on the principle of proportionality, the article working party points out that it is not necessary (for the sake of authentication or verification) to store biometric data in central databases, but in the medium (e.g., a card) remaining in the control of the user. the creation and use of centralised databases should always be carefully assessed before their deployment, including prior checking by data protection authorities. in any case, all appropriate security measures should be put in place. framing biometrics is more than just deciding between central or local storage. even storage of biometric data on a smart card should be accompanied by other regulatory measures that take the form of rights for the card-holders (to know what data and functions are on the card; to exclude certain data or information from being written onto the card; to reveal at discretion all or some data from the card; to remove specific data or information from the card). biometric data should not be used as unique identifiers, mainly because biometric data still do not have sufficient accuracy. of course, this might be remedied in the progress of science and technological development. there remains, however, a second objection: using biometrics as the primary key will offer the possibility of merging different databases, which can open the doors for abuses (function creep). european advisory bodies have considered biometric data as a unique identifier. generally speaking, since the raw data might contain more information than actually needed for certain finalities (including information not known at the moment of the collection, but revealed afterwards due to progress in science, e.g., health information related to biometric data), it should not be stored. 
other examples of opacity rules applied to biometrics might be prohibitions on the use of "strong" multimodal biometrics for everyday activities (unless for high-security applications). codes of conduct can be appropriate tools to further regulate the use of the technology in particular sectors.

ami will depend on profiling as well as on authentication and identification technologies. to enable ubiquitous communication between a person and his or her environment, both things and people will have to be traced and tracked. rfid seems to offer the technological means to implement such tracking. like biometrics, rfid is an enabling technology for real-time monitoring and decision-making. like biometrics, rfids can advance the development of ami and provide many advantages for users, companies and consumers. no legislative action seems needed to support this developing technology; market mechanisms are handling this. there is, however, a risk to the privacy interests of the individual and of a violation of the data protection principles, as caspian and other privacy groups have stated. rfid use should be in accordance with privacy and data protection regulations.

the article working party has already given some guidelines on the application of the principles of eu data protection legislation to rfids. it stresses that the data protection principles (purpose limitation principle, data quality principle, conservation principle, etc.) must always be complied with when the rfid technology leads to processing of personal data in the sense of the data protection directive. as the article working party points out, the consumer should always be informed about the presence of both rfid tags and readers, as well as about the responsible controller, the purpose of the processing, whether data are stored, and the means to access and rectify data. here, techniques of (visual) indication of activation would be necessary. the data subject would have to give his consent for using and gathering information for any specific purpose. the data subject should also be informed about what type of data is gathered and whether the data will be used by third parties. in ami, such rights may create a great burden on the data subject, the responsible data controller and all data processors. nevertheless, adequate but simplified notices about the data processors' policy would be welcome (e.g., using adequate pictograms or similar means). in our opinion, such information should always be provided to consumers when rfid technology is used, even if the tag does not contain personal data in itself. the data subject should also be informed how to discard, disable or remove the tag. the right to disable the tag can be related to the consent principle of data protection, since the individual should always have the possibility to withdraw his consent. disabling the tag should at least be possible when the consent of the data subject is the sole legal basis for processing the data. disabling the tag should not lead to any discrimination against the consumer (e.g., in terms of the guarantee conditions). technological and organisational measures (e.g., the design of rfid systems) are of crucial importance in ensuring that the data protection obligations are respected (privacy by design, e.g., by technologically blocking unauthorised access to the data). thus, the availability of and compliance with privacy standards are of particular importance. the concept of "personal data" in the context of rfid technology is contested.
the working party states: in assessing whether the collection of personal data through a specific application of rfid is covered by the data protection directive, we must determine: (a) the extent to which the data processed relates to an individual, and (b) whether such data concerns an individual who is identifiable or identified. data relates to an individual if it refers to the identity, characteristics or behaviour of an individual or if such information is used to determine or influence the way in which that person is treated or evaluated. in assessing whether information concerns an identifiable person, one must apply recital of the data protection directive which establishes that "account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person." and further: "finally, the use of rfid technology to track individual movements which, given the massive data aggregation and computer memory and processing capacity, are if not identified, identifiable, also triggers the application of the data protection directive." (article data protection working party, working document on data protection issues related to rfid.)

further research on rfid technology and its privacy implications is recommended. this research should also aim at determining whether any legislative action is needed to address the specific privacy concerns of rfid technology. further development of codes of conduct and good practices is also recommended.

profiling is as old as life, because it is a kind of knowledge that unconsciously or consciously supports the behaviour of living beings, humans not excluded. it might well be that the insight that humans often "intuitively know" something before they "understand" it can be explained by the role profiling spontaneously plays in our minds. thus, there is no reason to prohibit, by means of opacity rules, automated profiling and data mining concerning individuals. profiling activities should in principle be ruled by transparency tools. in other words, the processing of personal data (collection, registration and processing in the strict sense) is not prohibited but is submitted to a number of conditions guaranteeing the visibility, controllability and accountability of the data controller and the participation of the data subjects.

data protection rules apply to profiling techniques (at least in principle). the collection and processing of traces surrounding the individual must be considered as processing of personal data in the sense of existing data protection legislation. both individual and group profiling are dependent on such collection and on the processing of data generated by the activities of individuals. and that is precisely why, in legal terms, no profiling is thinkable outside data protection. there is an ongoing debate in contemporary legal literature about the applicability of data protection to processing practices involving data that are considered anonymous, i.e., data that do not allow the identification of a specific individual. some contend that data protection rules do not allow processing practices that bring together data on certain individuals without trying to identify the said individual (in terms of physical location or name). others contend that data protection rules do not apply to profiling practices that process data relating to non-identifiable persons (in the sense of the data protection directive).
we hold that it is possible to interpret the european data protection rules in a broad manner covering all profiling practices, but the courts have not spoken on this yet. data protection should apply to all profiling practices. when there is confusion in the application and interpretation of the legal instruments, they should be adapted so that they do apply to all profiling practices. profiling practices and the consequent personalisation of the ambient intelligence environment lead to an accumulation of power in the hands of those who control the profiles and should therefore be made transparent. the principles of data protection are an appropriate starting point for coping with profiling in a democratic constitutional state, as they do impose good practices. nevertheless, while the default position of data protection is transparency ("yes, you can process, but …"), it does not exclude opacity rules ("no, you cannot process, unless …"). in relation to profiling, two examples of such rules are relevant.

on the one hand, of course, there is the explicit prohibition against taking decisions affecting individuals solely on the basis of the automated application of a profile without human intervention (see article of the data protection directive). the article on automated individual decisions states: " . member states shall grant the right to every person not to be subject to a decision which produces legal effects concerning him or significantly affects him and which is based solely on automated processing of data intended to evaluate certain personal aspects relating to him, such as his performance at work, creditworthiness, reliability, conduct, etc. . subject to the other articles of this directive, member states shall provide that a person may be subjected to a decision of the kind referred to in paragraph if that decision: (a) is taken in the course of the entering into or performance of a contract, provided the request for the entering into or the performance of the contract, lodged by the data subject, has been satisfied or that there are suitable measures to safeguard his legitimate interests, such as arrangements allowing him to put his point of view; or (b) is authorized by a law which also lays down measures to safeguard the data subject's legitimate interests." this prohibition seems obvious because in such a situation, probabilistic knowledge is applied to a real person (we recall that personal data in the eu data protection directive refers to "any information relating to an identified or identifiable natural person" (article ); see also "the role of data protection law and non-discrimination law in group profiling in the private sector").

on the other hand, there is the (quintessential) purpose specification principle, which provides that the processing of personal data must meet specified, explicit and legitimate purposes. as a result, the competence to process is limited to well-defined goals, which implies that the processing of the same data for other, incompatible aims is prohibited. this, of course, substantially restricts the possibility to link different processing operations and databases for profiling or data mining objectives. the purpose specification principle is definitely at odds with the logic of interoperability and availability of personal data: the latter would imply that all databases can be used jointly for profiling purposes.
in other words, the fact that the legal regime applicable to profiling and data mining is data protection does not give a carte blanche to mine and compare personal data that were not meant to be connected. the european data protection supervisor indicated in his annual report a number of processing operations that are likely to encompass specific risks to the rights and freedoms of data subjects, even if the processing does not occur upon sensitive data. this list relates to processing operations (a) of data relating to health and to suspected offences, offences, criminal convictions or security measures, (b) intended to evaluate personal aspects relating to the data subject, including his or her ability, efficiency and conduct, (c) allowing linkages, not provided for pursuant to national or community legislation, between data processed for different purposes, and (d) for the purpose of excluding individuals from a right, benefit or contract. software can be the tool for regulating one's behaviour by simply allowing or not allowing certain acts. thus, technology constituting the "software code" can affect the architecture of the internet (and thus potentially of ami) and can provide effective means for enforcing the privacy of the individual. for example, cryptology might give many benefits: it could be used for pseudonymisation (e.g., encrypting ip addresses) and ensuring confidentiality of communication or commerce. privacy-enhancing technologies can have an important role to play, but they need an adequate legal framework. the directive on the legal protection of software obliges member states to provide appropriate remedies against a person committing any act of putting into circulation, or the possession for commercial purposes of, any means the sole intended purpose of which is to facilitate the unauthorised removal or circumvention of any technical devices which may have been applied to protect a computer program. this mechanism aims to protect programmes enforcing the intellectual property rights against circumvention. similar legal protection against circumvention of privacy-enhancing technologies could be legally foreseen. technology might go beyond what the law permits (e.g., drm prevents intellectual property infringements but at the same time might limit the rights of the lawful user). negative side effects of such technologies should be eliminated. more generally, when introducing new technology on the market, manufacturers together with relevant stakeholders should undertake a privacy impact assessment. development of a participatory impact assessment procedure would allow stakeholders to quickly identify and react to any negative features of technology (see also section . . ). the european data protection directive imposes obligations on the data controller and gives rights to the data subject. it aims to give the individual control over the collection and processing of his data. many provisions in the data protection directive have several weaknesses in an ami environment. principles of proportionality and fairness are relative and may lead to different assessments in similar situations; obtaining consent might not be feasible in the constant need for the collection and exchange of data; obtaining consent can be simply imposed by the stronger party. individuals might not be able to exercise the right to consent, right to information, access or rectification of data due to the overflow of information. thus, those rules might simply become unworkable in an ami environment. 
and even if workable (e.g., thanks to the help of the digital assistants), are they enough? should we not try to look for an approach granting the individual even more control? several european projects are involved in research on identity management. they focus on a decentralised approach, where a user controls how much and what kind of information he or she wants to disclose. identity management systems, while operating on a need-to-know basis, offer the user the possibility of acting under pseudonyms, under unlinkability or anonymously, if possible and desirable. among the other examples of such systems, there are projects that base their logic on the assumption that the individual has property over his data, and then could use licensing schemes when a transfer of data occurs. granting him property over the data is seen as giving him control over the information and its usage in a "distribution chain". however, it is doubtful if granting him property over the data will really empower the individual and give him a higher level of protection and control over his data. the property model also assumes that the data are disseminated under a contract. thus, the question might arise whether the data protection directive should serve as a minimum standard and thus limit the freedom of contracts. but as our dark scenarios show, there exist many cases in which the individual will not be able to freely enter into a contract. another question arises since our data are not always collected and used for commercial purposes. in most situations, the processing of personal data is a necessary condition for entering into a contractual relation (whereas the data protection directive states in article that data processing without the individual's consent to use of his personal data is legitimate when such processing is necessary for the performance of a contract). the most obvious example is the collection of data by police, social insurance and other public institutions. the individual will not always be free to give or not give his data away. the property model will not address these issues. it will also not stop the availability of the data via public means. a weakness of the property model is that it might lead to treating data only as economic assets, subject to the rules of the market. but the model's aim is different: the aim is to protect personal data, without making their processing and transfer impossible. regarding data as property also does not address the issue of the profile knowledge derived from personal data. this knowledge is still the property of the owner or the licenser of the profile. the data-as-property option also ignores the new and increasingly invisible means of data collection, such as rfids, cameras or online data collection methods. discussing the issue of whether personal data should become the individual's property does not solve the core problem. on the one hand, treating data as property may lead to a too high level of protection of personal information, which would conflict with the extensive processing needs of ami. on the other hand, it would, by default, turn personal data into a freely negotiable asset, no longer ruled by data protection, but left to market mechanisms and consent of the data subjects (more often than not to the detriment of the latter). finally, the data-as-property option loses its relevance in the light of a focus upon anonymisation and pseudonymisation of data processed in ami applications. 
the prime consortium proposes identity management systems controlled by data subjects. it aims to enable individuals to negotiate with service providers the disclosure of personal data according to the conditions defined. such agreement would constitute a contract. an intelligent agent could undertake the management on the user side. this solution is based on the data minimisation principle and on the current state of legislation. it proposes the enforcement of (some) current data protection and privacy laws. it seems to be designed more for the needs of the world today than for a future ami world. the user could still be forced to disclose more information than he or she wishes, because he or she is the weaker party in the negotiation; he or she needs the service. the fidis consortium has also proposed a decentralised identity management, the vision of which seems to go a bit further than the prime proposal. it foresees that the user profiles are stored on the user's device, and preferences relevant for a particular service are (temporarily) communicated to the service provider for the purpose of a single service. the communication of the profile does not have to imply disclosure of one's identity. if there is information extracted from the behaviour of the user, it is transferred by the ambient intelligent device back to the user, thus updating his profile. thus, some level of exchange of knowledge is foreseen in this model, which can be very important for the data subject's right to information. a legal framework for such sharing of knowledge from an ami-generated profile needs to be developed, as well as legal protection of the technical solution enabling such information management. such schemes rely on automated protocols for the policy negotiations. the automated schemes imply that the consent of the data subject is also organised by automatic means. we need a legal framework to deal with the situation wherein the explicit consent of the data subject for each collection of data is replaced by a "consent" given by an intelligent agent. in such automated models, one could envisage privacy policies following the data. such "sticky" policies, attached to personal data, would provide for clear information and indicate to data processors and controllers which privacy policy applies to the data concerned. sticky policies could facilitate the auditing and self-auditing of the lawfulness of the data processing by data controllers. in any event, research in this direction is desirable. since ami is also a mobile environment, there is a need to develop identity management systems addressing the special requirements of mobile networks. the fidis consortium has prepared a technical survey of mobile identity management. it has identified some special challenges and threats to privacy in the case of mobile networks and made certain recommendations: • location information and device characteristics both should be protected. • ease of use of the mobile identity management tools and simplified languages and interfaces for non-experts should be enhanced. • a verifiable link between the user and his digital identity has to be ensured. accordingly, privacy should also be protected in peer-to-peer relationships. 
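as an illustration of the "sticky" policies discussed above, the following minimal sketch attaches a machine-readable privacy policy to a personal data item so that a data processor can check it before processing. the policy vocabulary, field names and enforcement logic are assumptions made for this example, not the prime or fidis proposals themselves.

```python
# minimal sketch, assuming a simple policy vocabulary: a "sticky" privacy policy that
# travels with a personal data item and that a data controller checks before processing.
# purposes, field names and the check itself are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class StickyPolicy:
    allowed_purposes: List[str]          # e.g., ["service_delivery"]
    expires_at: datetime                 # retention limit attached to the data
    disclosure_allowed: bool = False     # may the data be passed to third parties?

@dataclass
class PersonalDataItem:
    subject_pseudonym: str
    value: str
    policy: StickyPolicy = field(default_factory=lambda: StickyPolicy(
        allowed_purposes=["service_delivery"],
        expires_at=datetime.utcnow() + timedelta(days=30)))

def may_process(item: PersonalDataItem, purpose: str) -> bool:
    """A data controller's check against the policy that accompanies the data."""
    return (purpose in item.policy.allowed_purposes
            and datetime.utcnow() < item.policy.expires_at)

if __name__ == "__main__":
    item = PersonalDataItem(subject_pseudonym="u-4711", value="lat=52.1,lon=4.3")
    print(may_process(item, "service_delivery"))   # True: purpose covered by the policy
    print(may_process(item, "direct_marketing"))   # False: purpose not in the policy
```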
the importance of consumer protection will grow in ambient intelligence, because of the likelihood that consumers will become more dependent on online products and services, and because product and service providers will strengthen their bargaining position through an increasing information asymmetry. without the constraints of law, ambient intelligence service providers could easily dictate the conditions of participation in new environments. consumer protection should find the proper balance in ami. consumer protection law defines the obligations of the producers and the rights of consumer and consists of a set of rules limiting the freedom to contract, for the benefit of the consumer. consumer protection law plays a role of its own, but can support the protection of privacy and data protection rights. the basis for the european framework for consumer protection rules can be found in article of the ec treaty: "in order to promote the interests of consumers and to ensure a high level of consumer protection, the community shall contribute to protecting the health, safety and economic interests of consumers, as well as to promoting their right to information, education and to organise themselves in order to safeguard their interests." consumer protection at european level is provided by (amongst others) directive / on unfair terms in consumer contracts, directive / on consumer protection in respect of distance contracts and the directive on liability for defective products (discussed below). directive / and directive / were both already discussed (in chapter , sections . . . and . . . ) . in many respects, their rules are not fitted to ami and they need to be re-adapted. this especially relates to extending the scope of protection of those directives, thereby making sure that all services and electronic means of communications and trading are covered (including those services on the world wide web not currently covered by the distance contract directive). due to the increasing complexity of online services, and due to the possibility of information overflow, it seems necessary to find legal ways to assess and recognise contracts made through the intervention of intelligent agents. is the legal system flexible enough to endorse this? moreover, the same should apply to the privacy policies and to the consent of individuals for the collection of data (because, in identity management systems, intelligent agents will decide what data are to be disclosed to whom). here is a challenge: how to technologically implement negotiability of contracts and the framework of binding law in electronic, machine-readable form? suppliers should not be allowed to set up privacy conditions which are manifestly not in compliance with the generally applicable privacy rules and which disadvantage the customer. data protection legislation and consumer protection law could constitute the minimum (or default) privacy protection level. similar rules as those currently applicable under the consumer protection of directive / on unfair terms in consumer contracts could apply. mandatory rules of consumer protection require, inter alia, that contracts be drafted in plain, intelligible language, that the consumer be given an opportunity to examine all terms, that -in cases of doubt -the interpretation most favourable to the consumer prevail. suppliers should not be allowed to unfairly limit their liability for security problems in the service they provide to the consumer. 
in this respect, more attention could be given to a judgment of the court of first instance of nanterre (france) in which the online subscriber contract of aol france was declared illegal because it contained numerous abusive clauses in its standard contractual terms (many of which infringed consumer protection law). the directive on unfair terms in consumer contracts and the directive on consumer protection in respect of distance contracts provide a broad right to information for the consumer. it should be sufficient to dispense such information in electronic form, in view of the large amount of information directed towards consumers that would have to be managed by intelligent agents. an increasing number of service providers will be involved in ami services and it may not be feasible to provide the required information about all of them. the solution may be to provide such information only about the service provider whom the consumer directly pays and who is responsible towards the consumer. joint liability would apply (for liability issues, see below). the right of withdrawal, foreseen by the directive on consumer protection with respect to distance contracts, may not apply (unless otherwise agreed) to contracts in which (a) the provision of services has begun with the consumer's agreement before the end of the seven-working-day period and (b) goods have been made to the consumer's specifications or clearly personalised or which, by their nature, cannot be returned or are liable to deteriorate or expire rapidly. in an ami world, services will be provided instantly and will be increasingly personalised. this implies that the right of withdrawal will become inapplicable in many cases. new solutions should be developed to address this problem. currently, insofar as it is not received on a permanent medium, consumers must also receive written notice in good time of the information necessary for proper performance of the contract. in ami, payments will often occur automatically, at the moment of ordering or even offering the service. temporary accounts, administered by trusted third parties, could temporarily store money paid by a consumer to a product or service provider. this can support consumer protection and enforcement, in particular with respect to fraud and for effectively exercising the right of withdrawal. this would be welcome for services that are offered to consumers in the european union by service providers located in third countries, as enforcement of consumer protection rights is likely to be less effective in such situations. the possibility of group consumer litigation can increase the level of law enforcement and, especially, enforcement of consumer protection law. often an individual claim does not represent an important economic value, so individuals are discouraged from making efforts to enforce their rights. bodies or organisations with a legitimate interest in ensuring that the collective interests of consumers are protected can institute proceedings before courts or competent administrative authorities and seek termination of any behaviour adversely affecting consumer protection and defined by law as illegal. however, as far as actions for damages are concerned, issues such as the form and availability of group litigation are regulated by the national laws of the member states as part of procedural law. the possibility to bring such a claim is restricted to a small number of states.
group litigation is a broad term which captures collective claims (single claims brought on behalf of a group of identified or identifiable individuals), representative actions (single claims brought on behalf of a group of identified individuals by, e.g., a consumer interest association) and class actions (one party or group of parties may sue as representatives of a larger class of unidentified individuals), among others. these definitions as well as the procedural shape of such claims vary in different member states. the directive on electronic commerce aims to provide a common framework for information society services in the eu member states. an important feature of the directive is that it also applies to legal persons. similar to the consumer protection legislation, the directive contains an obligation to provide certain information to customers. in view of the increasing number of service providers, it may not be feasible to provide information about all of them. providing information about the service provider whom the customer pays directly and who is responsible towards him could be a solution to the problem of the proliferating number of service providers (joint liability may also apply here). the directive should also be updated to include the possibility of concluding contracts by electronic means (including reference to intelligent agents) and to facilitate the usage of pseudonyms, trusted third parties and credentials in electronic commerce. unsolicited commercial communication is an undesirable phenomenon in cyberspace. it constitutes a large portion of traffic on the internet, using its resources (bandwidth and storage capacity) and forcing internet providers and users to adopt organisational measures to fight it (by filtering and blocking spam). spam can also constitute a security threat. the dark scenarios show that spam may become an even more serious problem than it is today. an increase in the volume of spam can be expected because of the emergence of new means of electronic communication. zero-cost models for e-mail services encourage these practices, and similar problems may be expected when mobile services pick up a zero-cost or flat-fee model. as we become increasingly dependent on electronic communication -ambient intelligence presupposes that we are almost constantly online -we become more vulnerable to spam. in the example from the first dark scenario, spamming may cause irritation and make the individual reluctant to use ambient intelligence. fighting spam may well demand even more resources than it does today as new methods of spamming -such as highly personalised and location-based advertising -emerge. currently, many legal acts throughout the world penalise unsolicited communication, but without much success. the privacy & electronic communication directive provides for an opt-in regime, applicable in the instance of commercial communication, thus inherently prohibiting unsolicited marketing. electronic communications are, however, defined as "any information exchanged or conveyed between a finite number of parties by means of a publicly available electronic communications service. this does not include any information conveyed as part of a broadcasting service to the public over an electronic communications network except to the extent that the information can be related to the identifiable subscriber or user receiving the information."
the communications need to have a commercial content in order to fall under the opt-in regulation of the privacy & electronic communication directive. consequently, this directive may not cover unsolicited, location-based advertisements with a commercial content that are broadcast to a group of people ("the public"). the impact of this exception cannot be addressed yet since location-based services are still in their infancy. a broad interpretation of electronic communications is necessary (the directive is technology-neutral). considering any unsolicited electronic communication as spam, regardless of the content and regardless of the technological means, would offer protection that is adequate in ambient intelligence environments in which digital communications between people (and service providers) will exceed physical conversations and communications. civil damages address a harm already done, and compensate for damages sustained. effective civil liability rules might actually form one of the biggest incentives for all actors involved to adhere to the obligations envisaged by law. one could establish liability for breach of contract, or on the basis of general tort rules. to succeed in court, one has to prove the damage, the causal link and the fault. liability can be established for any damages sustained, as far as the conditions of liability are proven and so long as liability is not excluded (as in the case of some situations in which intermediary service providers are involved ). however, in ami, to establish such proof can be extremely difficult. as we have seen in the dark scenarios, each action is very complex, with a multiplicity of actors involved, and intelligent agents acting for service providers often undertake the action or decision causing the damage. who is then to blame? how easy will it be to establish causation in a case where the system itself generates the information and undertakes the actions? how will the individual deal with such problems? the individual who is able to obtain damages addressing his harm in an efficient and quick way will have the incentive to take an action against the infringer, thus raising the level of overall enforcement of the law. such an effect would be desirable, especially since no state or any enforcement agency is actually capable of providing a sufficient level of control and/or enforcement of the legal rules. the liability provisions of the directive on electronic commerce can become problematic. the scope of the liability exceptions under the directive is not clear. the directive requires isps to take down the content if they obtain knowledge on the infringing character of the content (notice-and-take-down procedure). however, the lack of a "put-back" procedure (allowing content providers whose content has been wrongfully alleged as illegal, to re-publish it on the internet) or the verification of take-down notices by third parties is said to possibly infringe freedom of speech. it is recommended that the liability rules be strengthened and that consideration be given to means that can facilitate their effectiveness. the directive provides for exceptions to the liability of intermediary service providers (isps) under certain conditions. 
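the notice-and-take-down mechanism, and the "put-back" step whose absence the text criticises, can be sketched as a simple workflow. the states, transitions and counter-notice rule below are illustrative assumptions, not the procedure laid down by the directive.

```python
# illustrative sketch of a notice-and-take-down workflow with a hypothetical "put-back"
# step of the kind the text says is missing. states and transitions are assumptions for
# illustration, not the legal procedure itself.
from enum import Enum, auto

class ContentState(Enum):
    PUBLISHED = auto()
    TAKEN_DOWN = auto()
    RESTORED = auto()

def handle_takedown_notice(state: ContentState, notice_substantiated: bool) -> ContentState:
    # on obtaining knowledge of allegedly illegal content, the host removes or disables it
    if state is ContentState.PUBLISHED and notice_substantiated:
        return ContentState.TAKEN_DOWN
    return state

def handle_counter_notice(state: ContentState, provider_shows_lawful: bool) -> ContentState:
    # hypothetical put-back: content wrongly alleged to be illegal is re-published
    if state is ContentState.TAKEN_DOWN and provider_shows_lawful:
        return ContentState.RESTORED
    return state

if __name__ == "__main__":
    s = ContentState.PUBLISHED
    s = handle_takedown_notice(s, notice_substantiated=True)
    s = handle_counter_notice(s, provider_shows_lawful=True)
    print(s)  # ContentState.RESTORED
```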
in the case of hosting, for example, a service provider is not liable for the information stored at the request of a recipient of the service, on condition that (a) the provider does not have actual knowledge of illegal activity or information and, as regards claims for damages, is not aware of facts or circumstances from which the illegal activity or information is apparent or (b) the provider, upon obtaining such knowledge or awareness, acts expeditiously to remove or to disable access to the information. see also section . . . in addition to the general considerations regarding liability presented in this section, we also draw attention to the specific problems of liability for infringement of privacy, including security infringements. currently, the right to remedy in such circumstances is based on the general liability (tort) rules. the data protection directive refers explicitly to liability issues, stating that an immediate compensation mechanism shall be developed in case of liability for an automated decision based on inadequate profiles and refusal of access. however, it is not clear whether this could be understood as a departure from general rules and a strengthening of the liability regime. determining the scope of liability for privacy breaches and security infringements might also be problematic. in any case, proving the elements of a claim and meeting the general tort law preconditions (damage, causality and fault) can be very difficult. opacity instruments, as discussed above, which aim to prohibit interference with one's privacy, can help to provide some clarity as to the scope of the liability. in addition, guidelines and interpretations on liability would be generally welcome, as would standards for safety measures, to provide for greater clarity and thus greater legal certainty for both users and undertakings. as already mentioned, it can be difficult for a user to identify the party actually responsible for damages, especially if he or she does not know which parties were actually involved in the service and/or software creation and delivery. the user should be able to request compensation from the service provider with whom he or she had direct contact in the process of the service. joint and several liability (with the right to redress) should be the default rule in the case of providers of ami services, software, hardware or other products. the complexity of the actions and multiplicity of actors justify such a position. moreover, this recommendation should be supplemented by the consumer protection recommendation requiring the provision of consumer information by the service or product provider having the closest connection with the consumer, as well as the provision of information about individual privacy rights (see above) in a way that would enable the individual to detect a privacy infringement and have a better chance of proving it in court. there is also a need to consider the liability regime in conjunction with other provisions of law, in particular the directive on liability for defective products. whether software falls under that directive's liability regime depends, in fact, on national laws and how the directive has been implemented. the directive applies to products defined as movables, which might suggest that it refers to tangible goods. software not incorporated into a tangible medium (available online) will not satisfy such a definition. there are a growing number of devices (products) with embedded software (e.g., washing machines, microwaves, possibly rfids), which fall under the directive's regime.
this trend will continue; the software will be increasingly crucial for the proper functioning of the products themselves, services and whole environments (smart car, smart home). should the distinction between the two regimes remain? strict liability is limited to death or personal injury, or damage to property intended for private use. the damage relating to the product itself, to the product used in the course of business and the economic loss will not be remedied under the directive. currently, defective software is most likely to cause financial loss only, thus the injured party would not be able to rely on provisions of the directive in seeking redress. however, even now in some life-saving applications, personal injury dangers can emerge. such will also be the case in the ami world (see, e.g., the first and second dark scenarios in which software failures cause accidents, property damage and personal injury) so the importance and applicability of the directive on liability for defective products will grow. the increasing dependence on software applications in everyday life, the increasing danger of sustaining personal injury due to a software failure and, thus, the growing concerns of consumers justify strengthening the software liability regime. however, the directive allows for a state-of-the-art defence. under this defence, a producer is not liable if the state of scientific and technical knowledge at the time the product was put into circulation was not such that the existence of the defect would be discovered. it has been argued that the availability of such a defence (member states have the discretion whether to retain it in national laws ) will always be possible since, due to the complexity of "code", software will never be defect-free. these policy and legal arguments indicate the difficulty in broadening the scope of the directive on liability for defective products to include software. reversal of the burden of proof might be a more adequate alternative solution, one that policymakers should investigate. it is often difficult to distinguish software from hardware because both are necessary and interdependent to provide a certain functionality. similarly, it may be difficult to draw the line between software and services. transfer of information via electronic signals (e.g., downloaded software) could be regarded as a service. some courts might also be willing to distinguish between mass-market software and software produced as an individual product (on demand). ami is a highly personalised environment where software-based services will surround the individual, thus the tendency to regard software as a service could increase. strict liability currently does not apply to services. service liability is regulated by national laws. extending such provision to services may have far-reaching consequences, not only in the ict field. the ami environment will need the innovation and creativity of service providers; therefore, one should refrain from creating a framework discouraging them from taking risks. however, some procedural rules could help consumers without upsetting an equitable balance. the consumer, usually the weaker party in a conflict with the provider, often has difficulty proving damages. reversing the burden of proof might facilitate such proof. most national laws seem to provide a similar solution. since national law regulates the issue of service liability, differences between national regulations might lead to differences in the level of protection. 
the lack of a coherent legal framework for service liability in europe is regrettable. learning from the differences and similarities between the different national legal regimes, as indicated in the analysis of national liability systems for remedying damage caused by defective consumer services, is the first step in remedying such a situation. reversing the burden of proof is less invasive than the strict liability rules where the issue of fault is simply not taken into consideration. such a solution has been adopted in the field of the non-discrimination and intellectual property laws, as well as in national tort regimes. an exception to the general liability regime is also provided in directive / /ec on the community framework for electronic signatures. in that directive, the certification service provider is liable for damage caused by non-compliance with obligations imposed by the directive unless he proves he did not act negligently. technology could potentially remedy the information asymmetry between users and ami service suppliers or data processors. the latter could have an obligation to inform consumers what data are processed, how and when and what is the aim of such activities (thus actually fulfilling their obligations under the data protection directive). this information could be stored and managed by an intelligent agent on behalf of the user, who is not able to deal with such information flow. however, the user would have the possibility to use such information to enforce his rights (e.g., to prove causation). other technological solutions (e.g., watermarking) could also help the user prove his case in court. in many cases, the damage sustained by the individual will be difficult to assess in terms of the economic value or too small to actually provide an incentive to bring an action to court. however, acts causing such damage can have overall negative effects. spam is a good example. fixed damages, similar to the ones used in the united states, or punitive damages could remedy such problems (some us state laws provide for fixed damages such as us$ for each unsolicited communication without the victim needing to prove such damage). they would also provide clarity as to the sanctions or damages expected and could possibly have a deterrent effect. the national laws of each member state currently regulate availability of punitive damages; a few countries provide for punitive and exemplary damages in their tort systems. the universal service directive provides for a minimum of telecommunication services for all at an affordable price as determined by each member state. prices for universal services may depart from those resulting from market conditions. such provisions aim at overcoming a digital divide and allowing all to enjoy a certain minimum of electronic services. the directive is definitely a good start in shaping the information society and the ami environment. the development of new technologies and services generates costs, both on individuals and society. many high-added-value ami services will be designed for people who will be able to pay for them. thus, ami could reinforce the inequalities between the poor and rich. everyone should be able to enjoy the benefits of ami, at least at a minimum level. the commission should consider whether new emerging ami services should be provided to all. some services (e.g., emergency services) could even be regarded as public and provided free of charge or as part of social security schemes. 
as shown in scenario , ami might cause major problems for current intellectual property protection, because ami requires interoperability of devices, software, data and information, for example, for crucial information systems such as health monitoring systems used by travelling seniors. there is also a growing need to create means of intellectual property protection that respect privacy and allow for anonymous content viewing. intellectual property rights give exclusive rights over databases consisting of personal data and profiles, while the data subjects do not have a property right over their own information collected. we discuss these issues below. the directive on the legal protection of databases provides for a copyright protection of databases, if they constitute the author's own intellectual creation by virtue of his selection or arrangement of their content. the directive also foresees a sui generis protection if there has been a qualitatively and/or quantitatively substantial investment in either the acquisition, verification or presentation of the content. sui generis protection "prevents the extraction and/or the re-utilization of the whole or of a substantial part, evaluated qualitatively and/or quantitatively, of the contents of that database". this implies that the database maker can obtain a sui generis protection of a database even when its content consists of personal data. although the user does not have a property right over his personal data, the maker of a database can obtain an exclusive right over this type of data. hence, a profile built on the personal data of a data subject might constitute somebody else's intellectual property. the right to information about what knowledge has been derived from one's data could, to some extent, provide a safeguard against profiling. we recommend that further research be undertaken on how to reconcile this with intellectual property rights. the copyright directive provides for the protection of digital rights management systems (drms) used to manage the licence rights of works that are accessed after identification or authentication of a user. but drms can violate privacy, because they can be used for processing of personal data and constructing (group) profiles, which might conflict with data protection law. less invasive ways of reconciling intellectual property rights with privacy should be considered. this not only relates to technologies but also to an estimation of the factual economic position of the customer. for example, the general terms and conditions for subscribing to an interactive television service -often a service offered by just a few players -should not impose on customers a condition that personal data relating to their viewing behaviour can be processed and used for direct marketing or for transfer to "affiliated" third parties. as the article data protection working party advises in its working document on data protection issues related to intellectual property rights (http://ec.europa.eu/justice_home/fsj/privacy), greater attention should be devoted to the use of pets within drm systems (see also the specific recommendation regarding security). in particular, it advises that tools be used to preserve the anonymity of users and it recommends the limited use of unique identifiers. use of unique identifiers allows profiling and tagging of a document linked to an individual, enabling tracking for copyright abuses.
such tagging should not be used unless necessary for the performance of the service or with the informed consent of the individual. all relevant information required under data protection legislation should be provided to users, including the categories of collected information, the purpose of collection and information about the rights of the data subject. the directive on the legal protection of software obliges member states to provide appropriate remedies against a person committing any act of putting into circulation, or the possession for commercial purposes of, any means the sole intended purpose of which is to facilitate the unauthorised removal or circumvention of any technical device which may have been applied to protect a computer program. the software directive only protects against the putting into circulation of such devices and not against the act of circumventing as such. it would be advisable to have a uniform solution in that respect. drms can also violate consumer rights, by preventing the lawful enjoyment of the purchased product. such restrictions might, inter alia, prevent the user from making backups or private copies, downloading music to portable devices or playing music on certain devices, or constitute geographical restrictions such as the regional coding of dvds. the anti-circumvention provisions should then be coupled with better enforcement of consumer protection provisions regarding information disclosure to the consumer. the consumer should always be aware of any technological measures used to protect the content he wishes to purchase, and of restrictions in the use of such content as a consequence of technological protection (he should also be informed about the technological consequences of drms for his devices, if any, e.g., about installing the software on his computer). product warnings and consumer notifications should always be in place and should aim to raise general consumer awareness about drms. as interoperability is a precondition for ami, ami would have to lead to limitations on exclusive intellectual property rights, and one could argue that software packages should be developed so that they are interoperable with each other. a broader scope of the decompilation right under software protection would be desirable, also from the perspective of privacy protection. the ec's battle with microsoft was in part an attempt to strengthen the decompilation right with the support of competition law. currently, there is no international or european framework determining jurisdiction in criminal matters; thus, national rules are applicable. the main characteristics of the legal provisions in this matter have already been discussed in chapter , section . . . ; however, it seems useful to refer here to some of our earlier conclusions. the analysis of the connecting factors for forum selection (where a case is to be heard) shows that it is almost always possible for a judge to declare himself competent to hear a case. certain guidelines have already been developed, both in the context of the cybercrime convention as well as the eu framework decision on attacks against information systems, on how to resolve the issue of concurrent competences. according to the cybercrime convention, "the parties involved shall, where appropriate, consult with a view to determining the most appropriate jurisdiction for prosecution."
the eu framework decision on attacks against information systems states, "where an offence falls within the jurisdiction of more than one member state and when any of the states concerned can validly prosecute on the basis of the same facts, the member states concerned shall co-operate in order to decide which of them will prosecute the offenders with the aim, if possible, of centralizing proceedings in a single member state." legal experts and academics should follow any future developments in application of those rules that might indicate whether more straightforward rules are needed. the discussion on the green paper on double jeopardy should also be closely followed. scenario ("a crash in ami space") turns on an accident involving german tourists in italy, while travelling with a tourist company established in a third country. it raises questions about how ami might fit into a legal framework based on territorial concepts. clear rules determining the law applicable between the parties are an important guarantee of legal certainty. private international law issues are dealt at the european level by the rome convention on the law applicable to contractual obligations as well as the rome ii regulation on the law applicable to non-contractual obligations, the brussels regulation on jurisdiction and enforcement of judgments. the regulation on jurisdiction and enforcement of judgments in civil and commercial matters covers both contractual and non-contractual matters. it also contains specific provisions for jurisdiction over consumer contracts, which aim to protect the consumer in case of court disputes. these provisions should be satisfactory and workable in an ami environment. however, provisions of this regulation will not determine the forum if the defendant is domiciled outside the european union. also, the provisions on the jurisdiction for consumer contracts apply only when both parties are domiciled in eu member states. although the regulation provides for a forum if the dispute arises from an operation of a branch, agency or other establishment of the defendant in a member state, a substantial number of businesses offering services to eu consumers will still be outside the reach of this regulation. this emphasises again the need for a more global approach beyond the territory of the member states. clarification and simplification of forum selection for non-consumers would also be desirable. the complexity of the business environment, service and product creation and delivery would justify such approach. it would be of special importance for smes. currently, the applicable law for contractual obligations is determined by the rome convention. efforts have been undertaken to modernise the rome convention and replace it with a community instrument. recently, the commission has presented a proposal for a regulation of the european parliament and the council on the law applicable to contractual obligations. the provisions of the rome convention refer to contractual issues only. recently, the so-called rome ii regulation has been adopted, which provides for rules applicable to non-contractual obligations. the rome convention on law applicable to contractual obligations relies heavily on the territorial criterion. it refers to the habitual residence, the central administration or place of business as the key factors determining the national law most relevant to the case. but it services can be supplied at a distance by electronic means. 
the ami service supplier could have his habitual residence or central administration anywhere in the world and he could choose his place of residence (central administration) according to how beneficial is the national law of a given country. the habitual residence factor has been kept and strengthened in the commission's proposal for a new regulation replacing the rome convention (rome i proposal, article ). the new proposal for the rome i regulation amends the consumer protection provisions. it still relies on the habitual residence of the consumer, but it brings the consumer choice of contract law in line with the equivalent provisions of the brussels regulation, and broadens the scope of the application of its provisions. the commission proposal for the regulation on the law applicable to contractual obligations is a good step forward. the rome ii regulation on law applicable to non-contractual obligations applies to the tort or delict, including claims arising out of strict liability. the basic rule under the regulation is that a law applicable should be determined on the basis of where the direct damage occurred (lex loci damni). however, some "escape clauses" are foreseen and provide for a more adequate solution if more appropriate in the case at hand. this allows for flexibility in choosing the best solution. special rules are also foreseen in the case of some specific torts or delicts. uniform rules on applicable law at the european level are an important factor for improving the predictability of litigation, and thus legal certainty. in that respect, the new regulation should be welcomed. the regulation will apply from january . some other legislative acts also contain rules on applicable law. most important are provisions in the data protection directive. this directive also chooses the territorial criterion to determine the national law applicable to the processing of data, which is the law of the place where the processing is carried out in the context of an establishment of the data controller. such a criterion, however, might be problematic: more than one national law might be applicable. moreover, in times of globalisation of economic activity, it is easy for an undertaking to choose the place of establishment, which offers the most liberal regime, beyond the reach of european data protection law. in situations when a non-eu state is involved, the directive points out a different relevant factor, the location of the equipment used, thus enabling broader application of the eu data protection directive. as recital of the proposal states, these amendments aim to take into account the developments in distance selling, thus including ict developments. article ( ) of the directive stipulates: each member state shall apply the national provisions it adopts pursuant to this directive to the processing of personal data where: (a) the processing is carried out in the context of the activities of an establishment of the controller on the territory of the member state; when the same controller is established on the territory of several member states, he must take the necessary measures to ensure that each of these establishments complies with the obligations laid down by the national law applicable. 
the directive stipulates in article ( ) that the national law of a given member state will apply when the controller is not established on community territory and, for purposes of processing personal data, makes use of equipment, automated or otherwise, situated on the territory of the said member state, unless such equipment is used only for purposes of transit through the territory of the community. the article data protection working party interprets the term "equipment" as referring to all kinds of tools or devices, including personal computers, which can be used for many kinds of processing operations. the definition could be extended to all devices with a capacity to collect data, including sensors, implants and maybe rfids. (active rfid chips can also collect information. they are expensive compared to passive rfid chips but they are already part of the real world.) see article data protection working party, working document on determining the international application of eu data protection law to personal data processing on the internet by non-eu based websites ( / /en/final wp ), may . http://ec.europa.eu/justice_home/ fsj/privacy/ docs/wpdocs/ /wp _en.pdf as we see, in all these cases, the territorial criterion (establishment) prevails. we should consider moving towards a more personal criterion, especially since personal data are linked with an identity and a state of the data subject (issues which are regulated by the national law of the person). such a criterion could be more easily reconciled with the ami world of high mobility and without physical borders. the data subject will also be able to remain under the protection of his/her national law, and the data controller/service provider will not have the possibility of selecting a place of establishment granting him the most liberal treatment of law. data transfer is another issue highlighting the need for international co-operation in the creation of a common playing field for ami at the global level. what is the sense of protecting data in one country if they are transferred to a country not affording comparable (or any) safeguards? also, the globalisation of economic and other activities brings the necessity of exchanging personal data between the countries. the data protection directive provides a set of rules on data transfer to third countries. data can be transferred only to countries offering an adequate level of protection. the commission can conclude agreements (e.g., the safe harbour agreement) with third countries which could help ensure an adequate level of protection. the commission can also issue a decision in that respect. however, the major problem is enforcement of such rules, especially in view of the fact that some "safeguards" rely on self-regulatory systems whereby companies merely promise not to violate their declared privacy policies (as is the case with the safe harbour agreement). attention by the media and consumer organisations can help in the enforcement of agreed rules. the problem of weak enforcement also emphasises the need to strengthen international co-operation with the aim of developing new enforcement mechanisms. providing assistance in good practices in countries with less experience than the european union would also be useful. such a solution has the advantage of covering, with the protection of eu legislation, third country residents whose data are processed via equipment in the eu. 
a broad interpretation of the term "equipment" would help guarantee the relatively broad application of such a rule (see above). as a result, in most cases, application of the domicile/nationality rule or the place of the equipment used as the relevant factor would have the same result. however, we can envisage the processing of data not using such equipment, for example, when the data are already posted online. then the eu law could not be applicable. see chapter , section . . . .
the directive on liability for defective products provides for liability without fault (strict liability). as a recital to the directive states, strict liability shall be seen as "the sole means of adequately solving the problem, peculiar to our age of increasing technicality, of a fair apportionment of the risks inherent in modern technological production." we should keep this reasoning in mind since it seems even more adequate when thinking about liability issues in ami. most of the "products" offered in the ami environment will consist of software-based, highly personalised services. we should then think about adjusting the liability rules to such an environment. if it is difficult to distinguish between hardware and software from a technological perspective, why should we draw such a distinction from a legal perspective? an explicit provision providing for strict liability for software can be considered. nevertheless, such a proposal is controversial as it is said to threaten industry. since software is never defect-free, strict liability would expose software producers unfairly to claims for damages. thus, the degree of required safety of the programmes is a policy decision. strict liability could also impede innovation, especially the innovation of experimental and lifesaving applications.
others argue that strict liability might increase software quality by making producers more diligent, especially in properly testing their products. despite these policy considerations, there are some legal questions about the applicability of strict liability to software. the first question is whether software can be regarded as "goods" or "products" and whether it falls under the strict liability regime of the directive. actions allowing consolidation of the small claims of individuals could also be examined (i.e., group consumer actions). non-discrimination law can regulate and forbid the unlawful usage of processed data, for example, in making decisions or undertaking other actions on the basis of certain characteristics of the data subjects. this makes non-discrimination law of increasing importance for ami. the creation of profiles does not fall under non-discrimination law (potential use), but decisions based on profiling (including group profiling based on anonymous data) that affect the individual might provide the grounds for application of the non-discrimination rules. they apply in the case of identifiable individuals as well as to anonymous members of a group. profiles or decisions based on certain criteria (health data, nationality, income, etc.) may lead to discrimination against individuals. it is difficult to determine when it is objectively justified to use such data and criteria, and when they are discriminatory (e.g., the processing of health-related data by insurance companies leading to decisions to raise premiums). further legislative clarity would be desirable. however, certain negative dimensions of profiling still escape the regime of non-discrimination law (e.g., manipulation of individuals' behaviour by targeted advertising). here no remedies have been identified. the non-discrimination rules should be read in conjunction with the fairness principle of data protection law. the application of the two may have similar aims and effects; they might also be complementary: can the limitations of non-discrimination law be justified if they are regarded as not fair, as in the example of insurance companies raising premiums after processing health data? they can address a range of actions undertaken in ami, such as dynamic pricing or refusal to provide services (e.g., a refusal of service on the grounds that no information (profile) is available could be regarded as discriminatory). non-discrimination rules should be taken into consideration at the design stage of technology and service development. however, such issues might also be addressed by data protection legislation. in the opinion of gutwirth and de hert, principles of data protection are appropriate to cope with profiling.

key: cord- -brrrg fr authors: jayasinghe, ravindri; ranasinghe, sonali; jayarajah, umesh; seneviratne, sanjeewa title: quality of online information for the general public on covid- date: - - journal: patient educ couns doi: . /j.pec. . . sha: doc_id: cord_uid: brrrg fr objectives: to analyse the quality of information included in websites aimed at the public on covid- . methods: yahoo!, google and bing search engines were browsed using selected keywords on covid- . the first websites from each search engine for each keyword were evaluated. validated tools were used to assess readability [flesch reading ease score (fres)], usability and reliability (lida tool) and quality (discern instrument). non-parametric tests were used for statistical analyses. results: eighty-four eligible sites were analysed.
the median fres score was . (range: . - . ). the median lida usability and reliability scores were (range: - ) and (range: - ), respectively. a low (< %) overall lida score was recorded for . % (n = ) of the websites. the median discern score was . (range: - ). a discern score of ≤ % was found in ( . %) websites. the discern score was significantly associated with the lida usability and reliability scores (p < . ) and the fres score (p = . ). conclusion: the majority of websites on covid- for the public had moderate to low scores with regard to readability, usability, reliability and quality. practice implications: prompt strategies should be implemented to standardize online health information on covid- during this pandemic to ensure the general public has access to good-quality, reliable information. the coronavirus covid- pandemic has become the greatest global health crisis of the st century [ ]. during this pandemic, the demand for information on covid- has skyrocketed. information such as the latest news updates on the pandemic, its symptoms, prevention and mechanism of transmission is highly sought by the public [ ]. on the other hand, free access to information, especially through social media, which is accessed by the majority [ ], has led to an increase in misinformation and panic associated with covid- [ ]. although high-quality health information is known to be related to lower stress levels and better psychological health [ ], previous studies have shown online information on many medical disorders to be of substandard quality [ , ]. a previous study on websites related to covid- reported substandard-quality information that could potentially mislead the public [ ]. however, that study used a limited search strategy and did not assess some important areas, including the usability and reliability of the information. therefore, this topic remains a knowledge gap in covid- research [ ]. we consequently conducted this study to analyse current covid- websites targeting the general public in terms of quality, usability, readability and reliability, using a wide search strategy and validated instruments. yahoo!, google and bing were searched using the keywords "novel coronavirus", "sars-cov- ", "severe acute respiratory syndrome coronavirus- ", "covid- " and "coronavirus". the search was performed during the first week of may . the details of the search strategy and the piloting process are provided in the supplementary material (file s ) [ ]. two independent investigators with previous experience of conducting similar studies assessed the selected websites [ , ]. prior to the assessment, a pilot run was conducted to ensure uniformity and accuracy. information on symptoms, investigations, public health measures and available treatment modalities was collected. the accuracy of the content was assessed using the national institute for health and care excellence (nice) guidelines on covid- [ ]. a rating was given as all or none based on congruence with the guidelines. validated instruments were used to assess the quality of the websites. readability was assessed using the flesch reading ease score (fres) [ ]. the lida instrument was used to analyse the content and the design of the websites using the usability and reliability domains [ , ]. the quality of the content was assessed using the discern questionnaire, which has questions in two separate groups [ ].
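for illustration, the flesch reading ease score mentioned above can be computed from the average sentence length and the average number of syllables per word. the sketch below uses the standard published formula; the syllable counter is a rough heuristic assumed for the example and is not the exact tool used in the study.

```python
# minimal sketch of how a flesch reading ease score (fres) can be computed. the formula
# (206.835 - 1.015 * words/sentences - 84.6 * syllables/words) is the standard published
# one; the vowel-group syllable counter is an illustrative approximation.
import re

def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / len(sentences)   # average sentence length
    asw = syllables / len(words)        # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

if __name__ == "__main__":
    sample = "Wash your hands often. Keep a safe distance from people who are coughing."
    print(round(flesch_reading_ease(sample), 1))  # higher scores indicate easier text
```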
the detailed assessment criteria and the scoring system are included in the supplementary material (file s ). a website was classified as governmental if it was maintained by the country's public health authority. websites managed by private institutions, non-governmental organizations, or voluntary institutions independent of the government were considered non-governmental. online health-related websites are standardized in terms of their credibility and reliability by online certification sites; we chose the health on the net code of conduct (hon-code), which is the oldest and most widely used of the available quality evaluation tools [ ] . data analysis was performed using spss (version- ) software, and the associations were determined with nonparametric tests. a p-value of < . was considered statistically significant. of the retrieved websites, were excluded and the remaining websites were included in the analysis. the characteristics of the websites are presented in table . half ( %) were governmental websites and only . % (n= ) were hon-accredited websites. the median fres was . (range: . - . , th - th grade readability level), which is classified as fairly difficult to read. only three websites ( . %) had a readability score above (equivalent to th grade), which is the recommended standard. the overall median lida score was (range: - ), while the median lida usability and reliability scores were (range: - ) and (range: - ), respectively. the median discern score was . (range: - ), which classifies websites as being of "fair quality" (excellent= - ; good= - ; fair= - ; poor= - ; very poor= - ). however, the top websites (table a ) were of excellent quality. significant correlations were observed between the discern score and the overall lida score as well as the lida usability and reliability scores. pertaining to the currency of information, only ( . %) publishers stated the date of publication. most websites (n= , . %) did not declare their sources of evidence. this was further established by the "low" median reliability score of . nevertheless, the authors had included a disclosure statement in most (n= , . %) websites. more than half of the websites failed to discuss the available treatment options (n= , . %), benefits or risks (n = , . %), and effects of no treatment (n = , . %). furthermore, potential complications and prognosis were stated in only ( . %) and ( %) websites, respectively. this study has shown that most websites providing online health information on covid- are still of suboptimal quality, apart from a few credible sources of good-quality health information. nevertheless, the websites ranked among the top according to the discern score (table a. ) had high scores, indicating the potential for publishing credible, high-quality information online which would benefit the public. misinformation is a major concern during this pandemic, as people fail to spend adequate time critically analysing online information. this, in turn, causes panic, which ranges from hoarding medical supplies and panic shopping to using drugs without prescription, with negative social and medical consequences [ ] . therefore, measures implemented by the responsible authorities to ensure the quality and accuracy of online information may help negate these adverse consequences.
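as a hedged illustration of the association analysis reported above (the paper states only that non-parametric tests were run in spss), the following sketch computes a spearman rank correlation between per-website discern and lida usability scores; all values are hypothetical and the choice of spearman's rho is an assumption rather than the study's documented procedure.

```python
# minimal sketch (hypothetical data): rank-based association between
# per-website discern scores and lida usability scores. the original
# analysis used spss; spearman's rho is assumed here for illustration.
from scipy.stats import spearmanr

discern        = [45, 62, 30, 71, 55, 40, 68, 33]  # hypothetical scores
lida_usability = [38, 44, 25, 50, 41, 30, 47, 28]  # hypothetical scores

rho, p_value = spearmanr(discern, lida_usability)
print(f"spearman rho = {rho:.2f}, p = {p_value:.3f}")
```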
stating the methods of content production, with the names of the contributing authors, may help increase the credibility of online health information, while displaying the date of publication provides an idea of the currency of the information. the absence of such information in over half of the websites was a major drawback, especially for covid- , where new information is generated almost daily. health authorities should therefore ensure that patient information websites provide the above information, and should certify websites based on such details so that the public can obtain information from trusted sources [ ] . most users of the worldwide web have only an average level of education and reading skills [ ] . guidance from the national institutes of health (nih) has shown that readability should be below the seventh-grade level for the lay public to adequately understand the content [ ] . however, the median readability level was found to be equivalent to th - th grade readability. such complexity in the readability of information may increase the risk of misunderstanding or misinterpretation. writing in short sentences, using the active voice, using a -point or larger font size, using illustrations and non-textual media as appropriate, and accompanying explanations with examples would be helpful in overcoming this problem [ ] . so far, only a limited number of studies have been done to assess the quality of health information websites related to covid- . the study by cuan-baltazar et al., conducted prior to february , reported poor-quality information, with approximately % of included websites having low discern scores [ ] . our study, done three months later, shows similar results with only a minimal improvement in the quality of information. furthermore, the cuan-baltazar study had several limitations, which include the limited search strategy and the non-inclusion of key quality parameters such as readability. furthermore, ( . %) of the sites they had included were online news sites, which are not considered patient information websites. in that study, the hon-code seal was present in only . % (n= ) of websites, whereas in our study . % of the sites were hon-code certified. there were several limitations in this study. although the most popular search engines were used under default settings, they may produce variable results depending on many factors, including geographical location and the popularity of websites at a given point in time. the algorithms unique to those search engines are subject to constant change, and therefore the exact results of our study may not be reproducible. however, we believe the general patterns observed in our study are valid. this study has shown that the quality, readability, usability, and reliability of covid- information on the majority of websites providing health information to the general public are substandard. to improve the credibility of the content, websites should state the methods of content production and display the date of publication to give an idea of the currency of the information. to improve the readability of the content, websites should incorporate more non-textual media, write in short sentences, use the active voice and use larger font sizes. patient information websites should also display scores of reliability, quality, and readability as guidance for their users.
furthermore, it is vital for medical regulatory authorities and the government to impose regulations to ensure quality and to prevent the spread of misinformation. availability of data and materials: the data used in the above study can be made available on reasonable request from the corresponding author. ethics approval and consent to participate: unnecessary in this type of study. informed consent and patient details: not applicable in this type of study. competing interests: all authors declare that there are no competing interests.
references:
novel coronavirus infection (covid- ) in humans: a scoping review and meta-analysis
demand for health information on covid- among vietnamese
a longitudinal study on the mental health of general population during the covid- epidemic in china
mental health strategies to combat the psychological impact of covid- beyond paranoia and panic
immediate psychological responses and associated factors during the initial stage of the coronavirus disease (covid- ) epidemic among the general population in china
quality and scientific accuracy of patient-oriented information on the internet on minimally invasive surgery for colorectal cancer
quality of patient-oriented web-based information on oesophageal cancer
misinformation of covid- on the internet: infodemiology study
(covid- ) pandemic: a global analysis of literature
how do consumers search for and appraise health information on the world wide web? qualitative study using focus groups, usability tests, and in-depth interviews
assessment of the quality of patient-oriented information over internet on testicular cancer
quality of the patient-oriented information on thyroid cancer in the internet
coronavirus (covid- )
the flesch reading ease and flesch-kincaid grade level
the minervation validation instrument for healthcare websites (lida tool)
is the lida website assessment tool valid?
the discern instrument
quality of patient health information on the internet: reviewing a complex and evolving landscape
as disinfectant use soars to fight coronavirus, so do accidental poisonings
coverage of health information by different sources in communities: implication for covid- epidemic response
the readability of pediatric patient education materials on the world wide web
acknowledgements: none declared. this research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
key: cord- - scnkiy authors: hackl, w. o.; hoerbst, a. title: trends in clinical information systems research in : an overview of the clinical information systems section of the international medical informatics association yearbook date: - - journal: yearb med inform doi: . /s- - sha: doc_id: cord_uid: scnkiy objective : to give an overview of recent research and to propose a selection of best papers published in in the field of clinical information systems (cis). method : each year, we apply a systematic process to retrieve articles for the cis section of the imia yearbook of medical informatics. for six years now, we have used the same query to find relevant publications in the cis field. each year we retrieve more than , papers. as cis section editors, we categorize the retrieved articles in a multi-pass review to distill a pre-selection of candidate best papers. then, yearbook editors and external reviewers assess the selected candidate best papers. based on the review results, the imia yearbook editorial committee chooses the best papers during the selection meeting.
we used text mining, and term co-occurrence mapping techniques to get an overview of the content of the retrieved articles. results : we carried out the query in mid-january and retrieved a de-duplicated result set of , articles from , different journals. this year, we nominated papers as candidate best papers, and three of them were finally selected as best papers in the cis section. as in previous years, the content analysis of the articles revealed the broad spectrum of topics covered by cis research. conclusions : we could observe ongoing trends, as seen in the last years. patient benefit research is in the focus of many research activities, and trans-institutional aggregation of data remains a relevant field of work. powerful machine-learning-based approaches, that use readily available data now often outperform human-based procedures. however, the ethical perspective of this development often comes too short in the considerations. we thus assume that ethical aspects will and should deliver much food for thought for future cis research. the clinical information systems (cis) subfield of biomedical informatics is multi-faceted and complex. as section editors of the cis section of the international medical informatics association (imia) yearbook, we could observe ongoing research trends in this domain over the last years [ ] [ ] [ ] [ ] . trans-institutional information exchange and data aggregation are vital research fields. clinical information systems are not just tools for health professionals. the patient increasingly moved in the center of research activities during the last years, and it has often been shown that cis can create significant benefits for patients. so, during the last years, we identified the trend of moving away from clinical documentation to patient-focused knowledge generation and support of informed decision. in the analysis performed in the previous issue of the imia yearbook of medical informatics, we concluded that this trend was gaining momentum by the application of new or already known but, due to technological advances, now applicable methodological approaches. we had also found inspiring work that dealt with data-driven management of processes and the use of blockchain technology to support data aggregation beyond institutional boundaries [ ] . these trends are ongoing in our recent analysis. we found a lot of high-quality contributions, but, on the other hand, we did not see outstanding innovations in the cis field. as "ethics in health informatics" is the special topic for the issue of the imia yearbook of medical informatics, we intensified our focus on this topic when screening the cis publications. but we had to realize that ethical aspects seem to be only a side issue as a research topic in the cis domain. the selection process used in the cis section is stable now for six years. we described it in detail in [ ] , and the full queries are available upon request. we carried out the queries in mid-january . this year, the cis search result set comprised , unique papers. from these papers, we retrieved , from pubmed and found additional publications (de-duplicated) in web of science™. the resulting articles have been published in , different journals. table depicts the top- journals with the highest numbers of resulting articles. again, we used rayyan, an online systematic review tool [ ] , to carry out the multi-pass review. as section editors, we both (woh, ah) independently reviewed all , publications. 
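the agreement between these two independent screening passes is reported in the next paragraph as a raw agreement rate together with cohen's kappa. as a minimal sketch of how such a kappa can be derived from the two editors' exclude/include decisions, the counts below are hypothetical and not the review's actual figures.

```python
# minimal sketch (hypothetical counts): cohen's kappa for two reviewers'
# exclude/include decisions during title-and-abstract screening.
def cohens_kappa(both_excl, r1_excl_r2_incl, r1_incl_r2_excl, both_incl):
    n = both_excl + r1_excl_r2_incl + r1_incl_r2_excl + both_incl
    observed = (both_excl + both_incl) / n
    # chance agreement from the marginal exclude proportions of each reviewer
    r1_excl = (both_excl + r1_excl_r2_incl) / n
    r2_excl = (both_excl + r1_incl_r2_excl) / n
    expected = r1_excl * r2_excl + (1 - r1_excl) * (1 - r2_excl)
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(2400, 60, 45, 95), 2))  # illustrative values only
```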
ineligible articles were excluded based on their titles and/or abstracts (woh: n= , ; ah: n= , ). the agreement between the two editors was n= , for "exclude", and n= for "not exclude" (i.e., include). we calculated an agreement rate of . % (cohen's kappa . ) for this assessment. we solved the remaining conflicts on mutual consent, which resulted in nine additional inclusions. the final candidate best papers selection from the remaining publications was done based on full-text review and yielded candidate best papers published in . we then had to remove one paper as it also had been selected as a candidate best paper for the decision support systems section of the imia yearbook. for each of the remaining candidate best papers, at least five independent reviews were collected. due to covid- restrictions, the selection meeting of the imia yearbook editorial committee was held as a videoconference on apr , . in this meeting, three papers [ ] [ ] [ ] were finally selected as best papers for the cis section (table ) . a content summary of these three cis best papers can be found in the appendix of this synopsis. as section editors, we get a broad overview of the research field of the cis section during the selection of the best papers. as this overview may be biased and to avoid selective perception, as in the previous years [ ] [ ] [ ] [ ] , we additionally apply a more formal text mining and bibliometric network visualizing approach [ ] to summarize the content of titles and abstracts of the articles in the cis result set. as in the past year, we extracted the authors' keywords (n= , ) from all articles and present their frequency in a tag cloud ( figure ). we found , different keywords, of which , were only used once. as in the previous year, most frequent keywords were "human" (n= ), followed by "female" (n= ), "electronic health record(s)" (n= ), "male" (n= ), "adult" (n= ), "middle aged" (n= ), and "health communication" (n= ). the bibliometric network reveals more details on the content of the cis publications. figure depicts the resulting co-occurrence map of the top- terms (n= , , most relevant % of the terms) from the titles and abstracts of the , papers of the cis result set. the cluster analysis of titles and abstracts yielded five clusters. the two most massive clusters, the yellow one on the right side with items and the green one on the left with items, describe some context factors from the studies. whereas the yellow cluster seems to represent an intramural view with "hospital record" as a prominent item, the green cluster represents the trans-institutional perspective of cis with "health record" as a prominent item. all three of the best papers in the cis section can be assigned to these two clusters. the contribution by brian l. hill and colleagues [ ] , who successfully created a fully automated machine-learning-based model for postoperative mortality prediction is in the yellow cluster. we selected this paper from the british journal of anaesthesia as the approach is innovative, uses only preoperative available medical record data, and can better predict in-hospital mortality than other state-of-the-art methods. we also selected it because, on the other hand, we believe that there is a need for discussion from an ethical point of view when an automated approach "decides" on the "eligibility" of patients for specific treatments. so, it perfectly fits in the special topic "ethics in health informatics" of the imia yearbook . 
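the hill et al. model discussed above (and summarized in the appendix below) compares several classifiers by the area under the receiver operating characteristic curve for a rare binary outcome. the following sketch, run on synthetic data and not the authors' pipeline, illustrates such a comparison with scikit-learn; gradient boosting is used here as a stand-in for the gradient boosted tree models reported in the paper, and all parameter choices are illustrative.

```python
# minimal sketch (synthetic data): comparing classifiers by auroc for a rare
# binary outcome such as in-hospital mortality.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=5000, n_features=40, weights=[0.98],
                           random_state=0)          # ~2% positive class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "elasticnet logistic": LogisticRegression(penalty="elasticnet",
                                              solver="saga", l1_ratio=0.5,
                                              max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: auroc = {auc:.3f}")
```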
the next of the best papers can be assigned to the green cluster. nelson shen and colleagues conducted a systematic review that helps to better understand an essential aspect of health information exchange: the patient privacy perspective [ ] . this contribution is also interesting given this imia yearbook edition's special topic. the last of the best papers comes from william j. gordon and colleagues [ ] , who investigated health care employees' susceptibility to phishing attacks. this study can be assigned to both clusters, and it should remind all of us that cybersecurity is increasingly critical and should be tackled accordingly. from the remaining candidate best papers, we can assign a reasonable proportion to the two main clusters. to the green cluster, we can assign four candidate best papers. esmaeilzadeh and colleagues investigated the effects of data entry structure on patients' perceptions of information quality in health information exchange [ ] . the proportion of "blockchain papers" is slightly growing. we thus again selected one systematic review, a paper by vazirani and colleagues [ ] , as an excellent read to dive into this emerging field and to learn about the applicability of this technology in healthcare, health record management, and health information exchange. trust is also an essential aspect of health information exchange. therefore, we selected a paper by um and colleagues, who designed a trust information management framework for social internet of things environments [ ] . although this paper has no obvious and direct connection to the health care system at first glance, we find it describes an interesting approach that can advance the realization of trustful health services, which always have to exchange data to some extent. the next paper in this cluster comes from kim and colleagues, who propose an ontology and a simple classification scheme for clinical data elements based on semantics [ ] . three candidate best papers represent the intramural perspective in the yellow cluster. this perspective often also includes a patient safety aspect. thomas and colleagues report on the use of digital facial images in a children's hospital to confirm patient identity before anesthesia to increase patient safety [ ] . the next candidate paper in this cluster comes from signaevsky and colleagues, who show how a deep-learning-based approach can help to improve diagnostic assessments in neuropathology [ ] . the third "yellow" candidate paper comes from bernard and colleagues [ ] . they present a very inspiring visualization technique for representing multiple patient histories and their course over time in graphical dashboard networks. the development process of these dashboards is well described and can give valuable hints to all who are interested in dashboard design.
fig. (caption): clustered co-occurrence map of the top- , terms (top % of the most relevant terms, n= , ) from the titles and abstracts of the , papers in the cis query result set. only terms that were found in at least seven different papers were included in the analysis. node size corresponds to the frequency of the terms (binary count, once per paper). edges indicate co-occurrence. the distance between nodes corresponds to the association strength of the terms within the texts (only the top , of , edges are shown). colors represent the five different clusters. the network was created with vosviewer [ ] .
the golden cluster in the middle ( items) with the term "geographic information system" as central hub divides the two main clusters. the blue cluster (bottom left, items) mainly holds items from the studies' objectives, target measures, and methods sections. the purple cluster on the top right ( items) is continuously present over the years. it contains items that are associated with adverse events and patient safety research. an assignment to one of these clusters is difficult for the rest of the candidate papers. however, they have one aspect in common. they address various ethical aspects that are relevant in the cis field. sure, for the most part, these aspects are not explicitly mentioned in the papers. nevertheless, we want to present them and put it in the hands of the reader to think about. the first of the papers in this group comes from blijleven and colleagues who developed a framework for the sociotechnical analysis of electronic health system workarounds [ ] . very inspiring. the next one, a paper on ethical and regulatory considerations for using social media platforms to locate and track research participants by bhatia-lin and colleagues in the american journal of bioethics [ ] , also made us think a lot. the next one, a position paper from steil and colleagues in methods of information in medicine [ ] , brought our thoughts in a completely different direction. every reader who has wondered how the use of robotic systems in the operating room can or will bring new forms of team-machine interaction should put this paper to their reading list. to complete our selection, we want to direct the light to the dark side of cis and health information technology (hit), which also exists, no question. gardner and colleagues surveyed physicians on their hit use. more than a quarter of the > , respondents reported burnout and % reported hit-related stress [ ] . we think this is also an ethical cis aspect worth considering. as every year, at the very end of our review of findings and trends for the clinical information systems section, we want to recommend a reading of this year's survey article in the cis section by ursula hübner, nicole egbert and georg schulte. they investigated ethical aspects in recent cis research in more detail [ ] . as in the previous years, we could observe major trends being further continued. these trends include research about the actual benefits of patients with regard to health information exchange and their active participation in healthcare. another trend that now remained valid for several years is the trans-institutional aggregation of data. it seems that the challenges around this topic are still not sufficiently solved. however, we could observe an ongoing shift away from fundamental technical problems to more content/context-related questions of data aggregation. the observed popularity of machine-learning approaches on readily available clinical data sets such as electronic health record data in our analysis seems to increase, especially, their application in supporting clinical processes such as risk assessment or the proactive implementation of interventions. however, ethical aspects are, in many cases, not considered at all or are only regarded as a peripheral topic. these aspects leave a broad gap for further investigations. 
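for readers interested in how a map like the clustered co-occurrence map discussed above can be derived, the following is a minimal sketch of a binary (once-per-paper) term co-occurrence count with a simple normalization so that very frequent terms do not dominate edge weights. it is not the vosviewer implementation: vosviewer's association-strength normalization and clustering differ in detail, and the term sets below are hypothetical.

```python
# minimal sketch (hypothetical term sets): binary term co-occurrence counts
# from titles/abstracts, with a simple normalization of edge weights.
from itertools import combinations
from collections import Counter

docs = [  # one preprocessed term set per paper, illustrative only
    {"health record", "machine learning", "mortality"},
    {"health record", "privacy", "information exchange"},
    {"machine learning", "mortality", "risk prediction"},
]

term_freq = Counter(t for d in docs for t in d)      # counted once per paper
cooc = Counter()
for d in docs:
    for a, b in combinations(sorted(d), 2):
        cooc[(a, b)] += 1

for (a, b), c_ab in cooc.items():
    strength = c_ab / (term_freq[a] * term_freq[b])  # simple normalization
    print(f"{a} -- {b}: co-occurrence={c_ab}, strength={strength:.2f}")
```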
appendix: content summaries of selected best papers for the imia yearbook section "clinical information systems" gordon the current paper from gordon et al., picks up on an important topic from the field of data security and investigates the susceptibility of healthcare employees to phishing attacks in the us. the recent past has shown that attackers have increasingly targeted healthcare organizations, not only with substantial economic impact but also with a strong influence on patient treatment. the authors illustrate multiple examples ranging from partial unavailability of systems up to a two-week complete shutdown of systems. the authors have carried out an investigation to get an insight into the reasons why employees of healthcare organizations fall victims of phishing campaigns. the investigation is based on a retrospective, multicenter quality improvement study that included six us healthcare organizations that represent the entire spectrum of care and a range of us geographies. all organizations have an information security program in place. the respective organizations have carried out phishing simulations in their facilities in the past, based on vendor-or custom-developed software tools. data about the phishing attacks were collected from the different institutions, and emails were classified according to their content in three categories: office-related, personal, or information technology-related. several statistical values were calculated, such as the click rates, median click rates, and odds ratios (with % ci). correlation was, amongst others, computed for the year, number of campaigns, email category, and season. in total, the data set included campaigns with emails sent from to . the overall click rate across all institutions and campaigns was . %, although the authors observed considerable differences in the click rate of institutions ranging from . % to . %. the authors found out that repeated phishing campaigns were associated with decreased odds of clicking on a subsequent phishing email. interestingly, the year is not significantly associated with the click rate. further, emails that were related to the personal email category had a significantly higher probability of being clicked. the same is true for seasons, both the spring and summer seasons were associated with higher click rates. in models adjusted for several potential confounders, including year, the institutional campaign number, institution, and email category, the odds of clicking on a phishing email were . lower for six to ten campaigns at an institution and . lower for more than ten campaigns at an institution. the study could well demonstrate that the healthcare domain compares well to other industries and that employees benefit from education, training, and that experiences made from other simulated phishing campaigns can help employees to stay aware. in addition, the healthcare domain has some particularities that make it especially vulnerable to attacks such as turnover of employees, endpoint complexity, or information system interdependence. it is therefore inevitable that all participants in the healthcare domain understand these security risks, particularly as safe and effective health care delivery involves more and more information technology. the majority of surgical complications is associated with a small group of high-risk patients. 
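returning briefly to the phishing analysis summarized above: the study reports click rates and odds ratios with confidence intervals. the sketch below shows how an unadjusted odds ratio and a wald-type 95% confidence interval could be computed for later versus earlier campaigns; the counts are hypothetical and the study's adjusted models (year, institution, email category) are not reproduced.

```python
# minimal sketch (hypothetical counts): unadjusted odds ratio and wald-type
# 95% ci for clicking a simulated phishing email in later vs. earlier campaigns.
import math

late_click, late_no   = 120, 2880   # later campaigns, illustrative values
early_click, early_no = 300, 2700   # earlier campaigns, illustrative values

odds_ratio = (late_click * early_no) / (late_no * early_click)
se_log_or = math.sqrt(1/late_click + 1/late_no + 1/early_click + 1/early_no)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"or = {odds_ratio:.2f}, 95% ci {lo:.2f}-{hi:.2f}")
```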
often these patients would substan-tially benefit from early identification of their high-risk of potential complications, as proactive, early interventions can help reduce or even avoid perioperative complications. existing approaches to this problem either require a clinician to review a patient's chart such as the american society of anesthesiologists (asa) physical status classification or lack specificity. the work of hill et al., is dedicated to the investigation of a machine learning approach that uses readily available patient data for the prediction of certain risks and takes changing patient conditions into account. data from , surgical patients ( . % mortality rate) who underwent general anesthesia between and were collected from the perioperative data warehouse at ucla health to populate a series of , distinct measures and metrics. in the next step, classification models to predict in-hospital mortality as a binary outcome were trained (model endpoint) and the outcome for a subset of patients was checked with trained clinicians. for the actual creation and training of models, four different classification models, logistic regression, elasticnet, random forests, and gradient boosted trees were evaluated. the performance of the created models was then compared with existing clinical risk scores such as the asa score, pospom score, and charlson comorbidity score. the mean value of the area under the receiver operating characteristic (auroc) curve ( % ci, , predictions) was used to compute the performance. when using the asa status or the charlson comorbidity score as the only input features, the linear models (logistic regression, elasticnet) outperform the non-linear models (random forest, xg-boost). however, for the other feature sets, the non-linear models outperform the linear models. in particular, the random forest has the highest auroc compared with the other models. the authors were able to show that a fully automated preoperative risk prediction score can better predict in-hospital mortality than the asa score, the pospom score, and the charlson comorbidity score. unlike previously developed models, the results also indicate that the inclusion of the asa score in the model did not improve the predictive ability. another advantage of such an automated model is that it allows for the continuous recalculation of risk longitudinally over time. however, the authors also state several limitations of the study such that the incidence of mortality in the testing set was less than %, implying that a model that blindly reports 'survives' every time will have an accuracy greater than %. nonetheless, the model outperforms current major models in use. the exchange of health information and the ability to share information regarding the patient and treatment has become an essential element in the effective and efficient provision of healthcare services. on the other side, these developments have also led to increasing concerns by patients not being able to properly control these information flows. although privacy concerns are often quoted in publications regarding the exchange of health information, they are seldom investigated with regard to their influence on the patient-provider relationship in healthcare. the paper of shen et al., focuses on an indepth exploration of the patient's perspective towards privacy in the context of health information exchange. 
for this reason, the authors have conducted a systematic review, which was based on prisma and aimed at providing a conceptual synthesis of the patient privacy perspective and its associated antecedents and outcomes; identify gaps in the apco model (antecedent, privacy concern, outcomes macro-model) and describe the current state of privacy research. the apco model was developed by the authors in advance to the present study. major databases were queried for empirical studies focused on patient/public privacy perspectives in the context of hit that were published between and . data was extracted based on the elements of the apco model, and subsequently, new elements were added if outside the apco model. the authors found quantitative, qualitative, and five mixed-methods studies that were relevant. the analysis of the antecedent factors with regard to their influence on patient privacy concerns showed a mixed picture, and an evident positive or negative influence was often not deducible, or the number of studies was low. the authors assumed income, political ideology, and quality of care as being agreed upon studies. the same is true for privacy concerns related to outcome factors, where the authors assume a willingness to share, protective behaviors, benefits, and risks, as agreed by the studies found. summarizing, the patient privacy perspective seems to be of dynamic and nuanced meaning that is strongly dependent on its context. so, it is difficult to characterize the patient perspective, although privacy concerns, as such, are found in many studies and are expressed as a serious concern. this may also be because many studies have analyzed the concept of privacy only as a peripheral topic and have distilled privacy into a single question. the authors plead that future research needs to place greater emphasis on understanding how antecedent factors can alleviate privacy concerns, build trust, and empower patients. in addition, they claim that building a base of evidence on the actual effects of privacy concerns will help to reduce value-laden discussions and normative assumptions. new problems -new solutions: a never ending story. findings from the clinical information systems perspective for clinical information systems as the backbone of a complex information logistics process: findings from the clinical information systems perspective for on the way to close the loop in information logistics: data from the patient -value for the patient managing complexity. 
from documentation to knowledge integration and informed decision findings from the clinical information systems perspective for rayyan-a web and mobile app for systematic reviews understanding the patient privacy perspective on health information exchange: a systematic review assessment of employee susceptibility to phishing attacks at us health care institutions an automated machine learning-based model predicts postoperative mortality using readily-extractable preoperative electronic health record data a unified approach to mapping and clustering of bibliometric networks software survey: vosviewer, a computer program for bibliometric mapping the effects of data entry structure on patients' perceptions of information quality in health information exchange (hie) implementing blockchains for efficient health care: systematic review design and implementation of a trust information management platform for social internet of things environments clinical metadata ontology: a simple classification scheme for data elements of clinical data based on semantics the use of patient digital facial images to confirm patient identity in a children's hospital's anesthesia information management system artificial intelligence in neuropathology: deep learning-based assessment of tauopathy using dashboard networks to visualize multiple patient histories: a design study on postoperative prostate cancer sewa: a framework for sociotechnical analysis of electronic health record system workarounds ethical and regulatory considerations for using social media platforms to locate and track research participants robotic systems in operating theaters: new forms of team-machine interaction in health care physician stress and burnout: the impact of health information technology correspondence to dr. werner o hackl institute of medical informatics umit -private university for health sciences hörbst medical technologies department mci -the entrepreneurial school universitätsstrasse e-mail: alexander we would like to acknowledge the support of lina soualmia, adrien ugon, brigitte séroussi, martina hutter, and the whole yearbook editorial committee as well as the numerous reviewers in the selection process of the cis best papers. key: cord- - mzcvfm authors: afzal, waseem title: what we can learn from information flows about covid‐ : implications for research and practice date: - - journal: proc assoc inf sci technol doi: . /pra . sha: doc_id: cord_uid: mzcvfm covid‐ has become a global pandemic affecting billions of people. its impact on societies worldwide will be felt for years to come. the purpose of this research is to examine information flows about covid‐ to understand the information‐specific underpinnings that are shaping understandings of this crisis. as a starting point, this research analyzes information about covid‐ from a selection of information sources, including the world health organization (who), the national health commission of the people's republic of china (nhcprc), and three news outlets with vast global coverage. the analysis reveals some distinctive information underpinnings about covid‐ , including (a) flows of information becoming regular and larger around certain dates, (b) preponderance of information imperfections such as incomplete information, misinformation, and disinformation, and (c) absence of information about some key turning points. 
the implications of these information imperfections in that they create information failures and, hence, ineffective approaches to dealing with this crisis warrant further investigation. due to the magnitude of covid- , there have been massive flows of information about different aspects of this pandemic. information about covid- is coming through multiple sources and in different formats. often, this information, instead of providing answers, has left people bewildered. though there have been efforts to provide information about specific aspects of covid- (e.g., coronavirus resource center at john hopkins university (https://coronavirus.jhu.edu/map. htm), world health organization [who] situation reports, abc [australian broadcasting corporation] covid- cases projection), there remain many unanswered questions and the covid- information environment is also rampant with many information imperfections. the purpose of this research is to examine information flows about covid- and to identify the information-specific underpinnings that are shaping the information environment of this pandemic and contributing to understandings of and abilities to manage and control this crisis. an information environment represents a collection of information sources, communication technologies, and the myriad of social, cultural, and political factors shaping the information produced, disseminated, used, and discarded in an environment. this information environment plays a highly significant role in informing public views and guiding human behavior. information available through different sources create an information environment at the most general level (lievrouw, , p. ) which becomes highly important to individuals when they get to know the available information and find it relevant to their needs. according to lievrouw ( ) , various institutions, including government, cultural, and business organizations, produce information, which is then shaped by media. in addition to shaping and filtering information, media also produce information based on what is happening regionally, nationally, and globally. any information environment can have numerous flows of information coming from different sources and about different issues. for instance, sources such as newspapers, websites, social media, databases etc. can all be used to get information about one particular issue or about different issues. an information flow is a communication of information between senders and receivers (durugbo, tiwari, & alcock, ) ; the information underpinning these flows is the real substance of any information environment and determines to a great extent how people will perceive, understand, and respond to an issue. information in its variety of forms (e.g., visual, verbal, written) is essential for all human endeavors, ranging from day-to-day activities to cutting-edge research and policy making. failure to possess the correct information can lead to political debacles, adverse health care events (e.g., macintosh-murray & choo, ) , and inability to respond to emerging challenges. information has different properties, which can play important roles in shaping an information flow and, thereby, an overall information environment. these properties include completeness, accuracy, relevance, and timeliness (e.g., knight & burn, ; lee, strong, kahn, & wang, ) . there is a notable body of research in economics and finance demonstrating the impact of incomplete information on valuation (e.g., aboody & lev, ; akerlof, ; kim & park, ) . 
the marketing discipline has also produced a significant amount of research on the role of information properties in shaping consumer perceptions (e.g., keller & staelin, ; lurie, ) . in information science, many scholars (e.g., rieh, ; savolainen, ) have investigated the properties of information and their impact on human behavior. the nature of the information used to create information flows about any crisis can have significant implications for individuals, policy making, and the public. for instance, if incomplete information is used to report any event, then it is likely that significant and even serious information failures will result. the preceding point is supported by studies on information failures during disasters (e.g., choo, ; turner & pidgeon, ) . turner and pidgeon ( ) analysed official accident reports published by the british government and noted that it is important to consider not only the available information before a disaster but also the distribution of this information during the disaster and any factors that inhibited the flow of this information. similarly, choo ( ) examined some organizational disasters and identified three information impairments that can contribute to mishaps: (a) blind spots, (b) risk denials, and (c) structural impediments. therefore, it can be suggested that information that is incomplete, misleading, or ambiguous can have serious implications for both understanding a crisis and managing it. to chart information flows about covid- , it was important to understand the basic building blocks of the information environment around this crisis. further, due to the huge amounts of existing information about covid- , it was also important to limit the scope of this research to a certain time frame. regarding the basic building blocks of the covid- information environment, the work of lievrouw ( ) guided the identification of organizations highly relevant to this crisis and news outlets with vast global coverage. the who and the national health commission of the people's republic of china (nhcprc) were identified as two highly relevant organizations generating information about covid- . additionally, the british broadcasting corporation (bbc), cable news network (cnn), and abc news were selected to analyze regarding information about covid- due to their large audiences in english-speaking countries. information from all of these sources was content analyzed. the content analysis, in the case of websites, focused to gain understanding of the overall scope of information provided. in the case of news outlets, the content analysis aimed to identify the main theme of each news item and to guide categorization of collected information. the websites of the who (https://www.who.int/) and the nhcprc (http://en.nhc.gov.cn/index.html) contain large amounts of information on covid- . the who provides information about many aspects of covid- but this research focused on "rolling updates" and "situation reports." on the nhcprc website, regular updates about covid- were categorized under "news," and information from this section was also reviewed. the bbc (http://news.bbc.co.uk/hi/english/static/ help/archive.stm), cnn (http://edition.cnn.com/world/ archive/archive- .html), and abc news (https://www. abc.net.au/news/archive/) web archives were searched using keywords of "coronavirus" and "covid- " to identify the first instance when information about covid- or coronavirus was mentioned. 
following that identification, searches were conducted to identify the instances when information about any aspect of covid- was disseminated through these news outlets. the searches were restricted to the time frame starting from the first mention of covid- in news to the end of february . from the bbc, abc, and cnn news web archives, a total of news items about covid- were identified (see table ). these news items appeared between january , and february , . the gathered information was reviewed to ascertain the aspects of covid- that were covered in these news items. each news item was reviewed to identify its central topic. information pertaining to the same topics was grouped into common categories, leading to a total of eight categories. these categories included: (a) outbreak, (b) economy, (c) educational, (d) misinformation/ disinformation, (e) sports, (f) politics, (g) racism, and (h) evacuation. information pertaining to the spread of covid- , the number of fatalities, and the number of identified cases was grouped under the "outbreak" category. the category of "economy" included information about the impact of covid- on trade, industry, stocks, and overall business activities. information aiming to educate people about the virus and measures of protection against it was grouped under the "educational" category. all news items highlighting the instances of misinformation and/or disinformation were used to develop "misinformation/disinformation" category. the category of "politics" covered news that casted doubts about a national or a foreign government's ability to deal with covid- and any news that used the virus to depict a turbulent domestic political environment. instances involving discriminatory behavior against people of chinese descent due to coronavirus was coded under "racism" category. the "sports" category included information about the events being canceled or postponed due to covid- whereas the "evacuation" category included information about the passengers evacuated from different cruise ships (see table ). four categories (i.e. outbreak, educational, economy, and politics) were common across all three news outlets. the category of "misinformation/disinformation" was common between bbc and cnn whereas the categories of "sports," and "racism" were common between bbc and abc. in addition to the topics of news items, it was important to identify the time frame within which information flows about covid- became regular and greater in magnitude across the news outlets. for this identification, the lag between the dates on which covid- news items appeared and the frequency of news stories about the virus appearing on the same date were examined. for instance, the regular flow of information about covid- on bbc started around january , ; almost every day after this date, some form of news item about the virus appeared. in the cases of cnn and the abc, the starting point for the regular covid- information flow was january . the magnitude of this flow on all three news outlets increased around january , ; from this date, almost every day, multiple news items about covid- appeared. though there was near uniformity in terms of the kinds of topics covered by the three news outlets, some topics were highlighted more by one or two channels. for instance, the bbc strongly emphasized disinformation and misinformation, whereas the abc and cnn highlighted the economic implications of covid- . 
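the lag-and-frequency analysis described above can be illustrated with a small sketch: given a list of (outlet, date) records for covid-19 news items, daily counts and the gaps between consecutive coverage dates indicate when the flow became regular and when its magnitude increased. the records below are hypothetical, and the use of pandas is an assumption made purely for illustration.

```python
# minimal sketch (hypothetical records): covid-19 news items per outlet and
# date, plus gaps between consecutive coverage dates for each outlet.
import pandas as pd

items = pd.DataFrame({
    "outlet": ["bbc", "bbc", "cnn", "bbc", "abc", "cnn", "cnn"],
    "date": pd.to_datetime(["2020-01-09", "2020-01-20", "2020-01-21",
                            "2020-01-21", "2020-01-23", "2020-01-23",
                            "2020-01-24"]),
})

daily_counts = items.groupby(["outlet", "date"]).size()
print(daily_counts)  # larger counts per date signal increased magnitude

for outlet, grp in items.groupby("outlet"):
    dates = grp["date"].drop_duplicates().sort_values()
    gaps = dates.diff().dt.days.dropna()
    print(outlet, "gaps between coverage dates (days):", list(gaps))
```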
the who rolling updates and situation reports were reviewed to understand (a) the kind of information about covid- that was disseminated to the world and (b) the timing of events. the rolling updates were used to provide brief information about different aspects of covid- ranging from reporting a field visit to china to providing training to african health workers. these updates about covid- started on december , , when china first reported pneumonia of unknown cause to the who's china office, thereby making december a key date. similarly, a rolling update on january , is important because the who issued its first guidance on the coronavirus with an objective to help nations assess their capacities in the areas of detection and response. situation reports, on the other hand, followed a structured approach to provide regular updates about (a) countries reporting first cases of covid- , (b) total number of confirmed cases in china on the day of the report, (c) confirmed cases of covid- outside of china on the day of report, and (d) who's evolving initiatives to tackle this virus. information about covid- available under the news sections at the nhcprc website highlights (a) the efforts made by the chinese government in managing and controlling this virus, (b) efforts aiming to develop a vaccine, (c) the extraordinary commitment of medical staff to dealing with the crisis, (d) encouraging signs in terms of recoveries, and (e) praise of the chinese response to this crisis by international organizations and governments. this information is important because it tells a story about the crisis; however, its scope is rather limited and does not help much in knowing the scale of crisis, the problems and challenges faced by medical staff and the public, and the exact nature of the efforts involved in managing the virus in china. the analysis of information about covid- is revealing in many regards. for example, regular flow of information in the three news outlets, and the increased magnitude of this flow, started on almost the same dates (january - and january , respectively). this suggests that some events must have been occurring around these dates and, hence, information about them was disseminated through these news channels. in terms of regular information flow about covid- , the following are some possible explanations: (a) cases of covid- outside china began to be reported around january - (https://www.abc.net.au/news/ - - /coronavirustimeline-from-wuhan-china-to-global-crisis/ ? nw= &pfmredir=sm), (b) the number of cases in china also started to increase at this time, and (c) the first meeting of the who emergency committee regarding the coronavirus outbreak (https://www.who.int/ emergencies/diseases/novel-coronavirus- /eventsas-they-happen) occurred between february and . 
regarding the increased magnitude of information flow about covid- , events such as the following occurred: (a) the confirmation of infections outside china started to become more regular and the virus reached countries by january (https://www.abc.net.au/news/ - - /coronavirus-map-tracks-spread-throughout-world/ ), (b) concerns about the virus becoming a global crisis began to be voiced by researchers in scholarly communication and in mass media (e.g., riou & althaus, ; https://www.bbc.cm/news/world-asiachina- ), and (c) the who started issuing "situation reports" on january , providing daily updates on covid- , leading to the declaration of the novel coronavirus as a "public health emergency of international concern" on january , . some of the most frequently appearing categories in the news shed light on the emphasis that was placed on certain aspects of covid- . the category of "outbreak" included information that informed the public of the progression of covid- and, as the situation evolved, to know the number of fatalities. when the spread of the virus outside china started to become a reality rather than a possibility, "educational" information also began to appear with greater frequency. for instance, abc news provided educational information about the virus, ways in which it spreads, and ways to avoid infection. misinformation and disinformation also appeared numerous times across the three news outlets. perhaps realizing the dangers of information imperfections associated with any crisis, news channels variously discussed misinformation and disinformation. the bbc, at the very onset of covid- , raised questions about the accuracy of the numbers reported in wuhan (https://www.bbc. com/news/health- ). cnn also reported on january about the possible lack of transparency in the availability of vital information about covid- (https:// edition.cnn.com/ / / /asia/wuhan-virus-chinacensorship-intl-hnk/index.html). preponderance of misinformation about the spread and alleged cures for this virus has also been noted (https://www.bbc.com/ news/technology- ). the analysis of information about covid- in this study highlights important information problems. these problems include incomplete information, disinformation, misinformation, and lack of timely information. for instance, crucial events such as when exactly the first case of covid- was identified and the exact number of infections and fatalities in china still require more information. further, it is important to identify the underlying reason for this incomplete information-that is, whether it was caused due to disinformation or due to absence of timely information. as long as this incomplete information exists, understandings about the evolution of covid- as a global pandemic will remain partial. it is also crucial to understand the impact, if any, of incomplete information on the handling of covid- at the international level. misinformation about different aspects of covid- was found to be rampant by the three news outlets and reported quite often in their stories. misinformation about (a) the reason the virus appeared and became a global crisis (e.g., different conspiracy theories), (b) suggested cures for the virus, and (c) the ways in which it spreads was noted by different news stories. these information problems led to what turner and pidgeon ( ) called "information failures"-that is, instances in which crucial pieces of information either were missing, buried under other information, or were available but were simply not heeded. 
these information problems and information failures warrant further investigation if researchers and policy makers are ever to paint a complete picture of covid- and avoid a crisis of such a nature in the future. there are important implications for practice as well, for instance, regarding what role information professionals can play both during this crisis and after it. most importantly, information professionals can use their education and skills to develop simple and easy-to-use information packages aiming to inform the public about essential information regarding covid- . this information need not be limited to only health aspects but can also include, for example, ways to access social security benefits and links to online help to deal with the effects of social distancing. misinformation and disinformation are widespread during this crisis, so online resources offering education against these two information problems will be helpful. finally, researchers are and will be seeking reliable and easy-to-use information sources and data sets on covid- . therefore, information professionals can assist research efforts to control and eliminate the virus by developing such information sources. the data set in this study was limited to months, omitting other significant pieces of information about covid- . consideration of information flows in later months will be essential to developing a comprehensive understanding of this pandemic and the role played by different information imperfections in shaping some of the impacts of covid- . though an effort was made to cover all news items in the three news outlets, it is likely that inadvertent omission of some news items may have occurred. the purpose of this research was to examine information flows about covid- with the objective of understanding different information properties underlying these flows. information available about covid- through the who, nhcprc, and three news outlets was analyzed. the analysis revealed that that there are significant information imperfections in covid- information flows, including the preponderance of incomplete information, misinformation, and disinformation, and the absence of information about some key turning points. moreover, covid- information flows present a distinctive pattern in terms of the amplification of frequency and magnitude of information dissemination. the implications of these information imperfections in terms of creating information failures and hence ineffective approaches to deal with this crisis warrant further investigation. information asymmetry, r&d, and insider gains the market for 'lemons': quality uncertainty and the market mechanism australian broadcasting corporation news archives british broadcasting corporation news archives organizational disasters: why they happen and how they may be prevented modelling information flow for organisations: a review of approaches and future challenges effects of quality and quantity of information on decision effectiveness organizational structure of a global supply chain in the presence of a gray market: information asymmetry and valuation difference developing a framework for assessing information quality on the world wide web aimq: a methodology for information quality assessment. 
information & management the information environment and universal service decision making in information-rich environments: the role of information structure information failures in health care national health commission of the people's republic of china judgement of information quality and cognitive authority in the web pattern of early human-to-human transmission of wuhan judging the quality and credibility of information in internet discussion forums man-made disasters how to cite this article: afzal w. what we can learn from information flows about covid- : implications for research and practice acknowledgments i acknowledge with thanks the anonymous reviewers for their valuable feedback. i am also grateful to talat shahzad who assisted with data collection. key: cord- -iz alys authors: francis, john g.; francis, leslie p. title: fairness in the use of information about carriers of resistant infections date: - - journal: ethics and drug resistance: collective responsibility for global public health doi: . / - - - - _ sha: doc_id: cord_uid: iz alys one standard menu of approaches to the prevalence of anti-microbial resistance diseases is to enhance surveillance, fund research to develop new antimicrobials, and educate providers and patients to reduce unnecessary antimicrobial use. the primarily utilitarian reasoning behind this menu is unstable, however, if it fails to take fairness into account. this chapter develops an account of the fair uses of information gained in public health surveillance. we begin by sketching information needs and gaps in surveillance. we then demonstrate how analysis of information uses is incomplete if viewed from the perspectives of likely vectors of disease who may be subjects of fear and stigma and likely victims who may be coerced into isolation or quarantine. next, we consider aspects of fairness in the use of information in non-ideal circumstances: inclusive participation in decisions about information use, resource plans for those needing services, and assurances of reciprocal support. fairness in information use recognizes the ineluctable twinning of victims and vectors in the face of serious pandemic disease. antimicrobials; and to intervene through education, treatment, and careful stewardship of the existing antimicrobials that retain some efficacy. this combination of approaches is founded primarily in utilitarian reasoning, attempting to achieve the best possible mitigation of the current crisis in the hopes that effective new treatment methods may soon become available. such utilitarian reasoning is not entirely stable in practice, however. on the one hand, when the prospects of exposure to untreatable and potentially fatal disease appear imminent, fear may become the overriding reaction to those who are identified as ill. the result may be forms of coercion against people suspected of being vectors of disease that appear prudential in the short term but that are insufficiently grounded in science and potentially counter-productive in the longer term. people may hide to avoid disclosure and deleterious consequences of over-regulation may lead to under-regulation. recent examples include demands to compel isolation of people believed to have been exposed to ebola or for banning travel from regions where outbreaks of conditions such as ebola or zika have been identified. 
on the other hand, concerns for victims may generate outpourings of resources for treatment, calls for investment in public health resources in underserved areas, and renewed emphasis on privacy protections. these too may be counterproductive if they result in confusion and waste of resources or multiple conflicting strategies. the upshot may be policies that oscillate between treating people as vectors and treating them as victims but without significant or coordinated progress against the problem of resistance. each of these perspectives-victim-hood and vector-hood-is morally important. but in our judgment analysis that is limited to these perspectives is incomplete in its failure to take certain considerations of fairness into account. our specific focus here is the use of information, but similar points could be made about other types of resources as well. collection, uses, and access to information, we contend in what follows, must be rooted in the effort to make progress against serious public health problems in a manner that is reasonably fair under the circumstances. this requires not only concern for people as victims and vectors but concerns about how the impact of policies are distributed and foster cooperative connections in both the shorter and the longer term. traditional public health surveillance methods are both individual and population based. where particular individuals are concerned, the role of information is primarily to enable strategies to interrupt disease transmission. case identification, case reporting, contact tracing, treatment if possible, and education and intervention if needed to prevent transmission come to the fore. at every stage, information is critical. if individuals with transmissible disease are unknown or cannot be located, efforts to interrupt transmission will fail. efforts will also fail if information is not transmitted to those who are capable of acting, whether they be authorities designated to enforce quarantine or isolation or health care personnel equipped to offer treatment or prophylaxis. education requires information, too, about where to direct educational efforts and what these efforts might contain. importantly, if people who might suffer exposures are insufficiently informed about the likelihood and seriousness of contagion and the need for precautions, they may unwittingly become infected vectors as well as victims themselves. such was the case for health care workers during the sars epidemic of and for many during the ebola epidemic of . information gleaned in population-level surveillance plays many additional important roles in addressing the problem of anti-microbial resistance. a longstanding recommendation of the who, codified in the world health regulations that entered into force in in article , is international cooperation in the development of surveillance capacities for the identification of potential global health emergencies of international concern (who ). surveillance can help to identify rates of incidence and prevalence of resistant disease. testing samples can yield information about histories and patterns of disease spread. samples also can be used to identify biological characteristics of resistant infectious agents that may be helpful in developing methods of treatment or identifying new anti-microbial agents. population level surveillance can be targeted to identifying the incidence and prevalence of resistant disease in particular geographical areas. gonorrhea is an example. there were . 
million estimated new cases of gonorrhea worldwide in ; the highest number occurred in low-income areas of the western pacific. resistant disease has become increasingly prevalent, especially in these areas and among groups such as sex workers and truck drivers (unemo et al. ) . extensively drug-resistant (xdr) gonorrhea cases also have appeared in spain and in france, although these strains do not appear to have spread, possibly because they are less hardy and so less likely to be passed on. however, significant resistance may not be detected because of "suboptimal antimicrobial resistance surveillance in many settings" (unemo et al. ) . a recent international panel reviewing resistant gonorrhea recommends strategies of case management, partner notification, screening (especially of sex workers and men having sex with men), and evidence-based treatment (unemo et al. ) ; these recommendations are based on surveillance data. population-level surveillance information may also be useful in identifying risks associated with providing humanitarian treatment. over , young people wounded in the libyan civil war that began in were evacuated elsewhere for treatment. concerns arose that many of these patients were recognized to carry with them resistant organisms-thus bringing along with their needs for treatment risks to other patients being treated in the host facilities (zorgani and ziglam ) . institutions accepting these patients were informed of this risk so that they could take appropriate precautions. libya itself was identified as a region with high prevalence of resistant organisms, despite the limited surveillance capacities in that conflict-torn nation. recommendations included improving surveillance in libyawhich lacks a national surveillance system-and implementation of infection prevention measures in libyan hospitals. surveillance is also used to identify practices that might contribute to the development of resistance. use of antimicrobials in agriculture is one area of inquiry, although its precise contribution to the problem is not easy to quantify (e.g. hoelzer et al. ). there have been many studies of problematic prescribing practices among physicians in the us (wigton et al. ) , europe (e.g. jørgensen et al. ) , asia (lam and lam ) , and elsewhere (trap and hansen ) , along with efforts to educate physicians about appropriate antimicrobial use. ever since the recognition grew that crowds celebrating the return of soldiers from world war i had created a ready opportunity for transmission of the spanish influenza, epidemiologists have observed the potential health risks of large gatherings that concentrate people together, even for brief periods of time. examples include music festivals, major sporting competitions, other large festivals, and religious gatherings such as the hajj or other pilgrimages. the largest estimated gathering is the periodic kumbh mela pilgrimage in which hindus come together to bathe in a sacred river such as the ganges; over million people, drawn largely from the indian subcontinent but increasingly international, attend the event (gautret and steffen ) . the largest annual gathering of pilgrims is the hajj at mecca which draws over two million people; the fifth pillar of islam is the obligation to undertake the once in a lifetime journey for those who can physically or financially afford to do so. with such great numbers of people together for sustained periods of time, there is a risk of disease outbreaks and the spread of resistant infections. 
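the population-level rates referred to in this section (incidence and prevalence of resistant disease, and the share of isolates testing resistant) are simple ratios. the sketch below shows only that arithmetic, using hypothetical counts rather than figures from any study cited here.

```python
# illustrative arithmetic only; the counts below are hypothetical placeholders,
# not figures from the studies cited in this chapter.

def incidence_rate(new_cases: int, population_at_risk: int, per: int = 100_000) -> float:
    """new cases over a defined period, expressed per `per` people at risk."""
    return new_cases / population_at_risk * per

def prevalence(existing_cases: int, population: int, per: int = 100_000) -> float:
    """existing cases at a point in time, expressed per `per` people."""
    return existing_cases / population * per

def resistant_share(resistant_isolates: int, tested_isolates: int) -> float:
    """proportion of tested isolates showing resistance."""
    return resistant_isolates / tested_isolates

print(incidence_rate(new_cases=1_200, population_at_risk=5_000_000))   # 24.0 per 100,000
print(prevalence(existing_cases=3_400, population=5_000_000))          # 68.0 per 100,000
print(resistant_share(resistant_isolates=85, tested_isolates=1_000))   # 0.085
```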
such events may strain existing sanitation systems or health care facilities if people become ill. crowding and inadequate facilities contribute to the potential for disease outbreaks (gautret and steffen ) . these events draw people from around the globe and thus may result in the international spread of disease (gautret and steffen ) . at the same time, many of these events are of great cultural importance and suppression of them is neither a realistic nor a desirable option. there have been extensive discussions of how to address the public health needs of the great numbers of people who undertake pilgrimages or who attend other events that draw great numbers of people together. vaccination may create herd immunities that reduce risks of disease transmission; for example, for this year's hajj the saudi arabian government is requiring proof of a quadrivalent meningococcal vaccination in order to receive a visa (ministry of hajj ). nonetheless, risks may remain significant for conditions that cannot currently be addressed by vaccination or that are difficult to treat, such as middle east respiratory syndrome coronavirus (mers-cov) or resistant infections. information too is critical: such well-attended events require imaginative and thoughtful surveillance that informs short-term medical care. because saudi arabia has had the largest number of human cases of mers-cov-an estimated % (who b)-travelers for this year's hajj are being warned to take extra precautions with respect to sanitation and personal hygiene measures such as handwashing or avoiding direct contact with non-human animals (new zealand ministry of foreign affairs ). still other social factors may contribute to the development of resistant disease that can be identified through surveillance. given the difficulties for women in saudi arabia to see physicians without being escorted, it is understandable that in saudi arabia many community pharmacies will dispense antibiotics without a prescription. zowasi ( ) recommends addressing these issues by increased education especially through social media as to the best approach to respond to the risk of anti-microbial resistant organism. still other recommendations about information use involve research on the development of new forms of antimicrobials. according to the most recent review article (butler et al. ) , antibiotics "are dramatically undervalued by society, receiving a fraction of the yearly revenue per patient generated by next-generation anticancer drugs." they are in the judgment of these authors an "endangered species,"-but there is some faint encouraging news. who and a number of national governments have recently begun to direct attention to the potential threat of resistance and lack of new drugs. since , five new-in-class antibiotics have been marketed, but these unfortunately only target gram-positive organisms not the gram-negative organisms that are likely to be resistant. other compounds are also in various stages of the process of clinical trials, but these too are more likely to be active against gram-positive bacteria. in the judgment of the authors of this review article, "the acute positive trend of new approvals masks a chronic underlying malaise in antibiotic discovery and development." interest in antibiotic development is more likely to be present in smaller biotech companies and in biotech companies located in europe. the authors conclude: "the only light on the horizon is the continued increase in public and political awareness of the issue." 
they also observe that with the retrenchment in investment, "we potentially face a generational knowledge gap" and drug development "is now more important than ever." to address this perilous juncture in antimicrobial research, the pew charitable trust convened a scientific expert group in . the premise of the group was that regulatory challenges, scientific barriers, and diminishing economic returns have led drug companies largely to abandon antibiotic research-yet antimicrobial resistance is accelerating. no entirely new classes of antibiotics useful against resistant organisms have been brought to market that are not derivatives of classes developed before -over years ago. the pew report advances many explanations for this dismal situation, including importantly the lack of coordinated investment in the relevant basic and translational research. one aspect of the report detailed the major role played by information gaps. published research is out of date and out of print. moreover, in today's world of investment in drug discovery, "creating an environment in which data exchange and knowledge sharing are the status quo will be difficult given proprietary concerns and the variety of information types and formats, which may range from historical data to new findings produced as part of this research effort." the pew consensus is that the following forms of information sharing are needed: a review of what is known about compounds that effectively penetrate gram-negative bacteria, a searchable catalogue of chemical matter including an ongoing list of promising antibacterial compounds, information on screening assays and conditions tested, and an informational database of available biological and physicochemical data. mechanisms must also be developed for sharing drug discovery knowledge in the area (pew, . in line with pew, a european antimicrobial resistance project suggests that research is seriously underfunded (kelly et al. ) . this group argues that the bulk of the publicly funded research is in therapeutics ( %); among the remainder, % of the research was on transmission and only % specifically on surveillance. this group also concluded that research is not coordinated and there is little attention to data sharing or sharing of research results. funding is fragmented, too, with many smaller grants addressing smaller projects independently rather than in a way that builds. this group summarizes: "to conclude, investment at present might not correspond with the burden of antibacterial resistance and the looming health, social, and economic threat it poses on the treatment of infections and on medicine in general. antibacterial resistance clearly warrants increased and new investment from a range of sources, but improved coordination and collaboration with more informed resource allocation are needed to make a true impact. hopefully, this analysis will prompt nations to pay due consideration to the existing research landscape when considering future investments." additional recommendations from other groups include novel methods for management of resistant disease, such as addressing the intestinal microbiome (e.g. bassetti et al. ) ; these methods, too, may be furthered by surveillance information as well as information about individual patients. analysis of these uses of information from the perspective of vector or victim are, we now argue, incomplete. when contagious diseases are serious or highly likely to be fatal and treatments for them are limited at best, fear is understandable. 
fear may be magnified if the disease is poorly understood, especially until modes of transmission have been identified. fear may also be magnified if there are no known effective treatments for the disease, as may be the case for extremely drug resistant infections. it is therefore understandable that proposals may come to the fore that emphasize isolation of those who are known to be infected, quarantine of those who have been exposed, or travel bans from areas of known disease outbreaks. proposals may even include criminalization of those who knowingly or even negligently take risks of infecting others. all of these possibilities and more were features of the hiv epidemic. even as understanding of the disease grew and effective treatment became increasingly available, some of these measures remain. criminalization of hiv transmission has not waned, despite the many objections raised to it (e.g. francis and francis a, b). although the us ended its immigration ban on hiv+ individuals in , concerns remain about the risks of undiagnosed infections among immigrant populations in the u.s. (winston and beckwith ) and some countries (for example, singapore) continue to ban entry for hiv+ travelers planning stays over thirty days (the global database ). as epidemic fears have waxed and waned over recent decades, so have imperatives for identifying vectors and constraining their activities. these patterns have been apparent for avian influenza, sars, ebola, and zika, among others. the us still bars entry by non-citizens with a list of conditions including active tb, infectious syphilis, gonorrhea, infectious leprosy, and other conditions designated by presidential executive order such as plague or hemorrhagic fevers (cdc ). indeed, resistant tb has been a frequent illustration of the vector perspective in operation. multi-drug resistant tuberculosis is transmissible, difficult to treat, and poses a significant public health problem. its presence can be identified by methods such as testing of sputum samples. when patients are identified with resistant disease, public health authorities may seek to compel treatment or isolation, especially for patients judged unreliable about compliance with treatment. to avoid transmission, public health authorities have proposed isolating patients who have been identified as infected. because a course of treatment for tb may take many months-and failure to complete the full course may increase the likelihood of resistant disease-isolation may continue for long periods of time. controversially, during the early s public health officials in new york isolated over patients identified with mdr tb on roosevelt island for treatment out of concern that they would be noncompliant with treatment, even when they were unlikely to infect others (coker ). perhaps one of the most highly publicized events involving a single patient was the odyssey of andrew speaker, a lawyer believed to have extremely resistant tb who eluded authorities as he took airplane flights around the globe in the effort to return home. speaker's journey created an international scare and calls for travel restrictions. speaker's lawsuit against the centers for disease control and prevention, alleging that it violated the federal privacy act by revealing more information than was necessary for public health purposes, was ultimately resolved on summary judgment for the government, largely because the challenged disclosures had been made by speaker himself.
who travel guidelines provide that individuals known to be infected with resistant tb should not travel until sputum analysis confirms that they are not at risk of disease transmission (who ). evidence is limited, however, about the need for this policy. the most recent literature review suggests that risks of transmission during air travel are very low and that there is need for ongoing international collaboration in contact tracing and risk assessment (kotila et al. ). blanket travel bans, by encouraging actions that elude detection, may reduce, rather than enhance, this needed collaboration. more subtle policies tailored to need would be preferable, but the fears generated by a focus on vectors may make them unlikely to be developed or implemented. at best, therefore, the vector perspective is incomplete. focus on it may be counter-productive, if people hide or try to avoid education. it may encourage expenditures on efforts to identify suspected vectors rather than on evidence based efforts to identify risks of transmission and effective modes of prevention. and, of course, it ignores the plight of victims, to which we now turn. people with resistant infections are not only vectors; they are also victims of disease and have ethical claims to be treated as such (battin et al. ). indeed, it is likely that vectors will themselves be victims, unless they are carriers of the disease in a manner that does not affect them symptomatically. concern for victims may take the form of seeking to ease the burdens of constraints such as isolation. a good illustration of the victim perspective in operation is the who publication of a pamphlet on "psychological first aid" for those affected by ebola. the pamphlet is designed to provide comfort to and meet the basic needs of people infected by ebola and those who are close to them, while maintaining the safety of aid workers (who ). the recommendations rest on the importance of respect for the dignity of those who are suffering amidst disease outbreaks. the pamphlet also emphasizes the importance of respect for rights such as confidentiality and nondiscrimination. the pamphlet is provisional and designed to be updated as knowledge of safety measures improves; this provisional nature is a recognition of the importance of ongoing development of information about how victims' needs can be safely met. despite the concern for victims, foremost in the pamphlet's recommendations is safety, both of aid workers and of disease victims, so that no one is further harmed, including victims themselves and others close to them. overall, the pamphlet attempts to counter impulses to come to the aid of victims that may increase transmission risks, such as unprotected contact with those who are ill. but unexplored tensions remain in the document's recommendations. for example: "respect privacy and keep personal details of the person's story confidential, if this is appropriate" (p. ). nowhere does the document discuss when confidentiality is appropriate or what personal details may be revealed and in what ways. its manifest and important concern for victims is counterbalanced by safety, but without discussion of how these goals might be implemented together or reasonably reconciled in practice. the who's most recently-adopted strategy for dealing with health emergencies, the health emergencies programme, provides another illustration of concern for victims that may lie in unexplored tension with other values.
the programme urges cooperative methods to meet the immediate health needs of threatened populations through humanitarian assistance while also addressing causes of vulnerability and recovery (who ). it is a coordinated strategy for emergency response that will move far beyond merely technical help; who describes it as a "profound change for who, adding operational capabilities to our traditional technical and normative roles" (who ). it aims to provide crisis help, such as the response to hurricane matthew in haiti or to areas affected by the zika virus. it requires a major increase in funding devoted to core emergency efforts. core funding will come from assessed contributions, flexible contributions that the director-general has discretion to allocate, and earmarked voluntary contributions. but it is clearly under-funded; who reported a % funding gap as of october , just to meet the program's core capabilities. moreover, who also reported that it has raised less than a third of the funding needed for the who contingency fund for emergencies, a fund deployed for the initial months of an emergency before donor funding becomes available (who ). the health emergencies programme reflects reactions to the humanitarian disaster of the ebola epidemic and criticisms of the who level of response. the who - budget reflects this response as well (who ). that budget "demonstrates three strategic shifts" (who , p. ). the first is application of the lessons from ebola, especially the need to strengthen core capacities in preparedness, surveillance and response. the second strategic shift is a focus on universal health coverage, which includes enhancing contributions to maternal and child health, speeding progress towards elimination of malaria, and enhancing work on noncommunicable diseases, among other worthy goals. the final strategic shift is towards "emerging threats and priorities"; illustrations of these are "antimicrobial resistance, hepatitis, ageing, and dementia." these are not an obvious group to characterize as "emerging," to the extent that this suggests a developing threat that has not yet become urgent but that may be expected to become so in the near future. nor are they an obvious group to link together in the same category. this mixture of budgetary priorities suggests responsiveness to issues raised through consultation with who member states, rather than proactive planning. who's specific efforts directed to resistance can be characterized as primarily coordination. the who website devoted to resistance promotes information sharing and lists research questions and potential funding agencies (who a). who expresses no judgment about either funding agencies or which of the nearly listed research questions-ranging from research on resistance in day care centers to the biological price that microorganisms pay for resistance-might be fruitfully addressed first or how they might be interconnected. concern for victims is surely part of a response to a humanitarian emergency. responsiveness to urgent health needs is an important goal. including antimicrobial resistance in a list of "emerging" issues is at least recognition of the problem. but the who response to ebola and the who budget overall can be characterized as less than fully set into context in a reasoned way. thus, we contend, neither vector nor victim perspectives are adequate.
one risks falling prey to fear while the other risks responses that are well-intentioned but that may be difficult to meet or compete with other values in ways that remain underexplored. these perspectives are inevitable and important, but they are each incomplete. in our judgment, a primary difficulty with both vector and victim perspectives is that neither is set into context or seen as interconnected. this section suggests how fairness considerations may help in focusing attention on the most pressing questions to ask about antimicrobial resistance and the directions for surveillance and information use to take. fairness entered the philosophical lexicon in discussions about justice as procedural, most famously in john rawls's "justice as fairness" (rawls ). as rawls initially conceptualized his view, it involved a decision procedure for selecting basic principles of justice in which people were unable to gain unfair advantage. as the debates about rawlsian justice unfolded, a fundamental issue was whether people with radically different capacities and views of the good life could be expected to accept the results of the decision procedure as formulated. thus critics raised the concern that people with disabilities might be left out of the decision procedure as "non-contributors" to the practice of justice (nussbaum ; stark ). critics also pressed the argument that people with radically illiberal conceptions of the good would ultimately destabilize the practice of justice in a rawlsian ideal society (e.g. williams a, b). rawls ultimately accepted the point that proceduralism could not yield a universal theory of justice, pulling back his view to the claim that it only represented a vision of justice for a certain kind of liberal society (rawls ). but fairness also entered the debates about justice in a more substantive way, especially in bioethics. norman daniels ( ), for example, expanded a rawlsian approach to consider justice in health care. the british idea of a "fair innings," in which the opportunities of each to reasonable health over a normal life span are prioritized, was raised particularly with respect to the distribution of health care resources to the elderly (bognar ; farrant ; harris ; williams b). like the metaphor of a level playing field, the fair innings argument comes from sports (francis ). it reflects the idea of everyone having a chance to participate in a game that at least gives them a reasonable opportunity for success. there are four aspects of such opportunity: who plays, and whether the rules are constructed to give each a reasonable opportunity to win, are two. also important is the balance among opportunities to succeed, so that there aren't consistent tilts in one direction or another, as might be characterized by the further metaphor of leveling the playing field. finally, attention to the interaction between advantages and disadvantages matters, so that participants are encouraged to continue playing the game rather than dropping out. our invocation of fairness as a concept is rooted in the judgment that antimicrobial resistance-or other pressing global public health problems, for that matter-exemplify multiple aspects of non-ideal and partial compliance circumstances. natural circumstances are less than forgiving; new health threats emerge on a regular basis. antimicrobial resistance is an ongoing natural challenge to effective therapy for deadly diseases.
social circumstances are imperfect, too: overcrowding, poor sanitation, straitened resources for public health and health care, and cultural practices that increase potential for disease transmission all play roles in the development of resistance. alexander fleming, the discoverer of penicillin, warned that the development of resistance was likely, but his warning appears not to have been well heard. finally, efforts to address antimicrobial resistance are riven with noncompliance: over-prescribing by physicians, over-use of antimicrobials in agriculture, individual failures to take medications as prescribed, and concealment of disease out of fear of discovery and persecution. because the conditions that give rise to these problems of non-compliance may seem urgent-people seeking antimicrobials are in pain or ill, perhaps gravely; people in hiding from health authorities may fear stigmatization or death-they raise in particularly poignant form questions of the extent of obligations under circumstances in which others are not doing what arguably is their fair share (e.g. stemplowska ; murphy ). fairness as an ethical concept is especially suited to such imperfect circumstances. it directs attention to how improvements are distributed. distributions can be more or less fair, if they distribute benefits and burdens in an increasingly inclusive manner (e.g. francis and francis a, b). fairness thus construed is at the heart of perhaps the most influential set of recommendations for ethical pandemic planning, the canadian stand on guard for thee (toronto joint centre ). although much of the discussion of fairness in this document emphasizes inclusive procedures, so that engagement may lead to acceptance of choices as fairly made (e.g., p. ), the recommendations also contain substantive dimensions. these include fair resource plans for those who fall ill while providing necessary services during a pandemic (p. ) and assurance that people who are affected by choices are reciprocally supported in a way that they do not suffer "unfair economic penalties" (p. ). here, the links between fairness and reciprocity are explicit. these four aspects of fairness-who is included in the play, what opportunities they have, how these opportunities are balanced, and whether there are elements of reciprocity-can be used to set vector and victim perspectives into context in addressing the gathering and use of information about antimicrobial resistance. over-emphasizing vectors threatens their opportunities and even possible participation. overemphasizing victims tilts the field unidirectionally, understandably directing resources to immediate need but without consideration of longer-term consequences. reciprocity may be the most important of all, creating commitment to workable strategies for addressing resistance when there are difficult choices to be made. fear, understood as a threat to personal health, is often an ally in persuading people to seek preventive care and to change lifestyles, or in persuading policy makers to create incentives or penalties for decisions that contribute to poor health. but great fear can also lead to immobility. the real threat posed by the rise of antimicrobial resistance does not seem to be easily addressed by a successful alternative in the view of victims or policy makers. medical personnel are fearful of not responding to the demands of patients for immediate reductions in pain or suffering at relatively low costs.
the scale of the threat posed by rapid rise of antimicrobial resistance may be daunting to policy makers especially as funders of research. the cost of developing ever-new generations of antibiotics seems to suggest a great series of short-term solutions especially as pharmaceutical companies respond to incentives to generate near-term profits. in this context, it is worth recalling how the development of the first antimicrobials contributed to more generally shared benefits: when penicillin became known to people as a wonderful drug it actually helped to speed the adoption of the national health service in britain. the popular expectation was health care for all facilitated with the rise of a new generation of low cost wonder drugs and reinforced by low cost vaccinations (webster ) . but some of the advantages were short-lived, as the costs of pharmaceuticals grew exponentially and inadequate attention was paid to the risks of overprescribing-once again a cautionary reminder of the importance of emphasizing balance rather than one particular perspective such as victimhood. if a promise of sustaining production at lower costs of ever-new generations of antimicrobials from how information is used can offer benefits more widely, then it becomes easier to impose tougher regulations on antimicrobial use that may to some extent stave off the development of resistance. this approach in terms of fairness directs attention not only to vectors and to victims seen as separate entities. it also directs attention to how they are often, and unpredictably, twinned-given the epidemiology of resistance spread, it is likely to begin within interlaced communities where vectors are also victims. but it also directs our attention to these issues set in distributive context, raising questions such as these: who is most likely to be affected by resistance? who will suffer the most severe consequences from resistance? who is most likely to be disadvantaged by information gained to counter resistance? who will suffer the most severe disadvantage? who will benefit from efforts to counter resistance? how can these benefits be spread more inclusively? and, how are the benefits and burdens of addressing resistance intertwined? are some primarily beneficiaries, while others are primarily burdened? are there ways to increase reciprocal linkages in these benefits and burdens, so that efforts to counter resistance are accepted and supported more widely? these are the kinds of questions that need to guide how surveillance is deployed in the effort to counter resistance, not vague generalities about the importance of addressing health infrastructure or bromides about the need to increase resources. open access this chapter is licensed under the terms of the creative commons attribution . international license (http://creativecommons.org/licenses/by/ . /), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the creative commons licence and indicate if changes were made. the images or other third party material in this chapter are included in the chapter's creative commons licence, unless indicated otherwise in a credit line to the material. if material is not included in the chapter's creative commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
georgia: suit against disease centers is revived. the new york times antimicrobial resistance in the next years, humankind, bugs and drugs: a visionary approach the patient as victim and vector: ethics and infectious disease antibiotics in the clinical pipeline at the end of immigrant and refugee health just coercion? detention of nonadherent tuberculosis patients just health care the fair innings argument and increasing life spans promoting equality in and through the paralympics hiv treatment as prevention: not an argument for continuing criminalization of hiv transmission informatics and public health surveillance communicable diseases as health risks at mass gatherings other than hajj: what is the evidence? the value of life antimicrobial drug use in food-producing animals and associated human health risks: what, and how strong, is the evidence? antibiotic prescribing in patients with acute rhinosinusitis is not in agreement with european recommendations public funding for research on antibacterial resistance in the jpiamr countries, the european commission, and related european union agencies: a systematic observational analysis review: systematic review on tuberculosis transmission on aircraft and update of the european centre for disease prevention and control risk assessment guidelines for tuberculosis transmitted on aircraft (ragida-tb) what are the non-biomedical reasons which make family doctors over-prescribe antibiotics for upper respiratory tract infection in a mixed private/public asian setting saudi ministry of health requirements and health matters moral demands in nonideal theory frontiers of justice: disability, nationality, species membership political liberalism how to include the severely disabled in the contractarian theory of justice doing more than one's fair share the global database on hiv-specific travel & residence restrictions. . regulations on entry, stay and residence for plhiv a scientific roadmap for antibiotic discovery treatment of upper respiratory tract infections-a comparative study of dispensing and non-dispensing doctors sexually transmitted infections: challenges ahead the national health service: a political history international health regulations who's new health emergencies programme how do community practitioners decide whether to prescribe antibiotics for acute respiratory infections intergenerational equity: an exploration of the 'fair innings' argument the beginning was the deed: realism and moralism in political argument the impact of removing the immigration ban on hiv-infected persons . international travel and health: tuberculosis (tb) letter: injured libyan combatant patients: both vectors and victims of multiresistance bacteria? antimicrobial resistance in saudi arabia: an urgent call for an immediate action key: cord- -z ianbw authors: celliers, marlie; hattingh, marie title: a systematic review on fake news themes reported in literature date: - - journal: responsible design, implementation and use of information and communication technology doi: . / - - - - _ sha: doc_id: cord_uid: z ianbw in this systematic literature review, a study of the factors involved in the spreading of fake news, have been provided. in this review, the root causes of the spreading of fake news are identified to reduce the encouraging of such false information. to combat the spreading of fake news on social media, the reasons behind the spreading of fake news must first be identified. 
therefore, this literature review takes an early initiative to identify the possible reasons behind the spreading of fake news. the purpose of this literature review is to identify why individuals tend to share false information and to possibly help in detecting fake news before it spreads. the increase in use of social media exposes users to misleading information, satire and fake advertisements [ ] . fake news or misinformation is defined as fabricated information presented as the truth [ ] . it is the publication of known false information and sharing it amongst individuals [ ] . it is the intentional publishing of misleading information and can be verified as false through fact-checking [ ] . social media platforms allow individuals to fast share information with only a click of a single share button [ ] . in previous studies the effect of the spreading and exposure to misleading information have been investigated [ ] . some studies determined that everyone has problems with identifying fake news, not just users of a certain age, gender or education [ ] . the literacy and education of fake news is essential in the combating of the spreading of false information [ ] . this review identify and discuss the factors involved in the sharing and spreading of fake news. the outcome of this review should be to equip users with the abilities to detect and recognise misinformation and also to cultivate a desire to stop the spreading of false information [ ] . literature background the internet is mainly driven by advertising [ ] . websites with sensational headlines are very popular, which leads to advertising companies capitalising on the high traffic to the site [ ] . it was subsequently discovered that the creators of fake news websites and information could make money through automated advertising that rewards high traffic to their websites [ ] . the question remains how misinformation would then influence the public. the spreading of misinformation can cause confusion and unnecessary stress among the public [ ] . fake news that is purposely created to mislead and to cause harm to the public is referred to as digital disinformation [ ] . disinformation has the potential to cause issues, within minutes, for millions of people [ ] . disinformation has been known to disrupt election processes, create unease, disputes and hostility among the public [ ] . these days, the internet have become a vital part of our daily lives [ ] . traditional methods of acquiring information have nearly vanished to pave the way for social media platforms [ ] . it was reported in that facebook was the largest social media platform, hosting more . million users world-wide [ ] . the role of facebook in the spreading of fake news possibly has the biggest impact from all the social media platforms [ ] . it was reported that % of worldwide users get their news from facebook [ ] . % of facebook users have indicated that they have shared false information, either knowingly or not [ ] . the spreading of fake news is fuelled by social media platforms and it is happening at an alarming pace [ ] . in this systematic literature review, a qualitative methodology was followed. a thematic approach was implemented to determine the factors and sub-factors that contribute to the sharing and spreading of fake news. the study employed the following search terms: ("fake news" (near/ ) "social media") and (defin* or factors or tools) ("misinformation" (near/ ) "social media") and (defin* or factors or tools). 
in this literature review, only published journal articles between and were considered. this review is not specific to certain sectors, i.e. the health sector or the tourism sector, but rather considers all elements that contribute to individuals sharing false information. studies that are not in english have been excluded from this review. only studies that are related to the research question have been taken into account. this article does not discuss the detection of fake news but rather the reasons behind the spreading of fake news. the analysis consisted of four phases: identification phase; screening phase; eligibility phase and inclusion phase. when conducting this literature review, the selection of articles was based on three main criteria: firstly, to search for and select articles containing the search terms identified above; secondly, selection based on the title and abstract of the article; and finally, selection based on the content of the article. in the identification phase of this literature review, science direct and emerald insight were selected to perform the literature review. science direct offered a total of journal articles matching the search terms. emerald insight generated journal articles that matched the search terms. continuing with the identification phase, the various articles were then combined and the duplicates were removed. in the screening phase of the source selection, all the article titles were carefully screened and a few articles were excluded as unconvincing. the abstracts of the remaining articles were then assessed for eligibility and some articles were excluded based on the possible content of the article. the rest of the articles were further thoroughly examined to determine if they were valuable and valid to this research paper. upon further evaluation, these final remaining articles were further studied to make a final source selection. in this paper, possible reasons for and factors contributing to the sharing and spreading of false information are discussed. the reasons are categorized under various factors highlighted in the journal articles used to answer the research question. these factors include: social factors, cognitive factors, political factors, financial factors and malicious factors. while conducting the literature review, articles highlighted the social factors; articles discussed the role that cognitive factors have in contributing to the sharing and spreading of fake news; articles highlighted the role of political factors; nine articles discussed how financial gain could convince a social media user to spread false information and articles debated malicious factors and the effect that malicious factors have on the sharing and spreading of false information. figure gives a breakdown of all articles containing references to all the subcategories listed above. it was clearly evident that the two single sub-categories of social comparison and hate propaganda were the most debated, with the sub-factor knowledge and education close behind. a high percentage of the articles, . % ( of ), refer to the effects of social comparison on the spreading of false information; followed by . % ( of ) of the articles referencing hate propaganda. knowledge and education was measured at . % ( of ). furthermore, it was concluded that the majority of the articles highlighted a combination of the social factors, i.e. conformity and peer influence, social comparison and satire and humorous fakes, which measures at . % ( of ).
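the per-theme percentages reported here, and in the breakdown that continues below, are straightforward proportions of article counts. the following sketch shows how such a tally converts into the reported figures; the counts and the total are assumed placeholders (the review's exact numbers were lost in extraction), and an article may be counted under more than one theme.

```python
# hypothetical article counts per theme; placeholders chosen only to demonstrate
# the calculation, not the review's actual figures. counts can sum to more than
# the total because a single article may cover several themes.
article_counts = {
    "social factors": 19,
    "cognitive factors": 15,
    "political factors": 12,
    "financial factors": 9,
    "malicious factors": 14,
}
total_articles = 22  # assumed total of included studies (placeholder)

for theme, count in article_counts.items():
    share = count / total_articles * 100
    print(f"{theme}: {count} of {total_articles} articles ({share:.1f}%)")
```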
where the combination of the cognitive factors e.g. knowledge and education and ignorance measured at . % ( of ) . political factors and sub-factors e.g. political clickbaits and political bots/cyborgs, were discussed in . % ( of ) of the articles. in addition, financial factors e.g. advertising and financial clickbaits were referenced in . % ( of ) of the journal articles. and lastly, malicious factors e.g. malicious bots and cyborgs, hate propaganda and malicious clickbaits measured at . % ( of ). fake news stories are being promoted on social media platforms to deceive the public for ideological gain [ ] . in various articles it was stated that social media users are more likely to seek information from people who are more like-minded or congruent with their own opinions and attitudes [ , ] . conformity and peer influence. it is the need of an individual to match his or her behaviour to a specific social group [ ] . the desire that social media users have to enhance themselves on social media platforms could blur the lines between real information and false information [ ] . consequently, social media users will share information to gain social approval and to build their image [ ] . recent studies have shown that certain false information can be strengthened if it belongs to the individuals in the same social environment [ ] . the real power lies with those certain individuals who are more vocal or influential [ ] . the need for social media users to endorse information or a message can be driven by the perception the social media user has about the messenger [ ] . these messengers or "influencers" can be anyone ranging from celebrities to companies [ ] . studies show that messages on social media platforms, like twitter, gain amplification because the message or information is associated with certain users or influencers [ ] . information exchanging depends on the ratings or the influential users associated with the information [ ] . social media users' influence among peers enhance the impact and spreading of all types of information [ ] . these social media influencers have the ability to rapidly spread information to numerous social media users [ ] . the level of influence these influencers have, can amplify the impact of the information [ ] . the lack of related information in online communities could lead to individuals sharing the information based on the opinions and behaviours of others [ ] . some studies show that social media users will seek out or share information that reaffirms their beliefs or attitudes [ ] . social comparison. the whole driving force of the social media sphere is to post and share information [ ] . social comparison can be defined as certain members within the same social environment who share the same beliefs and opinions [ ] . when they are unable to evaluate certain information on their own, they adapt to compare themselves to other members, within the same environment, with the same beliefs and opinions [ ] . the nature of social media allows social media users to spread information in realtime [ ] . social media users generate interactions on social media platforms to gain "followers" and to get "likes" which lead to an increasing amount of fake news websites and accounts [ ] . 
one of the biggest problems faced in the fake news dilemma, is that social media users' newsfeed on social media platforms, like facebook, will generally be populated with the user's likes and beliefs, providing a breeding ground for users with like-minded beliefs to spread false information among each other [ ] . social media users like to pursue information from other members in their social media environment whose beliefs and opinions are most compatible with their own [ ] . social media algorithms designed to make suggestions or filter information based on the social media users' preferences [ ] . the "like" button on social media platforms, e.g. facebook, becomes a measuring tool for the quality of information, which could make social media users more willing to share the information if the information has received multiple likes [ ] . social media users' belief in certain information depends on the number of postings or "re-tweets" by other social media users who are involved in their social media sphere [ ] . one article mentioned that the false news spreading process can be related to the patterns of the distribution of news among social media users [ ] . the more a certain piece of information is shared and passed along the more power it gains [ ] . this "endorsing" behaviour results in the spreading of misleading information [ ] . it is also known as the "herding" behaviour and is common among social media where individuals review and comment on certain items [ ] . it is also referred to as the "bandwagon effect" where individuals blindly concentrate on certain information based on perceived trends [ ] . the only thing that matters is that the information falls in line with what the social media user wants to hear and believe [ ] . many studies also refer to it as the "filter bubble effect" where social media users use social media platforms to suggest or convince other social media users of their cause [ ] . communities form as a result of these filter bubbles where social media users cut themselves off from any other individual that might not share the same beliefs or opinions [ ] . it was found that social media users tend to read news or information that are ideologically similar to their own ideologies [ ] . satire and humorous fakes. some of the content on social media are designed to amuse users and are made to deceive people into thinking that it is real news [ ] . satire is referred to as criticising or mocking ideas or opinions of people in an entertaining or comical way [ , ] . these satire articles consist of jokes or forms of sarcasm that can be written by everyday social media users [ ] . most satire articles are designed to mislead and instruct certain individuals [ ] . some social media users will be convinced that it is true information and will thus share the information [ ] . the study of cognition is the ability of an individual to make sense of certain topics or information by executing a process of reasoning and understanding [ ] . it is the ability of an individual to understand thought and execute valid reasoning and understanding of concept [ ] . with an increasing amount of information being shared across social media platforms it can be challenging for social media users to determine which of the information is closest to the original source [ ] . the issue of individuals not having the ability to distinguish between real and fake news have been raised in many articles [ ] . 
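the "like"-driven amplification and bandwagon effect described above can be illustrated with a deliberately naive rich-get-richer simulation. the sketch below is an assumption-laden toy model for intuition only; it does not represent the ranking algorithm of any real platform or any method used in the studies reviewed here.

```python
import random

random.seed(1)

# six items start with one like each; half are marked accurate, half not.
items = [{"id": i, "likes": 1, "accurate": bool(i % 2)} for i in range(6)]

# preferential attachment: on each impression, an item is liked with probability
# proportional to the likes it already has, so early popularity compounds.
for _ in range(500):
    total = sum(item["likes"] for item in items)
    r = random.uniform(0, total)
    for item in items:
        r -= item["likes"]
        if r <= 0:
            item["likes"] += 1
            break

for item in sorted(items, key=lambda it: -it["likes"]):
    print(item)   # a few items end up dominating, regardless of the accuracy flag
```

in this toy run, whichever items happen to attract likes early accumulate most of the later likes, which is the compounding dynamic the reviewed articles describe as herding or the bandwagon effect.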
users of social media tend to not investigate the information they are reading or sharing [ ] . this can therefore lead to the rapid sharing and spreading of any unchecked information across social media platforms [ ] . the trustworthiness of a certain article is based on how successful the exchange of the article is [ ] . the more successful the exchange, the more likely social media users will share the information [ ] . social media users make supposedly reasonable justifications to determine the authenticity of the information provided [ ] . people creating fake news websites and writing false information exploit the nonintellectual characteristics of some people [ ] . for social media users to determine if the information they received is true or false, expert judgement of content is needed [ ] . in a recent study, it was found that many social media users judge the credibility of certain information based on detail and presentation, rather than the source [ ] . some individuals determine the trustworthiness of information provided to them through social media based on how much detail and content it contains [ ] . it is believed that people are unable to construe information when the information given to them conflicts with their existing knowledge base [ ] . most social media users lack the related information to make a thorough evaluation of the particular news source [ ] . for many years, companies and people have been creating fake news articles to capitalize on the non-intellectual characteristics of certain individuals [ ] . a driving force of the spreading of false information is that social media users undiscerningly forward false information [ ] . a reason for the spreading of false information in many cases is inattentive individuals who do not realise that some websites mimic real websites [ ] . these false websites are designed to look like the real website but in essence only contain false information. social media users tend to share information containing a provocative headline, without investigating the facts and sources [ ] . the absence of fact-checking by social media users on social media platforms increases the spreading of false information [ ] . social media users tend to share information without verifying the source or the reliability of the content [ ] . information found on social media platforms, like twitter, is sometimes not even read before being spread among users, without any investigation into the source of the information [ ] . as mentioned earlier in sect. . , the bandwagon effect causes individuals to share information without making a value judgement [ ] . the spreading of false political information has increased due to the emergence of streamlined media environments [ ] . there has been a considerable amount of research done on the influence of fake news on the political environment [ , ] . by creating false political statements, voters can be convinced or persuaded to change their opinions [ ] . critics reported that in the national election in the uk (regarding the nation's withdrawal from the eu) and the presidential election in the us, false information was shared on social media platforms, influencing the outcome of the results [ , ] . social media platforms, like facebook, came under fire in the us presidential election, when fake news stories from unchecked sources were spread among many users [ ] . the spreading of such fake news has the sole purpose of changing the public's opinion [ , ] .
various techniques can be used to change the public's opinion. these techniques include repeatedly retweeting or sharing messages, often with the use of bots or cyborgs [ ] . they also include misleading hyperlinks that lure the social media user to more false information [ ] .
political clickbaits. clickbaits are defined as sources that provide information but use misleading and sensational headlines to attract individuals [ ] . in the us presidential elections it was apparent that clickbaits were used to shape people's opinions [ ] . in a recent study it was found that % ( of ) false news stories were shared on social media platforms, like twitter, with links to non-credible news websites [ ] . webpages are purposely created to resemble real webpages for political gain [ ] . news sources with urls similar to the real website url have been known to spread political fake news pieces, which can influence the opinion of the public [ ] .
political bots/cyborgs. a social media user's online content is managed by algorithms that reflect his or her prior choices [ ] . algorithms designed to fabricate reports are one of the main causes of the spreading of false information [ ] . in recent years, the rapid growth of fake news has led to the belief that cyborgs and bots are used to increase the spreading of misinformation on social media [ ] . in the us election, social bots were used to lead social media users to fake news websites in order to influence their opinions of the candidates [ ] . hundreds of bots were created in the us presidential elections to lure people to websites with false information [ ] . these social bots can spread information through social media platforms and participate in online social communities [ ] .
one of the biggest problems with fake news is that it allows the writers to receive monetary incentives [ ] . misleading information and stories are promoted on social media platforms to deceive social media users for financial gain [ , ] . one of the main goals of fake news accounts is to generate traffic to a specific website [ ] . articles with attractive headlines lure social media users into sharing false information thousands of times [ ] . many companies use social media as a platform to advertise and promote their products [ ] .
advertising. people earn money through clicks and views [ ] . the more times a link is clicked, the more advertising money is generated [ ] . every click corresponds to advertising revenue for the content creator [ ] . the more traffic companies or social media users drive to their fake news page, the more profit can be earned through advertising [ ] . the only way to deny the content creator this financial gain is inaction, that is, not clicking or sharing at all [ ] . most advertising companies are more interested in how many social media users will be exposed to their product than in the possible false information displayed on the page where their advertisement appears [ ] . websites today are not restricted in the content they display to the public, as long as they attract users [ ] . this explains how false information is monetized, providing monetary value for writers who display sensational false information [ ] .
financial clickbaits. clickbaits are used to lure individuals to other websites or articles for financial gain [ ] . one of the main reasons for falsifying information is to earn money through clicks and views [ ] . writers focus on sensational headlines rather than truthful information [ ] . appeal rather than truthfulness drives information [ ] . 
these attractive headlines deceive individuals into sharing certain false information [ ] . clickbaits are purposely implemented to misguide or redirect social media users in order to increase the views and web traffic of certain websites for online advertising earnings [ ] . social media users end up spending only a short time on these websites [ ] . clickbaits have been identified as one of the main reasons behind the spreading of false information [ ] .
studies debating the trustworthiness of information and the veracity analytics of online content have increased recently due to the rise in fake news stories [ ] . social media has become a useful way for individuals to share information and opinions about various topics [ ] . unfortunately, many users share information with malicious intent [ ] . malicious users, also referred to as "trolls", often engage in online communication to manipulate other social media users and to spread rumours [ ] . malicious websites are specifically created for the spreading of fake news [ ] . malicious entities use false information to disrupt daily activities in, for example, the health-sector environment, the stock markets or even the opinions people have of certain products [ ] . some online fake news stories are purposely designed to target victims [ ] . websites, like reddit, have been known as platforms where users can be exposed to bullying [ ] . some individuals have been known to use social media platforms to cause confusion and fear among others [ ] .
malicious bots/cyborgs. malicious users, with the help of bots, target absent-minded individuals who do not check an article's facts or source before sharing it on social media [ ] . these ai-powered bots are designed to mimic human behaviour and characteristics, and are used to corrupt online conversations with unwanted and misleading advertisements [ ] . recent studies have found that social bots are being created to distribute malware and slander in order to damage an individual's beliefs and trust [ ] .
hate propaganda. many argue that the sharing of false information fuels vindictive behaviour among social media users [ ] . some fake news websites or pages are specifically designed to harm a certain user's reputation [ , ] . social media influencers can affect users' emotional and health outcomes [ ] . fake news creators specifically target users with false information [ ] . this false information is specifically designed to deceive and manipulate social media users [ ] . fake news stories like this intend to mislead the public and generate false beliefs [ ] . in some cases, hackers have been known to send out fake requests to social media users to gain access to their personal information [ ] . the spreading of hoaxes has also become a problem on social media; the goal of hoaxes is to manipulate the opinion of the public and to maximize public attention [ ] . social spammers have also become more prevalent over the last few years, with the goal of launching different kinds of attacks on social media users, for example spreading viruses or phishing [ ] . fake reviews have also been known to disrupt the online community by directing people away from a certain product or person [ ] . another method used by various malicious users is to purchase fake followers in order to spread harmful malware more swiftly [ ] .
malicious clickbaits. 
one article reported that employees in a certain company clicked on a link disguised as important information and ended up providing sensitive information to the perpetrators [ ] . malicious users intending to spread malware and phishing hide behind fake accounts to further increase their activities [ ] . clickbaits are in some cases designed to disrupt interactions or to lure individuals into arguing in disturbed online interactions or communications [ ] . these clickbaits have also been known to include malicious code as part of the webpage [ ] , which causes social media users to download malware onto their device once they select the link [ ] .
various articles were used to identify and study the factors and reasons involved in the sharing and spreading of misinformation on social media. the reasons retrieved for the spreading of false information were categorized into main factors and sub-factors. these factors included social factors, cognitive factors, political factors, financial factors and malicious factors. considering the rapidly expanding social media environment, it was found that social factors in particular have a very significant influence on the sharing of fake news on social media platforms. its sub-factors of conformity and peer influence, social comparison, and satire and humorous fakes strongly influence the decision to share false information. secondly, it was concluded that malicious factors such as hate propaganda also fuel the sharing of false information, with the possibility of financial gain or of doing harm. in addition, it was concluded from this review that knowledge and education play a very important role in the sharing of false information, as social media users sometimes lack the logic, reasoning and understanding needed to evaluate certain information. it was also evident that social media users may sometimes be ignorant and indifferent when sharing and spreading information. fact-checking resources are available, but they are not widely known and therefore often go unused. better knowledge and education will hopefully encourage social media users to be more aware of possibly unchecked information and its sources, and to stop forwarding false information. a better understanding of the motives behind the sharing of false information can potentially prepare social media users to be more vigilant when sharing information on social media.
the goal of this literature review was only to identify the factors that drive the spreading of fake news on social media platforms; it did not fully address the dilemma of combatting the sharing and spreading of false information. while this literature review sheds light on the motivations behind the spreading of false information, it does not highlight the ways in which one can detect false information. this suggests follow-up research or literature studies that use these factors in an attempt to detect and limit, or possibly eradicate, the spreading of false information across social media platforms. despite its limitations, this literature review helps to educate and provide insightful knowledge to social media users who share information across social media platforms. 
algorithmic detection of misinformation and disinformation: gricean perspectives
a survey on fake news and rumour detection techniques
an overview of online fake news: characterization, detection, and discussion
detecting fake news in social media networks
why students share misinformation on social media: motivation, gender, and study-level differences
the diffusion of misinformation on social media: temporal pattern, message, and source
behind the cues: a benchmarking study for fake news detection
why do people believe in fake news over the internet? an understanding from the perspective of existence of the habit of eating and drinking
this is fake news': investigating the role of conformity to other users' views when commenting on and spreading disinformation in social media
the current state of fake news: challenges and opportunities
identifying fake news and fake users on twitter
why do people share fake news? associations between the dark side of social media use and fake news sharing behavior
history of fake news
getting acquainted with social networks and apps: combating fake news on social media
what the fake? assessing the extent of networked political spamming and bots in the propagation of #fakenews on twitter
fake news: belief in post-truth real . keeping it real in digital media. disinformation destroys democracy
democracy, information, and libraries in a time of post-truth discourse
attention-based convolutional approach for misinformation identification from massive and noisy microblog posts
third person effects of fake news: fake news regulation and media literacy interventions
good news, bad news, and fake news: going beyond political literacy to democracy and libraries
a computational approach for examining the roots and spreading patterns of fake news: evolution tree analysis
effects of group arguments on rumor belief and transmission in online communities: an information cascade and group polarization perspective
beyond misinformation: understanding and coping with the 'post-truth' era
virtual zika transmission after the first u.s. case: who said what and how it spread on twitter
distance-based customer detection in fake follower markets
exploring users' motivations to participate in viral communication on social media
detecting rumors in social media: a survey
fake news judgement: the case of undergraduate students at notre dame university-louaize
the emergence and effects of fake social information: evidence from crowdfunding
fake news and its credibility evaluation by dynamic relational networks: a bottom up approach
understanding the majority opinion formation process in online environments: an exploratory approach to facebook
social media and the future of open debate: a user-oriented approach to facebook's filter bubble conundrum
fake news': incorrect, but hard to correct. the role of cognitive ability on the impact of false information on social impressions
social media hoaxes, political ideology, and the role of issue confidence
the dark side of news community forums: opinion manipulation trolls
social media? it's serious! understanding the dark side of social media
social media security and trustworthiness: overview and new direction
fake news and the willingness to share: a schemer schema and confirmatory bias perspective
key: cord- -yqu ykc title: early warning systems a state of the art analysis and future directions date: - - doi: . /j.envdev. . . 
according to the united nations' international strategy for disaster reduction (isdr), an effective early warning system integrates four elements (united nations, ):
1. risk knowledge: risk assessment provides essential information for setting priorities for mitigation and prevention strategies and for designing early warning systems.
2. monitoring and predicting: systems with monitoring and predicting capabilities provide timely estimates of the potential risk faced by communities, economies and the environment.
3. disseminating information: communication systems are needed to deliver warning messages to the potentially affected locations and to alert local and regional governmental agencies. the messages need to be reliable, synthetic and simple, so that they can be understood by authorities and the public.
4. response: coordination, good governance and appropriate action plans are key points in effective early warning. likewise, public awareness and education are critical aspects of disaster mitigation.
failure of any part of the system will imply failure of the whole system. for example, accurate warnings will have no impact if the population is not prepared, or if the alerts are received but not disseminated by the agencies receiving the messages. the basic idea behind early warning is that the earlier and more accurately we are able to predict the short- and long-term potential risks associated with natural and human-induced hazards, the more likely we will be able to manage and mitigate a disaster's impact on society, economies and the environment.
environmental hazards can be associated with ongoing and rapid/sudden-onset threats and with slow-onset (or "creeping") threats:
ongoing and rapid/sudden-onset: these include hazards such as accidental oil spills, nuclear plant failures and chemical plant accidents -- such as inadvertent chemical releases into the air or into rivers and water bodies -- as well as geological hazards and hydro-meteorological hazards (except droughts).
slow-onset (or "creeping"): incremental but long-term and cumulative environmental changes that usually receive little attention in their early phases but which, over time, may cause serious crises. these include issues such as deteriorating air and water quality, soil pollution, acid rain, climate change, desertification processes (including soil erosion and land degradation), drought, ecosystem change, deforestation and forest fragmentation, loss of biodiversity and habitats, nitrogen overloading, radioactive waste, coastal erosion, pressures on living marine resources, rapid and unplanned urban growth, environment and health issues (emerging and re-emerging infectious diseases and links to environmental change), land cover/land use changes, and environmental impacts of conflict, among others. such creeping changes are often left unaddressed as policy-makers choose, or need, to cope with immediate crises. eventually, neglected creeping changes may become urgent crises that are more costly to deal with. slow-onset threats can be classified into location-specific environmental threats, new emerging science and contemporary environmental threats (table ) .
rapid/sudden-onset and slow-onset events provide different amounts of available warning time. fig. shows warning times for climatic hazards. early warning systems may provide seconds of available warning time for earthquakes to months of warning for droughts, which are the quickest and slowest onset hazards, respectively. 
specifically, early warning systems provide tens of seconds of warning for earthquakes, days to hours for volcanic eruptions, and hours for tsunamis. tornado warnings provide minutes of lead-time for response. hurricane warning time varies from weeks to hours. the warning time increases to years or even decades of lead-time for slow-onset threats (such as el niño, global warming, etc., as shown in fig. ). drought warning time is in the range of months to weeks. slow-onset (or creeping) changes may cause serious problems for the environment and society if preventive measures are not taken when needed. such creeping environmental changes require effective early warning technologies because of the high potential impact of incremental, cumulative changes on society and the environment. the figure from golnaraghi ( ) shows the timeliness of early warning systems for hydro-meteorological hazards and the area of impact (indicated by the diameter of the plotted circles) for climatic hazards.
early warning systems help to reduce economic losses and mitigate the number of injuries or deaths from a disaster by providing information that allows individuals and communities to protect their lives and property. early warning information empowers people to take action prior to a disaster. if well integrated with risk assessment studies and with communication and action plans, early warning systems can lead to substantive benefits. effective early warning systems embrace the following aspects: risk analysis; monitoring and predicting the location and intensity of the disaster; communicating alerts to authorities and to those potentially affected; and responding to the disaster. an early warning system has to address all of these aspects. monitoring and predicting is only one part of the early warning process; this step provides the input information for the early warning process, which needs to be disseminated to those whose responsibility is to respond (fig. ) . early warnings may be disseminated to targeted users (local early warning applications) or broadly to communities, regions or the media (regional or global early warning applications). this information gives the possibility of taking action to initiate mitigation or security measures before a catastrophic event occurs. when monitoring and predicting systems are associated with communication systems and response plans, they are considered early warning systems (glantz, ) . commonly, however, early warning systems lack one or more elements. in fact, a review of existing early warning systems shows that in most cases communication systems and adequate response plans are missing.
to be effective, warnings must also be timely, so as to provide enough lead-time for responding; reliable, so that those responsible for responding to the warning will feel confident in taking action; and simple, so as to be understood. timeliness often conflicts with the desire to have reliable predictions, which become more accurate as more observations are collected from the monitoring system. thus, there is an inevitable trade-off between the amount of warning time available and the reliability of the predictions provided by the ews. an initial alert signal may be sent, to give the maximum amount of warning time, once a minimum level of prediction accuracy has been reached. the prediction accuracy for the location and size of the event will, however, continue to improve as more data are collected by the monitoring network of the ews. 
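the trade-off just described can be sketched in a few lines of code. the sketch below is purely illustrative and assumes, for simplicity, that the predicted severity is the running mean of incoming sensor readings and that the required accuracy is expressed as a maximum standard error; the function name, readings and thresholds are hypothetical.

```python
import math

def first_alert_time(observations, severity_threshold, max_std_error):
    """return the index of the first observation at which a preliminary alert can be
    issued: the running mean exceeds the severity threshold and the standard error
    of the mean has fallen below the required accuracy."""
    total, total_sq = 0.0, 0.0
    for n, x in enumerate(observations, start=1):
        total += x
        total_sq += x * x
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)
        std_error = math.sqrt(var / n) if n > 1 else float("inf")
        if mean >= severity_threshold and std_error <= max_std_error:
            return n, mean, std_error
    return None  # no alert could be issued with the required accuracy

# example: noisy severity readings streaming in from a monitoring network
readings = [5.1, 6.3, 5.8, 6.9, 6.4, 6.6, 6.5, 6.7]
print(first_alert_time(readings, severity_threshold=6.0, max_std_error=0.3))
```

raising max_std_error issues the alert earlier (more warning time) but with a less reliable prediction; lowering it delays the alert while the estimate stabilizes, which is the trade-off discussed above.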
it must be understood that every prediction, by its very nature, is associated with uncertainty. because of the uncertainties associated with the predicted parameters that characterize the incoming disaster, it is possible that a wrong decision will be made. two kinds of wrong decision may occur: a missed alarm (or false negative), when the mitigation action is not taken although it should have been, or a false alarm (or false positive), when the mitigation action is taken although it should not have been. finally, the message should communicate the level of uncertainty and the expected cost of taking action, but it should also be stated in simple language so as to be understood by those who receive it. most often, there is a communication gap between ew specialists, who use technical and engineering language, and ews users, who are generally outside the scientific community. to avoid this, early warnings need to be reported concisely, in layman's terms and without scientific jargon.
an effective early warning system needs an effective communication system. early warning communication systems have two main components (second international conference on early warning (ewcii), ): communication infrastructure hardware that must be reliable and robust, especially during the disaster; and appropriate and effective interactions among the main actors of the early warning process, such as the scientific community, stakeholders, decision-makers, the public and the media. redundancy of communication systems is essential for disaster management, while emergency power supplies and back-up systems are critical in order to avoid the collapse of communication systems after disasters occur. in addition, to ensure that communication systems operate reliably and effectively during and after a disaster, and to avoid network congestion, frequencies and channels must be reserved and dedicated to disaster relief operations. many communication tools are currently available for warning dissemination, such as short message service (sms) (cellular phone text messaging), e-mail, radio, tv and web services. information and communication technology (ict) is a key element in early warning and plays an important role in disaster communication and in disseminating information to organizations in charge of responding to warnings and to the public during and after a disaster (tubtiang, ) .
today, the decentralization of information and data through the world wide web makes it possible for millions of people worldwide to have easy, instantaneous access to a vast amount of diverse online information. this powerful communication medium has spread rapidly to interconnect our world, enabling near-real-time communication and data exchanges worldwide. according to the internet world stats database, as of december , global documented internet usage was . billion people. thus, the internet has become an important medium for accessing and delivering information worldwide in a very timely fashion. in addition, remote sensing satellites now provide a continuous stream of data. they are capable of rapidly and effectively detecting hazards such as transboundary air pollutants, wildfires, deforestation, changes in water levels, and other natural hazards. 
with rapid advances in data collection, analysis, visualization and dissemination -- including technologies such as remote sensing, geographical information systems (gis), web mapping, sensor webs, telecommunications and ever-growing internet connectivity -- it is now feasible to deliver relevant information on a regular basis to a worldwide audience relatively inexpensively. in recent years, commercial companies such as google, yahoo and microsoft have started incorporating maps and satellite imagery into their products and services, delivering compelling visual images and providing easy tools that everyone can use to add to their geographic knowledge.
ews: decision making procedure based on cost-benefit analysis. to improve the performance of an ews, a performance-based decision making procedure needs to be based on the expected consequences of taking action, in terms of the probabilities of a false and of a missed alarm. an innovative approach sets the threshold based on the acceptable probability of false (missed) alarms, derived from a cost-benefit analysis. consider the case of an ews decision making strategy based on raising the alarm if a critical severity level, a, is predicted to be exceeded at a site. the decision of whether or not to activate the alarm is based on the predicted severity of the event. a decision model that takes into account the uncertainty of the prediction and the consequences of taking action will be capable of controlling and reducing the incidence of false and missed alerts; the proposed decision making procedure intends to fill this gap. the ews provides the user with a real-time prediction of the severity of the event, Ŝ(t), and of its error, e_tot(t). during the course of the event, the increase in available data improves prediction accuracy, and the prediction and its uncertainty are updated as more data come in. the actual severity of the event, s, is unknown and may be defined by adding the prediction error to the predicted value, Ŝ. the potential probability of a false (missed) alarm is given by the probability of the severity being less (greater) than the critical threshold; it becomes an actual probability of a false (missed) alarm if the alarm is (not) raised:
p_fa(t) = prob[s < a],  p_ma(t) = prob[s >= a].
referring to the principle of maximum entropy (jaynes, ) , the prediction error is modelled by a gaussian distribution, representing the most uninformative distribution possible given the lack of information. hence, at time t, the actual severity of the event, s, may be modelled with a gaussian distribution having mean equal to the prediction Ŝ(t) and uncertainty equal to σ_tot(t), the standard deviation of the prediction error e_tot(t). eqs. ( ) and ( ) may then be written as
p_fa(t) = F[(a - Ŝ(t)) / σ_tot(t)],  p_ma(t) = 1 - F[(a - Ŝ(t)) / σ_tot(t)],
where F represents the gaussian cumulative distribution function. the tolerable level at which mitigation action should be taken can be determined from a cost-benefit analysis by minimizing the expected cost of taking action: the alarm is raised when p_fa(t) does not exceed the tolerable false alarm probability
α = c_save / (c_save + c_fa),  with β = c_fa / (c_save + c_fa) the corresponding tolerable missed alarm probability,
where c_save are the savings due to mitigation actions and c_fa is the cost of a false alert. note that the tolerable levels α and β sum to one, which directly exhibits the trade-off between the threshold probabilities that are tolerable for false and missed alarms. the methodology offers an effective approach for decision making under uncertainty, focusing on user requirements in terms of reliability and cost of action. information is now available in a near-real-time mode from a variety of sources at global and local levels. 
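returning to the decision making procedure above, a minimal sketch of the rule is given below. it assumes the reconstructed form of the equations: the prediction error is gaussian, so the probability of a false alarm is the gaussian cdf evaluated at the critical level, and the alarm is raised when that probability does not exceed the tolerable level c_save/(c_save + c_fa) obtained from the cost-benefit analysis. the function and variable names are illustrative, not taken from the source.

```python
import math

def gaussian_cdf(x):
    """standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def alarm_decision(predicted_severity, prediction_std, critical_level, c_save, c_fa):
    """decide whether to raise the alarm by comparing the current probability of a
    false alarm with the tolerable level from the cost-benefit analysis."""
    # probability that the true severity stays below the critical level a
    p_false_alarm = gaussian_cdf((critical_level - predicted_severity) / prediction_std)
    p_missed_alarm = 1.0 - p_false_alarm
    # tolerable probabilities; note that they sum to one, as stated above
    tolerable_fa = c_save / (c_save + c_fa)
    tolerable_ma = c_fa / (c_save + c_fa)
    raise_alarm = p_false_alarm <= tolerable_fa
    return raise_alarm, p_false_alarm, p_missed_alarm, tolerable_fa, tolerable_ma

# example: predicted severity 7.2 +/- 0.8 against a critical level of 6.5,
# with savings from mitigation ten times larger than the cost of a false alert
print(alarm_decision(7.2, 0.8, 6.5, c_save=10.0, c_fa=1.0))
```

in this hypothetical case the savings are large relative to the cost of a false alert, so the tolerable false alarm probability is high and the alarm is raised even though the prediction is still uncertain; as more data arrive, the same rule can be re-evaluated with the updated prediction and its shrinking error.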
in the coming years, the multi-scaled global information network will improve greatly thanks to new technological advances that facilitate the global distribution of data and information at all levels. globalization and rapid communication provide an unprecedented opportunity to catalyse effective action at every level by rapidly providing authorities and the general public with high-quality, scientifically credible information in a timely fashion. the dissemination of warnings often follows a cascade process, which starts at the international or national level and then moves outwards or downwards in scale to regional and community levels (twigg, ) . early warnings may activate other early warnings at different authoritative levels, flowing down through responsibility roles, although all are equally necessary for effective early warning.
standard protocols play a fundamental role in addressing the challenge of effective coordination and data exchange among the actors in the early warning process, and they aid warning communication and dissemination. the common alerting protocol (cap), really simple syndication (rss) and extensible markup language (xml) are examples of standard data interchange formats for structured information that can be applied to warning messages for a broad range of information management and warning dissemination systems. the advantage of standard format alerts is that they are compatible with all information systems, warning systems, media and, most importantly, new technologies such as web services. cap, for example, defines a single standard message format for all hazards, which can activate multiple warning systems at the same time and with a single input. this guarantees consistency of warning messages and would easily replace specific application-oriented messages with a single multi-hazard message format. cap is compatible with all types of information systems and public alerting systems (including broadcast radio and television), public and private data networks, multi-lingual warning systems and emerging technologies such as internet web services, as well as with existing systems such as the us national emergency alert system and the national oceanic and atmospheric administration (noaa) weather radio. cap uses extensible markup language (xml) and carries information about the alert message, the specific hazard event and appropriate responses, including the urgency of the action to be taken, the severity of the event and the certainty of the information.
for early warning systems to be effective, it is essential that they be integrated into policies for disaster mitigation. good governance priorities include protecting the public from disasters through the implementation of disaster risk reduction policies. it is clear that natural phenomena cannot be prevented, but their human, socio-economic and environmental impacts can and should be minimized through appropriate measures, including risk and vulnerability reduction strategies, early warning and appropriate action plans. most often, these problems are given attention only during or immediately after a disaster. disaster risk reduction measures require long-term plans, and early warning should be seen as a strategy to effectively reduce the growing vulnerability of communities and assets. the information provided by early warning systems enables authorities and institutions at various levels to respond immediately and effectively to a disaster. 
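to make the cap discussion above more concrete, the sketch below shows how such a message might be generated programmatically. the element set is abridged and the values are placeholders, so it should be checked against the full cap specification before any operational use; the function name and the field values are assumptions for illustration.

```python
import datetime
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_cap_alert(identifier, sender, event, severity, urgency, certainty,
                    area_desc, description):
    """build an abridged cap-style alert message and return it as an xml string."""
    ET.register_namespace("", CAP_NS)
    alert = ET.Element(f"{{{CAP_NS}}}alert")

    def add(parent, tag, text):
        el = ET.SubElement(parent, f"{{{CAP_NS}}}{tag}")
        el.text = text
        return el

    add(alert, "identifier", identifier)
    add(alert, "sender", sender)
    add(alert, "sent", datetime.datetime.now(datetime.timezone.utc).isoformat())
    add(alert, "status", "Actual")
    add(alert, "msgType", "Alert")
    add(alert, "scope", "Public")

    info = ET.SubElement(alert, f"{{{CAP_NS}}}info")
    add(info, "category", "Geo")
    add(info, "event", event)
    add(info, "urgency", urgency)      # e.g. Immediate
    add(info, "severity", severity)    # e.g. Severe
    add(info, "certainty", certainty)  # e.g. Observed
    add(info, "description", description)
    area = ET.SubElement(info, f"{{{CAP_NS}}}area")
    add(area, "areaDesc", area_desc)

    return ET.tostring(alert, encoding="unicode")

print(build_cap_alert("EQ-2024-001", "ews@example.org", "Earthquake",
                      "Severe", "Immediate", "Observed",
                      "Coastal district",
                      "Strong shaking expected; move away from buildings."))
```

because the same structured message carries the event, urgency, severity and certainty fields, it can feed several dissemination channels (sms gateways, radio, web services) from a single input, which is the multi-hazard, multi-system advantage of cap described above.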
it is crucial that local government, local institutions and communities be involved in the entire policy-making process, so that they are fully aware and prepared to respond with short- and long-term action plans. the early warning process, as previously described, is composed of four main stages: risk assessment, monitoring and predicting, disseminating and communicating warnings, and response. within this framework, the first phase, when short- and long-term action plans are laid out based on risk assessment analysis, is the realm of institutional and political actors. early warning then acquires a technical dimension in the monitoring and predicting phase, while in the communication phase it involves both technical and institutional responsibility. the response phase involves many more sectors, such as national and local institutions, non-governmental organizations, communities and individuals. below is a summary of recommendations for effective decision making within the early warning process (sarewitz et al., ) .
prediction efforts by the scientific community alone are insufficient for decision making. the scientific community and policy-makers should outline the strategy for effective and timely decision making by indicating what information is needed by decision-makers, how predictions will be used, how reliable a prediction must be to produce an effective response, and how to communicate this information and the tolerable prediction uncertainty so that the information can be received and understood by authorities and the public. a miscommunicated or misused prediction can result in costs to society. prediction, communication and use of the information are all necessary factors in effective decision making within the early warning process.
wishing not to appear "alarmist", or to avoid criticism, local and national governments have sometimes kept the public in the dark when receiving technical information regarding imminent threats. the lack of clear and easy-to-use information can confuse people and undermine their confidence in public officials. conversely, there are quite a few cases where the public may have refused to respond to early warnings from authorities, and thereby exposed themselves to danger or forced governments to impose removal measures. in any case, clear and balanced information is critical, even when some level of uncertainty remains. for this reason, the information's uncertainty level must be communicated to users together with the early warning.
resources must be allocated wisely and priorities should be set, based on risk assessment, for long- and short-term decision making, such as investing in local early warning systems, education, or enhanced monitoring and observational systems. in addition, decision-makers need to be able to set priorities for a timely and effective response to a disaster when it occurs, based on the information received from the early warning system. decision-makers should receive the necessary training on how to use the information received when an alert is issued and on what that information means. institutional networks should be developed with clear responsibilities. 
complex problems such as disaster mitigation and response require multidisciplinary research, multi-sector policy and planning, multi-stakeholder participation, and networking involving all participants in the process, such as the scientific research community (including the social sciences), land use planning, environment, finance, development, education, health, energy, communications, transportation, labour, social security and national defence. decentralization of the decision making process could lead to optimal solutions by clarifying local government and community responsibilities. collaboration will improve efficiency, credibility, accountability, trust and cost-effectiveness. this collaboration consists of joint research projects, information sharing and participatory strategic planning and programming. because there are numerous actors involved in early warning response plans (such as governing authorities, municipalities, townships and local communities), the decision making and legal framework of responsibilities should be set up in advance in order to be prepared when a disaster occurs. hurricane katrina in showed gaps in the legal frameworks and definitions of responsibility that exacerbated the disaster. such ineffective decision making must be dealt with to avoid future disasters such as the one in new orleans.
earth observation (eo), through measuring and monitoring, provides insight into and understanding of earth's complex processes and changes. eo includes measurements that can be made directly or by sensors in situ or remotely (i.e. satellite remote sensing, aerial surveys, land- or ocean-based monitoring systems, fig. ), to provide key information to models or other tools that support decision making processes. eo assists governments and civil society in identifying and shaping corrective and new measures to achieve sustainable development, through original, scientifically valid assessments and early warning information on the recent and potential long-term consequences of human activities on the biosphere. at a time when the world community is striving to identify the impacts of human actions on the planet's life support system, time-sequenced satellite images help to determine these impacts and provide unique, visible and scientifically convincing evidence that human actions are causing substantial changes to the earth's environment and natural resource base (i.e. ecosystem changes, urban growth, transboundary pollutants, loss of wetlands, etc.). by enhancing the visualization of scientific information on environmental change, satellite imagery will enhance environmental management and raise awareness of emerging environmental threats. eo provides the opportunity to explore, to discover and to understand the world in which we live from the unique vantage point of space. the following section discusses the potential role of eo for each type of environmental threat.
ongoing and rapid/sudden-onset environmental threats
oil spills. earth observation is increasingly used to detect illegal marine discharges and oil spills. infra-red (ir) video and photography from airborne platforms, thermal infrared imaging, airborne laser fluorosensors, airborne and satellite optical sensors, as well as airborne and satellite synthetic aperture radar (sar), are used for this purpose. sar has the advantage of also providing data during cloud cover and darkness, unlike optical sensors. 
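as a highly simplified illustration of the dark-spot principle behind sar-based slick detection (oil damps capillary waves, lowering radar backscatter), the sketch below thresholds a toy backscatter array against the scene background. real detection chains use adaptive local thresholds, texture and shape features and ancillary wind data to separate slicks from look-alikes such as natural films or low-wind areas; the array, threshold values and function name here are purely illustrative.

```python
import numpy as np
from scipy import ndimage

def dark_spot_candidates(backscatter_db, contrast_db=3.0, min_pixels=20):
    """flag pixels much darker than the scene background (median backscatter) and
    group them into connected candidate slick regions, discarding tiny speckle."""
    background = np.median(backscatter_db)
    dark = backscatter_db < (background - contrast_db)
    labels, n_regions = ndimage.label(dark)
    sizes = ndimage.sum(dark, labels, index=list(range(1, n_regions + 1)))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return np.where(np.isin(labels, keep), labels, 0), len(keep)

# toy scene: sea clutter at about -8 dB with one artificially darkened patch
rng = np.random.default_rng(0)
scene = rng.normal(-8.0, 0.5, size=(200, 200))
scene[80:120, 90:140] -= 6.0          # simulated slick, roughly 6 dB darker
_, n_candidates = dark_spot_candidates(scene)
print("candidate slick regions:", n_candidates)
```

the candidate regions produced by such a step are only the starting point; as the following paragraphs note, the hard part operationally is discriminating true oil slicks from natural films and other dark look-alikes.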
in addition, optical-sensor techniques applied to oil spill detection are associated with a high number of false alarms: cloud shadows, sun glint and other conditions such as precipitation, fog and the amount of daylight present may be erroneously associated with oil spills. for this reason, sar is preferred over optical sensors, especially when spills cover vast areas of the marine environment and when the oil cannot be seen or discriminated against the background. sar detects changes in the sea-surface roughness patterns modified by oil spills. the largest shortcoming of oil spill detection using sar images is the accurate discrimination between oil spills and natural films (brekke and solberg, ) . to date, operational application of satellite imagery for oil spill detection still remains a challenge due to limited spatial and temporal resolution. in addition, processing times are often too long for operational purposes, and it is still not possible to measure the thickness of the oil spill (mansor et al., ; us department of the interior, minerals management service, ) . existing applications are presented in section .
fig. : illustration of multiple observing systems in use on the ground, at sea, in the atmosphere and from space for monitoring and researching the climate system (wmo, ).
chemical and nuclear accidents may have disastrous consequences, such as the accident in bhopal, india, which killed more than and injured about , and the explosion of the reactors of the nuclear power plant in chernobyl, ukraine, which was the worst such accident to date, affecting part of the soviet union, eastern europe, scandinavia and, later, western europe. meteorological factors such as wind speed and direction, turbulence, stability layers, humidity, cloudiness, precipitation and topographical features influence the impact of chemical and nuclear accidents and have to be taken into account in decision models. in some cases emergencies are localized, while in others transport processes are most important. eo provides key data for monitoring and forecasting the dispersion and spread of the substance.
geohazards associated with geological processes such as earthquakes, landslides and volcanic eruptions are mainly controlled by ground deformation. eo data allow monitoring of key physical parameters associated with geohazards, such as deformation and plate movements, together with seismic monitoring, baseline topographic mapping and geoscience mapping. eo products are useful for detection and mitigation before the event, and for damage assessment in the aftermath. for geohazards, stereo optical and radar interferometry, associated with ground-based global positioning system (gps) and seismic networks, are used. for volcanic eruptions, additional parameters such as temperature and gas emissions are observed. ground-based measurements have the advantage of being continuous in time but have limited spatial extent, while satellite observations cover wide areas but are not continuous in time. these data need to be integrated for an improved and more comprehensive approach (committee on earth observation satellites (ceos), ; integrated global observing strategy (igos-p), ) .
earthquakes are due to a sudden release of stresses accumulated around faults in the earth's crust. this energy is released through seismic waves that travel from the origin zone and cause the ground to shake. severe earthquakes can affect buildings and populations. 
the level of damage depends on many factors, such as the intensity and depth of the earthquake, the vulnerability of structures and their distance from the earthquake's origin. for earthquakes, information on the location and magnitude of the event first needs to be conveyed to the responsible authorities. this information is used by seismic early warning systems to activate security measures within seconds of the earthquake's origin and before strong shaking occurs at the site. shakemaps generated within five minutes provide essential information for assessing the intensity of ground shaking and the damaged areas. the combination of data from seismic networks and gps may help to increase the reliability and timeliness of this information. earthquake frequency and probability shakemaps based on historical seismicity, together with base maps (geological, soil type, active faults, hydrological, and dems), assist in the earthquake mitigation phase and need to be included in the building code design process for improved land use and building practices. for response, additional data are needed, such as seismicity, intensity, strain, dems, soil type, moisture conditions, infrastructure and population, to produce post-event damage maps. thermal information needs to be monitored continuously; this is obtained from low/medium resolution ir imagery from polar and geostationary satellites for thermal background characterization (advanced very high resolution radiometer (avhrr), atsr, modis and goes), together with deformation from edm and/or gps networks, borehole strainmeters and sar interferometry.
landslides are displacements of earth, rock and debris caused by heavy rains, floods, earthquakes, volcanoes and wildfires. useful information for landslides and ground instability includes the following: hazard zonation maps (landslides, debris flows, rockfalls, subsidence and ground instability scenarios) during the mitigation phase, associated with landslide inventories, dems, deformation (gps networks; sar interferometry; other surveys such as levelling, laser scanning, aerial surveys, etc.), hydrology, geology, soil, geophysical, geotechnical, climatic and seismic zonation maps, land cover, land use, and historical archives. forecasting the location and extent of ground instability or landslides is quite challenging. landslides can be preceded by cracks, accelerating movement and rockfall activity; real-time monitoring of key parameters thus becomes essential. the trigger for issuing an alert signal is an observed acceleration, deformation or displacement exceeding a theoretical pre-fixed threshold. an alternative approach is based on hydrologic forecasting. it should be said that for large areas site-specific monitoring is not feasible; in this case, hazard mapping associated with monitoring of high-risk zones remains the best option for warning. local rapid mapping of affected areas, updated scenarios and real-time monitoring (deformation, seismic data and weather forecasts) assist during the response phase.
a tsunami is a series of ocean waves generated by sudden displacements of the sea floor, landslides or volcanic activity. although a tsunami cannot be prevented, its impact can be mitigated through community preparedness, timely warnings and effective response. 
observations of seismic activity, sea floor bathymetry, topography and sea level data (tide gauge observations of sea height; real-time tsunami warning buoy data; deep ocean assessment and reporting of tsunamis (dart) buoys; and sea-level variations from topex/poseidon and jason, the european space agency's envisat, and the us navy's geosat follow-on) are used in combination with tsunami models to create inundation and evacuation maps and to issue tsunami watches and warnings.
volcanic eruptions may be mild, releasing steam and gases or lava flows, or they can be violent explosions that release ashes and gases into the atmosphere. volcanic eruptions can destroy land and communities living in their path, affect air quality and even influence the earth's climate. volcanic ash can impact aviation and communications. data needs for volcanic eruptions include hazard zonation maps; real-time seismic data; deformation (electronic distance measurement (edm) and/or gps networks; levelling and tilt networks; borehole strainmeters; gravity surveys; sar interferometry); thermal data (landsat, aster, geostationary operational environmental satellites (goes), modis); airborne ir cameras; medium-high resolution heat flux imagery; gas emissions (cospec, licor surveys); satellite imagery (i.e. aster); and digital elevation maps (dem). as soon as volcanic unrest begins, information needs to be timely and of relatively high resolution. once the eruption starts, the flow of information has to speed up. seismic behaviour and deformation patterns need to be observed throughout the eruption, especially to detect a change of eruption site ( - seismometers, ideally with -directional sensors; a regional network).
hydro-meteorological hazards include the wide variety of meteorological, hydrological and climate phenomena that can pose a threat to life, property and the environment. these types of hazards have been monitored using meteorological, or weather, satellite programs since the early s. in the united states, nasa, noaa and the department of defense (dod) have all been involved in developing and operating weather satellites. in europe, esa and eumetsat (european organisation for the exploitation of meteorological satellites) operate the meteorological satellite system (us centennial of flight commission). data from geostationary satellites and polar microwave-derived products (goes) and polar orbiters (microwave data from the defense meteorological satellite program (dmsp), special sensor microwave/imager (ssm/i), noaa/advanced microwave sounding unit (amsu), and the tropical rainfall measuring mission (trmm)) are key in weather analysis and forecasting. goes has the capability of observing the atmosphere and its cloud cover from the global scale down to the storm scale, frequently and at high resolution. microwave data are available only on an intermittent basis, but are strongly related to cloud and atmospheric properties. this information is key for monitoring meteorological processes from the global scale to the synoptic scale, to the mesoscale and finally to the storm scale (scofield et al., ) . goes and poes weather satellites provide useful information on precipitation, moisture, temperature, winds and soil wetness, which is combined with ground observations.
floods are often triggered by severe storms, tropical cyclones and tornadoes. the number of floods has continued to rise steadily; together with droughts, they have become the most deadly disasters of the past decades. 
the increase in losses from floods is also due to climate variability, which has caused increased precipitation in parts of the northern hemisphere (natural hazards working group, ) . floods can be deadly, particularly when they arrive without warning. polar orbiting and geostationary satellite data in particular are used for flood observation. polar orbiting satellites include optical sensors of low (avhrr), medium (landsat, spot, irs) and high resolution (ikonos), and microwave sensors of high (sar -- radarsat, jers and ers) and low resolution (passive sensors such as ssm/i). meteorological satellites include goes and , meteosat, gms, the indian insat and the russian goms, and polar orbiters such as noaa (noaa ) and ssm/i. for storms, additional parameters are monitored, such as sea surface temperature, air humidity, surface wind speed and rain estimates (from dmsp/ssm/i, trmm, ers, quikscat, avhrr, radarsat). trmm offers unique opportunities to examine tropical cyclones. with trmm, scientists are able to make extremely precise radar measurements of tropical storms over the oceans and identify their intensity variations, providing invaluable insights into the dynamics of tropical storms and rainfall.
epidemics such as malaria and meningitis are linked to environmental factors. satellite data can provide essential information on these factors and help to better understand diseases. as an example, the esa epidemio project, launched in , utilizes data from esa's envisat or the french space agency's spot, together with field data, to gather information on the spread of epidemics, helping to better prepare for epidemic outbreaks. geo, together with who and other partners, is working on the meningitis environmental risk information technologies (merit) project to better understand the relationship between meningitis and environmental factors using remote sensing.
wildfires pose a threat to lives and properties and are often connected to secondary effects such as landslides, erosion and changes in water quality. wildfires may be natural processes, human-induced for agricultural purposes, or simply the result of human negligence. wildfire detection using satellite technologies is possible thanks to the significant temperature difference between the earth's surface (usually not exceeding - c) and the heat of a fire ( - c), which results in a difference of about a thousand times in the heat radiation generated by these objects. noaa (avhrr radiometer with m spatial resolution and km swath width) and earth observing system (eos) satellites (the terra and aqua satellites carrying the modis radiometer, with , and m spatial resolution and km swath width) are the most widely used modern satellites for operational fire monitoring (klaver et al., ) . high-resolution sensors, such as the landsat thematic mapper, the spot multispectral scanner, or the national oceanic and atmospheric administration's avhrr or modis, are used to define fire potential. sensors used for fire detection and monitoring include avhrr, which has a thermal sensor and daily overflights; the defense meteorological satellite program's optical linescan system (ols) sensor, which has daily overflights and operationally collects visible images during its nighttime pass; and the modis land rapid response system. avhrr and higher resolution images (spot, landsat and radar) can be used to assess the extent and impact of the fire.
smog is the product of human and natural activities, such as industry, transportation, wildfires, volcanic eruptions, etc., 
and can have serious effects on human health and the environment. a variety of eo tools are available to monitor air quality. the national aeronautics and space administration (nasa) and the european space agency (esa) both have instruments to monitor air quality. the canadian mopitt (measurements of pollution in the troposphere) instrument aboard the terra satellite monitors the lower atmosphere to observe the distribution, transport, sources and sinks of carbon monoxide and methane in the troposphere, and how these interact with the land and ocean biospheres. the total ozone mapping spectrometer (toms) instrument measures the total amount of ozone in a column of atmosphere as well as cloud cover over the entire globe. additionally, toms measures the amount of solar radiation escaping from the top of the atmosphere in order to accurately estimate the amount of ultraviolet radiation that reaches the earth's surface. the ozone monitoring instrument (omi) on aura will continue the toms record for total ozone and other atmospheric parameters related to ozone chemistry and climate. the omi instrument distinguishes between aerosol types, such as smoke, dust and sulphates, and can measure cloud pressure and coverage. esa's sciamachy (scanning imaging absorption spectrometer for atmospheric chartography) maps the atmosphere over a very wide wavelength range ( - nm), which allows detection of trace gases, ozone and related gases, clouds and dust particles throughout the atmosphere (athena global, ) . the moderate resolution imaging spectroradiometer (modis) sensor measures the relative amount of aerosols and the relative size of aerosol particles -- solid or liquid particles suspended in the atmosphere. examples of such aerosols include dust, sea salts, volcanic ash and smoke. the modis aerosol optical depth product is a measure of how much light airborne particles prevent from passing through a column of atmosphere. new technologies are also being explored for monitoring air quality, such as mobile phones equipped with simple sensors that empower citizens to collect and share real-time air quality measurements; this technology is being developed by a consortium called urban atmospheres.
the traditional methods of monitoring coastal water quality require scientists to use boats to gather water samples, typically on a monthly basis because of the high cost of these surveys. this method captures episodic events affecting water quality, such as seasonal freshwater runoff, but is not able to monitor and detect fast changes. satellite data provide measures of key indicators of water quality -- turbidity and water clarity -- to help monitor fast changes in the factors that affect water quality, such as winds, tides and human influences, including pollution and runoff. geoeye's sea-viewing wide field-of-view sensor (seawifs) instrument, launched aboard the orbview- satellite in , collects ocean colour data used to determine factors affecting global change, particularly ocean ecology and chemistry. the modis sensor launched aboard the aqua satellite in , together with its counterpart instrument aboard the terra satellite, collects measurements of the entire earth surface every - days and can also provide measurements of turbidity (bjorn-hansen, ) . overall, air and water quality monitoring coverage still appears to be irregular, and it is adequate and available in real time only for some contaminants (global earth observation system of systems, ) .
droughts. 
noaa's national weather service (nws) defines a drought as "a period of abnormally dry weather sufficiently prolonged for the lack of water to cause serious hydrologic imbalance in the affected area." drought can be classified using different definitions: meteorological (deviation from normal precipitation); agricultural (abnormal soil moisture conditions); hydrological (related to abnormal water resources); and socio-economic (when water shortage impacts people's lives and economies). a comprehensive and integrated approach is required to monitor droughts, due to the complex nature of the problem. although all types of drought originate from a precipitation deficiency, it is insufficient to monitor this parameter alone to assess severity and resultant impacts (world meteorological organization, ) . effective drought early warning systems must integrate precipitation and other climatic parameters with water information such as streamflow, snow pack, groundwater levels, reservoir and lake levels, and soil moisture, into a comprehensive assessment of current and future drought and water supply conditions (svoboda et al., ) . in particular, there are key parameters that are used in a composite product developed from a rich information stream, including climate indices, numerical models and the input of regional and local experts. these are as follows: 1) the palmer drought severity index (based on precipitation data, temperature data, division constants (water capacity of the soil, etc.) and the previous history of the indices); 2) the soil moisture model percentile (calculated through a hydrological model that takes observed precipitation and temperature and calculates soil moisture, evaporation and runoff; the potential evaporation is estimated from observed temperature); 3) daily streamflow percentiles; 4) percent of normal precipitation; 5) the standardized precipitation index; and 6) the remotely sensed vegetation health index. additional indicators may include the palmer crop moisture index, the keetch-byram drought index, the fire danger index, evaporation-related observations such as relative humidity and temperature departure from normal, reservoir and lake levels, groundwater levels, field observations of surface soil moisture, and snowpack observations. some of these indices and indicators are computed for point locations, and others are computed for climate divisions, drainage (hydrological) basins or other geographical regions (svoboda et al., ) . a complete list of drought products can be found on noaa's national environmental satellite, data, and information service (noaa-nesdis) web page.
desertification. desertification refers to the degradation of land in arid, semi-arid and dry sub-humid areas due to climatic variations or human activity. desertification can occur due to inappropriate land use, overgrazing, deforestation and over-exploitation. land degradation affects many countries worldwide and has its greatest impact in africa. in spite of the potential benefits of eo information, the lack of awareness of the value and availability of information, inadequate institutional resources and financial problems are the most frequent challenges to overcome in detecting desertification (sarmap, ) . in , through a project called desertwatch, esa developed a set of indicators, based principally on land surface parameters retrieved from satellite observations, for monitoring land degradation and desertification. desertwatch is being tested and applied in mozambique, portugal and brazil. 
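returning to the precipitation-based drought indicators listed above, the sketch below computes the two simplest of them: percent of normal precipitation and a standardized precipitation anomaly, the latter being only a rough stand-in for the standardized precipitation index (which in practice involves fitting a probability distribution, typically a gamma, to the historical record). the rainfall values are made up for illustration.

```python
import statistics

def percent_of_normal(observed, climatological_normal):
    """percent of normal precipitation for a period (100 = normal)."""
    return 100.0 * observed / climatological_normal

def standardized_anomaly(observed, historical_values):
    """z-score of the observed precipitation against the historical record;
    a crude proxy for the standardized precipitation index (spi)."""
    mean = statistics.mean(historical_values)
    std = statistics.stdev(historical_values)
    return (observed - mean) / std

# hypothetical seasonal rainfall totals (mm) for the same location
history = [310, 280, 350, 295, 330, 270, 400, 360, 290, 315]
this_season = 180
print(percent_of_normal(this_season, statistics.mean(history)))   # ~56% of normal
print(standardized_anomaly(this_season, history))                 # strongly negative -> dry
```

a value well below 100% of normal, or a strongly negative standardized anomaly, flags a precipitation deficit; as stressed above, such precipitation-only indicators must be combined with streamflow, soil moisture, reservoir and vegetation information before drought severity and impacts can be assessed.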
systems such as the un food and agriculture organization's global information and early warning system (giews) provide information on food availability, market prices and livelihoods. observations of climate-related variables on a global scale have made it possible to document and analyse the behaviour of the earth's climate; these observations are made available through the following programmes: the ioc-wmo-unep-icsu global ocean observing system (goos); the fao-wmo-unesco-unep-icsu global terrestrial observing system (gtos); the wmo global observing system (gos) and global atmosphere watch (gaw); the research observing systems and observing systems research of the wmo-ioc-icsu world climate research programme (wcrp) and other climate-relevant international programmes; and the wmo-unesco-icsu-ioc-unep global climate observing system (gcos). the intergovernmental panel on climate change (ipcc) periodically reviews and assesses the most recent scientific, technical and socio-economic information produced worldwide relevant to the understanding of climate change. hundreds of scientists worldwide contribute to the preparation and review of these reports. according to the recent ipcc report, the atmospheric buildup of greenhouse gases is already shaping the earth's climate and ecosystems from the poles to the tropics, which face inevitable, possibly profound, alteration. the ipcc has predicted widening droughts in southern europe and the middle east, sub-saharan africa, the american southwest and mexico, and flooding that could imperil low-lying islands and the crowded river deltas of southern asia. it stressed that many of the regions facing the greatest risks are among the world's poorest. information about the impacts of climate variability is needed by communities and resource managers to adapt and prepare for larger fluctuations as global climate change becomes more evident. this information includes evidence of changes occurring due to climate variability, such as loss of ecosystems, ice melting, coastal degradation, and severe droughts. such information will provide policy-makers with scientifically valid assessments and early warning information on the current and potential long-term consequences of human activities on the environment (i.e., ecosystem changes, loss of biodiversity and habitats, land cover/land use changes, coastal erosion, urban growth, etc.). landsat satellites (series - ) are extensively used to monitor location-specific environmental changes. they have the great advantage of providing repetitive, synoptic, global coverage of high-resolution multi-spectral imagery (fadhil, ). landsat can be used for change detection applications to identify differences in the state of an object or phenomenon by comparing satellite imagery acquired at different times. change detection is key in natural resources management (singh, ). central to this theme is the characterization, monitoring and understanding of land cover and land use change, since they have a major impact on sustainable land use, biodiversity conservation, biogeochemical cycles, as well as land-atmosphere interactions affecting climate, and they are indicators of climate change, especially at a regional level (igos-p, ). the united nations environment programme's (unep) best-selling publication one planet, many people: atlas of our changing environment, which shows before and after satellite photos to document changes to the earth's surface over the past years, proves the importance and impact of visual evidence of environmental change in hotspots.
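as a minimal illustration of the change detection idea described above (comparing imagery of the same area at two dates), the sketch below differences a vegetation index computed from two synthetic red and near-infrared bands and thresholds the result. operational landsat workflows add co-registration, radiometric normalization and classification, which are omitted here; the band arrays and the 0.2 threshold are assumptions for illustration.

import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def change_mask(ndvi_before: np.ndarray, ndvi_after: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Flag pixels whose NDVI changed by more than `threshold` between the two dates."""
    return np.abs(ndvi_after - ndvi_before) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    red1, nir1 = rng.uniform(0.05, 0.15, (100, 100)), rng.uniform(0.3, 0.5, (100, 100))
    red2, nir2 = red1.copy(), nir1.copy()
    nir2[:30, :30] = 0.1  # simulate vegetation loss in one corner
    mask = change_mask(ndvi(nir1, red1), ndvi(nir2, red2))
    print(f"changed pixels: {mask.sum()} of {mask.size}")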
the atlas contains some remarkable landsat satellite imagery and illustrates the alarming rate of environmental destruction. through the innovative use of some satellite images, ground photos and maps, the atlas provides visual proof of global environmental changes -both positive and negative -resulting from natural processes and human activities. case studies include themes such as atmosphere, coastal areas, waters, forests, croplands, grasslands, urban areas, and tundra and polar regions. the atlas demonstrates how our growing numbers and our consumption patterns are shrinking our natural resource base. the aim of this report is to identify current gaps and future needs of early warning systems through the analysis of the state of the art of existing early warning and monitoring systems for environmental hazards. among existing early warning/monitoring systems, only systems that provide publicly accessible information and products have been included in the analysis. for the present study, several sources have been used, such as the global survey of early warning systems (united nations, ) together with the online inventory of early warning systems on isdr's platform for the promotion of early warning (ppew) website, and several additional online sources, technical reports and scientific articles listed in the references. for each hazard type, a gap analysis has been carried out to identify critical aspects and future needs of ews, considering aspects such as geographical coverage, and essential ews elements such as monitoring and prediction capability, communication systems and application of early warning information in responses. below is the outcome of the review of existing early warning/monitoring systems for each hazard type. details of all systems, organized in tables by hazard type, are listed in the appendix. the current gaps identified for each hazard type could be related to technological, organizational, communication or geographical coverage aspects. to assess the geographical coverage of existing systems for each hazard type, the existing systems have been imposed on the hazard's risk map. for this analysis, the maps of risks of mortality and economic loss were taken from natural disaster hotspots: a global risk analysis, a report from the world bank (dilley et al., ) . to detect operational oil spills, satellite overpasses and aerial surveillance flights need to be used in an integrated manner. in many countries in northern europe, the ksat manual approach is currently used to identify oil spills from the satellite images. ksat has provided this operational service since , and in europe, use of satellites for oil spill detection is well established and well integrated within the national and regional oil pollution surveillance and response chains. operational algorithms utilizing satellite-borne c-band sar instruments (radarsat- , envisat, radarsat- ) are also being developed for oil-spill detection in the baltic sea area. releases of a hazardous substance from industrial accidents can have immediate adverse effects on human and animal life or the environment. wmo together with iaea provides specialized meteorological support to environmental emergency response related to nuclear accidents and radiological emergencies. the wmo network of eight specialized numerical modelling centres called regional specialized meteorological centres (rsmcs) provides predictions of the movement of contaminants in the atmosphere. 
the inter-agency committee on the response to nuclear accidents (iacrna) of the iaea coordinates the international intergovernmental organizations responding to nuclear and radiological emergencies. iacrna members are: the european commission (ec), the european police office (europol), the food and agriculture organization of the united nations (fao), the iaea, the international civil aviation organization (icao), the international criminal police organization (interpol), the nuclear energy agency of the organization for economic co-operation and development (oecd/nea), the pan american health organization (paho), unep, the united nations office for the co-ordination of humanitarian affairs (un-ocha), the united nations office for outer space affairs (unoosa), the world health organization (who), and wmo. the agency's goal is to provide support during incidents or emergencies through near real-time reporting of information via the following: the incident and emergency centre (iec), which maintains a h on-call system for rapid initial assessment and, if needed, triggers response operations; the emergency notification and assistance convention website (enac), which allows the exchange of information on nuclear accidents or radiological emergencies; and the nuclear event web-based system (news), which provides information on all significant events at nuclear power plants, research reactors, nuclear fuel cycle facilities and occurrences involving radiation sources or the transport of radioactive material. the global chemical incident alert and response system of the international programme on chemical safety, which is part of who, focuses on disease outbreaks from chemical releases and also provides technical assistance to member states for response to chemical incidents and emergencies. formal and informal sources are used to collect information and, if necessary, additional information and verification is sought through official channels: national authorities, who offices, who collaborating centres, other united nations agencies, members of the communicable disease global outbreak alert and response network (goarn), and internet-based resources, particularly the global public health intelligence network (gphin) and promed-mail. based on this information, a risk assessment is carried out to determine the potential impact and whether assistance needs to be offered to member states. geological hazards. earthquakes. earthquake early warning systems are a relatively new approach to seismic risk reduction. they provide a rapid estimate of seismic parameters, such as magnitude and location, associated with a seismic event, based on the first seconds of seismic data recorded near the epicentre. this information can then be used to predict ground motion parameters of engineering interest, including peak ground acceleration and spectral acceleration. earthquake warning systems are currently operational in mexico, japan, romania, taiwan and turkey (espinosa aranda et al., ; wu et al., ; wu and teng, ; odaka et al., ; kamigaichi, ; nakamura, ; horiuchi et al., ). systems are under development for seismic risk mitigation in california and italy.
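the usefulness of such systems rests on the lag between the fast p waves used for detection and the slower, damaging s waves. below is a back-of-envelope sketch of the warning time available at a given epicentral distance; the wave speeds and the 5 s processing delay are assumed typical values, not parameters of any operational system.

V_P_KM_S = 6.5  # typical crustal P-wave speed, illustrative
V_S_KM_S = 3.5  # typical crustal S-wave speed, illustrative

def warning_time_s(epicentral_distance_km: float, processing_delay_s: float = 5.0) -> float:
    """Seconds between issuing an alert (after P-wave detection plus processing)
    and arrival of the damaging S waves at the target site. Negative means no warning."""
    p_arrival = epicentral_distance_km / V_P_KM_S
    s_arrival = epicentral_distance_km / V_S_KM_S
    return s_arrival - (p_arrival + processing_delay_s)

if __name__ == "__main__":
    for d in (10, 30, 60, 100, 200):
        print(f"{d:4d} km from the epicentre: ~{warning_time_s(d):5.1f} s of warning")

the negative value at short distances reflects the well-known blind zone close to the fault, where no useful warning is possible.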
local and national scale seismic early warning systems, which provide seismic information between a few seconds and tens of seconds before shaking occurs at the target site, are used for a variety of applications, such as shutting down power plants, stopping trains, evacuating buildings, closing gas valves, and alerting wide segments of the population through the tv, among others. on the global scale, multi-national initiatives, such as the us geological survey (usgs) and geoforschungsnetz (geofon), operate global seismic networks for seismic monitoring but do not provide seismic early warning information. today, the usgs, in cooperation with the incorporated research institutions for seismology (iris), operates the global seismographic network (gsn), which comprises more than stations providing free, real-time, open access data. geofon collects information from several networks and makes this information available to the public online. the usgs earthquake notification service (ens) provides publicly available e-mail notification, within minutes for earthquakes in the us and within min for events elsewhere in the world. usgs also provides near-real-time maps of ground motion and shaking intensity following significant earthquakes. this product, called shakemap, is used for post-earthquake response and recovery, public and scientific information, as well as for preparedness exercises and disaster planning. effective early warning technologies for earthquakes are much more challenging to develop than for other natural hazards because warning times range from only a few seconds in the area close to a rupturing fault to a minute or so (heaton, ; allen and kanamori, ; kanamori, ). several local and regional applications exist worldwide, but no global system for seismic early warning exists or could possibly exist, due to timing constraints. earthquake early warning applications must therefore be designed at the local or regional level. although various early warning systems exist worldwide at the local or regional scale, there are still high seismic risk areas that lack early warning applications, such as peru, chile, iran, pakistan, and india. the international consortium on landslides (icl), created at the kyoto symposium in january , is an international non-governmental and non-profit scientific organization, which is supported by the united nations educational, scientific and cultural organization (unesco), the wmo, the fao, and the united nations international strategy for disaster reduction (un/isdr). icl's mission is to promote landslide research for the benefit of society and the environment and to promote a global, multidisciplinary programme regarding landslides. icl provides information about current landslides on its website, streaming this information from various sources such as the geological survey of canada. this information does not provide any early warning, since it is based on news reports issued after the events have occurred. enhancing icl's existing organizational infrastructure by improving landslide prediction capability would allow icl to provide early warning to authorities and populations. technologies for slope monitoring have greatly improved, but currently only a few slopes are being monitored at a global scale. the use of these technologies would be greatly beneficial for mitigating losses from landslides worldwide. tsunamis. the indian ocean tsunami of december killed people and left . million homeless.
it highlighted gaps and deficiencies in existing tsunami warning systems. in response to this disaster, in june the intergovernmental oceanographic commission (ioc) secretariat was mandated by its member states to coordinate the implementation of a tsunami warning system for the indian ocean, the northeast atlantic and mediterranean, and the caribbean. efforts to develop these systems are ongoing. since march , the indonesian meteorological, climatological and geophysical agency has been operating the german-indonesian tsunami early warning system for the indian ocean. milestones, such as the development of the automatic data processing software and underwater communication for the transmission of pressure data from the ocean floor to a warning centre, have been reached. these systems will be part of the global ocean observing system (goos), which will be part of geoss. the pacific basin is monitored by the pacific tsunami warning system (ptws), which was established by member states and is operated by the pacific tsunami warning center (ptwc), located near honolulu, hawaii. ptwc monitors stations throughout the pacific basin to issue tsunami warnings to member states, serving as the regional centre for hawaii and as a national and international tsunami information centre. it is part of the ptws effort. noaa national weather service operates ptwc and the alaska tsunami warning center (atwc) in palmer, alaska, which serves as the regional tsunami warning center for alaska, british columbia, washington, oregon, and california. ptws monitors seismic stations operated by ptwc, usgs and atwc to detect potentially tsunamigenic earthquakes. such earthquakes meet specific criteria for generation of a tsunami in terms of location, depth, and magnitude. ptws issues tsunami warnings to potentially affected areas, by providing estimates of tsunami arrival times and areas potentially most affected. if a significant tsunami is detected, the tsunami warning is extended to the pacific basin. the international tsunami information center (itic), under the auspices of ioc, aims to mitigate tsunami risk by providing guidance and assistance to improve education and preparedness. itic also provides a complete list of tsunami events worldwide. official tsunami bulletins are released by ptwc, atwc, and the japan meteorological agency (jma). regional and national tsunami information centres exist worldwide; the complete list is available from ioc. currently, no global tsunami warning system is in place. in addition, fully operational tsunami early warning systems are needed for the indian ocean and the caribbean. initial steps have been taken in this direction. in , noaa established the caribbean tsunami warning program as the first step towards the development of a caribbean tsunami warning center. since , steps have been taken to develop an indian ocean tsunami system, such as establishing tsunami information centres and deploying real-time sea level stations and deep ocean buoys in countries bordering indian ocean. in , the united states agency for international development (usaid) launched the us indian ocean tsunami warning systems program as the us government's direct contribution to the international effort led by the ioc. since then, there are ongoing activities, such as germany's -year german-indonesia tsunami early warning system program with indonesia, the tsunami regional trust fund established in , and the united kingdom's tsunami funds reserved for early warning capacity building. 
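warning centres screen incoming earthquakes against criteria of location, depth and magnitude, as noted above. the sketch below shows the shape of such a rule-based filter; the thresholds and the Earthquake fields are placeholders chosen for illustration and do not reproduce the operational criteria of ptwc or any other centre.

from dataclasses import dataclass

@dataclass
class Earthquake:
    magnitude: float
    depth_km: float
    undersea: bool  # in practice derived from the epicentre location and bathymetry

def potentially_tsunamigenic(eq: Earthquake,
                             min_magnitude: float = 7.0,
                             max_depth_km: float = 100.0) -> bool:
    """Crude screening rule: large, shallow, undersea events get further evaluation.
    Thresholds are placeholders, not the operational criteria of any warning centre."""
    return eq.undersea and eq.magnitude >= min_magnitude and eq.depth_km <= max_depth_km

if __name__ == "__main__":
    events = [
        Earthquake(magnitude=7.8, depth_km=25, undersea=True),
        Earthquake(magnitude=6.1, depth_km=10, undersea=True),    # too small
        Earthquake(magnitude=7.5, depth_km=550, undersea=True),   # too deep
        Earthquake(magnitude=7.9, depth_km=15, undersea=False),   # inland
    ]
    for eq in events:
        verdict = "evaluate for tsunami" if potentially_tsunamigenic(eq) else "no tsunami watch"
        print(eq, "->", verdict)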
nevertheless, on july , only one month after the announcement that the indian ocean's tsunami warning system was operational, a tsunami in java, indonesia, killed hundreds of people. on that day, tsunami warnings were issued to alert jakarta, but there was not enough time to alert the coastal areas. the july tsunami disaster illustrates that there are still operational gaps to be solved in the indian ocean tsunami early warning system, notably in warning coastal communities on time. volcanic eruptions. volcanic eruptions are generally anticipated by precursor activity; seismic monitoring, ground deformation monitoring, gas monitoring, visual observations, and surveying are used to monitor volcanic activity. volcano observatories are distributed worldwide. a complete list of volcano observatories is available at the world organization of volcano observatories (wovo) website. however, there is still a divide between developed and developing countries. in particular, a large number of observatories and research centres monitor volcanoes in japan and the united states very well. most central and south american countries (mexico, guatemala, el salvador, nicaragua, costa rica, colombia, ecuador, peru, chile, trinidad, and the antilles) have volcano observatories that provide public access to volcanic activity information. in africa, only two countries (congo and cameroon) have volcano monitoring observatories, and they do not provide public access to information. only a small number, probably fewer than , of the world's volcanoes are well monitored, mostly due to inadequate resources in poor countries (national hazards working group, ). there is a need to fill this gap by increasing the coverage of volcanic observatories. currently, there is no global early warning system for volcanic eruptions except for aviation safety. global volcanic activity information is provided by the smithsonian institution, which partners with the usgs under the global volcanism program to provide online access to volcanic activity information collected from volcano observatories worldwide. reports and warnings are available on a daily basis. weekly and monthly summary reports are also available, but these only report changes in volcanic activity level, ash advisories, and news reports. the information is also available through google earth. this information is essential for the aviation sector, which must be alerted to ash-producing eruptions. there are several ash advisory centres distributed worldwide, in london, toulouse, anchorage, washington, montreal, darwin, wellington, tokyo, and buenos aires. however, there is a need to coordinate interaction and data sharing among the approximately volcano observatories that make up wovo. esa is developing globvolcano, an information system to provide earth observations for volcanic risk monitoring. wildfires. early warning methodologies for wildfires are based on the prediction of precursors, such as fuel loads and lightning danger. these parameters are relevant for triggering prediction, but once a fire has begun, fire behaviour and pattern modelling are fundamental for estimating fire propagation patterns. most industrialized countries have early warning capabilities in place, while most developing countries have neither fire early warning nor monitoring systems in place (goldammer et al., ). local and regional scale fire monitoring systems are available for canada, south america, mexico and south africa.
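as a small illustration of fire-danger rating from weather precursors, the sketch below computes the simple angstrom index from air temperature and relative humidity, using the commonly quoted formula and interpretation bands. operational systems such as the canadian fire weather index use richer inputs (fuel moisture, wind, drought codes), so this is only a toy indicator and the bands should be treated as illustrative.

def angstrom_index(relative_humidity_pct: float, temperature_c: float) -> float:
    """Angstrom fire-danger index as commonly quoted: lower values mean higher danger."""
    return relative_humidity_pct / 20.0 + (27.0 - temperature_c) / 10.0

def danger_class(index: float) -> str:
    # commonly used interpretation bands; treat as illustrative
    if index > 4.0:
        return "fire occurrence unlikely"
    if index > 2.5:
        return "fire conditions unfavourable"
    if index > 2.0:
        return "fire conditions favourable"
    return "fire occurrence very likely"

if __name__ == "__main__":
    for rh, t in ((80, 15), (45, 28), (20, 38)):
        i = angstrom_index(rh, t)
        print(f"RH {rh:3d}%, T {t:2d} C -> index {i:4.2f}: {danger_class(i)}")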
an interactive mapping service based on google maps and eo imagery from inpe, the brazilian space research institute, has been available since september . individuals can contribute information from the ground; in only months the service has received million reports on forest fires and illegal logging, making it one of the most successful web sites in brazil, and it has had real impact through follow-up legal initiatives and parliamentary enquiries. wildfire information is available worldwide through the global fire monitoring center (gfmc), a global portal for fire data products, information, and monitoring. this information is accessible to the public through the gfmc website but is not actively disseminated. the gfmc provides global fire products through a worldwide network of cooperating institutions. gfmc fire products include the following: fire danger maps and forecasts, which provide an assessment of fire onset risk; near real-time fire events information; an archive of global fire information; and assistance and support in the case of a fire emergency. global fire weather forecasts are provided by the experimental climate prediction center (ecpc), which also provides national and regional scale forecasts. noaa provides experimental potential-fire products based on the estimated intensity and duration of vegetation stress, which can be used as a proxy for assessment of potential fire danger. the webfire mapper, part of fao's global fire information management system (gfims), initially developed by the university of maryland and financially supported by nasa, provides near real-time information on active fires worldwide, detected by the modis rapid response system. the webfire mapper integrates satellite data with gis technologies for active fire information. this information is available to the public through the website and e-mail alerts. the european forest fire information system also provides information on current fire situations and forecasts for europe and the mediterranean area. although global scale fire monitoring systems exist, an internationally standardized approach is required to create a globally comprehensive early fire warning system. integration of existing fire monitoring systems could significantly improve fire monitoring and early warning capabilities. an information network must be developed to disseminate early warnings about wildfire danger at both the global and local levels, to quickly detect and report fires, and to enhance rapid fire detection and classification capabilities at national and regional levels. the global early warning system for wildfires, which is under development as part of the global earth observation system of systems (geoss) effort, will address these issues. floods. among the natural hazards that are currently increasing in frequency, floods are the deadliest. this study shows there is inadequate coverage of flood warning and monitoring systems, especially in developing or least developed countries such as china, india, bangladesh, nepal, west africa, and brazil. at the local scale, there are several stand-alone warning systems, for example in guatemala, honduras, el salvador, nicaragua, zimbabwe, south africa, belize, the czech republic, and germany. however, they do not provide public access to information. the european flood alert system (efas), which is an initiative of the ec-jrc, provides information on the possibility of river flooding occurring within the next three days.
efas also provides an overview of current floods based on information received from the national hydrological services and the global runoff data center in germany. floods are monitored worldwide by the dartmouth flood observatory, which provides public access to major flood information, satellite images and estimated discharge. orbital remote sensing (the advanced microwave scanning radiometer (amsr-e) and quickscat) is used to detect and map major floods worldwide. satellite microwave sensors can monitor, at a global scale and on a daily basis, increases of floodplain water surface without cloud interference. the dartmouth flood observatory provides estimated discharge and satellite images of major floods worldwide but does not provide forecasts of flood conditions or precipitation amounts that could allow flood warnings to be issued days in advance of events. noaa provides observed hydrologic conditions of major us river basins and predicted values of precipitation for rivers in the united states. noaa also provides information on excessive rainfall that could lead to flash-flooding, and if necessary warnings are issued within h in advance. the ifnet global flood alert system (gfas) uses global satellite precipitation estimates for flood forecasting and warning. the gfas website publishes useful public information for flood forecasting and warning, such as precipitation probability estimates, but the system is currently running on a trial basis. at a global scale, flood monitoring systems are more developed than flood early warning systems. for this reason, existing technologies for flood monitoring must be improved to increase prediction capabilities and flood warning lead times. severe weather, storms and tropical cyclones. at the global level, the world weather watch (www) and the hydrology and water resources programmes coordinated by wmo provide global collection, analysis and distribution of weather observations, forecasts and warnings. the www is composed of the global observing system (gos), which provides the observed meteorological data; the global telecommunications system (gts), which reports observations, forecasts and other products; and the global data processing system (gdps), which provides weather analyses, forecasts and other products. the www is an operational framework of coordinated national systems, operated by national governments. the tropical cyclone programme (tcp) is also part of the www. tcp is in charge of issuing tropical cyclone and hurricane forecasts, warnings and advisories, and seeks to promote and coordinate efforts to mitigate risks associated with tropical cyclones. tcp has established tropical cyclone committees that extend across regional bodies (regional specialized meteorological centres (rsmc)), which, together with national meteorological and hydrological services (nmhss), monitor tropical cyclones globally and issue official warnings to the regional meteorological services of countries at risk. regional bodies worldwide have adopted standardized wmo-tcp operational plans and manuals, which promote internationally accepted procedures in terms of units, terminology, data and information exchange, operational procedures, and telecommunication of cyclone information. each member of a regional body is normally responsible for warnings covering its land and coastal waters. a complete list of wmo members and rsmcs is available on the wmo-tcp website. wmo then collects cyclone information and visualizes it on world maps.
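tropical cyclone warnings are usually framed in terms of intensity categories derived from sustained wind speed. the sketch below maps a wind speed to the widely published saffir-simpson categories; thresholds are given in km/h, and the function is illustrative rather than the classification logic of any rsmc.

def saffir_simpson_category(sustained_wind_kmh: float) -> str:
    """Category from 1-minute sustained wind speed, using widely published km/h thresholds."""
    if sustained_wind_kmh >= 252:
        return "category 5"
    if sustained_wind_kmh >= 209:
        return "category 4"
    if sustained_wind_kmh >= 178:
        return "category 3"
    if sustained_wind_kmh >= 154:
        return "category 2"
    if sustained_wind_kmh >= 119:
        return "category 1"
    if sustained_wind_kmh >= 63:
        return "tropical storm"
    return "tropical depression"

if __name__ == "__main__":
    for wind in (45, 100, 130, 185, 260):
        print(f"{wind:3d} km/h -> {saffir_simpson_category(wind)}")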
the university of hawaii collects information from wmo and provides online information on cyclone categories, wind speed, and current and predicted courses. although comprehensive coverage of early warning systems for storms and tropical cyclones is available, recent disasters such as hurricane katrina of have highlighted inadequacies in early warning system technologies for enabling effective and timely emergency response. there is a pressing need to improve communication between the sectors involved by strengthening the links between scientific research, organizations responsible for issuing warnings, and authorities in charge of responding to these warnings. while the www is an efficient framework of existing rsmc, nmhss and networks, national capacities in most developing countries need improvements in order to effectively issue and manage early warnings. action plans must also be improved. epidemics pose a significant threat worldwide, particularly in those areas that are already affected by other serious hazards, poverty, or under-development. epidemics spread easily across country borders. globalization increases the potential of a catastrophic disease outbreak: there is the risk that millions of people worldwide could potentially be affected. a global disease outbreak early warning system is urgently needed. who is already working in this field through the epidemic and pandemic alert and response, which provides real-time information on disease outbreaks, and goarn. the who member countries, disease experts, institutions, agencies, and laboratories, part of an outbreak verification list, are constantly informed of rumoured and confirmed outbreaks. the who constantly monitors anthrax, avian influenza, crimean-congo hemorrhagic fever (cchf), dengue haemorrhagic fever, ebola haemorrhagic fever, hepatitis, influenza, lassa fever, marburg haemorrhagic fever, meningococcal disease, plague, rift valley fever, severe acute respiratory syndrome (sars), tularaemia, and yellow fever. a global early warning system for animal diseases transmissible to humans was formally launched in july by the fao, the world organization for animal health (oie), and who. the global early warning and response system for major animal diseases, including zoonoses (glews) monitors outbreaks of major animal diseases worldwide. a malaria early warning system is not yet available and the need for system development is pressing, especially in sub-saharan africa where malaria causes more than one million deaths every year. the iri institute at columbia university provides malaria risk maps based on rainfall anomaly, which is one of the factors influencing malaria outbreak and distribution, but no warning is disseminated to the potentially affected population. in addition, the malaria atlas project (map) supported by the wellcome trust, the global fund to fight aids, tuberculosis and malaria, the university of oxford-li ka shing foundation global health programme and others, aims to disseminate free, accurate and up-to-date information on malaria. the map is a joint effort of researchers from around the globe working in different fields (from public health to mathematics, geography and epidemiology). map produces and makes available a range of maps and estimates to support effective planning of malaria control at national and international scales. air pollution affects developing and developed countries without exception. 
for this reason, air quality monitoring and early warning systems are in place in most countries worldwide. nevertheless, there is still a technological divide between developed and developing countries; in fact, these systems are most developed in the united states, canada, and europe. there are several successful cases to mention in asia (taiwan, china, hong kong, korea, japan, and thailand), a few in latin america (argentina, brazil, and mexico city) and only one in africa (cape town, south africa). most of the existing systems focus on real-time air quality monitoring by collecting and analysing pollutant concentration measurements from ground stations. satellite observation is extremely useful for aviation and tropospheric ozone monitoring, which is done by nasa and esa. air quality information is communicated mainly through web services. the us environmental protection agency (epa) provides an e-mail alert service (epa airnow) available only in the us, and the ministry of environment of ontario, canada, also provides e-mail alerts. the epa airnow notification service provides air quality information in real time to subscribers via e-mail, cell phone or pager, allowing them to take steps to protect their health in critical situations. while current air quality information is provided by each of the air quality monitoring systems listed in the appendix, few sources provide forecasts. the following agencies provide forecasts, which are fundamental for early warning: us epa, esa, prev'air, and the environmental agencies of belgium, germany, and canada (see the appendix). prediction capability is an essential component of the early warning process. existing air quality monitoring systems need to be improved in order to provide predictions to users days in advance, so they can act when unhealthy air quality conditions occur. drought. drought early warning systems are the least developed systems, due to the complex processes involved and their environmental and social impacts. the study of existing drought early warning systems shows that only a few such systems exist worldwide. on a regional scale, fews net for eastern africa, afghanistan, and central america reports on current famine conditions, including droughts, by providing monthly bulletins that are accessible on the fews net web page. for the united states, the us drought monitor (svoboda et al., ) provides current drought conditions at the national and state level through an interactive map available on the website, accompanied by a narrative on current drought impacts and a brief description of forecasts for the following week. the us drought monitor, a joint effort between the us department of agriculture (usda), noaa, the climate prediction center, the university of nebraska-lincoln and others, has become the best available product for droughts (svoboda et al., ). it has a unique approach that integrates multiple drought indicators with field information and expert input, and provides information through a single easy-to-read map of current drought conditions and short notes on forecast drought conditions. for china, the beijing climate center (bcc) of the china meteorological administration (cma) monitors drought development. based on precipitation and soil moisture monitoring from an agricultural meteorological station network and remote-sensing-based monitoring from cma's national satellite meteorological center, a drought report and a map of current drought conditions are produced daily and made available on their website.
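two of the drought indicators mentioned above, percent of normal precipitation and the standardized precipitation index, can be approximated very simply from a station's precipitation record. the sketch below computes percent of normal and a z-score style anomaly; the operational spi fits a gamma distribution and transforms it to a standard normal variate, a step omitted here, and the 30-year record is synthetic.

import numpy as np

def percent_of_normal(current_mm: float, climatology_mm: np.ndarray) -> float:
    """Current-period precipitation as a percentage of the long-term mean."""
    return 100.0 * current_mm / climatology_mm.mean()

def precip_anomaly_z(current_mm: float, climatology_mm: np.ndarray) -> float:
    """Standardized anomaly (z-score). The operational SPI instead fits a gamma
    distribution and maps it to a standard normal; this is a rough stand-in."""
    return (current_mm - climatology_mm.mean()) / climatology_mm.std(ddof=1)

if __name__ == "__main__":
    # illustrative 30-year record of seasonal rainfall totals (mm)
    rng = np.random.default_rng(2)
    history = rng.gamma(shape=4.0, scale=50.0, size=30)
    this_season = 90.0
    print(f"percent of normal: {percent_of_normal(this_season, history):.0f}%")
    z = precip_anomaly_z(this_season, history)
    print(f"standardized anomaly: {z:+.2f} "
          f"({'drought conditions likely' if z <= -1.0 else 'near or above normal'})")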
the european commission joint research center (ec-jrc) provides publicly available drought-relevant information through the following real-time online maps: daily soil moisture maps of europe; daily soil moisture anomaly maps of europe; and daily maps of the forecasted top soil moisture development in europe ( -day trend). at a global scale, two institutions (fao's global information and early warning system on food and agriculture (giews) and the benfield hazard research center of university college london) provide some information on major droughts occurring worldwide. the fao-giews provides information on countries facing food insecurity through monthly briefing reports on crop prospects and food situations, including drought information, together with an interactive map of countries in crisis, available through the fao website. the benfield hazard research center uses various data to produce a monthly map of current drought conditions, accompanied by a short description for each country. in addition, the wmo provides useful global meteorological information, such as precipitation levels, cloudiness, and weather forecasts, which are visualized on a clickable map on the wmo website. existing approaches for drought early warning must be improved. due to the complex nature of droughts, a comprehensive and integrated approach (such as the one adopted by the us drought monitor) that considers numerous drought indicators is required for drought monitoring and early warning. in addition, for large parts of the world suffering from severe droughts, early warning systems are not yet in place, such as in western and southern africa, and in eastern africa, where fews net is available but no drought forecast is provided. parts of europe (spain, parts of france, southern sweden, and northern poland) are characterized by high drought risk but have no system in place. india, parts of thailand, turkey, iran, iraq, eastern china, areas of ecuador and colombia, and the south-eastern and western parts of australia also require a drought warning system. desertification. the united nations convention to combat desertification (unccd), signed by governments in , aims to promote local action programmes and international activities. national action programmes are key instruments for implementing the convention and are often supported by action programmes at sub-regional and regional levels. these programmes lay out regional and local action plans and strategies to combat desertification. the unccd website provides a desertification map together with documentation, reports, and briefing notes on the implementation of action programmes for each country worldwide. currently no desertification early warning system is fully implemented, despite the potential of such systems to mitigate desertification. food security. the fao's giews supports policy-makers by delivering periodic reports through the giews web page and an e-mail service. giews also promotes collaboration and data exchange with other organizations and governments. the frequency of briefs and reports - which are released monthly or bimonthly - may not be adequate for early warning purposes. the wfp is also involved in disseminating reports and news on famine crises through its web service. no active dissemination is provided by wfp.
another service is fews net, a collaborative effort of the usgs, united states agency for international development (usaid), nasa, and noaa, which reports on food insecurity conditions and issues watches and warnings to decision-makers. these bulletins are also available on their website. food security prediction estimates and maps would be extremely useful for emergency response, resources allocation, and early warning. the food security and nutrition working group (fsnwg) serves as a platform to promote the disaster risk reduction agenda in the region and provides monthly updates on food security through maps and reports available on its website. fewsnet and fsnwg were instrumental in predicting the food crisis in - in the east african region in a timely manner. nevertheless these early warnings did not lead to early action to address the food crisis. if they had been used adequately, the impacts of the serious humanitarian crisis in the horn of africa could have been partially mitigated (ververs, ) . nearly all efforts to cope with climate change or variability focus on either mitigation to reduce emissions or on adaptation to adjust to changes in climate. although it is imperative to continue with these efforts, the on-going pace of climate change and the slow international response suggests that a third option is becoming increasingly important: to protect the population against the immediate threat and consequences of climate-related extreme events, including heat waves, forest fires, floods and droughts, by providing it with timely, reliable and actionable warnings. although great strides have been made in developing climate-related warning systems over the past few years, current systems only deal with some aspects of climate related risks or hazards, and have large gaps in geographic coverage. information exists for melting glaciers, lake water level, sea height and sea surface temperature anomalies, el nino and la nina. the national snow and ice data center (nsidc)-ice concentration and snow extent provides near real-time data on daily global ice concentration and snow coverage. the usda, in cooperation with the nasa and the university of maryland, routinely monitors lake and reservoir height variations for approximately lakes worldwide and provides online public access to lake water level data. information on sea height anomaly (sha) and significant wave height data are available from altimeter jason- , topex, ers- , envisat and gfo on a near-real time basis with an average -day delay. this information is provided by noaa. additionally, near real-time sea surface temperature (sst) products are available from noaa's goes and poes, as well as nasa's eos, aqua and terra. the international research institute (iri) for climate and society provides a monthly summary of the el nino and la nina southern oscillation, providing forecast summary, probabilistic forecasts, and a sea surface temperature index. however, these systems are still far from providing the coverage and scope that is needed and technically feasible. large parts of the world's most vulnerable regions are still not covered by a comprehensive early warning system. most systems only deal with one aspect of climate-related risks or hazards, e.g., heat waves or drought. finally, most systems do not cover the entire early warning landscape from collection of meteorological data to delivery and response of users. 
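the el nino / la nina summaries mentioned above are based on sea surface temperature anomalies in the tropical pacific. the sketch below applies the commonly used recipe of a 3-month running mean of monthly nino 3.4 anomalies with +/-0.5 degree c thresholds; operational declarations additionally require persistence over several overlapping seasons, which is not checked here, and the anomaly values are invented.

import numpy as np

def three_month_running_mean(anomalies_c: np.ndarray) -> np.ndarray:
    """Centered 3-month running mean of monthly SST anomalies (ONI-style)."""
    return np.convolve(anomalies_c, np.ones(3) / 3.0, mode="valid")

def enso_state(oni_value_c: float) -> str:
    # +/-0.5 C are the commonly used cut-offs; persistence checks are omitted
    if oni_value_c >= 0.5:
        return "el nino conditions"
    if oni_value_c <= -0.5:
        return "la nina conditions"
    return "neutral"

if __name__ == "__main__":
    monthly_anomalies = np.array([0.1, 0.3, 0.6, 0.9, 1.1, 1.2, 0.8, 0.4, 0.0, -0.4, -0.7, -0.9])
    for i, oni in enumerate(three_month_running_mean(monthly_anomalies)):
        print(f"season {i + 1:2d}: ONI {oni:+.2f} C -> {enso_state(oni)}")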
recently, the world meteorological organization proposed a global framework for climate services, which aims to strengthen the global cooperative system for collecting, processing and exchanging observations and for using climate-related information (wmo, ). early warning technologies appear to be mature in certain fields but not yet in others. considerable progress has been made, thanks to advances in scientific research and in communication and information technologies. nevertheless, a significant amount of work remains to fill existing technological, communication, and geographical coverage gaps. early warning technologies are now available for almost all types of hazards, although for some hazards (such as droughts and landslides) these technologies are still less developed. most countries appear to have early warning systems for disaster risk reduction. however, there is still a technological and national capacity divide between developed and developing countries. from an operational point of view, some elements of the early warning process are not yet mature. in particular, it is essential to strengthen the links between sectors involved in early warning, including organizations responsible for issuing warnings and the authorities in charge of responding to these warnings, as well as promoting good governance and appropriate action plans. it is generally recognized that it is fundamental to establish effective early warning systems to better identify the risk and occurrence of hazards and to better monitor the population's level of vulnerability. although several early warning systems are in place at the global scale in most countries for most hazard types, there is the need ''to work expeditiously towards the establishment of a worldwide early warning system for all natural hazards with regional nodes, building on existing national and regional capacity such as the newly established indian ocean tsunami warning and mitigation system'' ( un world summit outcome). by building upon ongoing efforts to promote early warning, a multi-hazard early warning system will have a critical role in preventing hazardous events from turning into disasters. a globally comprehensive early warning system can be built, based on the many existing systems and capacities. this will not be a single, centrally planned and commanded system, but a networked and coordinated assemblage of nationally owned and operated systems. it will make use of existing observation networks, warning centres, modelling and forecasting capacities, telecommunication networks, and preparedness and response capacities (united nations, ) . a global approach to early warning will also guarantee consistency of warning messages and mitigation approaches globally thus improving coordination at a multi-level and multi-sector scale among the different national actors such as the technical agencies, the academic community, disaster managers, civil society, and the international community. the next section provides an analysis of existing global early warning/monitoring systems that aggregate multi-hazard information. this section presents the results of a comparative analysis of multi-hazard global monitoring/ early warning systems. the aim of this analysis is to assess the effectiveness of existing global scale multi-hazard systems and define the set of needs suggested by comparing existing services. 
it assesses existing monitoring/early warning systems (singh and grasso, ), chosen to be multi-hazard with global coverage, such as wfp's (the un food aid agency's) hews; alertnet, the humanitarian information alert service by reuters; reliefweb, the humanitarian information alert service by un-ocha; gdacs (global disaster alert and coordination system), a joint initiative of the united nations office for the coordination of humanitarian affairs (un-ocha) and the ec-jrc; and the usgs ens. these systems have been analysed for the type of events covered, the variety of output/communication forms and the range of users served. these systems cover a range of hazards and communicate results using a variety of methods. usgs ens only provides information on earthquakes and volcanic eruptions, through the volcano hazards program (vhp) in collaboration with the smithsonian institution. reliefweb focuses on natural hazards (earthquakes, tsunamis, severe weather, volcanic eruptions, storms, floods, droughts, cyclones, insect infestation, fires, and technological hazards) and health; alertnet additionally provides information on food insecurity and conflicts. gdacs provides timely information on natural hazards (earthquakes, tsunamis, volcanic eruptions, floods, and cyclones). hews informs users on earthquakes, severe weather, volcanic eruptions, floods, and locusts. existing systems such as hews post information on a website and provide mobile phone and rss services, while gdacs sends e-mails, sms and faxes to users. the gdacs notification service mostly addresses humanitarian organizations, rescue teams or aid agencies. alertnet provides information to users through a web service, e-mail, sms and reports. reliefweb uses web services, e-mail and reports to disseminate information to users. reliefweb and alertnet also use new communications tools (facebook and twitter). ocha's virtual on-site operations coordination centre (virtual-osocc) enables real-time information exchange by all actors of the international disaster response community during the early phases following disasters. this service has been integrated within gdacs but is restricted to disaster managers. the only natural event notification provided by the usgs e-mail service is earthquakes. hews offers no e-mail notifications for natural events. alertnet and reliefweb inform users on natural hazards and on health, food and security issues. usgs, reliefweb, alertnet and gdacs serve a wide variety of users, such as international organizations, humanitarian aid, policy/decision-makers, and civil society (fig. ). the optimal global coverage multi-hazard system has to be as comprehensive as possible in terms of content, output and range of users. it will enhance existing systems by streaming data and information from existing sources, and it will deliver this information in a variety of user-friendly formats to reach the widest range of users. by building on existing systems, the multi-hazard system will inherit both the technological and geographical coverage gaps and limitations of existing early warning systems. the review analysis performed in the section ''inventory of early warning systems'' has shown that for some hazards (such as droughts and landslides) these technologies are still less developed, and for tsunamis these systems are still under development for areas at risk. the analysis has shown that there is still a technological and national capacity divide between developed and developing countries.
from an operational point of view, the links and communication networks between all sectors involved (organizations responsible for issuing warnings and the authorities in charge of responding to these warnings) need improvement. similarly, good governance and appropriate action plans need to be promoted. overcoming these gaps and enhancing, integrating, and coordinating existing systems is the first priority for the development of a global scale multi-hazard early warning system. early warning technologies have greatly benefited from recent advances in communication and information technologies and from improved knowledge of natural hazards and the underlying science. nevertheless, many gaps still exist in early warning technologies and capacities, especially in the developing world, and a lot more has to be done to develop a global scale multi-hazard system. operational gaps need to be filled for slow-onset hazards in the monitoring, communication and response phases. effective and timely decision making is needed for slow-onset hazards. below are some recommendations: (1) fill existing gaps: the inventory above identified the weaknesses and gaps in existing early warning systems. technological, geographical coverage, and capacity gaps exist, in addition to operational gaps for slow-onset hazards. in particular, actions need to be taken to improve prediction capabilities for landslide hazards, aimed at developing a landslide early warning system. likewise, there is a pressing need to improve existing prediction capabilities for droughts. a global early fire warning system is not yet in place, the tsunami early warning systems for the indian ocean and the caribbean are not yet fully operational, and a desertification early warning system has not yet been developed. there are ongoing efforts to develop these systems, such as the gfmc effort for the global fire ews, the indian ocean tsunami warning system operated by indonesia and a noaa-led effort in the caribbean. a malaria early warning system is urgently needed for africa, where one million deaths occur every year due to malaria. climate variability impacts need to be monitored within a global and coordinated effort, and the global framework for climate services needs to be further elaborated and operationalized. local earthquake early warning applications are needed in high seismic risk areas where early warning systems are not yet in place. air quality and flood systems require improvements in prediction capabilities. early warning systems for dust storms and transboundary hazards do not yet exist. a coordinated volcanic early warning system that would integrate existing resources is also needed, as is an increase in the coverage of volcanic observatories. particular attention should be paid to filling gaps in decision making processes for slow-onset hazards. their extent and impact are challenging to quantify, and for this reason action and response are far more difficult for slow-onset hazards than for other natural hazards. an institutional mechanism to regularly monitor and communicate slow-onset changes is needed to keep changes under review and to enable rational and timely decisions to be taken based on improved information. (2) build capacity: the evaluation of existing early warning systems above highlighted that a technological divide between developed and developing countries still exists.
it is critical to develop basic early warning infrastructures and capacities in the parts of the developing world most affected by disasters; it is also important to promote education programmes on disaster mitigation and preparedness and to integrate disaster mitigation plans into the broader development context. poor countries suffer greater economic losses from disasters than rich countries. development plays a key role and has a significant impact on disaster risk. almost percent of the people exposed to the deadliest hazards (earthquakes, floods, cyclones and droughts) live in the developing world. the impact of disasters is largely influenced by previous development choices. by integrating disaster mitigation strategies into planning and policies, the effects of disasters can be substantially reduced and managed. ''disaster risk is not inevitable, but on the contrary can be managed and reduced through appropriate development actions'' (united nations development programme-undp, ). it is through ''risk-sensitive development planning that countries can help reduce disaster risks''. key targets for capacity building include the following: developing national research, monitoring and assessment capacity, including training in assessment and early warning; supporting national and regional institutions in data collection, analysis and monitoring of natural and man-made hazards; providing access to scientific and technological information, including information on state-of-the-art technologies; education and awareness-raising, including networking among universities with programmes of excellence in the field of emergency management; organizing training courses for local decision-makers and communities; and bridging the gap between emergency relief and long-term development. (3) bridge the gaps between science and decision making, and strengthen coordination and communication links: scientific and technological advances in modelling, monitoring and prediction capabilities could bring immense benefits to early warning if science were effectively translated into disaster management actions. bridging the gap between scientific research and decision making will make it possible to fully exploit the capacities of early warning technologies for societal benefit. the major challenge is to ensure that early warnings result in prompt responses by governments and, potentially, the international community. this requires that information be effectively disseminated in an accessible form down to the end user. this is achievable by adopting standard formats and easy-to-use tools for information dissemination, such as interactive maps, e-mails, sms, etc. the adoption of standard formats (such as the common alerting protocol, cap) for disseminating and exchanging information has to be promoted. the advantage of standard format alerts is their compatibility with all information systems, warning systems, media, and, most importantly, with new technologies such as web services. the adoption of standard formats guarantees consistency of warning messages and is compatible with all types of information systems and public alerting systems, including broadcast radio and television as well as public and private data networks, with multi-lingual warning systems and emerging technologies. this would easily replace specific application-oriented messages and would allow the merging of warning messages from several early warning systems into a single multi-hazard message format.
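to show what a standard format message looks like in practice, the sketch below assembles a minimal cap-style alert with python's standard library. the element names follow the oasis cap 1.2 schema as commonly documented, but the snippet is illustrative only and is not a validated cap producer; the identifier, sender and area are placeholder values.

import xml.etree.ElementTree as ET
from datetime import datetime, timezone

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"  # CAP 1.2 namespace
ET.register_namespace("cap", CAP_NS)

def q(tag: str) -> str:
    """Fully qualified tag name in the CAP namespace."""
    return f"{{{CAP_NS}}}{tag}"

def build_cap_alert(identifier: str, sender: str, event: str, headline: str,
                    severity: str, area_desc: str) -> bytes:
    """Assemble a minimal CAP-style alert. Illustrative only; a production feed
    should be validated against the official CAP schema."""
    alert = ET.Element(q("alert"))
    for tag, text in (("identifier", identifier), ("sender", sender),
                      ("sent", datetime.now(timezone.utc).isoformat(timespec="seconds")),
                      ("status", "Actual"), ("msgType", "Alert"), ("scope", "Public")):
        ET.SubElement(alert, q(tag)).text = text
    info = ET.SubElement(alert, q("info"))
    for tag, text in (("category", "Geo"), ("event", event), ("urgency", "Immediate"),
                      ("severity", severity), ("certainty", "Observed"), ("headline", headline)):
        ET.SubElement(info, q(tag)).text = text
    area = ET.SubElement(info, q("area"))
    ET.SubElement(area, q("areaDesc")).text = area_desc
    return ET.tostring(alert, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    print(build_cap_alert("example-2024-0001", "ews@example.org", "Flood",
                          "river level above danger mark", "Severe",
                          "lower river basin, illustrative area").decode())

because the payload is plain xml in a fixed vocabulary, the same message can be carried over e-mail, rss, sms gateways or web services, which is precisely the interoperability argument made above.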
finally, it is critical to strengthen coordination and communication links by defining responsibility mechanisms and appropriate action plans. often, warning messages are released as a time sequence in early warning processes, implying a trade-off between the warning time available for action and the reliability of the information. this trade-off needs to be addressed. glossary. adaptation: (1) actions taken to help communities and ecosystems cope with changing climate conditions (unfccc). (2) genetically determined characteristic that enhances the ability of an organism to cope with its environment (cbd). aerosols: a collection of airborne solid or liquid particles, with a typical size between . and mm, that resides in the atmosphere for at least several hours. aerosols may be of either natural or anthropogenic origin. air quality: smog is the product of human and natural activities such as industry, transportation, wild-fires, volcanic eruptions, etc. and can have serious effects on human health and the environment. the us epa uses an air quality index (aqi) calculated from five major air pollutants regulated by the clean air act: ground-level ozone, particle pollution (also known as particulate matter), carbon monoxide, sulfur dioxide, and nitrogen dioxide. for each of these pollutants, the epa has established national air quality standards to protect public health. ground-level ozone and airborne particles are the two pollutants that pose the greatest threat to human health in the united states. biodiversity: the variety of life on earth, including diversity at the genetic level, among species and among ecosystems and habitats. it includes diversity in abundance, distribution and in behaviour. biodiversity also incorporates human cultural diversity, which can both be affected by the same drivers as biodiversity, and itself has impacts on the diversity of genes, other species and ecosystems.
biofuels: fuel produced from dry organic matter or combustible oils from plants, such as alcohol from fermented sugar, black liquor from the paper manufacturing process, wood and soybean oil. biomass: organic material, both above ground and below ground, and both living and dead, such as trees, crops, grasses, tree litter and roots. capacity: the combination of all the strengths, attributes and resources available within a community, society or organization that can be used to achieve agreed goals. comment: capacity may include infrastructure and physical means, institutions, societal coping abilities, as well as human knowledge, skills and collective attributes such as social relationships, leadership and management. capacity also may be described as capability. capacity assessment is a term for the process by which the capacity of a group is reviewed against desired goals, and the capacity gaps are identified for further action. capacity building: process of developing the technical skills, institutional capability, and personnel. climate change: change of climate, which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over comparable time periods. climate variability: variations in the mean state and other statistics (such as standard deviations and the occurrence of extremes) of the climate on all temporal and spatial scales beyond that of individual weather events. variability may be due to natural internal processes in the climate system (internal variability), or to variations in natural or anthropogenic external forcing (external variability). common alerting protocol: the common alerting protocol (cap) provides an open, nonproprietary digital message format for all types of alerts and notifications. it does not address any particular application or telecommunications method. the cap format is compatible with emerging techniques, such as web services, as well as existing formats including the specific area message encoding (same) used for the united states' national oceanic and atmospheric administration (noaa) weather radio and the emergency alert system (eas). cost-benefit analysis: a technique designed to determine the feasibility of a project or plan by quantifying its costs and benefits. cyclone: an atmospheric closed circulation rotating counter clockwise in the northern hemisphere and clockwise in the southern hemisphere. deforestation: the direct human-induced conversion of forested land to non-forested land. desertification: degradation of land in arid, semi-arid and dry sub-humid areas, resulting from various factors, including climatic variations and human activities. disaster: a serious disruption of the functioning of a community or a society involving widespread human, material, economic or environmental losses and impacts, which exceeds the ability of the affected community or society to cope using its own resources. comment: disasters are often described as a result of the combination of: the exposure to a hazard; the conditions of vulnerability that are present; and insufficient capacity or measures to reduce or cope with the potential negative consequences. disaster impacts may include loss of life, injury, disease and other negative effects on human physical, mental and social well-being, together with damage to property, destruction of assets, loss of services, social and economic disruption and environmental degradation. 
disaster risk: the potential disaster losses, in lives, health status, livelihoods, assets and services, which could occur to a particular community or a society over some specified future time period. comment: the definition of disaster risk reflects the concept of disasters as the outcome of continuously present conditions of risk. disaster risk comprises different types of potential losses which are often difficult to quantify. nevertheless, with knowledge of the prevailing hazards and the patterns of population and socio-economic development, disaster risks can be assessed and mapped, in broad terms at least. disaster risk reduction: the concept and practice of reducing disaster risks through systematic efforts to analyse and manage the causal factors of disasters, including through reduced exposure to hazards, lessened vulnerability of people and property, wise management of land and the environment, and improved preparedness for adverse events. comment: a comprehensive approach to reduce disaster risks is set out in the united nations-endorsed hyogo framework for action, adopted in , whose expected outcome is ''the substantial reduction of disaster losses, in lives and the social, economic and environmental assets of communities and countries.'' the international strategy for disaster reduction (isdr) system provides a vehicle for cooperation among governments, organisations and civil society actors to assist in the implementation of the framework. note that while the term ''disaster reduction'' is sometimes used, the term ''disaster risk reduction'' provides a better recognition of the ongoing nature of disaster risks and the ongoing potential to reduce these risks. droughts: a period of abnormally dry weather sufficiently prolonged for the lack of water to cause serious hydrologic imbalance in the affected area. early warning system: the set of capacities needed to generate and disseminate timely and meaningful warning information to enable individuals, communities and organizations threatened by a hazard to prepare and to act appropriately and in sufficient time to reduce the possibility of harm or loss. comment: this definition encompasses the range of factors necessary to achieve effective responses to warnings. a people-centred early warning system necessarily comprises four key elements: knowledge of the risks; monitoring, analysis and forecasting of the hazards; communication or dissemination of alerts and warnings; and local capabilities to respond to the warnings received. the expression ''end-to-end warning system'' is also used to emphasize that warning systems need to span all steps from hazard detection through to community response. earth observation: earth observation, through measuring and monitoring, provides an insight and understanding into earth's complex processes and changes. eo include measurements that can be made directly or by sensors in-situ or remotely (i.e. satellite remote sensing, aerial surveys, land or ocean-based monitoring systems, fig. ), to provide key information to models or other tools to support decision making processes. earthquakes: earthquakes are due to a sudden release of stresses accumulated around the faults in the earth's crust. this energy is released through seismic waves that travel from the origin zone, which cause the ground to shake. severe earthquakes can affect buildings and populations. 
the level of damage depends on many factors such as intensity of the earthquake, depth, vulnerability of the structures, and distance from the earthquake origin. ecosystem: dynamic complex of plant, animal, microorganism communities and their non-living environment, interacting as a functional unit. ecosystems are irrespective of political boundaries. el niñ o-southern oscillation: a complex interaction of the tropical pacific ocean and the global atmosphere that results in irregularly occurring episodes of changed ocean and weather patterns in many parts of the world, often with significant impacts over many months, such as altered marine habitats, rainfall changes, floods, droughts, and changes in storm patterns. comment: the el nino part of the el nino-southern oscillation (enso) phenomenon refers to the well-above average ocean temperatures that occur along the coasts of ecuador, peru and northern chile and across the eastern equatorial pacific ocean, while la nina part refers to the opposite circumstances when well-below-average ocean temperatures occur. the southern oscillation refers to the accompanying changes in the global air pressure patterns that are associated with the changed weather patterns experienced in different parts of the world. emergency management: the organization and management of resources and responsibilities for addressing all aspects of emergencies, in particular preparedness, response and initial recovery steps. comment: a crisis or emergency is a threatening condition that requires urgent action. effective emergency action can avoid the escalation of an event into a disaster. emergency management involves plans and institutional arrangements to engage and guide the efforts of government, non-government, voluntary and private agencies in comprehensive and coordinated ways to respond to the entire spectrum of emergency needs. the expression ''disaster management'' is sometimes used instead of emergency management. extensible markup language: a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. e-waste: a generic term encompassing various forms of electrical and electronic equipment that has ceased to be of value and is disposed of. a practical definition of e-waste is ''any electrically powered appliance that fails to satisfy the current owner for its originally intended purpose.'' false alarm: in the context of early warning systems, a false alarm is defined as the situation in which an alarm is activated when it should not have been. fine particle: particulate matter suspended in the atmosphere less than . mm in size (pm . ). floods: an overflow of water onto normally dry land. theinundation of a normally dry area caused by rising water in an existing waterway, such as a river, stream, or drainage ditch. ponding of water at or near the point where the rain fell. flooding is a longer term event than flash flooding: it may last days or weeks. floods are often triggered by severe storms, tropical cyclones, and tornadoes. food security: when all people at all times have access to sufficient, safe, nutritious food to maintain a healthy and active life. forecast: definite statement or statistical estimate of the likely occurrence of a future event or conditions for a specific area. comment: in meteorology a forecast refers to a future condition, whereas a warning refers to a potentially dangerous future condition. forest: land spanning more than . 
ha with trees higher than m and a canopy cover of more than percent, or trees able to reach these thresholds in situ. it does not include land that is predominantly under agricultural or urban land use. gaussian distribution: the gaussian (normal) distribution was historically called the law of errors. it was used by gauss to model errors in astronomical observations, which is why it is usually referred to as the gaussian distribution. geological hazard: geological process or phenomenon that may cause loss of life, injury or other health impacts, property damage, loss of livelihoods and services, social and economic disruption, or environmental damage. comment: geological hazards include internal earth processes, such as earthquakes, volcanic activity and emissions, and related geophysical processes such as mass movements, landslides, rockslides, surface collapses, and debris or mud flows. hydrometeorological factors are important contributors to some of these processes. tsunamis are difficult to categorize; although they are triggered by undersea earthquakes and other geological events, they are essentially an oceanic process that is manifested as a coastal water-related hazard. within this report, tsunamis are included in the geological hazards group. geographic information system: a computerized system organizing data sets through a geographical referencing of all data included in its collections. the potential for earthquake early warning in southern california oregon department of environmental quality. water quality credit trading in oregon: a case study report disaster management support group (dmsg) ceos disaster management support group report. the use of earth observing satellites for hazard support: assessments & scenarios. final report of the ceos disaster management support group natural disaster hotspots: a global risk analysis mexico city seismic alert system environmental change monitoring by geoinformation technology for baghdad and its neighboring areas geoss -year implementation plan reference document early warning systems early warning of wildland fires usable science: early warning systems: do's and don'ts automated decision procedure for earthquake early warning automated decision procedure for earthquake early warning a model for a seismic computerized alert network an automatic processing system for broadcasting earthquake alarms global observations of the land, a proposed theme to the igos-partnership-version probability theory: the logic of science jma earthquake early warning real-time seismology and earthquake damage mitigation global forest fire watch: wildfire potential, detection, monitoring and assessment oil spill detection and monitoring from satellite image on a rational strong motion index compared with other various indices the role of a new method of quickly estimating epicentral distance and magnitude from a single seismic record prediction, science, decision making and the future of nature university of valencia,eos.d c. chinese academy of forestry effective early warning -use of hazard maps as a tool for effective risk communication among policy makers and communities -session organized by the government of japan global environmental alert service digital change detection techniques using remotely sensed data the drought monitor ict in pre-disaster awareness and preparedness. 
presented at the apt-itu joint meeting for ict role in disaster management early warning systems for natural disasters reduction undp-united nations development programme bureau for crisis prevention and recovery united nations convention to combat desertification the east african food crisis: did regional early warning systems function climate knowledge for action: a global framework for climate services-empowering the most vulnerable quick and reliable determination of magnitude for seismic early warning a virtual sub-network approach to earthquake early warning key: cord- -gm b u authors: fazeli, shayan; moatamed, babak; sarrafzadeh, majid title: statistical analytics and regional representation learning for covid- pandemic understanding date: - - journal: nan doi: nan sha: doc_id: cord_uid: gm b u the rapid spread of the novel coronavirus (covid- ) has severely impacted almost all countries around the world. it not only has caused a tremendous burden on health-care providers to bear, but it has also brought severe impacts on the economy and social life. the presence of reliable data and the results of in-depth statistical analyses provide researchers and policymakers with invaluable information to understand this pandemic and its growth pattern more clearly. this paper combines and processes an extensive collection of publicly available datasets to provide a unified information source for representing geographical regions with regards to their pandemic-related behavior. the features are grouped into various categories to account for their impact based on the higher-level concepts associated with them. this work uses several correlation analysis techniques to observe value and order relationships between features, feature groups, and covid- occurrences. dimensionality reduction techniques and projection methodologies are used to elaborate on individual and group importance of these representative features. a specific rnn-based inference pipeline called doublewindowlstm-cp is proposed in this work for predictive event modeling. it utilizes sequential patterns and enables concise record representation while using but a minimal amount of historical data. the quantitative results of our statistical analytics indicated critical patterns reflecting on many of the expected collective behavior and their associated outcomes. predictive modeling with doublewindowlstm-cp instance exhibits efficient performance in quantitative and qualitative assessments while reducing the need for extended and reliable historical information on the pandemic. i n the early days of the year , the world faced another widespread pandemic, this time of the covid- strand, otherwise known as the novel coronavirus. the family of coronaviruses to which this rna virus belongs can cause respiratory tract infections of various severities. these infections range from cases of the common cold to the more lethal degrees. many of the confirmed cases and deaths reported due to covid- showed evidence of severe forms of the aforementioned infections [ ] , [ ] , [ ] . the origin of this new virus is still not clearly understood; however, it is believed to be mainly connected to the interactions between humans and particular animal species [ ] . the rapid spread of this virus has led to many lives being lost and extremely overwhelmed the health-care providers. it also led to worldwide difficulties and had considerable negative economic impacts. 
it is also expected to have adverse effects on mental health as well due to prolonged shutdowns and quarantines, and there are guidelines published to help minimize this negative impact [ ] . in this work, we have gathered, processed, and combined several well-known publicly available datasets on the covid- outbreak in the united states. the idea is to provide a reliable source of information derived from a wide range of sources on important features describing a region and its population from various perspectives. these features primarily have to do with demographics, socio-economic, and public health aspects of the us regions. they are chosen in this manner because it is plausible to assume that they can be potential indicators of commonalities between the affected areas. even though finding causality is not the objective of this work, our analyses attempt to shed light on these possible commonalities that allow public health researchers to obtain a better perspective on the nature of this pandemic and the potential factors contributing to a slower outbreak. this is vitally important as the critical role of proper policies enforced at the proper time is evident now more than ever. there has been widespread attention in the design and utilization of artificial intelligence-based tools to obtain a better understanding this pandemic. accordingly, we present a neural architecture with recurrent neural networks in its core to allow the machine to learn to predict pandemic events in the near future, given a short window of historical information on static and dynamic regional features. the main assumption that this work attempts to empirically validate is that the concise arxiv: . v [cs.cy] aug pandemic-related region-based representations can be learned and leveraged to obtain accurate outbreak event prediction with only minimal use of the historical information related to the outbreak. aside from the theoretical importance, an essential application of this framework is when the reported historical pandemic information, e.g., number of cases, is not reliable. an example of this is when a region discovers a problem in its reporting scheme that makes the historical information on the pandemic inaccurate due to overestimation or underestimation. such unreliability will severely affect the models which use this historical information as the core of their analysis. in summary, the contributions of this work are as follows: • gathering and providing a thorough collection of datasets for the fine-grained representation of us counties as subregions. this collection includes data from various us bureaus, health organizations, the center for disease control and prevention, and covid- epidemic information. • evaluation of the informativeness of individual features in distinguishing between regions • correlation analyses and investigating monotonic and non-monotonic relationships between several key features and the pandemic outcomes • proposing a neural architecture for accurate short-term predictive modeling of the covid- pandemic with minimal use of historical data by leveraging the automatically learned region representations given the importance of open-research in dealing with the covid- pandemic, we have also designed olivia [ ] . olivia is our online interactive platform with various utilities for covid- event monitoring and analytics, which allows both expert researchers and users with little or no scientific background to study outbreak events and regional characteristics. 
the codes for this work and the collection of datasets are also available as well. since the beginning of the covid- pandemic, there have been efforts in utilizing computerized advancements in controlling and understanding this disease. an example is the applications developed to monitor the patients' locations and routes of movement. a notable work in this area is mit's safepaths application [ ] that contains interview and profiling capability for places and paths. it is worthwhile to mention that these platforms have also caused worries regarding maintaining patients' privacy [ ] . to provide researchers and government agencies with frequently updated monitoring information regarding the coronavirus, point acres team has provided an api that allows access to the daily updated numbers of coronavirus cases [ ] , [ ] . several datasets such as [ ] are also released to the public. a large corpus of scientific articles on coronaviruses is released as well as a result of a collaboration between allenai institute, microsoft research, chan-zuckerberg initiative, nih, and the white house [ ] . there have been projects such as a work at john hopkins university that are focused on providing us county-level summaries of covid- pandemic information and important attributes [ ] , [ ] . the information in social networks has also been used in predicting the number or covid- cases in mainland china [ ] . the work in [ ] is also focused on an ai-based approach for predicting mortality risk in covid- patients. there have been numerous approaches to model the pandemic using ai that have the historical outbreak information at the core of their analyses, such as the modified versions of seir model and arima-based analysis [ ] , [ ] , [ ] , [ ] , [ ] , [ ] . this work is distinguished from the mentioned projects and the majority of statistical works in this area in the sense that it is targeting the role of region-based features in the spatio-temporal analysis of the pandemic with minimal use of historical data on the outbreak events. the area unit of this work is us county which enables a more fine-grained prediction scheme compared to the other works that have mostly targeted the state-level analytics. to our best knowledge, the works in [ ] and [ ] are the only attempts in county-level modeling of the disease dynamics. in [ ] , authors have proposed a non-parametric model for epidemic data that incorporates area-level characteristics in the sir model. the work in [ ] uses a combination of iterated filtering and the ensemble adjustment kalman filter for tuning their model, and their approach is based on a county-level seir model. the empirical results show that our approach outperforms these models on the evaluation benchmarks while providing a framework for utilizing deep learning in analysis and modeling the short-term pandemic events. we have made our codes and data publicly available and regularly maintained to help to expedite the research in this area. this study focuses on analyzing the regions of the united states with statistical and ai-based approaches to obtain results and representations associated with their pandemic-related behavior. a primary and essential step in doing so is to prepare a dataset covering a wide range of information topics, from socio-economic to regional mobility reports. more details regarding the primary data sources from which we have obtained information for this work's dataset are elaborated upon hereunder. 
) covid- daily information per county: our first step towards the mentioned objective is to gather the daily covid- outbreak data. this data should include the number of cases that are confirmed to be caused by the novel coronavirus and its associated death toll. we are using the publicly accessible dataset api in [ ] , [ ] to fetch the relevant data records. the table of data obtained using this api contains the numerical information along with dates corresponding to each record, and each document includes the number of confirmed cases and the number of deaths that occurred due to covid- on that date. it also includes the number of recoveries from covid- in the same format. this dataset's significance is that it provides us with a detailed and high-resolution temporal trajectory of the covid- outbreak in different urban regions across the united states. using the dates, one can constitute a set of time-series for every county and monitor the outbreak along with the other metadata to make relevant inferences. ) us census demographic data: the us census demographic data gathered by the us census bureau [ ] plays a critical role in our analysis by providing us with necessary information on each region's population. additionally, this information includes specific features such as the types of work people in that region mainly take part in, their income levels, and other invaluable demographical and social information. ) us county-level mortality: the fluctuations in the mortality rate of a region is also a potential critical feature in pandemic analytics. the us county-level mortality dataset was incorporated into our collection to add the high-resolution mortality rate time-series throughout the years [ ] , [ ] . the age-standardized mortality rates provide us with information on variables, the values of which can be considered as the effects of specific causes. it is crucial since some of these causes might have contributed to the faster spread of covid- in different regions [ ] . ) us county-level diversity index: another dataset that offers a race-based breakdown of the county populations is available at [ ] with the diversity index values corresponding to the notion of ecological entropy. for a particular region, if k races comprise its population, the value of diversity index can be computed using the following formula: in the above formula, n is the total population and n i is the number of people from race i. this formula represents the probability p, which means that if we randomly pick two persons from this cohort, they are of different races with probability p. in addition to that, we have the percentages of different races in the regional population as well. ) us droughts by county: another source of valuable information regarding the land area and water resources per county is the data gathered by the us drought monitor [ ] , [ ] . this data is incorporated into our collection as well. ) election: based on the us presidential election, a breakdown of county populations' tendencies to vote for the main political parties is available [ ] . these records are added to our collection as the democratic-republican breakdown of regional voters can reflect socio-economic and demographical features that form the underlying reasons for the regional voting tendencies. ) icu beds: since covid- imposes significant problems in terms of the extensive use of icu beds and medical resources such as mechanical ventilators, having access to the number of icu beds in each county is helpful. 
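returning to the diversity index described above, the exact estimator is not fully recoverable from the garbled formula, so the sketch below shows two common ways of computing the probability that two randomly chosen residents belong to different racial groups; which variant the source dataset uses (sampling with or without replacement) is an assumption.

def diversity_index_with_replacement(counts):
    """Gini-Simpson style estimate: 1 - sum((n_i / N) ** 2)."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((n / total) ** 2 for n in counts)

def diversity_index_without_replacement(counts):
    """Probability that two distinct residents differ in race:
    1 - sum(n_i * (n_i - 1)) / (N * (N - 1))."""
    total = sum(counts)
    if total < 2:
        return 0.0
    same = sum(n * (n - 1) for n in counts)
    return 1.0 - same / (total * (total - 1))

if __name__ == "__main__":
    county_counts = [52_000, 23_000, 15_000, 10_000]  # hypothetical county with four groups
    print(round(diversity_index_with_replacement(county_counts), 4))
    print(round(diversity_index_without_replacement(county_counts), 4))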
this information offers a glance at the medical care capacity of each region and its potential to provide care for the patients in icus [ ] . it could be argued that having knowledge of the icu-related capacity of regional healthcare providers can, to some extent, represent the amount of their covid- related resources, such as ventilators and other needed resources. the aggregate dataset on central statistical values on the us household income per county (including average, median, and standard deviation) is used to provide information on the financial well-being of the affected regions' occupants [ ] . ) covid- hospitalizations and influenza activity level: aside from the socio-economical and demographical features of a region, the number of active and potential covid- cases is a critical factor. this information can be leveraged to provide a possible threat level for the region. these records are made available by cdc for specific areas and are incorporated into our collection as well [ ] , [ ] . ) google mobility reports: the covid- virus is highly contagious. therefore, the self-quarantine and social distancing measures are principal effective methodologies in bolstering the prevention efforts. our collection includes google's mobility reports obtained from [ ] . these records elaborate on the mobility levels across us regions, which are broken down into the following categories of mobility: ) retail and recreation ) grocery and pharmacy ) parks ) transit stations ) workplaces ) residential in addition, we have computed a compliance measure that has to do with the overall compliance with the shelter at home criteria: . in the above formula, m i is the mobility report for the ith mobility category. this value is computed through time to provide an overall measure of mobility through time. the compliance measures of + and − mean + % and − % changes from the baseline mobility behavior, respectively. ) food businesses: restaurants and food businesses are affected severely by the economic impacts of this outbreak. at the same time, they have not ceased to provide services that are essential and required by many. to reach a proper perspective of the food business in each region, we have prepared another dataset based on records in [ ] to provide statistics on regional restaurant revenue and employment. analysis of restaurants status is important in the sense that they are mostly public places that host large gatherings, and in the time of a pandemic, their role is critical. ) physical activity and life expectancy: various features have been selected from the dataset in [ ] to reflect on the obesity and physical activity representation for different us regions. these features include the last prevalence survey and the changes in patterns. also, life expectancy related features are valuable information for representing each region. they are included as well in our analyses. ) diabetes: different features to represent a region according to the diabetes-related characteristics were selected from the data in [ ] . these include age-standardized features and clusters that have to do with diabetes-related diagnoses. ) drinking habits: information on regional drinking habits from - has also been used in this work [ ] . this information includes the proportions of different categories of drinkers clustered by sex and age. 
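returning to the mobility-based compliance measure described above, its exact formula is not recoverable from the text; the sketch below shows one plausible construction that averages the percentage changes from baseline across the six google mobility categories, flipping the sign of the non-residential categories so that reduced mobility outside the home counts as positive compliance. the treatment of the residential category is an assumption rather than a stated detail.

# One plausible construction of the compliance score under stated assumptions:
# average the percentage changes from baseline across the six mobility
# categories, with the sign flipped for non-residential categories so that
# reduced mobility outside the home (and more time at home) scores positive.
from typing import Mapping

MOBILITY_CATEGORIES = [
    "retail_and_recreation",
    "grocery_and_pharmacy",
    "parks",
    "transit_stations",
    "workplaces",
    "residential",
]

def compliance_score(mobility: Mapping[str, float]) -> float:
    """mobility maps each category to its % change from baseline (e.g. -35.0).
    Returns a score where +x roughly means an x% shift toward staying home.
    Treating 'residential' with the opposite sign is an assumption."""
    total = 0.0
    for category in MOBILITY_CATEGORIES:
        change = mobility[category]
        total += change if category == "residential" else -change
    return total / len(MOBILITY_CATEGORIES)

if __name__ == "__main__":
    # hypothetical daily report for one county
    report = {
        "retail_and_recreation": -40.0,
        "grocery_and_pharmacy": -12.0,
        "parks": 5.0,
        "transit_stations": -55.0,
        "workplaces": -38.0,
        "residential": 14.0,
    }
    print(round(compliance_score(report), 2))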
the categories are as follows: • any: a minimum of one drink of any alcoholic beverage per days • heavy: a minimum average of one drink per day for women and two drinks for men per days • binge: a minimum of four drinks for women and five drinks for men on a single occasion at least once per days ) analytics: in what follows, the analytical techniques that we have designed and used in this work are explained. to draw meaning from the data that we have at hand, we have designed and utilized a variety of techniques. these methodologies range from traditional statistical methodologies to the design and testing of deep learning inference pipelines for event prediction. we select a set of representative features to use in our analytics from the gathered collection of datasets. more details on the nature of these features are shown in table i. ) feature informativeness for sub-region representation: an important question that is raised in analyzing a dataset with well-defined categories of features is how important these features are in describing the entities associated with them. from the particular perspective of enabling the differentiation between two regions, it can be said that a measure of importance is the contribution of each one of these selected features to the overall variation in datapoints. the boundary case is that if a feature always has the same value, it is not informative as there is no entropy value associated with its distribution. to begin with, we associate a mathematical vector with each data point, which contains the values of all its dynamic and static features associated with a specific date and location. since we are mainly targeting us counties in this study, each record would be associated with a us county at a specific date. we then use linear principal component analysis [ ] to reduce the dimensionality of these data points and to evaluate the importance of the selected features in terms of their contribution to the overall variation. results show that in order to retain over % of the original variance, a minimum of principal components should be considered. each one of these components is found as a linear combination of the original set of features, and that, along with the percentage of variance along the axis of that component, can be used as a measure of performance. to be more specific, considering n features and m data points that result in p pca components retaining the target share of the variation, let u i denote the total variance along the axis of the ith pca component. this can be thought of as a measure of importance for the pca components, and the absolute values of the v i magnitudes can be considered as the importance of original feature i's contribution to its making. on this basis, a measure of informativeness is defined for our features. the features can be sorted according to these values, and the categories can also be considered in their relative importance. note that this is just one definition of informativeness; for example, certain features might not vary a lot, but when they do, they are potentially associated with severe changes in the covid- events. therefore, the importance score that has been captured here merely has to do with how well we are able to distinguish between locations based on a feature.
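a minimal sketch of this informativeness score is given below: pca is fitted on standardized county feature vectors, enough components are kept to reach a target share of the variance, and each original feature is scored by the magnitude of its loadings weighted by the explained variance of the retained components. because the paper's exact equation is not reproduced here, this weighting scheme should be read as one plausible interpretation rather than the authors' definitive formula.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def feature_informativeness(X, variance_target=0.95):
    """X: (m records x n features) matrix; returns one informativeness score
    per original feature, based on loading magnitudes weighted by each
    retained component's share of explained variance."""
    X_std = StandardScaler().fit_transform(X)
    pca = PCA(n_components=variance_target)  # keep components covering the target variance
    pca.fit(X_std)
    loadings = np.abs(pca.components_)                  # (p components, n features)
    weights = pca.explained_variance_ratio_[:, None]    # variance share per component
    return (loadings * weights).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_demo = rng.normal(size=(500, 12))  # hypothetical county-by-feature matrix
    scores = feature_informativeness(X_demo)
    print("features ranked by informativeness:", np.argsort(scores)[::-1])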
in order to better understand the co-occurrences of the features in our input dataset and their corresponding covid- related events, we have performed an in-depth correlation analysis on them. we have considered four principal measures of correlation, namely: pearson, kendall, histogram intersection, and spearman, as described in table ii . we have used the pearson correlation coefficient along with the p-values to shed light on the presence or absence of a significant relationship between the values of each specific feature and each category of pandemic outcome. we have also computed nonparametric spearman rank correlation coefficients between any two of our random variables. this value would be computed as the pearson measure of the raw values converted to their ranks. the formulation is shown in table ii in which d i is the difference in paired ranks. mutual information has also been used to provide additional information on such relationships. this coefficient measures the strength of the association between the values of these random variables in terms of their ranks. since many of the relationships in our dataset can be intuitively thought of as monotonic, these values are particularly important. to better understand the concordance and discordance, kendall correlation is computed as well. in the formulation shown in table ii , m and m are the numbers of concordant and discordant pairs of values, respectively. normalized histogram intersection is another methodology directly targeting the distributions of these variables. the degree of their overlap represents how closely xs distribution follows the distribution of y. it has also been utilized in finding the results of this section. in continuation of our statistical analyses on covid- event distributions, we have designed a neural inference pipeline to help with the effective utilization of both learned deep representations and the embedded sequential information in the dataset. in this work, we introduce a neural architecture, which is trained and used for covid- event prediction across the us regions. the double window long short term memory covid- predictor (dwlstm-cp) is comprised of multiple components for domain mapping and deep processing. first, using its dynamic projection which is a fully connected layer, the dynamic feature vectors which reflect on temporal dynamics will be mapped to a new space and represented with a further concise mathematical vector. this step is essential due to the fact that an optimal deep inference pipeline is the one that retains only the information required by each level and minimizes redundancies [ ] . the projections are designed to help the network achieve this objective. these are then fed to the lstm core for processing. each one of these outputs is concatenated with the projected version of static features, f static projection (x static ), and fed to the output regression unit. the outputs are compared with the ground truth time-series, and a weighted mean squared error loss along with norm-based regularization is used to guide the training process while encouraging more focus on the points with large values. the overall pipeline is shown in figure . it is worth mentioning that this approach leverages and utilizes all of the features discussed in the previous sections. 
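the sketch below illustrates the dwlstm-cp pipeline just described using pytorch: a dynamic projection feeding an lstm core, per-step concatenation with projected static features, an output regression unit, a weighted mean squared error loss that up-weights large target values, and norm-based regularization via weight decay. layer sizes, window lengths, the weighting threshold and the regularization strength are illustrative assumptions, and treating the input and output windows as equal in length is an interpretation of the "double window" naming rather than a stated detail.

import torch
import torch.nn as nn

class DoubleWindowLSTMCP(nn.Module):
    """Sketch of the DWLSTM-CP idea: dynamic projection -> LSTM -> per-step
    concatenation with projected static features -> output regression unit."""
    def __init__(self, n_dynamic, n_static, proj_dim=32, hidden_dim=64):
        super().__init__()
        self.dynamic_projection = nn.Linear(n_dynamic, proj_dim)
        self.static_projection = nn.Linear(n_static, proj_dim)
        self.lstm = nn.LSTM(proj_dim, hidden_dim, batch_first=True)
        self.regressor = nn.Linear(hidden_dim + proj_dim, 1)

    def forward(self, dynamic_seq, static_vec):
        # dynamic_seq: (batch, window, n_dynamic); static_vec: (batch, n_static)
        projected = torch.relu(self.dynamic_projection(dynamic_seq))
        outputs, _ = self.lstm(projected)                        # (batch, window, hidden)
        static = torch.relu(self.static_projection(static_vec))  # (batch, proj)
        static = static.unsqueeze(1).expand(-1, outputs.size(1), -1)
        combined = torch.cat([outputs, static], dim=-1)
        return self.regressor(combined).squeeze(-1)              # one value per output step

def weighted_mse(pred, target, threshold=10.0, large_weight=5.0):
    """MSE that up-weights steps whose ground-truth value exceeds a threshold;
    the threshold and weight are illustrative, not the paper's tuned values."""
    weights = torch.where(target > threshold,
                          torch.full_like(target, large_weight),
                          torch.ones_like(target))
    return (weights * (pred - target) ** 2).mean()

if __name__ == "__main__":
    model = DoubleWindowLSTMCP(n_dynamic=8, n_static=40)
    # weight_decay plays the role of the norm-based regularization mentioned above
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    dyn = torch.randn(16, 14, 8)                   # 14-day input window (illustrative)
    sta = torch.randn(16, 40)                      # static regional features
    target = torch.abs(torch.randn(16, 14)) * 20   # ground-truth window (illustrative)
    loss = weighted_mse(model(dyn, sta), target)
    loss.backward()
    optimizer.step()
    print(float(loss))

in practice the model would be trained on sequences assembled per county, with the static vector drawn from the regional features described earlier and the dynamic sequence drawn from the time-varying features such as mobility and influenza activity.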
it learns representations that take various factors, from different categories of mobility and activities to socio-economic information, to make accurate short-term predictions while reducing the need for lengthy historical data on the pandemic outcomes. there are many occasions in which accurate and reliable historical data on the pandemic is not available due to a variety of reasons (e.g., a problem in reporting scheme), which motivates approaches with less dependency on it. the results on our regional dataset in terms of feature importance from the principal component analysis indicate the following features contribute to the overall representation significantly: • restaurant businesses, namely the contribution to the state economy and the count of food and beverage locations. even though we only have access to state-level data, its importance can be intuitively argued as it reflects on the counties that the state includes. this is due to the fact that the status of restaurants plays an essential role in such pandemics. • the influenza activity level is another critical feature in the analysis. given the similarity of symptoms between influenza and covid- infection, monitoring influenza activity is very helpful for covid- pandemic understanding. • diversity index, which signifies the probability of two randomly selected persons belonging to different races from a population, also plays a crucial role in representing the regions. • the changes in the mortality rate that is not associated with covid- are beneficial as well. this is also intuitively arguable as it can be thought of as a measure of mortality related sensitivity for the regions. figure shows how the projected points scatter after the pca as well. the results indicate that pca components are required to retain over % of the variance of the dataset, and figure shows the progress of covering the variance by adding the results of correlation analyses help empirically and quantitatively validate many of the relationships mentioned in the known hypotheses regarding the covid- outbreak. the pearson correlation of − . % with the p-value of . indicates a significant relationship between the percentage of food businesses in the state economy, and the average cumulative death count in its counties. another example is the value of the spearman correlation coefficients between the different types of commute to work associated with each county and the values of the pandemic-related events. from table iv , it is apparent that there is a positive relationship between the proportion of public transit as a method of commute to work and the spread of covid- in the region. another example is the pearson correlation between the ratio of different races in regions and the pandemic outcomes. it is known that covid- is affecting the african american community disproportionately [ ] . accordingly, the values in table v show a higher correlation between the ratio of african americans and the severity of covid- outcomes. cumulative covered variance by using pca components sorted by their informativeness fig. . the cumulative amount of variance covered by using up to a certain number of pca components. this is assuming that they are sorted by their corresponding eigenvalue, meaning that the first component contributes more to variance coverage than the ones selected after it. 
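the association measures reported above can be reproduced with standard tooling; the sketch below computes pearson, spearman and kendall coefficients with scipy and a normalized histogram intersection by hand. the number of bins and the min-max rescaling applied before building the histograms are assumptions, since the text does not state them.

import numpy as np
from scipy import stats

def histogram_intersection(x, y, bins=20):
    """Overlap between the normalized histograms of x and y (1.0 = identical).
    Both variables are min-max rescaled to [0, 1] first; this rescaling and
    the bin count are assumptions not stated in the text."""
    def rescale(v):
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    hx, _ = np.histogram(rescale(x), bins=bins, range=(0.0, 1.0))
    hy, _ = np.histogram(rescale(y), bins=bins, range=(0.0, 1.0))
    return float(np.minimum(hx / hx.sum(), hy / hy.sum()).sum())

def association_report(x, y):
    """Pearson, Spearman and Kendall coefficients (with p-values) plus the
    histogram intersection between one feature and one outcome series."""
    return {
        "pearson": stats.pearsonr(x, y),
        "spearman": stats.spearmanr(x, y),
        "kendall": stats.kendalltau(x, y),
        "histogram_intersection": histogram_intersection(x, y),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    transit_share = rng.uniform(0, 0.4, size=300)       # hypothetical regional feature
    cases_per_capita = 0.02 * transit_share + rng.normal(0, 0.002, size=300)
    for name, value in association_report(transit_share, cases_per_capita).items():
        print(name, value)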
the collected set of datasets in this work provide a sufficient number of records for enabling the efficient use of artificial intelligence for spatio-temporal representation learning. we show this by training instances of our proposed doublewindowl-stm architecture on the two main short-term tasks regarding epidemic modeling; namely, new daily death and case count. in our dataset, we considered the us covid- information from march st, to july nd, , in which the july data is used for our evaluations, and the rest are leveraged for training and cross-validation. the objective using which the proposed architecture was trained is a multi-step weighted mean squared error (mse) loss, which helps to minimize a notion of distance between the predictions and the target ground-truth while encouraging (by assigning larger weights) to the windows that exhibit larger values. these thresholds are empirically tuned and set prior to the training procedure. the learning curves for both experiments indicate clear convergence in figure . to quantitatively evaluate the performance, we have reported the root mean square error (rmse) for the prediction of new daily deaths and cases due to covid- in table vi . for comparison, we have used the arima model as well with the parameters set according to the work in [ ] that have fine-tuned this scheme for forecasting the dynamics of covid- cases in europe. we have also found the best arima model in each scenario according to augmented dickey-fuller (adf) tests and based on akaike information criterion (aic) and reported the results denoted by arima*. to compare with other works in this area, we had to aggregate our county-level findings to form estimators for state-level prediction. from the results reported in table vii , it is interesting to observe that the aggregated estimator based on our model achieves strong evaluation result comparable to the models that achieve highest scores, while clearly outperforming the other two models that are inherently county-level, namely, the works in [ ] and [ ] . the predictions for severak regions exhibiting different severities are shown in figure . these results can help the reader in a qualitative assessment of the model performances, in which the outputs of our approach demonstrate high stability and follow the trajectory of the ground-truth with precision. the primary objective of this work is focused on leveraging regional representations for accurate short-term predictive modeling of the epidemic with minimal use of historical data. it is plausible to assume that the features chosen in this work, which reflect on different characteristics of a region, include valuable information for efficient prediction of pandemic events. the static features include various socio-economic and demographical properties associated with a region and its population. combined with the dynamic set of features such as influenza activity level and mobility patterns, this information was leveraged along with a short track of pandemic time-series for predictive modeling. we do not claim that the data points coming from this domain are statistically sufficient for the pandemic event prediction tasks; however, empirical results indicate that they can be effectively utilized for these objectives. there are occurrences outside of this domain that can impact the outcomes (e.g., the initial impact of a large number of infected people arriving in a specific location is not initially captured by our scheme). 
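a minimal sketch of the kind of baseline comparison reported above is given below: an arima model is fitted to a single county's daily series and its forecast is scored with rmse on a held-out window. the (p, d, q) order shown is illustrative only; the arima* baseline in the paper selects orders per series using augmented dickey-fuller tests and aic, which is not reproduced here.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def rmse(pred, truth):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))

def arima_baseline_rmse(series, holdout=7, order=(2, 1, 2)):
    """Fit an ARIMA model on all but the last `holdout` days of one county's
    series and score its forecast with RMSE; the order is illustrative."""
    train, test = series[:-holdout], series[-holdout:]
    fitted = ARIMA(train, order=order).fit()
    forecast = fitted.forecast(steps=holdout)
    return rmse(forecast, test)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # hypothetical daily new-case counts for one county: noise plus a mild trend
    daily_cases = rng.poisson(20, size=90).astype(float) + np.linspace(0, 30, 90)
    print("arima baseline rmse:", round(arima_baseline_rmse(daily_cases), 2))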
nevertheless, the results indicate that the data points coming solely from this work's domain can help in the effective knowledge extraction regarding the current and future values of pandemic-related time-series. the result section elaborated on the statistical findings and introduced a measure of feature importance. in addition, a neural network architecture that has a long short-term memory configured recurrent neural network in its core was introduced to serve as a new baseline for covid- event prediction. since the beginning of the covid- outbreak, there have been works focusing on gathering information or performing statistical analysis related to this epidemic. this work is focused on learning and analysis of the high-resolution spatiotemporal representation of urban areas. we provide a collection of datasets and select a large number of features to reflect on various demographics, socio-economics, mobility, and pandemic information. we have used statistical analysis techniques to investigate the relationships between individual features and the epidemic, while also considering the contribution of such features to the overall representation power. we have also proposed a deep learning framework to validate this idea that such region-based representations can be leveraged to obtain accurate predictions of the epidemic trajectories while using but a minimal amount of historical data on the outbreak events (e.g., number of cases). even though are model is trained with the objective of providing county-level predictions, we have aggregated these county-level predictions and used these now state-level estimators to evaluate the loss on the most recent data. in table , we have compared these results with the information on the similar performance measure of the eight covid- prediction works that perform state-level inference making. it can be seen that our framework provides a simple solution which outperforms the other county-level methodologies (namely, [ ] and [ ] ) on this task. the importance of clearly defined policies enforced at the proper time on alleviating the adverse impacts of a pandemic in different areas is crystal clear. one of the important applications of this work is in providing researchers and agencies with a more in-depth understanding of the co-occurrence of idiosyncratic patterns associated with regions and the predicted pattern of the outbreak. this information can be used to assist policymakers, for example, to render the details of their decisions such as lockdowns, more fine-grained and attuned to the regional needs. these include the intensity and length of enforcing such measures. the ability to predict pandemic-related occurrences (e.g., number of deaths, cases, and recoveries) is another valuable application of this work. this knowledge will provide hospitals and healthcare facilities with targeted information to help with the efficient allocation of their resources. another important application of this work is when there is a lack of availability for accurate and reliable historical data on the epidemic events. for example, when it is realized that the previous reports on the number of cases and deaths due to the pandemic were not reliable, such finding will not affect our solution due to its less degree of dependence on the historical data on the epidemic than other models which base their analysis on them at the core of their analyses. this study has several limitations that should be discussed. 
the initial notion of feature informativeness which was discussed in the earlier sections of this article mainly has to do with the contribution of features to the variance in representing regions and areas. given the nature of this study, combining this and the relationship between them and the pandemic and providing more in-depth prior domain knowledge can help with a better definition of feature importance. our methodology provides a means to use region-based representations to obtain predictions with less reliance on the historical epidemic data. nevertheless, generalizing the network architecture in this work and providing access to more extended and reliable historical data, if possible, can be an improvement and is worthwhile as a potential future direction. utilizing attention-based methodologies and other interpretation techniques with the pre-trained weights is also a well-suited future direction to better understand what the models learn. in this study, we gathered a collection of datasets on a wide range of features associated with us regions. our approach then used various statistical techniques and machine learning to measure the relationship between these regional representations and the pandemic time-series events and perform predictive modeling with minimal use of historical data on the epidemic. both quantitative and qualitative evaluations were used in assessing the efficacy of our design, which renders it suitable for applications in various areas related to pandemic understanding and control. this is crucial since the information on the patterns and predictions related to an outbreak play a critical role in elaborate preparations for the pandemic, such as improving the allocation of resources in healthcare systems that will otherwise be overwhelmed by an unexpected number of cases. it is important for a predictive modeling approach on the pandemics to be able to help when the epidemic is in its early stages. to evaluate the performance of our approach, we have performed experiments on the early stages of the covid- pandemic as well. in this particular dataset, the march st, to may th, date range is covered. using a k-fold validation approach, the performance of the model is evaluated and reported in table ? ?. it is shown that the network operates significantly better than arima*, the details of which were discussed in the article. please note that arima based models have shown success in predicting covid- events in the literature. in the first appendix, the performance of the model on the two main tasks regarding covid- predictions and simulations was demonstrated. to add on that, table x shows the performance of the model on the task of predicting normalized cumulative table ix this table shows the average daily root mean square error for the dwlstm model compared to the arima* predictions. the evaluations are done using a dataset that contains only the early stages of the covid- outbreak in the us. the objective in the following experiments was to predict the new daily confirmed covid- death counts for each county which is attributed to the pandemic. the other factor that is shown in table x is the variations of the performance level by changing the length of the prediction window. this suggests that in the early stages, since the available data is limited, choosing smaller windows would help with the performance. 
however, based on the results in the article we came to know that as more data becomes available, the performance on the longer windows can be significantly improved. as an experiment to show the impact of the highly affected areas in teaching the machine learning model in our approach, we have tried removing the counties of new york state from the dataset and showed the results in table ? ?. the results indicate that in terms of quantitative assessment, the lack of presence for the highly affected areas causes a significant drop in the loss values. however, the qualitative analysis showed that the models do not perform well in the case of rising values, as the amount of information available on such cases to train the network on is fairly limited. this causes both family of models to be biased in making predictions that tend to underestimate the target values. virus taxonomy bat coronaviruses in china viral metagenomics revealed sendai virus and coronavirus infection of malayan pangolins (manis javanica) mental health and coping with stress during covid- pandemic olivia health analytics platform private kits: safepaths; privacy-by-design covid solutions using gps+bluetooth for citizens and public health officials apps gone rogue: maintaining personal privacy in an epidemic covid- /coronavirus real time updates with credible sources in us and canada covidnet: to bring the data transparency in era of covid- novel coronavirus dataset covid- open research dataset challenge (cord- ) a county-level dataset for informing the united states' response to covid- comparing and integrating us covid- daily data from multiple sources: a county-level dataset with local characteristics using reports of symptoms and diagnoses on social media to predict covid- case counts in mainland china: observational infoveillance study predicting mortality risk in patients with covid- using artificial intelligence to help medical decision-making spatiotemporal dynamics, nowcasting and forecasting of covid- in the united states covid- simulator learning to forecast and forecasting to learn from the covid- pandemic fast and accurate forecasting of covid- deaths using the sikja model arima-based forecasting of the dynamics of confirmed covid- cases for selected european countries initial simulation of sars-cov spread and intervention effects in the continental us us census demographical data us mortality rates by county us county-level mortality us county-level trends in mortality rates for major causes of death diversity index of us counties us drought monitor united states droughts by county county presidential election icu beds by county in the us us household income statistics a weekly summary of us covid- hospitalization data laboraty-confirmed covid- associated hospitalizations state statistics us data for download principal component analysis deep learning and the information bottleneck principle reasons coronavirus is hitting black communities so hard covid- data in the united states us facts dataset key: cord- -hrxj tcv authors: bunker, deborah title: who do you trust? the digital destruction of shared situational awareness and the covid- infodemic date: - - journal: int j inf manage doi: . /j.ijinfomgt. . sha: doc_id: cord_uid: hrxj tcv developments in centrally managed communications (e.g. twitter, facebook) and service (e.g. uber, airbnb) platforms, search engines and data aggregation (e.g. 
google) as well as data analytics and artificial intelligence, have created an era of digital disruption during the last decade. individual user profiles are produced by platform providers to make money from tracking, predicting, exploiting and influencing their users’ decision preferences and behavior, while product and service providers transform their business models by targeting potential customers with more accuracy. there have been many social and economic benefits to this digital disruption, but it has also largely contributed to the digital destruction of mental model alignment and shared situational awareness through the propagation of mis-information i.e. reinforcement of dissonant mental models by recommender algorithms, bots and trusted individual platform users (influencers). to mitigate this process of digital destruction, new methods and approaches to the centralized management of these platforms are needed to build on and encourage trust in the actors that use them (and by association trust in their mental models). the global ‘infodemic’ resulting from the covid- pandemic of , highlights the current problem confronting the information system discipline and the urgency of finding workable solutions . information system (is) artifacts have been developed and used throughout human history and over time we have witnessed their disruption of business, industry and society. the current wave of digital disruption has been caused by the development of mobile phones (that are really computers), resource sharing services such as uber and airbnb and social media communications platforms that are centrally managed (but socially distributed) like facebook, instagram and twitter. unfortunately, our is development trajectory also now sees us https://doi.org/ . /j.ijinfomgt. . in a time of post-truth , fake news and an 'infodemic' which have wreaked digital destruction and havoc on our shared mental models of what we understand to be real and true i.e. shared situational awareness . it is becoming increasingly difficult to agree on what information represents the reality and truth about crisis events within the echo chamber of social media and the opaque algorithmic biases which underpin platform providers, search engines and data aggregators (bunker, stieglitz, ehnis, & sleigh, ; himelboim, smith, rainie, shneiderman, & espina, ; noble, ; sismondo, ) . fig. shows the current level of concern with fake news about the covid- epidemic from a survey conducted in canada, china, france, germany, india, japan, mexico, saudi arabia, s. korea, u.k. and u.s. (edelman trust barometer, ) . this survey highlights that % of respondents worry that there is a lot of fake news being spread about covid- while % of respondents are having difficulty finding trustworthy and reliable information. if these levels of concern are reflected in the global population at large, then managing an adequate pandemic response through shared situational awareness becomes an impossible task. while profiling is not a new approach to the treatment of data, communications and service platform providers and data aggregators have found new ways of combining the techniques of individual user profiling (iup), data analytics (da) and artificial intelligence (ai) to monetize the vast amounts of data that have been increasingly generated by their users (liozu & ulaga, ) . 
iup, da and ai are applied to better understand, influence or manipulate an individual's opinions and social, political and economic behavior through 'nudging' mechanisms (lanzing, ). this approach to profiling is a powerful tool (zuboff, ) which is used to exploit the individual, their decisions and behavior for financial gain, but which does not effectively address issues of critical and optimal decision making and behavior for societal and group benefit, e.g. pandemic management. this is due to the creation of mental model dissonance through the misinformation and rumors that are produced and propagated by this approach. for example, in the current covid-19 pandemic, in order to stop the spread of the virus, health agencies across the globe are urging us to stay socially distant, wash our hands at every opportunity, wear masks (when necessary), and get tested if we develop symptoms. unfortunately, rumors propagated on social media platforms quite often reinforce multiple and conflicting mental models of virus conspiracies, 'quack treatments' and inaccurate information regarding government motivations for lockdowns. this can severely hamper crisis management efforts. examples of misinformation propagated during the current pandemic include conspiracy theories about the origins of the virus, the promotion of unproven remedies, and false claims about the motivations behind lockdowns. dissonant mental models are reinforced by recommender algorithms (lanzing, ), bots (mckenna, ) and trusted individual platform users or influencers (enke & borchers, ), resulting in alarming levels of digital destruction which in turn undermines social cohesion and creates a barrier to shared situational awareness and effective crisis response. we therefore see a tension and conflict arising from: (1) the need for alignment of mental models and shared situational awareness to support effective crisis management; and (2) the developments of digital disruption, destruction and the facilitation and reinforcement of dissonant mental models through post-truth perspectives and conflicting situational awareness. shared situational awareness is developed through the alignment of our mental models to represent a shared version of truth and reality on which we can act. this is an important basis for effective information sharing and decision making in crisis response (salas, stout, & cannon-bowers, ). aligned mental models help us to agree about the authenticity, accuracy, timeliness, relevance and importance of the information being communicated and give concurrence, weight and urgency to decisions and advice. harrald and jefferson ( ) highlight that shared situational awareness implies that "(1) technology can provide adequate information to enable decision makers in a geographically distributed environment to act as though they were receiving and perceiving the same information, (2) common methods are available to integrate, structure, and understand the information, and (3) critical decision nodes share institutional, cultural, and experiential bases for imputing meaning to this knowledge" (page ). we know that most crisis management agencies have established, agreed, authenticated and qualified mental models on which they base their internal operational command and control systems. this gives them assurance and governance of the information they produce (bunker, levine, & woody, ) and qualifies their decisions and recommended actions to manage crisis situations. it also engenders public trust in these agencies, to provide relevant and critical crisis information and advice for public action. fig.
highlights the current high and increasing levels of trust in government institutions during the covid-19 pandemic (edelman trust barometer, ). the key concepts used in this discussion are defined as follows:
post-truth: "…that truth has been individualized or that individuals have become, to borrow a turn of phrase from foucault, the primary and principal points of the production, application, and adjudication of truth is one important point. that emotion and personal belief are able now to outflank even objective facts and scientific knowledge is another (the claim that literature, for example, has truths to tell has long fallen on deaf ears). their articulation is decisive: with the regime's inflection, even inflation, of the indefinitely pluralized and individualized enunciative i who, by virtue of strong feeling, is able at any moment not only to recognize or know but, also, to tell or speak the truth, truth is privatized and immanitized, its universal and transcendental dimensions nullified altogether. hence, what is true for any one person need not be true for everyone or anyone else; what is true for anyone now need not necessarily be true later" (biesecker, pp - ).
fake news: "we posit that fake news is, in essence, a two-dimensional phenomenon of public communication: there is the (1) fake news genre, describing the deliberate creation of pseudojournalistic disinformation, and there is the (2) fake news label, describing the political instrumentalization of the term to delegitimize news media" (egelhofer & lecheler, page ).
infodemic: "an over-abundance of information - some accurate and some not - that makes it hard for people to find trustworthy sources and reliable guidance when they need it" (who, ).
mental model: "a concentrated, personally constructed, internal conception, of external phenomena (historical, existing or projected), or experience, that affects how a person acts" (rook, page ).
situational awareness: "refers to the degree of accuracy by which one's perception of his current environment mirrors reality" (naval aviation schools command).
when digital destruction produces mental model dissonance, shared situational awareness between crisis management agencies and the general public becomes impossible to maintain and communicate (both to and from) due to inconsistencies in what constitutes reality and truth, making crisis response unmanageable. centrally managed communications and service providers and data aggregators treat your personal data as a commodity/resource which they are generally entitled to use as they wish; however, they largely ignore the deeper understanding of the mental models on which data is produced by any one system (bunker, ). "in the social sciences, in particular, big data can blend wide-scale and finer-grained analytic approaches by providing information about individual behaviour within and across contexts" (tonidandel, king, & cortina, page ). data science harnesses the belief that data created by an individual using different applications or platforms can be seamlessly combined and then analyzed using sophisticated data analytics and ai. conversely, this belief has also fundamentally changed the way that all individuals view and interact with data and information in their daily lives. for instance, our trust in the google maps application on our phone to tell us exactly where we are at any given moment, extends to the belief that all information coming to us via our mobile phones and the applications we choose to use, must be some version of reality or the truth.
we self-select our filters (applications and services) for engagement with the wider world, and our reliance on mobile technology and applications to navigate the world is now at an all-time high, encompassing billions of users and a large share of the global population. if we are a social media platform user, however, we can be bombarded with paid advertising or 'nudged' by recommender algorithms to make contact with other platform users, information sites and products and services that are deemed to be relevant to us and part of our 'shared reality' (echterhoff, higgins, & levine, ). nudging performs three functions: (1) meeting platform users' epistemic needs; (2) meeting their relational needs; and (3) adding to the platform owner's profitability. for instance, the platform user is directed to people, products, information and communities of interest that help them to "achieve a valid and reliable understanding of the world" (levine, pg ); this then fulfills their "desire to affiliate with and feel connected to others" (levine, pg ); and platform owners and managers (and their influential users) then make lots of money selling targeted advertising by directing platform users in this way (a toy illustration of this self-reinforcing ranking dynamic is sketched later in this section). this might be a desirable situation when sharing situational awareness within a social media platform community of interest where mental models align, but when combined with platform characteristics like user anonymity and lack of information assurance, then treating a social media platform as a trusted information source for shared situational awareness becomes problematic. for example, social media platform users can be a valuable source of eyewitness information for crisis management agencies to enhance the production of shared situational awareness for crisis decision making. social media information when generated in large volumes in a crisis, however, is difficult to process. the source of the information can take time to identify and authenticate, and the information provided by them can be a problem to verify, validate, analyze and systematize. this produces a general lack of trust by crisis management agencies and other social media users in the crisis information produced on social media platforms. this can have catastrophic consequences for shared situational awareness through failure to detect and use important and relevant information or through the belief in, and the propagation of, mis-information produced on these platforms (bui, ; ehnis, ), which can also impact and undermine social benefit and cultural cohesion in times of crisis (kopp, ). we are currently living in an era of digital disruption which provides many economic and social benefits, but we must also be able to support crisis management based on shared situational awareness. post-truth perspectives, fake news and the resulting infodemic have resulted in wide-ranging digital destruction and the enablement and encouragement of mental model dissonance. how can we best address this problem? seppanen, makela, luokkala, and virrantaus ( ) have outlined the connection characteristics of shared situational awareness in an actor network. fig. highlights the configuration of the connection which includes three requirements: (1) information - to bridge the information gap through the identification of key information elements; (2) communication - to understand the fluency of how actors communicate through describing this communication in detail; and (3) trust - to analyse the role of trust on the quality and fluency of communication.
they reason that "if trust could be increased the availability, reliability, and temporal accuracy of information could be improved". recent research conducted on the use of social media platforms for crisis communication purposes so far concludes that: (1) trusted agencies have an early mover information advantage in crisis communication on social media platforms such as twitter (mirbabaie, bunker, stieglitz, marx, & ehnis, ); (2) information communicated by trusted agencies can be amplified and intensified by influential social media users and others to "communicate, self-organize, manage, and mitigate risks (crisis communications) but also to make sense of the event (commentary-related communications)", for example through retweets on twitter (stieglitz, bunker, mirbabaie, & ehnis, ); (3) trusted agencies and the information they supply are influential in shaping the human response to crisis situations (mirbabaie, bunker, stieglitz, & deubel, ); (4) trusted agencies find processing the high volumes of information communicated through social media platforms problematic due to the difficulty in authenticating the information source (user) and establishing the accuracy, timeliness and relevance of the information itself; and (5) there are a number of tensions which emerge in the use of social media as a crisis communications channel between trusted agencies and the general public. these tensions occur in the areas of: information generation and use, i.e. managing the message; emergence and management of digital and spontaneous volunteers; management of community expectations; mental models which underpin prevention, preparation, recovery and response protocols (pprr); and management of the development of the large-scale adoption of social media technologies for crisis communications (elbanna, bunker, levine, & sleigh, ). this knowledge points us to a number of areas of research focus in is for the future development of data analytics and artificial intelligence to more effectively align mental models for shared situational awareness. these should: • build on the trust in government and their crisis management agencies, as well as other influential actors in crisis management communications, to provide and amplify advice and information as early as possible in a crisis; • build frameworks that create algorithmic transparency, information governance and quality assurance for platform and service providers and data aggregators to create and reinforce trust in them as information sources, i.e. so they become trusted actors in the communications network; • address how platform and service developers and government communication system developers can share concepts and build systems that address crisis communication requirements, including those used in iup, da and ai; and • address government failures to provide robust is services during the covid-19 pandemic, and the subsequent impacts this has had on trust in government and their systems for tracking and tracing infections (chakravorti, ). these areas of focus are important given the negative impacts that are already emerging from the use of ai during the covid-19 pandemic, i.e. varying levels of data quality and comprehensiveness, development of covid-19 treatments based on the use of this variable data, and use of social control and surveillance methods to minimize virus spread (smith & rustagi, ; naughton, b).
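the self-reinforcing 'nudging' dynamic described earlier (recommender algorithms directing users towards more of whatever they have already engaged with) can be illustrated with a minimal sketch. this is a hypothetical toy example and not any platform's actual algorithm; the post catalogue, topic labels and frequency-based scoring rule are invented purely for illustration.

```python
# Toy illustration (not any platform's actual algorithm): an engagement-driven
# ranker that keeps recommending items similar to what a user already engaged
# with, narrowing exposure over time.

from collections import Counter

# hypothetical catalogue of posts, each tagged with a single topic label
POSTS = [
    {"id": 1, "topic": "official-health-advice"},
    {"id": 2, "topic": "official-health-advice"},
    {"id": 3, "topic": "lockdown-conspiracy"},
    {"id": 4, "topic": "lockdown-conspiracy"},
    {"id": 5, "topic": "quack-treatment"},
    {"id": 6, "topic": "quack-treatment"},
]

def rank_feed(engagement_history, posts):
    """Score each post by how often the user engaged with its topic before.

    With no history every topic scores zero, so the feed is effectively
    unpersonalized; ties are broken by post id.
    """
    topic_counts = Counter(p["topic"] for p in engagement_history)
    return sorted(posts, key=lambda p: (-topic_counts[p["topic"]], p["id"]))

# a user who clicked on two conspiracy posts gets more of the same at the top
history = [POSTS[2], POSTS[3]]
for post in rank_feed(history, POSTS):
    print(post["id"], post["topic"])
```

even this trivial frequency-based scoring narrows the feed around a user's prior engagements, which is the mechanism by which dissonant mental models can be reinforced at scale; real recommender systems use many more (and proprietary) signals, which is why the algorithmic transparency called for above matters.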
it is time to critically analyze and evaluate how centrally managed platforms, their data and systems algorithms are being used during the covid-19 pandemic (and other crises) by the companies who own and run them. how are they using the information they collect (i.e. development of services, influencing users, creation of profits etc.), (how) are they limiting the spread of post-truth arguments and fake news/information, and are they exacerbating or assisting with the management of crises? some platform owners (youtube, twitter, whatsapp) are currently making efforts to be more transparent in their platform operations, data governance and quality assurance (hern, ; naughton, a). there have also been growing calls from critics for regulation of these companies and their business practices (lewis, ). there is a long way to go, however, to address the problems, issues and barriers caused by these companies for the production of shared situational awareness to support crisis management. for instance, the latvian government wanted to access the google/apple designed contact tracing framework (a bluetooth enabled api) which "can later be translated into covid-19 exposure notifications" and which are sent to contacts of a covid-19 positive person. google and apple set preconditions to accessing this framework, however, by only allowing the registration of one government/health authority approved contact tracing app per country and by not allowing government health agencies access to the personal details of contacts, due to their evaluation of potential privacy issues (ilves, ); a simplified conceptual sketch of this kind of exposure-notification design is given below. multi-jurisdictional legal definitions and treatment of privacy issues are also a complicating factor in this decision. this situation presents a problem to any country wishing to use the api, as contact tracers need to be able to: (1) assess the level of potential exposure to the virus of the contact; and then (2) provide advice to the contact as to what action they should take. this could be anything from "get a test and self-isolate for days" through to "take no action at all, socially distance from others, but watch for symptoms". as covid-19 health advice can have critical health, economic and social consequences for an individual, the advice needs to be tailored for the individual and be as least impactful as possible. merely sending an exposure notification to a contact of a covid-19 infected person does not guarantee any action, or the correct action, being taken by that person. by prohibiting access to data and controlling their api in this way, google and apple are not sharing available data with government health agencies that would allow them to perform effective contact tracing, which could save many lives while preventing large-scale economic hardship. this presents us with a difficult legal and ethical situation to ponder, i.e. does the individual requirement for data privacy outweigh the opportunity to save lives and livelihoods? we are now in the midst of a pandemic and the 'infodemic' that has followed in its wake, and to counter the effects of this overload of inaccurate information and misinformation, the who has collaborated with the providers of social media platforms (e.g. facebook, twitter, etc.) to mitigate the impact of false information on social media (who, ) in order to support shared situational awareness and effective crisis management.
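returning to the contact tracing framework discussed above, the following is a minimal conceptual sketch of a privacy-preserving exposure-notification flow of the kind the google/apple framework is reported to use. it is not the actual api: the key sizes, derivation scheme and function names are simplified assumptions chosen only to show why matching on the contact's own device keeps personal details away from health agencies.

```python
# Conceptual sketch only -- not the actual Google/Apple framework or its API.
# It illustrates the privacy property described above: matching happens on the
# contact's own device, so the health authority never learns who was exposed.

import hashlib
import os

def rolling_ids(daily_key: bytes, intervals: int = 4):
    """Derive short-lived broadcast identifiers from a device's daily key."""
    return [hashlib.sha256(daily_key + bytes([i])).digest()[:16] for i in range(intervals)]

# each phone generates a random daily key and broadcasts derived identifiers over bluetooth
alice_key = os.urandom(16)
bob_key = os.urandom(16)

# bob's phone records identifiers it heard nearby (here: some of alice's)
bob_heard = set(rolling_ids(alice_key)[:2])

# alice tests positive and consents to publishing her daily key (not her identity)
published_positive_keys = [alice_key]

# bob's phone re-derives identifiers from the published keys and matches locally
exposed = any(
    rid in bob_heard
    for key in published_positive_keys
    for rid in rolling_ids(key)
)
print("exposure notification shown on bob's phone:", exposed)  # True
```

the design choice illustrated here is exactly the source of the tension described above: because only anonymous keys are published, a health authority receives no list of exposed individuals and therefore cannot tailor advice to them.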
such ad hoc collaboration is, however, an unsustainable and unrealistic arrangement, due to the ongoing cost, level of resources and necessary skills required for such an intervention. while many countries have been unable to adequately deal with the pandemic, there are many success stories of shared situational awareness supporting health agency and public response for effective virus containment through day-to-day decisions and actions. australian governments (federal and state) have had mixed results in the containment and management of the covid-19 pandemic. there have been communications and is missteps along the way, e.g. the ruby princess cruise ship (mckinnell, ) and the melbourne quarantine hotels (kaine & josserand, ) where quarantine protocols were breached, as well as the current technical problems with the collection and use of the data from the australian federal government covidsafe tracking and tracing app (taylor, ). we are also currently seeing the rapid development of a covid-19 outbreak in metropolitan melbourne (victoria), where previously there had been a successful pandemic response. this has necessitated the reinstatement of lockdowns, strict social distancing enforcement and the closure of the nsw/victorian border. this outbreak has been exacerbated by misinformation circulating on social media targeting specific cultural groups, which has caused general confusion (especially in non-english speaking communities) as well as the promotion of racist tropes and hate speech (bosley, ). while managing a pandemic is a complex and complicated process with many stakeholders, to achieve a more effective level of crisis management there are benefits to be obtained by shared situational awareness through the alignment of mental models that represent more broadly acceptable situational reality and truth. this alignment would further support our trust in government as well as develop trust in other organizational and individual actors in the communications network. individual differences in political, social and cultural contexts also add a layer of complexity to the alignment of mental models for shared situational awareness. government agencies should refine their mental models of situational awareness to accommodate those variations in factors of significance which impact alignment, e.g. cultural behavioral practices, housing conditions, working environments and practices, access to services, regard for community leadership, digital literacy, access to technology etc., or they risk the ongoing development and reinforcement of dissonant and alternative mental models and erosion of their trusted status (mirbabaie et al., ). both government and platform providers have public interest and safety information communications requirements to satisfy in both the short and long term, which directly impact our ability to manage pandemics and other types of crises and disasters effectively. there must be collaboration and cooperation (either legislated or voluntary) to build on the trust in government to provide information as early and as often as possible in a crisis (and enable the amplification of, and action taken from, this advice) as well as ensure algorithmic transparency, information governance and quality assurance for robust and trusted communication and information services overall. remaining successful in managing the pandemic, however, requires long-term vigilance and effort by both the pandemic managers and the public alike.
as we can see from the current covid-19 outbreak in victoria, mental model alignment (and realignment) during a crisis is a continual process which requires constant attention, effort and resources. "a critique of how science is produced is very different from the post-truth argument that there are alternative truths that you can choose from. post-truth is a defensive posture. if you have to defend yourself against climate change, economic change, coronavirus change, then you grab at any alternative. if those alternatives are fed to you by thousands of fake news farms in siberia, they are hard to resist, especially if they look vaguely empirical. if you have enough of them and they are contradictory enough, they allow you to stick to your old beliefs." bruno latour, interview (watts, ).
references:
guest editor's introduction: toward an archaeogenealogy of post-truth
reports melbourne coronavirus cluster originated at eid party could stoke islamophobia, muslim leaders say. the guardian (online)
social media, rumors, and hurricane warning systems in puerto rico
a philosophy of information technology and systems (it&s) as tools: tool development context, associated skills and the global technology transfer (gtt) process
repertoires of collaboration for common operating pictures of disasters and extreme events
bright ict: social media analytics for society and crisis management
digital contact tracing's mixed record abroad spells trouble for us efforts to rein in covid-19. the conversation (online), july
shared reality: experiencing commonality with others' inner states about the world
spring update: trust and the covid-19 pandemic
fake news as a two-dimensional phenomenon: a framework and research agenda
repertoires of collaboration: incorporation of social media help requests into the common operating picture
emergency management in the changing world of social media: framing the research agenda with the stakeholders through engaged scholarship
social media influencers in strategic communication: a conceptual framework for strategic social media influencer communication
shared situational awareness in emergency management mitigation and response
youtube bans david duke and other us far-right users. the guardian (online)
classifying twitter topic-networks using social network analysis
why are google and apple dictating how european democracies fight coronavirus? the guardian (online)
melbourne's hotel quarantine bungle is disappointing but not surprising. it was overseen by a flawed security industry. the conversation (online)
fake news: the other pandemic that can prove deadly
'strongly recommended': revisiting decisional privacy to judge hypernudging in self-tracking technologies
going back to basics in design science: from the information technology artifact to the information systems artifact
socially-shared cognition and consensus in small groups
facebook and google must move away from the zero-sum game. the sydney morning herald - opinion (online)
monetizing data: a practical roadmap for framing, pricing & selling your b2b digital offers
queensland researchers analysing coronavirus conspiracy theories warn of social media danger
ruby princess passengers disembarked before coronavirus test results to get flights, inquiry hears
social media in times of crisis: learning from hurricane harvey for the coronavirus disease pandemic response
who sets the tone? determining the impact of convergence behaviour archetypes in social media crisis communication
twitter taking on trump's lies? about time too. the guardian (online)
silicon valley has admitted facial recognition technology is toxic - about time. the guardian (online)
algorithms of oppression: how search engines reinforce racism
defining and measuring shared situational awareness
a kuhnian analysis of revolutionary digital disruptions
mental models: a robust definition. the learning organization
the role of shared mental models in developing shared situational awareness
developing shared situational awareness for emergency management
editorial: post-truth?
the problem with covid-19 artificial intelligence solutions and how to fix them. stanford social innovation review
sense-making in social media during extreme events
australia's covidsafe coronavirus tracing app works as few as one in four times for some devices. the guardian (online)
big data methods: leveraging modern data analytic techniques to build organizational science
'this is a global catastrophe that has come from within' interview. bruno latour: the guardian (online)
novel coronavirus (2019-ncov) situation report
the age of surveillance capitalism: the fight for a human future at the new frontier of power
acknowledgements: i would like to thank the communications and technology for society research group at the university of sydney business school, and the marie bashir institute for infectious diseases and biosecurity for their continued support for this work, and adjunct associate professor anthony sleigh and dr christian ehnis for their encouragement and invaluable feedback on this paper.
key: cord- - ie c f authors: heimer, carol a. title: the uses of disorder in negotiated information orders: information leveraging and changing norms in global public health governance date: - - journal: br j sociol doi: . / - . sha: doc_id: cord_uid: ie c f the sars epidemic that broke out in late 2002 in china's guangdong province highlighted the difficulties of reliance on state-provided information when states have incentives to conceal discrediting information about public health threats. using sars and the international health regulations (ihr) as a starting point, this article examines negotiated information orders in global public health governance and the irregularities in the supply of data that underlie them. negotiated information orders within and among the organizations in a field (here, e.g., the world health organization, member states, government agencies, and international non-governmental organizations) spell out relationships among different categories of knowledge and non-knowledge – what is known, acknowledged to be known, and available for use in decision making versus what might be known but cannot be acknowledged or officially used. through information leveraging, technically sufficient information then becomes socially sufficient information. thus it is especially information initially categorized as non-knowledge – including suppressed data, rumour, unverified evidence, and unofficial information – that creates pressure for the renegotiation of information orders. the argument and evidence of the article also address broader issues about how international law and global norms are realigned, how global norms change, and how social groups manage risk. in the scholarly literature, the term non-knowledge has been used to mean well-defined ignorance (gross and mcgoey : ) or a type of knowledge about the unknown (gross : ).
to be sure, influential research has considered such important matters as the distinctions between levels and kinds of ignorance, uncertainty, and risk, strategies for handling asymmetric information, and procedures for coping with dishonesty and duplicity (see, e.g., ackerlof ; arrow ; cook ; ericson and doyle ; goffman ; granovetter ; heimer b; knight knight [ ; shapiro ; williamson ) . what is missing, though, is a thorough incorporation of various forms of ignorance, such as non-knowledge, into existing theories of how people, groups, and organizations seek, assign meaning to, and use information (gross and mcgoey ; heimer ; mcgoey ) . researchers need to consider how exactly non-knowledge fits into the negotiated information orders that anchor organizational and interorganizational action. using sars as an example, this article examines negotiated information orders in global public health governance and the irregularities in the supply of the data that underlie them. information may be in short supply because it is suppressed, and it may also be of uncertain quality because it is incomplete or purposefully misleading. in effect, the sars case suggests, whether information is acquired from legitimate sources shapes not only the nature and quality of the information itself but also the uses to which it can be put. in addition to seeking information, then, actors strategically seek information from particular sources and deploy the information they have in hand to pressure others to augment or confirm existing information. through information leveraging, technically sufficient information becomes socially sufficient information. in this way, the article shows what a negotiated information order might look like when we more fully incorporate the social uses of non-knowledge and other forms of ignorance into our analysis. in particular, the article suggests that it is especially information categorized initially as non-knowledge -including suppressed data, rumour, unverified evidence, and unofficial information -that creates pressure for the renegotiation of information orders. although it is a truism that information is needed before rational decisions can be made, the importance of information for organizational decision making is often overestimated. since the pioneering work of herbert simon (march and simon ; simon ) , organization theorists have understood that the model of rational decision making was a poor description of reality and did not capture how information is actually used by organizations. because of limited cognitive and computational capacities, theorists suggest, organizations are only boundedly rational, accepting satisfactory solutions rather than continuing decision-making processes until they find optimal ones. besides using less information than might be expected, organizations also use it on a different timetable and for different purposes. for instance, information intensive solutions often are produced somewhat independently of the problems with which they are eventually matched (feldman ) . employing the metaphor of a garbage can, other scholars suggest that organizational decision making is not linear, but instead depends on how the semi-autonomous streams of choice points, problems, solutions, and participants come together (cohen et al. ; heimer and stinchcombe ) . 
moreover, information has symbolic as well as instrumental uses, often serving to legitimate decisions even when it plays little role in identifying problems and selecting or crafting solutions (feldman and march ) . decision making is a quintessentially social matter. decisions may depend less on whether decision makers have enough high-quality information than on whether they agree that the available information meets a variety of normatively or even legally established criteria. that is, whether or not information is technically sufficient, it must also be socially sufficient to be usable in decision making (heimer a) . information is technically sufficient if it can be used to answer key questions confronting an organization and if it can be used, perhaps with some modification, in an organization's decision-making algorithms. if decision makers cannot cite data of the sort conventionally used or recognized by their organizational field as sufficient for decision making, their decisions may be subject to challenge. a negotiated information order emerges when consensus is reached within or between organizations in a field regarding the criteria for socially sufficient information -about the type of information usable in decision making, the priority given to different types of information, and allocation of responsibility for gathering and interpreting that information (heimer a: ) . as these conceptual distinctions suggest, the symbolic nature of information penetrates even more deeply into organizational decision making than previous research might lead us to expect (feldman and march ; meyer and rowan ) . in particular, such symbolic considerations shape assessments of both decision-making processes and the information on which they are based. to work its symbolic magic, information must be seen as legitimate, and organizational actors will spar over whose data passes that test. but, crucially, such tests are layered. socially sufficient information is thus information that is widely agreed to be adequate to its intended purposes. technically sufficient information is more contested, with some actors touting its virtues and others casting doubt. technical sufficiency can therefore be a way-station along the path to social sufficiency or an intermediate category that permits some uses of information while prohibiting others. although participants experience these discussions as realist, social scientists would be quick to point out the deeply constructionist character of claims about the quality and veracity of information. the lines dividing categories of information are necessarily fluid, with discoveries shoring up some claims while undermining others and regularly adding to the stores of both knowledge and ignorance. as we will see, it is especially the boundary between knowledge and non-knowledge where contests are focused, because crossing that boundary makes otherwise prohibited actions possible. transposed into an organizational register, the dividing line between knowledge and non-knowledge takes the form of a distinction between technically and socially sufficient information. to say that the acceptability of information depends on a negotiated information order says only that the meaning of information is not given a priori but must be worked out collectively. norms about the sufficiency of information may be grounded in rules or laws. or they may reflect a broad, but informal consensus. 
what consensus is ultimately reached will depend on such factors as power differences, inter-organizational dependencies, and pre-existing loyalties. the preferences of powerful actors who have a vested interest in perpetuating practices associated with traditional types of information may have an outsized influence on the norms that emerge. previous agreements about the acceptability of various kinds and quantities of information provide important starting points, but will be less influential when decision makers face situations that seem unprecedented. thus negotiated information orders can be destabilized by modifications in technology, by the arrival of new problems or opportunities, or by changes in relationships among parties. three examples illustrate the importance of negotiated information orders in assigning meaning and determining how information is interpreted and used. clarke's mission improbable ( ) shows how information orders negotiated by powerful actors can exclude other voices that might challenge the meaning assigned to information. analysing organizations' plans to avert, control or cope with disasters, clarke considered the 'fantasy plans' created to clean up oil spills in open waters, evacuate long island in the event of a nuclear power plant accident, and protect the population during and after a nuclear war. in each case, rather than frankly acknowledging the impossibility of averting, controlling or mitigating disaster, key actors developed elaborate analogies and conducted careful simulations to convince themselves and others of the truth of essentially untenable propositions. the problem, of course, is that such analogies and simulations rarely worknuclear meltdown is not much like an ice storm, a major oil spill in open waters cannot be simulated by scooping up oranges from calm seas, and the evacuation of long island because of a nuclear accident cannot in good conscience be equated with the flow of people during rush hour. but when discussion is confined to a circle of experts, others may be unable to point out the obvious. here, the negotiated information order precluded consideration of information that other parties could have introduced by dismissing suppressed perspectives as unusable non-knowledge. with sars, as we will see, the first impulse of health workers, scientists, and policy makers was also to assume that they were seeing a variant of something they had encountered in the past -an atypical pneumonia or a disease caused by chlamydia (normile ; who a) . in sars, as in clarke's cases, suppressing information allowed actors at least temporarily to move forward with existing routines. even when divergent views are not completely suppressed, the context in which information is considered can shape conclusions. in last best gifts, healy ( ) asks how the major organizations supplying blood and blood products to american patients responded to early evidence that hiv could be transmitted through their products. in the american blood industry, a nonprofit whole-blood sector (the blood banks), reliant on donors, coexists with a for-profit plasma industry (the plasma fractionators) that purchases plasma from suppliers. between and , when no one was sure whether hiv could be transmitted through tainted blood and blood products, the us centers for disease control (cdc) presented their accumulating evidence and made recommendations about how to keep blood supplies safe (healy : - ) . 
representatives of blood banks and plasma fractionators received identical information, often at the same meetings. interestingly, though, these two sectors interpreted the information differently and adopted divergent strategies. blood banks, dependent on donors, saw blood borne transmission as 'still unproven' (healy : ) and were unwilling to ask intrusive questions about donor lifestyles and sexual practices. in contrast, plasma fractionators, working in a competitive market that made them more dependent on consumers than suppliers, adopted a policy of questioning potential donors and excluding putatively high-risk groups from their supplier pools. in short, the negotiated information order of the plasma fractionators led them to see the early information about hiv transmission as knowledge to be acted on, while blood banks' information order constructed the same information as non-knowledge to be ignored. with sars, people in different social contexts not only interpreted the data differently but also concluded that the data implied different things about their obligations under the ostensibly clear rules of the ihr. the final example shows how attempts to solve a new problem, not easily managed within the constraints imposed by an existing information order, can lead to modifications in that information order and changes in power relations in the field. contrasting the insurance of mobile rigs used for exploration and drilling with the insurance of fixed platforms used later for the production of oil in the norwegian north sea, heimer ( a) shows how norwegian insurers gradually altered the negotiated information order dominated by powerful british marine insurers. during the crucial early period of the exploration and development of the oil fields, insurers lacked the experience-based information needed for rating and underwriting, making them dependent on the reinsurance offered by british insurers. because multiple companies had to cooperate to assemble these insurance contracts with their uncertain risks and astronomical face values, insurers had to agree about what information was acceptable for rating and underwriting. british insurers stubbornly insisted on using conventional types of information. some types of information that, from a norwegian perspective, addressed key uncertainties thus could not be used simply because they had not been used in the past; social sufficiency dominated (alleged) technical sufficiency because of the requirement for consensus. because the total insurance capacity was insufficient to adequately insure the north sea oil fields, norwegian insurers were strongly motivated to create new routines for collecting and analysing data. they worked around and then modified conventions about what information could be used for ratemaking and underwriting. and gradually the situation changed. for mobile rigs, experiencebased data slowly became available and pooling risks over time and over similar units became increasingly feasible. this in turn further decreased dependence on the british reinsurance market and enabled norwegian insurers to be more flexible about what data to use and how to construct the policies. in contrast, little changed in the insurance arrangements for fixed installations, which were more expensive, less uniform, less numerous, and introduced later in the development of the oil fields. 
in the sars case as well, as we will see, pressure to change the rules became acute when new sources of helpful information became available but could not be efficiently exploited unless rules and norms were modified. as these examples demonstrate, organizations' information use is strongly shaped by social conventions. negotiated information orders spell out the relationships among different categories of knowledge and non-knowledge -what is known, acknowledged to be known, supplied by official sources, categorized as socially sufficient and therefore available for use versus what might be known, with varying degrees of uncertainty, but cannot be acknowledged or officially used. non-knowledge, which often comes from unofficial, back-channel sources, may be disregarded because it seems dangerous, threatening, harmful, or simply uncertain; ignored because decision makers are reluctant to bear the costs of retooling to collect, evaluate, and use new forms of information; or discarded because of the symbolic importance attached to information from official sources and the rights accorded to those who possess such high-value information. negotiated information orders thus introduce a modicum of stability in information use for some period of time until a new opportunity or danger arises. when that occurs, key actors pointing to the strategic value of certain information may successfully advocate for the reclassification of some non-knowledge as usable and technically sufficient and for the renegotiation of the information order. the arrival of a new disease -hiv, for the negotiated information order of the american blood industry; sars, for the ihr, the corresponding information order of global public health governance -can bring into sharp relief the irrationalities of established understandings about the reliability of information and the appropriate ways of using it. this article draws on a case study of sars to demonstrate how the deficiencies of the existing information order, institutionalized in the ihr, became painfully apparent in the wake of the epidemic. recognizing the substantial contributions that unofficial, previously illegitimate sources of knowledge could make in the fight against deadly infectious disease in turn helped to solidify the consensus around ihr reform efforts already underway in - when the sars epidemic occurred. in preparing this article, i have drawn especially on primary who documentation about the ihr and sars epidemic, supplemented by reports and commentaries from governmental bodies (e.g., the cdc and us congress) and non-governmental organizations and policy institutes (e.g., the national academy of medicine and chatham house). i have also drawn extensively on existing journalistic and scholarly accounts chronicling and analysing various features of the epidemic and scholarly articles and books investigating the epidemic's legal ramifications and the revision of the ihr. these documents were drawn from a larger body of primary and secondary materials collected primarily in - for a larger project examining the relationship between law and globalization in healthcare more generally. the article contends that in this case, a strong argument about technical sufficiency ultimately led to a new rule system that recategorized such non-knowledge as socially sufficient, legitimate, usable knowledge. international disease surveillance and global health governance have a long history before the sars epidemic. 
this history includes a century of international sanitary conferences to standardize quarantine regulations to prevent the spread of cholera, yellow fever and plague (and, previously, relapsing fever, typhus and smallpox); the crafting and revision of the international sanitary regulations (first written in ); and the formation of a series of international organizations to oversee disease surveillance and international public health, culminating with the creation of the world health organization, whose member states formally adopted the international sanitary regulations in . revised and renamed the international health regulations (ihr) in , these rules were in turn replaced by the revision, which went into effect in (fidler ; fidler and gostin ; gostin : - ; scales : - ) . a key issue in these agreements has been the collection and publication of information about disease outbreaks, with careful rules about who has to report and to whom, what they must report about, and what information they must transmit -in short, a negotiated information order that became more fully institutionalized over time. only with transparency, the argument went, was there any hope of protecting public health and curbing the spread of disease. yet, as the history of disease surveillance makes clear, because nations also worry about threats to trade, tourism and national reputation, they often strategize about what to reveal and on what timetable, hoping that diseases can be brought under control before discrediting information damages the economy or spoils the national reputation. the objective of the international conferences, conventions, and the ihr has been to induce more timely and more complete sharing of information, previously narrowly focused on reporting on a few infectious diseases and now more expansively redefined to include both infectious diseases and a wide variety of other threats to public health, by recognizing and working with this tradeoff. under the ihr (who ) , security against the spread of disease was to be achieved by requiring member states to notify the who of disease outbreaks within their borders (part ii, notifications and epidemiological information, articles - ) and maintain public health capabilities at ports and airports to monitor and reduce cross-border transmission of disease (part iii, health organization, articles - ). minimization of interference with trade and travel was to be achieved by specifying the range of responses states would be permitted or required to take in response to public health threats (part iv, health measures and procedures, articles - ). in effect, commitments to report outbreaks were traded for promises that responses to such information would be moderate, reasonable and scientifically grounded. even with this exchange in place, though, the record of compliance has been poor, and poor on both counts (carvalho and zacher ; fidler : ; kamradt-scott : - ; scales ; woodall ) . countries frequently failed to report disease outbreaks, but they also imposed overly restrictive protective measures, including quarantines and outdated vaccination requirements, that violated the spirit and the letter of the ihr rules on trade and travel. diseases continued to spread across borders whether or not travellers and goods were impounded, quarantined or otherwise delayed. 
although neither the who nor the member states seemed very committed to it, as the governing information order, the ihr continued to be consequential in shaping the circulation of information, categorizing information as actionable or not, and providing an excuse for states to shirk or evade pressures to report even as new transparency norms were emerging. the ihr's history suggests that this information order is primarily organized around concerns with trade and travel and has favoured the interests of rich countries (chorev b; fidler ; kamradt-scott ) . similar patterns of favoring the interests of rich countries in global health governance have been noted by other scholars (king ; erikson ; but see wenham on recent changes in emphasis). 'the rising commercial costs imposed by a system of uncoordinated, unregulated national quarantine practices meant that trade rather than health drove the development of international governance on infectious diseases', concludes fidler ( : ) . quite emblematically, the treaty and its predecessors focused only on diseases that seemed likely to be spread by trade and travel, and particularly those that might move from poor to rich countries. infectious diseases that plagued only poor countries, such as polio, were not listed, and south-south contagion was a secondary concern. adjustments were unidirectional: diseases were removed from the list, but re-emerging or new diseases were not added; no adjustments were made to take account of changes in modes and speed of transportation. the official rules of the ihr in some senses imagined a static information order in which states interacted with the who -what fidler ( ) describes as a westphalian system. yet the information order has evolved over time in important ways, with the official information order often out of step with informal practices. two key drivers of change have been innovations in information technologies, which vastly increased the amount of information available while simultaneously reducing state control of information, and the creation of new types of actors in the loosely organized global public health system. as reporting rules were first being developed, it was diplomats who certified that a ship's last port of call was disease free, allowing ships to avoid quarantine as they entered ports to offload cargo and passengers (fidler : ) . although diplomats no longer verify bills of health, the treaty's reliance on national reporters remains a core element of the reporting framework even though the new categories of actors (ngos, ingos, international health workers, laboratory workers, scientists, etc.) have access to much relevant information. thus an evolving information order peopled with these new actors co-existed with a static legal framework that only recently acknowledged and incorporated them. this meant that the who was unable to act even when it possessed information that it believed to be technically sufficient. because it was bound by a strictly formalized set of rules (few things are more rigid than a treaty with a long list of signatories), it could not adjust to evolving communication patterns. although analysts have often described disease surveillance as a collective action problem in which the global interest in transparency is pitted against national interests in episodic strategic concealment, characterizing the problem this way vastly understates the complexity of the interactions among actors. 
in particular, although it is nation states that have ihr treaty obligations, information about disease may be generated and controlled not only by non-state actors (as mentioned above) but also below the level of the nation-state, by agencies of the state, provincial health departments, individual public or private hospitals, and doctors and other medical personnel. as we will see, the norms and rules about how these lower level actors fit into the ihr negotiated information order have not always been entirely clear. until recently, the ihr made the who exceedingly dependent on official country reports by prohibiting the use of other sources of information. although promed-mail became an important unofficial source of information about threats to public health after its founding in , for many years the who was constrained from officially using it (woodall ). over time, the who's stance on these alternative sources of information evolved. a world epidemiological record piece suggested that 'public health authorities should give more attention to information from sources other than the public health sector, including ngos and the media. the capacity of public health authorities to rapidly respond to outbreak-related information from any source is essential for the efficiency and credibility of the entire surveillance effort' (who : ) . as the volume of information available from electronic sources and from health experts dispersed around the world increased, the pressure to use such information also increased. with increasingly sophisticated tracking systems, for instance, it became possible to demonstrate that deaths (even of particular named individuals) could have been prevented by earlier issuance of travelers' advisories (woodall ) . often, though, sub rosa information was less useful for issuing official warnings than for pressuring countries to report or for asking pointed questions about the adequacy or accuracy of reported information. 'they have accused us of spreading unfounded rumors and posting reports that have had no peer review. but we're just reporting what is being said or published. we tell health officials, you might as well report this, because you'll be reading it on promed tomorrow', commented charles calisher, an early moderator of promed (miller ) . some kinds of action required only that information be seen as technically sufficient (adequate in volume and coverage), but other kinds of action required that information also be socially sufficient (supplied by legitimate sources and arriving through specified routes). were the ihr ever an effective information order? undoubtedly the treaty was an improvement over earlier agreements, both in clarifying expectations and obligations and in institutionalizing a set of practices for reporting on disease outbreaks and keeping protective reactions in bounds. although it was an admirable attempt to create a worldwide consensus that balanced health interests against economic ones, it also had several clear deficiencies. an especially important deficiency was the limited coverage of the ihr, which cast doubt on the legitimacy of the treaty. beyond this severely limited coverage, the ihr were also compromised as an information order by a naïve conception of states as unitary actors and by rules that allowed the who to use only limited kinds of information supplied by specified, state-based actors. 
over time, informal norms supported fuller reporting on a broader range of threats and exploitation of information from unofficial as well as official sources. but in the medium term, although some nation-states adhered to the new norms, others hid behind the inadequate formal rules of the ihr, and still others continued to ignore even the limited formal requirements of the ihr. how well did this imperfect, outdated information order function when the ihr encountered sars, a new, deadly infectious disease that seemed poised to spread rapidly around the world? did the deficiencies of the information order in fact prevent the who from acting quickly and appropriately to contain the disease? many accounts of the - sars episode describe the chinese as concealing information or misrepresenting the situation in the first months, often suggesting that the country acted illicitly or illegitimately in doing so (altman ; the guardian ). yet closer examination of the record (see especially huang ) suggests that something considerably more complex occurred -there were multiple legitimate reasons for china to conceal early evidence of the outbreak. to begin, during the first days, there was nothing to report because no one understood that this was a new viral disease. because most apparently new diseases are in fact not new, physicians are reminded to think of horses not zebras when they hear hoof beats. as perhaps happened with sars, this advice sometimes leads people astray. with the benefit of hindsight, it is easy to conclude that chinese health workers should have been more diligent in forwarding reports about early cases of 'atypical pneumonia'. but we must be careful not to interpret actions taken in the confusion of the earliest days with knowledge acquired only later. still, local hospitals did call on provincial authorities for help. provincial authorities contacted the national ministry of health. a group of experts conducted an investigation. a report was prepared and circulated to all of the hospitals in the province. but here chinese law altered the disease's trajectory because the report became a state secret that could be shared only with specified people (such as the heads of hospitals). and then the trajectory was modified serendipitously when the report arrived in hospitals during the chinese new year celebrations. because no one read or acted on the report for a three-day period, precautionary measures were not implemented, creating an opportunity for the disease to spread. as noted, several months passed between the first appearance of sars and the first reports to the who. had the first suspicious cases been reported promptly, the disease likely would not have spread beyond guangdong province and hundreds of deaths could have been prevented. reports on the case suggest that in the earliest period, people 'knew' but 'didn't know' about the epidemic. and although healthcare workers, officials and other actors suspected a problem, at least in some instances they were either forbidden to share information or prohibited from acting on the information they received. in this case, the complex interplay of international, national and local rules and norms seems to have done as much to delay as to accelerate the spread of information about threats to public health. information about sars gradually leaked out, though, with a report from the chinese ministry of health finally reaching the who on february (who d). 
accounts of this period mention 'medical whistle-blowers' (see, e.g., eckholm ), promed-mail, the global public health intelligence network (gphin, the 'rumor list'), the global outbreak alert and response network (goarn), and the move of the disease across borders into hong kong and then vietnam. although who personnel were investigating cases of what turned out to be sars in china as early as late february (enserink a), the who issued its first alert about a severe form of atypical pneumonia only on march . according to david heymann (then executive director of who's communicable diseases cluster), vietnam was 'the trigger' for this announcement (enserink a). a march report from carlo urbani, a who parasitologist consulting on a case in the french hospital in hanoi, provided the first indication that the new disease had spread beyond guangdong and hong kong. (urbani himself subsequently died from sars.) with the second who announcement, the world was informed that the atypical pneumonia, now named sars, was a new and very serious communicable disease. the secret was out. during this period, it could be hard to discern the signal in the noise. many things contributed to the noise -the irreducible uncertainties of the early days of a new disease, fear, mistakes, lack of preparation, incompetence, reputational concerns, and of course deliberate obfuscation. to be sure, there was ample evidence of outright concealment, foot-dragging, and obfuscation. the guangdong provincial government 'initially banned the press from writing about the disease and downplayed its significance' (enserink b). although the who diplomatically reported cooperative efforts (see, e.g., who b), it carefully avoided comment on chinese silence or obfuscation between november and february, and even later. in fact, although chinese officials agreed to share information, their first promises were followed by more deceptions (fidler ; huang ; knobler et al. ). when the chinese government began to share information, who officials were still unable to get meetings with chinese health officials and were refused permission to travel to guangdong (enserink b). when the early undercount of sars patients was attributed to the inadvertent exclusion of patients in military facilities, eckholm pointed out that the high proportion of beijing sars patients in military hospitals 'could [instead] indicate that patients were placed there to avoid their inclusion in civilian disease reports' ( ). what the nation-state does not know, it cannot report. but the ihr were little help in dampening the noise or strengthening the signal. although the ihr treaty was officially the governing document when the - sars outbreak occurred, it was an imperfect information order that did not authoritatively mandate a clear course of action. '[n]othing compelled china, or any other country, to tell the rest of the world what was happening within its borders early in ' (enserink b). indeed, the shocking weakness of the international health governance system was surely a factor in china's failure to report the outbreak quickly. under the ihr, most disease outbreaks, including those of previously unknown diseases, were domestic business. but if china had no formal obligation to report, why was it so soundly condemned for its delay in transmitting information to the who? 
although the treaty -international law -did not require reporting, emerging norms around the management of global public health governance diverged from formal law. under these emerging norms, a failure to report a new disease, an environmental disaster, or some other occurrence that might affect global public health was a serious infraction (heymann ) . indeed, it was the conflict between these emerging norms and the existing treaty provisions, along with the emergence of new infectious diseases like ebola, that helped spur the ihr revision, first called for in a resolution, well ahead of the sars outbreak. an important difference between formal treaty obligations and norms, though, is that the first applies uniformly and the second does not. being a signatory to a treaty is a bright line. membership in a moral community is more ambiguous, with some treaty signatories more fully incorporated and others more peripheral. thus although the long silence of the chinese government was not technically a violation of the ihr, it nevertheless appeared dishonest and inappropriate to the international community, undermining rather than supporting emerging cooperative norms and in fact harming global public health by allowing the new disease to spread beyond china's borders. the institutional incoherence around global public health governance was in fact deeper than this; the treaty provisions were inconsistent with domestic law as well as with emerging norms. until treaty provision and domestic law are harmonized, health workers can be caught between local and global legal obligations, two distinct sets of rules laying out inconsistent requirements for partially overlapping groups of actors. although only state representatives were responsible for reporting to the who, domestic law compelled medical workers to preserve state secrets about the very matters that international norms -but not ihr treaty provisions -compelled them (or others in their chain of command) to report. many chinese actors were in a terrible bind, legally required to protect state secrets but morally obligated to share information so fellow citizens could protect themselves from a virulent emerging disease and so international bodies could study the disease and develop methods to combat it. individual and global interests both demanded transmission of information, yet the chinese state initially mandated secrecy instead. moreover, the ihr specified roles and obligations for only a few actors, thus offering no guidance about appropriate courses of action for many other actors who possessed relevant information. beyond legalistic matters about obligations to report or to conceal, the evidence from sars also suggests that fears about economic consequences of adverse publicity associated with disease outbreaks strongly shaped the thinking of chinese authorities (huang : ) . these economic concerns were in fact justified, though overstated, in hindsight. the economic effects of sars include much more than the cost of providing medical care for those affected, as analysts acknowledge. lee and mckibben ( ) estimated the short-term impact of sars to be about $ billion for alone if people expected the epidemic to be a one-time event and considerably higher if they behaved as if they anticipated recurrences. subsequent research suggests that the economic impacts were considerably smaller than anticipated and that recovery occurred quickly (keogh-brown and smith ) . 
although the economic impact was widely dispersed, the losses were greater in asian countries than in the rest of the world, with strong shocks to mainland china, which experienced a decline in foreign investment, and especially to hong kong, whose service economy depends on travel and tourism. for government officials responsible for the overall welfare of a society, including both physical and economic health, worries about commercial impacts cannot be dismissed. as a negotiated information order, the ihr was thus ineffective, unstable, and ripe for change for a host of reasons. first, legal obligations were out of sync with the higher expectations of an evolving normative system. second, international law and domestic law often had not been harmonized and disagreed about whether threats to public health should be reported or kept secret, creating a serious conundrum for health workers. third, the ihr failed to take account of the social complexity of a system in which information was produced and controlled by a wide variety of actors, including not just official national representatives (e.g., ministries of health) and provincial or other substate actors (e.g., provincial departments of health), but also actors who were not state representatives but nevertheless had relevant roles and expertise (e.g., heads of hospitals, whether private, public or military), journalists, and private citizens, all with varying relationships to the international treaty, emerging norms, and domestic law. fourth, although the ihr did not envision that the who would act on the basis of information other than that provided officially by nation-states, pressure to use such 'non-knowledge' had increased over time as information sources multiplied, tools to parse such information were created, and threats to public health came to seem increasingly urgent. one important effect of sars was to shift the boundary between official and unofficial knowledge, ultimately modifying the information order so that unofficial information of questionable quality could be used as leverage, forcing states to reveal what they might have preferred to conceal. the revision of the ihr was adopted by the world health assembly (wha), the governing body of the who, in and put into force in . just as revisions to the ihr were being crafted, the deficiencies of the existing legal framework were made glaringly apparent by the rapid spread of sars and the numerous -and avoidable -deaths it caused. although china had not in fact violated the existing treaty, it clearly violated emerging norms on the reporting of infectious diseases. the objective of the new treaty provisions was to induce earlier and fuller reporting by acknowledging the importance of non-state actors as suppliers of information and recrafting the information order so that previously unusable kinds of information -information that might have been seen as technically sufficient but was not socially sufficient -could now be used. the revision brought important changes in what has to be reported: any 'public health emergency of international concern'. along with this broader range of reportable threats, the ihr introduced a decision tool to replace the short, simple list and guide reporting; offered considerable guidance about who should report and how (e.g., mandates for designated reporters, now called 'national focal points'); and created tool kits for implementation including for harmonizing the ihr with domestic law (who ). 
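the decision tool mentioned above can be illustrated with a short sketch. annex 2 of the revised ihr asks, roughly, whether an event has serious public health impact, is unusual or unexpected, carries a significant risk of international spread, or carries a significant risk of international travel or trade restrictions; notification is expected when at least two criteria are met, and a short list of diseases is always notifiable. the code below paraphrases that logic for illustration only - the criterion wording, disease list, and function names are simplifications, not treaty text.

# illustrative paraphrase of the revised ihr annex 2 decision logic;
# criterion wording and the always-notifiable list are simplified, not treaty text.
ALWAYS_NOTIFIABLE = {"smallpox", "wild-type poliovirus", "new influenza subtype", "sars"}

def must_notify_who(disease, serious_impact, unusual_or_unexpected,
                    international_spread_risk, travel_trade_risk):
    """return True when a national focal point should notify the who under this sketch."""
    if disease and disease.lower() in ALWAYS_NOTIFIABLE:
        return True
    criteria = [serious_impact, unusual_or_unexpected,
                international_spread_risk, travel_trade_risk]
    # notification is expected when at least two of the four criteria are met
    return sum(criteria) >= 2

# example: an unexplained respiratory cluster with serious impact and plausible spread abroad
print(must_notify_who("atypical pneumonia", True, True, True, False))  # True

even in this toy form, the point of the instrument is visible: it replaces a fixed disease list with criteria that can capture previously unknown threats.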
in effect, these changes move the ihr from the realm of 'soft law' further into the domain of 'hard law' (abbott and snidal ) by making the rules more specific and more obligatory, by adding processes for interpretation of law and for dispute settlement, and by inserting rudimentary enforcement mechanisms. some of the work of hardening the ihr is delegated to individual member states as they bring domestic law into harmony with the ihr. as treaty provisions and domestic law are harmonized and gaps bridged, excuses for non-compliance are eliminated and domestic supports for compliance are added (see, e.g., the agreement between the australian federal government and its states and territories to ensure timely reporting [commonwealth of australia ; scales : ]). fidler ( ) argues that sars exposed the conflict between an outdated, unworkable, westphalian system of international governance and a world in which global diseases required a global governance system. states have lost their primacy, he suggests, in a world in which they can control neither the movement of disease nor the movement of information. believing it had the right to suppress information, the chinese government attempted to treat information about infectious disease as it always had: as a matter of state secrets. but in a world of cell phones and internet, text messages and email allowed both patients and physicians to circumvent the state. prohibiting news media from reporting the outbreak of the deadly disease did not keep individuals from communicating with one another inside china and sending information and questions to contacts outside the country. with the growth of new information technologies, state monopolies on information have decayed and the balance between socially and technically sufficient information has shifted. as the volume of information considered technically sufficient has increased and the who has developed more sophisticated techniques for extracting high-quality information, its capacity to pressure states to meet their treaty obligations has increased. something like an enforcement capacity, albeit one not formally recognized in the ihr, grew up in the midst of all this complexity. with the vote of the world health assembly (wha) and the subsequent revision of the ihr, this enforcement capacity has been recognized, endorsed and formalized, first with the wha's blessing of the who's use of unofficial information and then with the incorporation of this information use into the procedures outlined in the revised ihr. in this case, changes in practice preceded changes in the legal infrastructure as the who increasingly drew on information that did not come directly from the official reporters of member states. but in a pattern of 'punctuated globalization' (heimer ) , the legal framework seems now to have reclaimed the lead in moving forward global coordination around public health surveillance. as countries and agencies adjust to the ihr, we can expect the development of a host of new strategies for exploiting the opportunities created by this new framework. the revisions have required many countries to invest heavily in improving their systems for tracking and reporting threats to public health. this, in turn, has created an opening for many joint activities between rich and poor countries, including construction of new cdc facilities around the world (gootnick ) . 
do these changes then signal the end of the gap between actionable, socially sufficient information and technically sufficient information in global health governance? rather than an end to the gap, we should expect a shift of the gap's location. gaps arise because parties with imperfectly aligned interests have some incentive to game systems. such discrepancies between global, collective interests and regional, state or local interests will continue to exist and some evidence suggests both continued and fresh strategies for gaming and non-compliance (scales : - ). the exact configuration of the gaps will change, of course, as the nature of the key actors changes (less emphasis on states, perhaps) and as technologies change (easier transmission of information by both official and lay actors). the gap itself will not vanish. states will remain relevant actors; indeed, world politics suggests that national borders are as often reinforced as demolished and that states continue to have responsibilities and interests that might motivate them to conceal information. moreover, a clarification of treaty obligations and the introduction of a new lever for the who will not entirely resolve the problem. in the past, with no uncertainty about obligations to report, countries nevertheless failed to report outbreaks (carvalho and zacher ; fidler : ; kamradt-scott : - ; scales ). although the who can more nimbly alert the world about an outbreak, it can do little beyond that: no sanctions, no fines, no cancellation of membership. and new incentives for non-compliance will continue to arise. until samples were used to create flu vaccines, countries had little reason to withhold samples of new influenza strains. but under a regime that protects intellectual property and gives those supplying samples no share of the income from the sale of resulting vaccines, countries now have an incentive not to offer their samples for the common good. when indonesia, responding to this incentive structure, began withholding flu samples, a new who working group developed a non-binding framework to encourage both virus and benefit sharing (fidler ; fidler and gostin ; scales ; smith ). in the argument of this article, sars plays a central (albeit non-determinative) role. but is sars simply a useful case on which to hang the argument? or could the argument have been built around hiv/aids, h n , ebola, zika, or some other infectious disease? in fact, other diseases and sars are not interchangeable in this argument; sars is not 'merely' an example. because of historical timing, sars was the epidemic that brought the previously recognized failings of existing disease surveillance systems into the spotlight and stiffened the spines of those pushing for change. the features and timing of sars helped to bring the shortcomings of the ihr into sharp relief, undermining their legitimacy and making it essentially impossible for the who and public health specialists to continue working under the old rules. the legitimacy of the who increasingly depended on denying the legitimacy of the ihr. by the time sars appeared, the deficiencies of the ihr had become so glaringly apparent that the wha had endorsed the who's use of unofficial information even before the rules changed. but particular features of the disease, namely its brief incubation period and moderate transmissibility, meant that the adage that microbes do not respect national borders was all too applicable. 
local outbreaks of sars had global relevance in a way that local outbreaks of hiv/aids, with its long period of dormancy, did not. sars quickly became a global threat. but it also mattered that the disease arose in a country that wished to suppress information about the outbreak. in the age of the internet and cell phones, information, like microbes, neither respects borders nor governmental edicts on secrecy. thus sars brought to a head a long-standing clash between national governments' desires to keep secrets and new capacities to transmit information with or without governments' blessing. in fidler's view, 'china's behavior [at the start of the sars epidemic] put the final nail in the coffin of basing global surveillance for infectious diseases only on government information ' ( : ) as the rules required. sars was a 'historic moment in public health governance' (fidler : ) , the tipping point for new governance strategies (fidler : ) . in a limited sense, then, sars was a boon to the who because it provided an added inducement for the wha and member states to modify the rules in ways that benefited the entire group and gave the who and ihr new relevance. although the ihr's limitations had long been apparent, by making it impossible to deny that the treaty provisions were outmoded sars accelerated the process of reaching consensus on proposed changes. the revisions of the ihr attempted to deal with two kinds of ignorance: ignorance about outbreaks of known diseases and ignorance about newly emerging diseases and other threats to public health. before revision, the ihr had focused only on outbreaks of known diseases and therefore on ignorance that could in principle be reduced or even eliminated by full and honest disclosure. as it became clear that infectious diseases were not going to be eradicated, as new diseases continued to emerge, and as natural disasters, industrial accidents, air and water pollution, and so forth came to be understood as threats to public health, the ihr's focus shifted to these less tractable forms of ignorance and thinking changed about what should be reportable under the ihr. this expanded understanding of threats to public health brought both expanded obligations for states and expanded obligations for the who. the who's remit now included not just spreading the word and issuing advisories about a larger package of threats to public health, but also overseeing and orchestrating the scientific work of untangling the etiology, symptom patterns, modes of detection, and effective remedies for these threats. into this changed environment, the reworked information order introduced a more sophisticated understanding of the relationship between what was or could be known and what was unknown and perhaps even unknowable. the modified procedures of the ihr in some senses acknowledged the difference between technically sufficient information that was also socially sufficient -because it had been supplied by mandated state reporters -and technically sufficient information that was not socially sufficient because it travelled to the who by unconventional or even clandestine routes. but the loosening of constraints on the sourcing of information did more than simply make information usable by recategorizing previously unofficial, socially insufficient information. 
the modified procedures also opened the door to using information as leverage, with information of inferior quality or illegitimate provenance being used to pry loose information of better quality or from official sources. moreover, in casting a wider net and exhibiting its willingness to draw on an expanded network of informants and more variable kinds of information, the ihr seem to acknowledge the essential irreducibility of ignorance. when uncertainty cannot be eliminated, and when the transmission and withholding of information is at least in part a strategic game, an entity such as the who is in no position to sharply limit the information it will consider. the ihr, a renegotiated global public health information order, thus incorporate into their structure an acknowledgement of the complex relationship between knowledge and ignorance, socially sufficient information and technically sufficient information, and the socially constructed nature of these distinctions. although this article focuses on negotiated information orders in global public health governance, its argument and evidence address broader issues about how global norms change and how social groups manage risk. the story of the - sars epidemic, the core empirical component of the article, is about the possibility that a virulent new disease would become a devastating pandemic and about an emerging (but not yet formalized) obligation to inform the who about serious threats to public health. the comparison points -the threat of aids contamination in banked blood (healy ); threats from oil spills, nuclear power accidents, and nuclear war (clarke ); and threats from accidents on north sea oil rigs and platforms (heimer a) -are also about how key actors assessed novel risks. in all of these cases, the assessment of the core risk was implicitly balanced against other risks -risks to trade and tourism for sars; risks to relationships with important constituencies for the blood banks (healy ); risks to desired investments in business and government enterprises (clarke ); and risks to vested interests in the insurance business (heimer a). generally speaking, though, as discussions unfolded, only some of the risks were fully on the table, perhaps because people were not wholly aware of how other considerations were shaping their thinking, perhaps because of the questionable legitimacy of balancing other risks (trade and tourism, in the sars case) against threats to life and health. the result is often a pattern of minimizing assessments of danger and normalizing those (implicit) assessments. as noted earlier in the article, many disease outbreaks, even of the three reportable diseases, had not been reported to the who. somewhat like the normalization of deviance that diane vaughan ( ) so carefully describes in the challenger launch decision, the deviant non-reporting of disease outbreaks had been normalized. some countries -especially poorer ones -were learning from one another that they would suffer no consequences from ignoring ihr treaty obligations. although the ihr were described as regulations to protect health in all countries, in fact they focused on stemming the spread of disease from poor countries to richer ones. as chorev ( a) suggests, international obligations perceived as coercive are more likely to be reinterpreted locally and perhaps ultimately transformed through processes of reactive diffusion. in the case of the ihr, reactive diffusion essentially made the already unenforceable ihr progressively less useful. 
but in the pre-sars period, the evidence in fact suggests a more complex process of normative change. two rather different norms were being institutionalized simultaneously in global public health governance. at the same time that ignoring ihr treaty obligations was becoming the norm in some circles, a different norm was spreading in other circles. some countries -especially the richer ones -were adopting a more cooperative stance, sharing information not only on ihr reportable diseases but also on other infectious diseases and threats to public health. it was this cooperative norm, not the norm of non-reporting, that ultimately diffused and, coupled with the sars epidemic, led to a reinvention of the ihr as a treaty with a few more teeth. how did this happen? here a comparison with the space shuttle launch decision is instructive. although nasa carried out rigorous, carefully scripted pre-launch reviews, contextual pressures to launch could subtly shift thinking about which risks could be dismissed and which warning signs ignored. over time, these modified assessments were institutionalized and the insularity of the process made it hard for alternative viewpoints to force a recalibration. the conflict between protecting against rare events and attending to business is utterly mundane (vaughan ) , so mundane that insurers have institutionalized methods for protecting key risk management tasks from production pressures (heimer b) . the job of the ihr, arguably, is to rebalance risk assessments so global public health interests are not regularly sacrificed when discrediting information about health threats is concealed to protect a country's trade and tourism. yet the ihr treaty gave the who few levers to induce such a rebalancing. unlike space shuttle launch decisions, though, global public health governance does not take place behind a single set of closed doors. thus, although a practice of non-reporting -normalized deviance -seemed to be developing in some sectors, changes in information technologies and communication patterns made secret keeping more difficult and shifted the balance in favour of the more cooperative norm. even with china's strict control over the internet and the press, text messages and emails spread news about 'atypical pneumonia', forcing public officials to acknowledge the outbreak. although any single medium might fail to pick up the news, the proliferation of methods for detecting signals makes suppression of information more difficult. a news blackout might make gphin, which scrapes information from news outlets, less effective, but have less effect on promed-mail, which relies on medical workers' postings. working together over some considerable period of time and in a series of discrete steps, the new information technologies and the emerging norm of information sharing reconfigured the rules about global public health governance and reshaped understandings about what information could be used and who could supply it. information technologies first reshaped some practices of the who. as the who began to use the unofficial information supplied by entities like gphin, it also initiated the process of redefining non-knowledge as technically sufficient, at least for some purposes. as the who rebuilt its routines to use unofficial information alongside official country reports, new relationships and resources (e.g., goarn) were created around those new information sources. 
both the suppliers of information and the who increasingly treated this new information as technically sufficient. with the endorsement of the wha, these new practices and new definitions of the adequacy of unofficial information were further institutionalized, moving one step further to a formal change in the treaty itself. with the adoption of the ihr, the process was complete -what had previously been categorized as unusable non-knowledge was first reconceptualized as technically sufficient, and ultimately accepted as socially sufficient for use in an expanded menu of actions. nevertheless, information categorized as unusable non-knowledge will always exist and will continue to be important precisely because it comes from different social locations than those tapped by official information. as mary douglas would remind us, we need the sentinels on society's margin to warn us of unexpected dangers every bit as much as we need people working in core institutions to protect us from more routine risks (douglas and wildavsky ). although admittedly the uses of non-knowledge or clandestine knowledge are typically different than the uses of official knowledge, that should not lead us to underestimate either the vital strategic value of non-knowledge or the importance of using it efficiently in a smoothly functioning, adaptable information order. just ask kim philby or david john moore cornwell, aka john le carré.

notes: a who description of gphin (who c) notes that '[m]ore than % of the initial outbreak reports come from unofficial informal sources, including sources other than the electronic media, which require verification' (who n.d.). gphin is often credited with picking up news of a disease outbreak in china in late november (heymann and rodier : ). set up in by the who and formally launched in , goarn is a collaboration of other networks, linking a wide variety of experts and combining both surveillance and response (fidler ; heymann ; heymann et al. ; who c: ). as of , goarn includes as members over technical institutions and networks concerned in one way or another with public health (https://extranet.who.int/goarn/; last viewed march ). fidler ( ), especially, credits goarn with a major role in containing sars. according to virologist malik peiris, 'if something untoward was happening across the border, it would come to hong kong pretty quickly' (enserink a). vietnam was a reluctant trigger, though. as hospital staff fell ill, the vietnamese government had to be persuaded that this was not simply a 'private problem in a private hospital' but might instead be 'very important' (enserink a). according to the who, the resurgence of cholera in south america and plague in india, as well as the emergence of new infectious agents such as the ebola virus, 'resulted in a resolution at the th world health assembly in calling for the revision of the regulations' (https://www.who.int/ihr/about/faq/en/; last viewed march ). some observers (e.g., katz and fischer ; wenham ) contend that states have not lost their primacy. for similar assessments of the role of sars, see lazcano-ponce, allen and gonzález ( : ) and katz and fischer ( ). to be sure, the new technologies were not free, and wenham ( ) notes that considerable human intervention was required to make sense of the volumes of information arriving through gphin and goarn. but these costs were disproportionately borne by richer countries, who also agreed to help build infrastructure and supply expertise for poorer countries. 
references
hard and soft law in international governance
the market for "lemons": quality uncertainty and the market mechanism
uncertainty and the welfare economics of medical care
the international health regulations in historical perspective
changing global norms through reactive diffusion: the case of intellectual property protection of aids drugs
the world health organization between north and south
a garbage can model of organizational choice
risk and culture: an essay on the selection of technological and environmental dangers
war stories
sars: chronology of the epidemic
uncertain business: risk, insurance and the limits of knowledge
order without design: information production and policy making
information in organizations as signal and symbol
emerging trends in international law concerning global infectious disease control
sars, governance and the globalization of disease
influenza virus samples, international law, and global health diplomacy
the new international health regulations: an historic development for international law and public health
the who pandemic influenza preparedness framework: a milestone in global governance for health
agencies support programs to build overseas capacity for infectious disease surveillance
economic action and social structure: the problem of embeddedness
the unknown in process: dynamic connections of ignorance, non-knowledge and related concepts
routledge handbook of ignorance studies
last best gifts: altruism and the market for human blood and organs
allocating information costs in a negotiated information order: interorganizational constraints on decision making in norwegian oil insurance
reactive risk and rational action: managing moral hazard in insurance contracts
inert facts and the illusion of knowledge: strategic uses of ignorance in hiv clinics
remodeling the garbage can: implications of the origins of items in decision streams
punctuated globalization: legal developments and globalization in healthcare
sars and emerging infectious diseases: a challenge to place global solidarity above national sovereignty
hot spots in a wired world: who surveillance of emerging and re-emerging infectious diseases
global surveillance, national surveillance, and sars
the sars epidemic and its aftermath in china: a political perspective
managing global health security: the world health organization and disease outbreak control
the revised international health regulations: a framework for global pandemic response
the economic impact of sars: how does the reality match the predictions?
security, disease, commerce: ideologies of postcolonial global health
the contribution of international agencies to the control of communicable diseases
estimating the global economic costs of sars
learning from sars: preparing for the next disease outbreak
the internet and the global monitoring of emerging diseases: lessons from the first years of promed-mail
an introduction to the sociology of ignorance: essays on the limits of knowing
institutionalized organizations: formal structure as myth and ceremony
public health surveillance and infectious disease detection
understanding the enemy
the world health organization and the dynamics of international disease control: exit, voice, and (trojan) loyalty', phd dissertation
agency theory
administrative behavior
advancing science diplomacy: indonesia and the us naval medical research unit
china accused of sars cover-up
the challenger launch decision: risky technology, culture, and deviance at nasa
gphin, goarn, gone? the role of the world health organization in global disease surveillance and response
the politics of surveillance and response to disease outbreaks: the new frontier for states and non-state actors
lessons from severe acute respiratory syndrome (sars): implications for infection control
markets and hierarchies
global surveillance of emerging diseases: the promed-mail perspective', cadernos de saúde pública (reports in public health)
status of the outbreak and lessons for the immediate future
world health organization (who) d 'update - sars: chronology of a serial killer'
toolkit for implementation in national legislation
world health organization (who) n.d. 'epidemic intelligence - systematic event detection'
international health regulations ( )

key: cord- - pp ra authors: woolard, robert h.; borron, stephen w.; mackay, john m. title: emergency department design date: - - journal: ciottone's disaster medicine doi: . /b - - - - . - sha: doc_id: cord_uid: pp ra
in the aftermath of terror events and natural disasters with subsequent disaster response planning, hospital architects have begun to design eds to better meet the needs anticipated from a terror attack, flood, or epidemic. some ed design lessons have been learned from disaster events. from the tokyo sarin event, recent natural disasters and epidemic illnesses, and other routine "disasters," such as influenza outbreaks, hospitals know they need to plan for surge capacity. methods of alerting and preparing ed staff early and protecting emergency care providers from contamination and infection are needed, a lesson integrated into current ed staff vaccination requirements and made painfully clear by the tokyo sarin event, in which many emergency providers were contaminated. the boston marathon bombing event illustrated the need to provide emergency and surgical care to mass casualties, requiring coordination of response between hospitals and enhanced field rescue efforts to meet high-volume demands over a short time period. in new york, the cleanup phase after 9/11 led to a prolonged increase in prehospital and ed volume. most care was provided by emergency personnel working close to ground zero. from the anthrax mailings in the wake of the world trade center event, planners learned to anticipate the need for accurate public information and increased ed patient volume. from the flooding and evacuation of hospitals and eds, with the needs for medical capacity met at new sites on high ground during katrina, planners relearned the importance of providing care throughout a larger health care system, with each hospital and ed participating uniquely, some evacuating and relocating, and others providing care to surges of relocated and new patients. the ed remains the most available point of access to immediate health care in the united states. ed designers are now anticipating increased volumes of patients that might be generated by a disaster, epidemic, or a terror event. although a stressed public needs information and health screenings that perhaps can be met by providers outside the ed, the ed will be accessed for counseling and screening when other services are overwhelmed, critical information is misunderstood, or delay to access is anticipated or encountered. in most disaster scenarios, shelter needs are provided outside the ed. however, loss of facilities or needs for quarantine of exposed and ill patients during bioterror events and epidemics may create shelter needs proximate to eds. 
ed design and response capability after 9/11 became a larger concern for public disaster planners, the federal government, and hospital architects. two federally funded projects coordinated by emergency physicians, one at washington hospital center (er one) and another at the rhode island disaster initiative (ridi), have developed and released recommendations. er one suggests designs for a new ed that meets any and all anticipated needs of a disaster event. ridi has developed new disaster response paradigms, training scenarios, and response simulations that also can be used in ed design. er one's threat mitigation recommendations include:
• self-decontamination surfaces
• offset parking away from the building footprint
• single-room and single-zone modular compartmentalized ventilation systems
• % air filtration
• assured water supply with internal purification capabilities
• blast-protection walls and blast-deflection strategies
• elimination or encapsulation of building materials that can shatter during an event
• built-in radiation protection
• advanced security and intrusion detection technologies
an ed design using er one concepts has been constructed at tampa general hospital in florida. after considering highly likely disaster scenarios, specific recommendations were developed and integrated into the tampa ed design. to address surge capacity, the state of florida, the project architects, and hospital administration agreed on an ed that could expand from treatment spaces to within the ed. a parking area beneath the ed was designed to convert to a mass decontamination zone, feeding directly into the ed. the ed observation area was designed to convert into a quarantine unit, with direct access from outside the ed during epidemics. adding these elements to create an ed that is more disaster ready increased the cost of the project by less than %. ed design would be tremendously enhanced if a prototype "disaster-ready" ed demonstrating all the elements of er one could be built. to date no full prototype has been constructed, and it has yet to be shown that ed designers are able to create an "all-hazards-ready" ed, given financial constraints. however, incorporating some disaster-ready suggestions when eds are built or renovated will improve our readiness and may be more financially feasible. new building materials, technologies, and concepts will continue to inform the effort to prepare eds for terror attack. urban ed trauma centers are attempting to develop the capacity to serve as regional disaster resource centers and the capability for site response to disaster events. these eds are designed to incorporate larger waiting and entrance areas, adjacent units, or nearby parking spaces in their plans to ramp up treatment capacity. more decontamination and isolation capacity is being built to help control the spread of toxic or infectious agents. information systems are being made available to provide real-time point-of-service information and any needed "just-in-time" training for potential terror threats. an example of better organized disaster information is the co-s-tr model, a tool for hospital incident command that prioritizes action to address key components of surge capacity. there are three major elements in the tool, each with four subelements. "co" stands for command, control, communications, and coordination and ensures that an incident management structure is implemented. "s" considers the logistical requirements for staff, stuff, space, and special (event-specific) considerations. 
"tr" comprises tracking, triage, treatment, and transportation. having an information system with robust capability to gather and display the detailed information needed in a preformatted and rehearsed mode such as co-s-tr, with easy access by multiple electronic means, including wireless and handheld devices, is an important facility design feature. the technology needed to respond to a terrorist event, such as personal protective equipment (ppe), is becoming more widely available and is stored where it is easily available in eds. although mass decontamination can and in general should occur close to the disaster scene, eds are gearing up to better decontaminate, isolate, and treat individuals or groups contaminated with biologic or chemical materials upon arrival. the tokyo sarin episode demonstrated clearly that contaminated patients will make their way to the ed without waiting for first responders. four elements of ed design are being addressed to prepare eds for terror events: scalability, security, information systems, and decontamination. eds are generally designed with sufficient, but not excess, space. the number of treatment spaces needed in an ed is usually matched to the anticipated ed patient volume; roughly treatment space per annual ed visits or treatment space per annual ed hospital admissions is recommended. according to disaster planners, a major urban trauma center ed built for , to , visits per year should have surge capacity up to patients per hour for hours and patients per day for days. one disaster-ready design challenge is to provide surge capacity to meet anticipated patient needs. hospitals are woefully overcrowded, and eds are routinely housing admitted patients. [ ] [ ] [ ] to maintain surge capacity, efforts to address hospital overcrowding and eliminate the boarding of admissions in the ed must be successful. eds are now being designed to allow growth into adjacent space: a ground level or upper level, a garage, or a parking lot. garage and parking lot space has been used in many disaster drills and mass exposures. garages and parking lots can be designed with separate access to streets, allowing separation of disaster traffic and routine ed traffic. more often, needed terror-response supplies (e.g., antidotes, respirators, personal protective gear) are stored near or within the ed. the cost of ventilation, heat, air conditioning, communication, and security features often prohibits renovation of garages. more often, tents are erected over parking lots or loading areas. modular "second eds," tents or structures with collapsible walls (fold and stack), have been deployed by disaster responders. these can be used near the main ed, preserving the ed for critical cases during a disaster surge. some hospitals are building capacity for beds in halls and double capacity patient rooms, and converting other nontreatment spaces to wards to increase their ability to meet patient surges during disaster. hallways can provide usable space if constructed wider and equipped with medical gases and adequate power and lighting. often only minimal modifications are necessary to make existing halls and lobbies dual-use spaces. some hospitals have increased space by installing retractable awnings on the exterior over ambulance bays or loading docks. tents are often used outside eds as decontamination and treatment areas in disaster drills. tents with inflatable air walls have the added benefit of being insulated for allweather use. 
in the military, the need to provide treatment in limited space has resulted in the practice of stacking patients vertically to save space and to reduce the distances that personnel walk. u.s. air force air evacuation flights have stacked critical patients three high, and some naval vessels may bunk patients vertically in mass-casualty scenarios. similar bed units could be deployed for a mass event. portable modular units are also available to help eds meet additional space needs. unfortunately, many eds plan to rely on other facilities in the regional system and do not build in surge capacity. during a terror event occurring on a hospital campus, the ed function may need to be moved to a remote area within the hospital in response to a flood, fire, building collapse, bomb or bomb threat, active shooter, or other event. many disaster plans designate a preexisting structure on campus as the "backup" ed. the area is stockpiled with equipment and has a viable plan for access. patient and staff movement are planned and developed during drills. in addition to increasing ed surge capacity, significant off-loading of ed volume can be accomplished by "reverse triage" of inpatients. through such measures as delaying elective admissions and surgeries, early discharge, or interhospital transfer of stable patients, significant improvements in bed capacity can be accomplished within hours. , although the capacity to handle patient surges is being addressed regionally and nationally, large events with high critical care volumes will overtax the system regionally, as was the case during hurricane katrina. the national disaster medical system (ndms) can be mobilized to move excess victims and establish field hospitals during events involving hundreds or thousands of victims. however, there are barriers to a prompt response time in the deployment of ndms resources. the ed plan to provide treatment during disaster must include evacuation, since the event may produce an environmental hazard that contaminates, floods, or renders the ed inaccessible. evacuation of ed patients has been addressed by ed designers. some eds have the capacity to more easily evacuate. in well-planned eds, stairwells have floor lights to assist in darkness. stairways are sufficient in size to allow backboarded and chair-bound patients to be evacuated. ground level eds should have access to surface streets, interior pathways, and exterior sidewalks. the communication and tracking system includes sensors in corridors and stairways. patient records are regularly backed up and stored for web access and hence available during and after evacuation. specially designed ambulance buses allow for the safe transfer of multiple patients of variable acuity to other facilities. securing the function of an ed includes securing essential resources: water, gases, power, ventilation, communication, and information. ed security involves surveillance, control of access and egress, threat mitigation, and "lockdown" capacity. surveillance exists in almost all eds. many ed parking and decontamination areas are monitored by cameras. the wireless tracking system can also be part of the surveillance system. a tracking system can create a virtual geospatial and temporal map of staff and patient movement. tracking systems have been used in disaster drills to identify threat patterns. most eds have identification/access cards and readers. 
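the bed capacity gained from "reverse triage" and deferral of elective admissions, as described above, can be roughed out with simple arithmetic. every figure below is an assumption chosen for illustration, not a number from this chapter.

# illustrative reverse-triage arithmetic; all inputs are assumed values, not source data.
staffed_beds = 400
occupied_beds = 380
elective_admissions_today = 30          # assumed; these can be deferred
early_dischargeable_fraction = 0.10     # assumed share of inpatients stable for early discharge
transferable_stable_patients = 15       # assumed interhospital transfers of stable patients

freed = (elective_admissions_today
         + int(occupied_beds * early_dischargeable_fraction)
         + transferable_stable_patients)
available_now = staffed_beds - occupied_beds
print(f"surge beds available within hours: {available_now + freed}")  # 20 + 30 + 38 + 15 = 103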
chemical and biologic sensors for explosives, organic solvents, and biologic agents are becoming available, but have not been used in eds. many eds have metal detectors and security checkpoints prior to access by ambulatory patients and visitors. when selecting a sensor, designers consider sensitivity, selectivity, speed of response, and robustness. [ ] [ ] [ ] [ ] [ ] sensor technology is an area of active research that continues to yield new solutions that could be incorporated into ed security. in concept, all entrances could be designed to identify persons using scanning to detect unwanted chemicals, biologic agents, or explosives, allowing detention and decontamination when needed. given the wide array of physicochemical properties of hazardous materials in commerce, developing sensors with sufficient sensitivity to detect threats while avoiding an excess of false positives will remain a challenge. most eds have multiple entry portals for ingress of patients, visitors, staff, vendors, law enforcement personnel, and others. eds are using screening and identification technologies at all entrances in combination with closed-circuit video monitoring. personnel must be dedicated for prompt response when needed. automation of identification can efficiently allow safe flow of patients, staff, and supplies. vehicle access has been managed by bar-coding staff and visitor vehicles. at some road access points, automated scanners could monitor and control vehicle access. modern eds limit the number of entrances and channel pedestrian and vehicular traffic through identification control points. for the most part, points of entrance into the ed can be managed with locking doors, identification badge control points, and surveillance to allow desired access for staff and supplies. thoughtful planning should facilitate rapid access between the functional areas, such as the ed, operating rooms, catheterization suites, and critical care units. movement within and between buildings needs to be controlled and must allow a total lockdown when necessary. direct threats to the ed include blasts; chemical, biologic, and environmental contamination; and active shooting. there are several strategies to mitigate blasts. twelve-inch-thick conventional concrete walls, using commercially available aggregates ( lb per cubic foot), afford reasonable blast protection. , on some campuses, the space between the ed and the entrance is designed largely to prevent direct attack. however, atriums are terror targets. although atriums are useful as overflow areas, their windows and glass can create hazardous flying debris. in general, use of unreinforced glass windows, which help create a more pleasant ed environment, must be balanced against threat of injury from broken glass shards. given the threat of blast attack, communication, gas, electric, water, and other critical services should be remote from vulnerable areas and shielded when they traverse roads and walkways. protection against release of chemical and biologic agents inside or outside the ed requires a protective envelope, controlled air filtration in and out, an air distribution system providing clean pressurized air, a water purification system providing potable water, and a detection system. better hvac systems can pressurize their envelope, keeping contaminants out, and also purge contaminated areas. anticipated computing needs for ed operations during disaster events are immense. 
in most eds, large amounts of complex and diverse information are routinely available electronically. overflow patients in hallways and adjacent spaces can be managed with mobile computing, which is available in many eds. wireless handheld devices can facilitate preparation for disasters and allow immediate access to information by providers in hallways and decontamination spaces. multiple desktop and mobile workstations are available throughout most eds. during disaster, displays of information that will aid decision making include bed status, the types of rooms available, the number of persons waiting, and ambulances coming in. monitors now display patient vital signs, telemetry, and test results. significant improvements in efficiency and decision making can be achieved when more real-time information is available to decision makers. having available multiple computer screens with preformatted disaster information screens that are regularly used should enhance ed readiness. clinical decision tools and references, such as uptodate, make information readily available to providers. these and other just-in-time resources will be needed when practitioners treat unusual or rare diseases not encountered in routine practice. the wide variety of potential disaster scenarios argues for the availability of just-in-time information. information specific to a disaster event should be broadcast widely on multiple screens in many areas. cellular links and wireless portable devices should also be designed to receive and display disaster information. access to information has been enhanced in most eds through cell phone, texting, and other social media use. developing apps to make local disaster information available through as many media as possible and to guide each staff member should be part of the information system disaster plan. diagnostic decision support systems have been demonstrated to help practitioners recognize symptom complexes that are uncommon or unfamiliar. information systems should be capable of communicating potential terror event information regularly. many eds have log-on systems that require staff to read new information. in a disaster-ready ed, a list of potential threats could be posted daily. however, the utility of computer references or on-call experts is limited by the practitioner's ability to recognize a situation that requires the resource. computer-based patient tracking systems are available for routinely tracking patients in most eds. some computer-based tracking systems have a disaster mode that quickly adapts to a large influx of patients allowing for collation of symptoms, laboratory values, and other pertinent syndromic data. in many regions, eds provide real-time data that serve as a disaster alert surveillance network. routine data obtained on entry are passively collected and transferred to a central point for analysis (usually a health department). in the event of a significant spike in targeted patient symptom complexes, these data can trigger an appropriate disaster response. the capacity for this entry point surveillance should be anticipated and built in to any disaster-ready ed information system. [ ] [ ] [ ] for example, data terminals allowing patients to input data at registration similar to electronic ticketing at airports could passively provide information during a surge, rather than requiring chief complaint and registration data input by staff. this self-service system could add to ed surge capacity. 
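the entry-point surveillance feed described above typically flags a "significant spike" in a targeted symptom complex by comparing today's count against a recent baseline. the sketch below uses a simple mean-plus-threshold rule for illustration only; operational systems run by health departments use more sophisticated statistics, and the counts and threshold here are invented.

from statistics import mean, stdev

# toy spike detector for a syndromic surveillance feed; counts and threshold are illustrative.
def spike_detected(baseline_counts, todays_count, z_threshold=3.0):
    """flag todays_count if it exceeds the baseline mean by z_threshold standard deviations."""
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    return todays_count > mu + z_threshold * max(sigma, 1.0)  # floor sigma to avoid zero-variance alarms

respiratory_complaints_last_14_days = [8, 11, 9, 10, 7, 12, 9, 10, 8, 11, 10, 9, 12, 10]
print(spike_detected(respiratory_complaints_last_14_days, 31))  # True: plausible alert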
similarly, real-time bed identification, availability, and reservation systems used to assist patient management in some eds could aid ed function during disaster. movement to an inpatient bed is a well-documented choke point recognized nationally during normal hospital operations, and implementing a plan to open access to admissions becomes an issue during disaster. lobby screens can facilitate family access to information during a disaster, displaying information about the event and patient status using coded names to preserve confidentiality. during disasters, family members can be given their family member's coded name and access to screens to query for medical information. computers with internet access that could display public event information are available in patient rooms in some eds. during the anthrax mailings, public hysteria taxed the health care system. posttraumatic stress, anxiety, and public concern over possible exposure to a biologic or chemical agent may generate a surge of minor patients at eds. within some eds, lecture halls or media centers are available; they are generally used for teaching conferences but could provide venues for health information and media briefings during disaster. the media are an important source of public information and must be considered when planning disaster response. an adjacent conference area can serve as a media center, where information could be released to the internet and closed-circuit screens could provide more accurate information to allay public concerns and direct the public to appropriate resources and access points for evaluation of potential exposures. poison control centers provide an immediate source of valuable information for hazard communication and risk assessment. notwithstanding the tremendous potential value of computer systems in disaster management, it is important to anticipate and plan for information systems and communications failure. failure of hospital generators, such as occurred in hurricane sandy, will rapidly render computers and landline telephones inoperable. cellular telephone services are often overwhelmed during disasters. the internet appears to be less likely to crash during disasters, due to its redundancy, but hospitals should plan for alternative methods of communication and documentation. many eds have patient decontamination (decon) areas. adequate environmental protection for patients undergoing decon is necessary and includes visual barriers from onlookers, segregation of the sexes, and attention to personal belongings. in many eds, decon areas are being added to accommodate mass exposures. eds have added or augmented decon facilities. decon areas should have a separate, self-contained drainage system, controlled water temperature, and shielding from environmental hazards. exhaust fans are used to prevent the buildup of toxic off-gassing in these decontamination areas. most importantly, decon facilities should be deployable within minutes of an incident, to avoid secondary contamination of the ed. for most eds, mass decon has been accomplished by using an uncovered parking lot and deploying heated and vented modular tent units. uncovered parking areas adjacent and accessible to the ed have been enabled for disaster response. other eds use high-volume, low-pressure showers mounted on the side of a building. serial showers allow multiple patients to enter at the same entrance and time. 
however, serial showers do not provide privacy, can be difficult for an ill patient to access, and can lead to contaminated water runoff. also, persons requiring more time may impede flow and reduce the number of patients decontaminated. parallel showers built in advance or set up temporarily in tenting offer greater privacy but require wider space and depth. combined serial and parallel design allows the advantages of each, separating ill patients and increasing the number of simultaneous decontaminations. often built into the ed is another decon room for one or two patients with the following features: outside access; negative pressure exhaust air exchange; water drainage; water recess; seamless floor; impervious, slip-resistant, washable floor, walls, and ceiling; gas appliances; supplied air wall outlets for ppe use; high-input air; intercom; overhead paging; and an anteroom for decon of isolated cases. ppe is routinely used by military and fire departments during events involving hazardous materials. hospitals likewise must be trained for and plan to use these devices and store a reasonable number of protective ensembles (i.e., gloves, suits, and respiratory equipment), usually near the ed decon area. decon areas are built with multiple supplied air outlets for ppe use to optimize safety and maximize work flexibility. powered air purifying respirators (paprs) are used by many hospitals in lieu of air supplied respirators. while providing increased mobility and convenience, their utility is somewhat limited by the requirement for battery power and the need to select an appropriate filtration cartridge. voice-controlled two-way radios facilitate communication among decon staff with receivers in the ed. a nearby changing area is available in some eds. the changing area is laid out to optimize medical monitoring and to ease access to the decon area. [ ] [ ] [ ] [ ] [ ] the need for easily accessible ppe and adequate training and practice in the use of ppe cannot be overemphasized. some capability to isolate and prevent propagation of a potential biologic agent has been designed into most eds. patients who present with undetermined respiratory illnesses are routinely sent to an isolation area. a direct entrance from the exterior to an isolation room is not usually available but has been a recent renovation in some eds. creation of isolation areas poses special design requirements for hvac, cleaning, and security to ensure that infections and infected persons are contained. an isolation area should have compartmentalized air handling with high-efficiency filters providing clean air. [ ] [ ] [ ] [ ] biohazard contamination is particularly difficult to mitigate. keeping the facility "clean" and safe for other patients is an extreme challenge. biologic agents of terrorism or epidemics may resist decontamination attempts. infected patients present a risk to staff. during the severe acute respiratory syndrome (sars) epidemic, singapore built outdoor tent hospitals to supplement their existing decontamination facility. patients were evaluated outside the ed and those with fever were isolated and not allowed to enter the main hospital. this, among other measures, allowed singapore to achieve relatively rapid control over the epidemic. few triage areas and ed rooms have been designed for decontamination. surfaces must be able to withstand repeated decontamination. sealed inlets for gases and plumbing have also been considered. patients who are isolated can be observed with monitoring cameras. 
some isolation areas include a restroom within their space, which helps restrict patient egress. all ed areas could have more infection control capabilities built in. floor drains have been included in some ed rooms for easier decontamination. infection control is improved using polymer surface coatings that are smooth, nonporous, and tolerant to repeated cleaning, creating a virtually seamless surface that is easy to clean. these coatings can be impregnated with antimicrobial properties, enhancing their biosafe capability. silver-impregnated metal surfaces in sinks, drains, door handles, and other locations can reduce high bacterial content. silverimpregnated metal has demonstrated antimicrobial effects. conventional ventilation systems use % to % outside air during normal operation, thus purging indoor contaminants. air cleaning depends on filtration, ultraviolet irradiation, and purging. hvac design should model demand for adequately clean air and also for isolation of potential contaminants. the disaster-ready ed requires protection from external contaminations as well as contagious patients. a compartmentalized central venting system without recirculation has the ability to remove or contain toxic agents in and around the ed. compartmentalized hvac systems allow for the sealing of zones from each other. more desirable hvac systems electronically shut down sections, use effective filtration, and can clean contaminated air. a compartmentalized system can fail, but it only fails in the zone it is servicing; smaller zones mean smaller areas lost to contamination. these systems are less vulnerable to global failure or spread of contamination. modular mobile hvac units developed for field military applications have been added to existing ed isolation areas for use when needed to create safe air compartments. cost may prohibit addressing issues like building more space or better ventilation, decontamination, and isolation facilities. if added space and facilities are not made more available, many lives may be lost during a disaster event. when funds are scarce, less money is spent for disaster readiness, since all available money is spent to support eds' continuous function day to day. eds in the united states are challenged to provide efficient routine care and board excess admitted patients. however, they must also be designed to handle the consequences of a terror event, epidemic, or natural disaster. these competing functions could result in eds with less financial support to handle routine care. these design efforts could also lead to unnecessary increases in expenditures in anticipation of terror events that never materialize. to the extent that efforts to provide disaster care can be translated into solutions that address other more immediate hospital and ed problems, they will gain support. more access to information systems providing just-in-time training could inform staff not only of terror events but of mundane policy changes and unique patient needs such as bloodless therapy for jehovah's witnesses, etc. better information access could also improve routine ed efficiency and communication with patients and families. hopefully, these rationales will prevail when funds are made available for disaster readiness. decontamination equipment and areas may be used for commercial hazardous materials spills. isolation areas could be more routinely used in an effort to contain suspected contagions, such as influenza. lack of bed capacity in hospitals leads to ed overcrowding. 
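the purge capability discussed earlier in this section can be put in rough numbers with the standard well-mixed dilution relationship used in ventilation guidance: at a constant air-change rate, an airborne contaminant decays exponentially, so the time to reach a target removal fraction follows directly from the air changes per hour. the sketch below applies that textbook formula; the air-change rates shown are illustrative and are not taken from the text.

```python
import math

def clearance_time_minutes(ach, removal_fraction=0.99):
    """Time for a well-mixed space to purge `removal_fraction` of an airborne
    contaminant at `ach` air changes per hour: t = -ln(1 - f) / ACH (hours)."""
    return 60.0 * (-math.log(1.0 - removal_fraction)) / ach

for ach in (6, 12, 15):
    print(f"{ach:>2} ACH -> {clearance_time_minutes(ach):5.1f} min for 99% removal")
```

a compartmentalized system changes the consequence of failure rather than this arithmetic: the same clearance times apply, but only to the zone that is lost.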
scalable eds may offer temporary solutions in times of overburdened hospital inpatient services. however, when reserve spaces are used to solve other overcapacity problems, those spaces are no longer available for disaster operations. thus, a new facility could "build" the capability of handling large surges of patients into adjacent spaces, only to lose it by filling these spaces with excess patients whenever the hospital is over census, which is a recurrent problem at many medical centers. finally, the next disaster event may be different from those for which responders prepare. the rarity of terror events creates a need for testing and practicing disaster plans, skills, and capacities in drills to maintain current competence. drills may uncover design problems that can then be addressed, but such drills can only prepare emergency personnel for anticipated threats. why pour such resources into building capacity that may never be used or undertake other costly initiatives in anticipation of disaster events? among the lessons learned from past disaster events is the need to develop disaster skills and build a disaster response system from components that are in daily use. systems that are used routinely are more familiar and more likely to be used successfully during disaster events. certainly the surge capacity of a disaster-ready ed could be used for natural disaster response and in disaster drills. the surge space could also be used for over-census times, public health events, immunizations, and health screenings. newly built or renovated eds should have excess capacity by design to serve as a community disaster resource. these capacities could be utilized routinely in response to hospital overcrowding or public service events (such as mass immunization campaigns) and should be deployed and tested regularly in disaster drills to maintain readiness in a post- / world.

references
australian college for emergency medicine: functional and space programming
designing for emergencies
rhode island disaster initiative: improving disaster medicine through research
integrating disaster preparedness and surge capacity in emergency facility planning
surge capacity concepts for health care facilities: the co-s-tr model for initial incident assessment
the tokyo subway sarin attack: lessons learned
emergency departments and crowding in us teaching hospitals
trends in emergency department utilization
do admitted patients held in the emergency department impact the throughput of treat-and-release patients?
using 'reverse triage' to create hospital surge capacity: royal darwin hospital's response to the ashmore reef disaster
creation of surge capacity by early discharge of hospitalized patients at low risk for untoward events
inglesby: systemic collapse: medical care in the aftermath of hurricane katrina
physical security equipment action group (pseag): interior/exterior intrusion and chemical/biological detection systems/sensors
computational methods for the analysis of chemical sensor array data from volatile analytes
electrochemical detection in bioanalysis
array-based vapor sensing using chemically sensitive, carbon black-polymer resistors
nadel ba: designing for security
putting clinical information into practice
emergency department and walk-in center surveillance for bioterrorism: utility for influenza surveillance
iceid syndromic surveillance for bioterrorism
policewomen win settlement. seattle times
fire department response to biological threat at b'nai b'rith headquarters
chemically contaminated patient annex (ccpa): hospital emergency operations planning guide
emergency department hazardous materials protocol for contaminated patients
centers for disease control and prevention: cdc recommendations for civilian communities near chemical weapons depots
occupational safety and health administration: hospitals and community emergency response: what you need to know
weapons of mass destruction events with contaminated casualties: effective planning for health care facilities
outline of hospital organization for a chemical warfare attack
guidelines for design and construction of hospitals and healthcare facilities. philadelphia: the american institute of architects academy of architecture for health
emergency response guidebook: a guidebook for first responders during the initial phase of a dangerous goods-hazardous materials incident
department of health and human services: metropolitan medical response system's field operation guide
volume i-emergency medical services: a planning guide for the management of contaminated patients. agency for toxic substances and disease registry
infectious diseases epidemiology and surveillance. australia: victoria
guidelines for design and construction of hospital and health care facilities
sars: experience from the emergency department, tan tock seng hospital
chemically contaminated patient annex: hospital emergency operations planning guide
silver-nylon: a new antimicrobial agent
filtration of airborne microorganisms: modeling and prediction. available at: ashrae transactions

key: cord- -daiikgth authors: van velsen, lex; beaujean, desirée jma; van gemert-pijnen, julia ewc; van steenbergen, jim e; timen, aura title: public knowledge and preventive behavior during a large-scale salmonella outbreak: results from an online survey in the netherlands date: - - journal: bmc public health doi: . / - - - sha: doc_id: cord_uid: daiikgth

background: food-borne salmonella infections are a worldwide concern. during a large-scale outbreak, it is important that the public follows preventive advice. to increase compliance, insight in how the public gathers its knowledge and which factors determine whether or not an individual complies with preventive advice is crucial. methods: in , contaminated salmon caused a large salmonella thompson outbreak in the netherlands. during the outbreak, we conducted an online survey (n = , ) to assess the general public's perceptions, knowledge, preventive behavior and sources of information. results: respondents perceived salmonella infections and the outbreak as severe (m = . ; five-point scale with as severe). their knowledge regarding common food sources, the incubation period and regular treatment of salmonella (gastro-enteritis) was relatively low (e.g., only . % knew that salmonella is not normally treated with antibiotics). preventive behavior differed widely, and the majority ( . %) did not check for contaminated salmon at home. most information about the outbreak was gathered through traditional media and news and newspaper websites. this was mostly determined by time spent on the medium. social media played a marginal role. wikipedia seemed a potentially important source of information. conclusions: to persuade the public to take preventive actions, public health organizations should deliver their message primarily through mass media.
wikipedia seems a promising instrument for educating the public about food-borne salmonella. with an estimated . million cases each year, foodborne salmonella infections are a worldwide concern [ ] . in developing areas in africa, asia and south-america, salmonella typhi and paratyphi are an important cause of severe illness, leading to more than million cases and . deaths in children and young people every year [ ] . a typical salmonella infection can lead to fever, diarrhea, nausea, vomiting, abdominal cramps, and headache. symptoms usually appear between to hours after eating contaminated food, and last three to seven days. the incidence rate of salmonella is highest among infants and young children. as there are many different types of food-borne salmonella, each with their own food sources, control is difficult. proper hygiene in the kitchen (e.g., washing hands, thoroughly heating and baking meat) can prevent a salmonella infection. however, studies among the general public in italy [ ] , turkey [ ] and new zealand [ ] showed that compliance with preventive hygiene advice is low to very low. a possible explanation is that most people believe that a food-borne infection is "something that happens to others" [ , ] . educating the public about food safety is crucial in preventing food-borne infections. according to medeiros and colleagues [ ] , food-borne salmonella infections should be prevented by educating the general public about adequate cooking of food, and by instructing them about the risks of cross-contamination. traditional communication means, such as flyers, are well suited to achieve these educational goals [ ] . however, when a food-borne infection breaks out on a large scale, the dynamics of the situation shift tremendously. due to an uncertain course of events, decisions have large consequences, the general public is stressed, and the media is eager for news [ ] . in these circumstances, health organizations should inform the public about the situation and persuade them to take preventive actions. to be effective in this endeavor, they should use the communication channels the general public expects them to use, and provide the public with the information they want and need. a study among malaysians during the a(h n ) influenza outbreak in , uncovered that their main sources of information were newspapers, television and family members; their information needs were instructions on how to prevent or treat infections [ ] . in the netherlands, the severe acute respiratory syndrome (sars) outbreak and the enterohaemorrhagic e. coli (ehec) outbreak showed us that the dutch general public mostly turns to traditional media (i.e., television and radio), and news websites [ , ] . in recent years, the rise of social media (e.g., facebook, twitter) has provided new avenues for reaching the general public during infectious disease outbreaks. although social media have proven very valuable during disaster relief as a crowdsourcing tool [ ] , an exploratory study of their worth as a communication tool during an infectious disease outbreak suggested their value to be limited [ ] . research on the information behavior of the general public during infectious disease outbreaks is scarce. but this knowledge is crucial in serving the general public in their information needs, and in maximizing citizen compliance with preventive advice. 
in this study, we uncovered the general public's perceptions, knowledge, preventive behavior, and sources of information during a large, national salmonella outbreak by a large-scale online survey. as a result, we were able to answer our main research question: which information should health organizations convey during a largescale salmonella outbreak, and by which channels, to maximize citizen compliance with preventive advice? in the beginning of august , an outbreak of salmonella thompson occurred in the netherlands [ ] , later traced back to contaminated smoked salmon from one producer. by september , all smoked salmon of this producer was recalled. in the following week, other products containing this producer's smoked salmon (e.g., salads) were also recalled. citizens were advised to check the batch number of their products and to dispose of possible contaminated products. after implementing those measures, the number of cases decreased rapidly and by the end of , the outbreak came to an end. , laboratory-confirmed patients and four deaths were reported [ ] . the actual number of patients is thought to be higher, as individual cases of salmonella gastro-enteritis are not mandatory notifiable in the netherlands and laboratory confirmation usually merely takes place in a fraction of all patients presenting with diarrhea. according to dutch standards, this situation classifies as a large-scale outbreak, as it is an occurrence of disease greater than would otherwise be expected at a particular time and place. normally around four cases of salmonella thompson are seen in the netherlands per year. we developed an online survey to assess the general public's perceptions, knowledge, preventive behavior, and information use during the salmonella thompson outbreak. the instrument was constructed on the basis of the health belief model [ ] , and research on citizen channel choice for medical information [ , ] . the survey contained questions, and was divided into five domains: participants' information intake about the outbreak through the media, and where they went to look for answers to questions related to salmonella infections and the outbreak. perceptions were assessed by multiple statements with five-point likert scales (ranging from disagree ( ) to agree ( ) ). items were based on bults et al. [ ] . knowledge was assessed by nine true/false statements. preventive behavior was assessed by multiple-choice questions about what respondents did after hearing about the outbreak. sources of information were determined by questioning how often and where respondents saw, heard or read about the outbreak. next, we asked respondents if they had wanted more information about the outbreak or an answer to a specific question about the outbreak. if so, we asked where they had sought this information or the answer. if they had so through the internet, then we asked them if they had found it through a google search, whether they had found what they were looking for, how satisfied they were with the website, and how much they trusted the information. to keep the length of the survey acceptable, we only posed these questions for one website the participants named. if they named more than one website, the website was chosen at random. the survey can be found in additional file . respondents were recruited by a commercial panel that also hosted the survey in their online environment. the panel supplied standard demographics for each respondent (e.g., age and income). 
a stratified sample was taken to create a representative group of the dutch population. the minimum age for participation was years. the target sample size was , respondents, to allow for satisfactory statistical power, and to maximize our chances of including people who contracted a salmonella infection. respondents received points for participating, with which they could buy gifts in an online shop. panel participants received an individual invitation via email of which the first was sent out on november , . the survey was closed on november , . due to the method of recruitment, a response rate could not be calculated. written informed consent was obtained from each respondent for publication of this report. the nature of this general internet-based survey among healthy volunteers from the general population does not require formal medical ethical approval according to dutch law [ ] . descriptive statistics were performed for the demographics, respondents' preventive behavior, and sources of information. cronbach's alpha was calculated to assess internal consistency for the psychological rating scales. these scores were . for perceived severity of salmonella, . for perceived severity of the outbreak, . for carefulness with salmon preparation during the outbreak, . for carefulness with general food preparation during the outbreak, . for interest in health information, and . for perceived health. next, mean scores were computed for the aforementioned psychological rating scales, while the statements for assessing knowledge about salmonella infections resulted in a sum score (ranging from to , where is no knowledge and is very high knowledge). to establish the influence of factors determining respondents' application of preventive measures during the outbreak (dependent variable), we performed stepwise backward regression analyses. following [ ] [ ] [ ] , we included the following independent variables in the initial model: the demographics age, education, income and sex, and the factors perceived severity of a salmonella infection, perceived severity of the outbreak, knowledge about salmonella infections, and increased general kitchen hygiene during the outbreak. education was recoded into a new variable with three options: low, middle or high, while sex was included in the regression analyses as a dummy variable. these actions make it possible to include these nominal variables in this kind of regression analysis. factors were removed from the model if p > . . the procedure was repeated for determining the factors that influence the consumption of information about the salmonella outbreak for different media. here, consumption of information on a medium was the dependent variable for the different models (each model explaining the information consumption for a specific medium.). we included the following independent variables in the initial models: the demographics age, having children, education, income, and sex (based on [ , ] ), as well as the factors perceived severity of a salmonella infection, perceived severity of the salmonella outbreak, knowledge about salmonella infections, interest in health information, and perceived health (based on [ ] ), as well as the application of measures to prevent a salmonella infection, and increased carefulness with preparing food (following [ ] ). for the variables using twitter or not, and having children or not, we also created a dummy variable. 
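the scale scoring and backward elimination steps described above are conventional and can be made concrete with a short sketch. everything below is illustrative: the data are simulated, the variable names are invented, and a conventional 0.05 cut-off is assumed for the removal rule, mirroring the procedure described in the text rather than reproducing the study's actual data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a multi-item scale (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def backward_eliminate(y: pd.Series, X: pd.DataFrame, p_cut: float = 0.05):
    """Refit OLS repeatedly, dropping the predictor with the largest p-value
    until every remaining predictor satisfies p < p_cut."""
    X = X.copy()
    while True:
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < p_cut or X.shape[1] == 1:
            return fit
        X = X.drop(columns=[worst])

# simulated toy data (not the survey data)
rng = np.random.default_rng(1)
n = 400
hygiene = rng.normal(3.0, 1.0, n)      # increased kitchen hygiene during the outbreak
severity = rng.normal(3.5, 0.8, n)     # perceived severity of the outbreak
age = rng.normal(45, 15, n)            # a demographic unrelated to the outcome here
prevent = 0.6 * hygiene + 0.2 * severity + rng.normal(0, 1, n)

X = pd.DataFrame({"hygiene": hygiene, "severity": severity, "age": age})
final = backward_eliminate(pd.Series(prevent), X)
print(final.params.round(2))

# reliability of a hypothetical three-item severity scale
items = pd.DataFrame({f"item{i}": severity + rng.normal(0, 0.4, n) for i in range(1, 4)})
print("cronbach's alpha =", round(cronbach_alpha(items), 2))
```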
these analyses allowed us to formulate recommendations in line with our main research question: which information should health organizations convey during a large-scale salmonella outbreak, and by which channels, to maximize citizen compliance with preventive advice? in total, , respondents completed the survey. table displays their demographics, showing that the sample is fairly representative for the dutch population. figure shows how often the respondents made use of different media. most respondents watched television more than two hours a day. radio was less popular, although one quarter listened to this medium more than four hours a day. the majority spent some time each day reading a newspaper. most respondents used the internet intensively. finally, . % had a twitter account, . % a hyves (a dutch social network) account, and . % a facebook account. respondents perceived salmonella thompson to be quite a severe infection (m = . ; sd = . ). this finding is corroborated by the comparison respondents made between a salmonella infection and other illnesses. this comparison is displayed in table , and shows that salmonella is estimated as severe as asthma and diabetes. the outbreak was also estimated as quite severe (m = . ; sd = . ). respondents' mean interest in health information (m = . ; sd = . ), and their perceived health (m = . ; sd = . ) were neutral. we assessed respondents' knowledge about salmonella infections by nine true/false statements (see table ). the respondents appeared to be well informed, with a few exceptions. % was unaware of the common sources of a salmonella infection, , % unaware of its incubation period, and , % was unaware of how salmonella is treated in general. we calculated a sum score for each respondent's knowledge (with a maximum of ). the mean score was . (sd = . ). respondents' self-reported application of measures to prevent a salmonella infection during the outbreak was below the neutral point (m = . ; sd = . ), as was their estimation of an increase in kitchen hygiene during the outbreak (m = . ; sd = . ). however, in both cases standard deviations are quite high, implying that there were people who increased their kitchen hygiene tremendously, and people who absolutely did not. our regression analysis showed that the application of preventive measures (dependent variable) was influenced by increased general kitchen hygiene during the outbreak (β = . ; p < . ), by perceived severity of the outbreak (β = . ; p < . ), and by the demographics income (β = . ; p < . ) and sex (higher for women; β = . ; p < . ). a significant beta means that a factor influences the dependent variable (in this case application of preventive measures). a low beta stands for a small influence, a high beta for a large influence. in this case, the betas show that four factors influence the application of preventive measures; of which increased general kitchen hygiene is by far the greatest influence. explained variance (r ) for the model was . (which means that the dependent variable is explained for a large part by the identified independent variables, but also by some, as of yet, unidentified variables). in our sample, eight respondents (. %) indicated to have gotten a salmonella infection from eating contaminated salmon. a larger group ( respondents; . %) knew someone in their close vicinity (friends or family) who ate contaminated salmon and then got a salmonella infection. 
we asked the respondents whether they checked if they had salmon at home when they heard of the outbreak. it turned out that: respondents ( . %) checked but did not have salmon at home; respondents ( . %) checked and did have salmon at home; respondents ( . %) did not check if they had salmon at home. next, we assessed what the respondents did who had salmon at home: respondents ( . %) found out their salmon was not contaminated; respondents ( . %) threw all salmon away; respondents ( . %) found out they had contaminated salmon and threw it away; respondents ( . %) found out they had contaminated salmon, but did eat it; respondents ( . %) did something else, mostly returning contaminated salmon to the supermarket. in assessing the information behavior of the general public during the salmonella outbreak, we made a distinction between passive and active information behavior [ ] . passive information behavior consists of situations in which a person receives information without actively searching for it (e.g., listening to the radio, stumbling upon an item when surfing on a news website). in other words, a person is exposed to information without a direct and specific need for this information. active information is caused by a question or explicit need for information, after which a person actively seeks out information. figure displays the channels and popular online sources from which the respondents have passively received information about the salmonella outbreak. television was the medium that delivered most information, followed by radio and newspapers. news website nu.nl was also a relevant source of information. finally, social media played a marginal role, whereby social network sites were more important than twitter. next, we assessed what factors influence passive information consumption for each channel or source (dependent variables). results for the different regression analyses can be found in table (each column representing the regression analysis for a specific medium). time spent on the medium was the most influential predictor for passive consumption of information for several media or sources. interest in health information, and perceived health influenced passive consumption of information for all media and sources, except for social media. perceived severity of the salmonella outbreak played a small role in the passive consumption of outbreak-related information through traditional media. the other factors and demographics played no or a marginal role, with one exception for age in the case of nu.nl (a popular news website in the netherlands), where lower age was an important predictor. we also encountered active information behavior among the respondents. ninety-one respondents ( . %) finally, we focused on a specified range of online sources, and if a website was visited by a respondent, we asked how the website was found, whether it provided the information the respondent was looking for, how satisfied he/she was with it, and whether he/she trusted the information. the number of respondents who answered these questions was relatively low (ranging from for the nvwa website, to for facebook and hyves). most online sources were either found through a google search or directly by entering the url. the nvwa website and wikipedia were predominantly found through a google search, and newspaper websites were mostly accessed directly. virtually all sources provided the seekers with the information they were looking for. 
table (excerpt) - knowledge statements, the correct answer, and the percentage of respondents answering correctly:
if you have symptoms from salmonella (like vomiting or diarrhea), you are temporarily not allowed to work in healthcare. (true)
salmonella can predominantly be found on chicken, raw vegetables, and fruit. (true; . %)
after you have eaten salmonella-contaminated food, it can take weeks before you become ill. (false; . %)
salmonella is almost always treated with antibiotics. (false; . %)
figure: number of times news about the salmonella outbreak was received per source (n = , ). note: nu.nl is a popular news website in the netherlands.
satisfaction with the source was high for wikipedia, the nvwa website, and the website of the municipal health service, while it was low for facebook and hyves. trust in the online source was relatively high for the websites of the government organizations: the rivm, the nvwa, and the municipal health service. trust in the website of the company that was the source of the outbreak and of the social networks facebook and hyves was relatively low. our results show that shortly after salmonella thompson broke out nationally in the netherlands, the general public perceived salmonella gastro-enteritis as a serious illness, comparably severe to asthma and diabetes. they also perceived the outbreak as severe. respondents' knowledge of salmonella (gastro-enteritis) was appropriate, except for the common food sources of a salmonella infection, the duration of the incubation period, and the fact that treatment with antibiotics is usually not needed. this study reveals gaps in the public's knowledge on salmonella infections, and shows where health education efforts should be put in by health organizations. moreover, it also shows that it is important to assess existing public knowledge regarding different infectious diseases, in order to improve health communication, and to fill knowledge gaps. despite warnings through mass media channels, the majority of the respondents neither checked whether they had contaminated batches of smoked salmon products at home, nor did their kitchen hygiene increase during the outbreak. while the perceived severity of the outbreak influenced the adoption of preventive measures to some degree, increased general kitchen hygiene during the outbreak appeared to be the most important antecedent. this suggests that being careful to avoid a foodborne infection during an outbreak is primarily done by people who are already concerned about food safety. since salmon is very popular and processed in many other products, it is quite possible that people did not realize they owned contaminated products. some people even knowingly ate contaminated salmon, thereby neglecting health officials' advice to throw contaminated salmon away, or to return it to the supermarket. during the infectious disease outbreak, the general public mostly receives information through traditional media and popular news(paper) websites. health organizations should focus on these media to inform the general public, and to persuade them to take preventive actions. we came to a similar conclusion after studying information behavior during the german ehec outbreak [ ] . we uncovered that people do not use social media in these situations, as they think health-related information is 'out of place' there, or unreliable [ ] . investing time and effort in a social media campaign may serve only a very small portion of the population, resulting in a low return on investment.
the consumption of outbreak-related information through a traditional medium and twitter was mostly determined by time spent on the medium, suggesting that consuming outbreak-related information is for a large part coincidental, and highly determined by the news selection of the different media. a higher interest in health information also resulted in more outbreak-related information consumption. however, this could also be due to a recall bias, as those interested in such information might more easily remember receiving it. other predictors played no or a marginal role, with the exception of lower age for the popular dutch news website nu.nl. only a small sample of our respondents actively searched for information about salmonella or the outbreak. those who did mostly turned to the internet. there, they consulted multiple sources, found through a google search or by entering the url, like national food safety institutes, online newspapers, websites of municipal health services, and wikipedia. the latter has also been found to be an important source of information during other infectious disease outbreaks [ , ] . it should be noted, however, that the popularity of wikipedia could be due to the high ranks it receives in google. the website of the national institute for public health and the environment (the dutch equivalent of the american centers for disease control and prevention) was consulted less than the aforementioned sources. this implies that such national institutes should not solely rely on their own communication efforts, but they should collaborate with local health organizations, and they should contribute to relevant wikipedia articles. there has been some debate, however, concerning the quality of wikipedia articles for the goal of public health education, and studies on this matter show mixed results. the quality of medical wikipedia articles has been found to be good but inferior to official patient information [ ] , of similar quality as official patient information [ ] , or incomplete, which might have harmful effects [ ] . these results imply that if health organizations decide to use wikipedia to inform the public during a large-scale salmonella outbreak, they should make a continuous effort to monitor the relevant articles and to improve their quality. our analysis did not result in a clear set of predictors for consuming outbreak-related information through social media. also, the predictors that are often found for consuming health information through traditional media (like interest in health information, and perceived health) did not hold for these services. if we are to find a set of predictors for this context (presuming they do exist, considering the little use the general public made of social media during the outbreak), we will have to step off the beaten path and gather a set of new predictors. we conducted the survey at the end of the salmonella outbreak. while this allows for a good retrospective view, the general public's perceptions and behavior may evolve during an outbreak. different phases induce different information needs, related to the uncertainties of the situation (e.g., fear may play a bigger role when the outbreak source is still unknown) [ ] . a longitudinal setup would provide insight into these developments, and it would be an interesting direction for future research. second, the number of people in our study who actively searched for more information or for answers to their questions was relatively low.
it is therefore difficult to base generalizable conclusions on these results, and our efforts should be viewed as explorative. they do provide valuable input for in-depth studies aimed at assessing people's outbreak-related information seeking processes. such studies have already generated important insights for the health domain (e.g., [ ] ). but it is also possible that, in this context, people actively searching for more information is a rarity, possibly due to the fact that the information provided by the different media is perceived as adequate. other studies should acknowledge or refute this thesis. finally, our study was restricted to the dutch general public. we do not have any indications that these results would not hold for other western european countries, but they should be validated for countries where the process of outbreak-related information provision and the internet penetration rate are fundamentally different. this study aimed to determine which information health organizations should convey during a large-scale salmonella outbreak, and by which channel, to maximize citizen compliance with preventive advice. we found that after the outbreak, the general public perceived salmonella gastro-enteritis as severe, but the public did not wholeheartedly apply the advised preventive measures. health organizations should use traditional media, and news and newspaper websites to inform the public, and to persuade them to take preventive actions. they should increase knowledge about salmonella infections, and stimulate citizens to check for possibly contaminated products at their home, and to increase kitchen hygiene. future research should focus on the role wikipedia can play during infectious disease outbreaks, not only those caused by salmonella. we are especially interested in case studies in which health organizations have used wikipedia as a public health education tool, and in how they experienced this in terms of public appreciation, and organizational investment. furthermore, studies assessing the quality and completeness of health-related wikipedia articles can be very valuable in helping health organizations decide on which articles they should use or improve the quality of. finally, our study pointed out that there is a group of people who knowingly take risks by eating contaminated products during a salmonella outbreak. a future study should focus on this group, and uncover their motivations for doing so (e.g., by interviewing patients with an infection who were seen by doctors during a salmonella outbreak), to improve health education for this group.

references
the global burden of nontyphoidal salmonella gastroenteritis
global trends in typhoid and paratyphoid fever
food safety at home: knowledge and practices of consumers
the knowledge and practice of food safety by young and adult consumers
van der logt p: survey of domestic food handling practices in new zealand
consumer perceptions of food safety risk, control and responsibility
south and east wales infectious disease group: differences in perception of risk between people who have and have not experienced salmonella food poisoning
food safety education: what should we be teaching to consumers?
development and evaluation of a risk-communication campaign on salmonellosis
risk communication for public health emergencies
public sources of information and information needs for pandemic influenza a(h n )
sars risk perception, knowledge, precautions, and information sources, the netherlands
je: should health organizations use web . media in times of an infectious disease crisis? an in-depth qualitative study of citizens' information behavior during an ehec outbreak
harnessing the crowdsourcing power of social media for disaster relief
outbreak of salmonella thompson in the netherlands since
national institute for public health and the environment: salmonella thompson-uitbraak
historical origins of the health belief model
who learns preventive health care information from where: cross-channel and repertoire comparisons
determinants of internet use as a preferred source of information on personal health
perceived risk, anxiety, and behavioural responses of the general public during the early phase of the influenza a (h n ) pandemic in the netherlands: results of three consecutive online surveys
central committee on research involving human subjects: manual for the review of medical research involving human subjects
food safety perceptions and practices of older adults
socioeconomic determinants of health- and food safety-related risk perceptions
what does the food handler in the home know about salmonellosis and food safety?
information behaviour, health self-efficacy beliefs and health behaviour in icelanders' everyday life
information behaviour: an interdisciplinary perspective. inf process manag
public anxiety and information seeking following the h n outbreak: blogs, newspaper articles, and wikipedia visits
wikipedia and osteosarcoma: a trustworthy patients' information?
patient-oriented cancer information on the internet: a comparison of wikipedia and a professionally maintained database
accuracy and completeness of drug information in wikipedia: an assessment
crisis and emergency risk communication
health information-seeking behaviour in adolescence: the place of the internet
public knowledge and preventive behavior during a large-scale salmonella outbreak: results from an online survey in the netherlands

additional file : survey. abbreviations: ehec: enterohaemorrhagic e. coli; nvwa: the netherlands food and consumer product safety authority; rivm: national institute for public health and the environment. the authors declare that they have no competing interests. lvv contributed to the study design and collection of data, analyzed the data, and drafted the manuscript as the lead writer. djmab contributed to the study design and collection of data, and critically reviewed the first draft of the paper. jewcgp and jes contributed to the study design. at contributed to the study design and collection of data, and critically reviewed the first draft of the paper. all authors approved the final version.

key: cord- - ukr se authors: okuyama, tadahiro title: analysis of optimal timing of tourism demand recovery policies from natural disaster using the contingent behavior method date: - - journal: tour manag doi: . /j.tourman. . . sha: doc_id: cord_uid: ukr se

this paper examines the applicability of the contingent behavior (hereafter, cb) method for analyzing dynamic processes and efficient policies in tourism demand recovery. the cb questionnaires used for this study were based on a hypothetical disaster situation of bird flu in kyoto, japan. safety, event, visitor information, and price discounting policies were designed accordingly.
respondents were then asked about their willingness to travel time. the results showed the optimal timing for devising pertinent policies during the year. we found that the first step requires a safety information announcement, within one week, immediately after disaster site decontamination. the second step is the implementation of event information policy within th to th week after the disaster. the third step constitutes announcing visitor information within the th to nd week after the second step. the final step is the implementation of price discounting policy, until the nd week, immediately after the third step. natural disasters have occasionally caused physical and economic damage to both tourist and non-tourist sites, leading to loss of tourism opportunities and the collapse of tourism industries (murphy & bayley, ; ritchie, ) . given the possibility of long-term economic deterioration due to continuing reduction in tourism demand, opportunity losses are a major concern for policymakers and the industry itself (chew & jahari, ) . the bird flu outbreak in japan's miyazaki prefecture in forced public officers to prohibit visitor entry to disaster areas, followed by the culling of influenza-stricken birds, which caused losses of approximately ¥ . billion (miyazaki prefecture, ) . the great east japan earthquake, which occurred at a magnitude of . , and the ensuing tsunami in the tohoku area (east side of japan), in , killed nearly , people. these disasters led to economic losses of ¥ . trillion, which included losses due to a decrement in the number of tourists, from . million in to . million in (cabinet office, government of japan, ; kento, ) . the great kumamoto earthquake, which occurred in the kyushu area (west side of japan) in , caused deaths and economic damages worth ¥ . million to ¥ . trillion to the kumamoto and oita prefectures. it further resulted in a decrease of approximately . million tourists to the kyushu area between april and june , compared to the same period in (cabinet office, government of japan, ; kyushu economic research center, ).
e-mail addresses: okuyama@sun.ac.jp, z_okuyama@hotmail.com.
in previous literature, tourism management studies have analyzed frameworks and methods for tourism recovery at disaster sites (durocher, ; faulkner, ; huanga & min, ; mazzocchia & montini, ; wang, ) . for instance, ritchie ( , p. ) noted that tourism crisis and disaster management models should be developed for decision-making. however, due to lack of tourism demand data with respect to disasters, few studies have examined the quantitative effects of recovery policies. owing to insufficient research on this topic, this study examines a valuation method, while simultaneously measuring the quantitative effects and the optimal timing (order) of tourism recovery policies applying the contingent behavior (hereafter cb) method. by showing the optimal policy timing (order), we expect to contribute toward 1) helping policymakers when they may not be able to undertake rescue operations and recover disaster losses simultaneously due to financial and human resource shortages, and 2) the development of advance planning (the stage of faulkner, ) before potential disasters. the cb method design requires consideration of the realities and existence of disaster-related solutions.
as it is difficult to design and establish efficient solutions for earthquakes of large magnitudes, tsunamis, and typhoons, which typically cause considerable damage across a wide area, this study employs a bird flu scenario as a hypothetical natural disaster. the world health organization ( ) reported that, from to , bird flu claimed human lives globally. in asia alone, % and . % of all those infected by bird flu died in china and vietnam, respectively. brahmbhatt ( ) reported that bird flu decreased vietnam's gross domestic product (gdp) by . %. moreover, the alarming possibility of a worldwide bird flu pandemic continues to exist. in such a scenario, approximately million to million people could die (ministry of foreign affairs of japan, ) . the remainder of this paper is structured as follows. section summarizes the main objectives of this study based on a review of previous studies. section describes the estimation models and survey questionnaires. section presents the estimation results. the discussion and conclusions appear in sections and , respectively.
. . tourism demand recovery management from disasters
faulkner ( ) and ritchie ( ) presented the frameworks of tourism demand recovery processes (strategies). faulkner's ( ) framework is divided into six stages: ) the pre-event (pre-disaster) stage (stage ) to mitigate the effects of disaster through advance planning, ) the prodromal stage (stage ), indicating the inevitability of a disaster, ) the emergency stage (stage ) to undertake rescue operations in the event of a disaster, ) the intermediate stage (stage ) that responds to the short-term needs (e.g., food, medicines) of people and companies in the disaster site, ) the long-term recovery stage (stage ), which includes reconstruction of infrastructure and victim counseling, and ) the resolution stage (stage ), which requires restoration of routine along with new and improved state establishments. the fifth and sixth stages are post-event stages, and the focus of this study. thus, the policy effects from pre-event to the post-event stages and the feedback effects from the post-event to the pre-event stages described in racherla and hu ( ) are not our focus. furthermore, the third and fourth stages would constitute the main parts of emergency policies. as mentioned in ritchie ( ) , the quantitative valuation of the recovery process is one of the most important tasks of tourism disaster management. faulkner ( ) , thus, presented various strategies, such as restoration of business and consumer confidence, and repair of damaged infrastructures. the ministry of land, infrastructure, transport and tourism of japan (mlitt, ) states that the recovery process has to include management policies for safety information, pricing, visit campaigns, among others. moreover, beirman ( ) suggested the importance of media, public relations, and regional cooperation in case studies. regardless of these suggestions, policymakers might not know which policies are effective, when they should be implemented, and which policy ordering is desirable under the provision of few quantitative valuations. the method used in this study could lead policymakers to make quick and appropriate decisions that may reduce or prevent damages related to a disaster. the ministry of land, infrastructure, transport and tourism of japan ( ) has published a manual (hereafter the mlitt manual) on the management of tourism demand recovery before and after the occurrence of infections, such as the bird flu. fig.
shows the framework of the recovery process as per the mlitt manual in relation to the stages in faulkner ( ) . the vertical axis shows tourism demand (tourists' choice probability) levels. the horizontal axis shows time series, where t refers to the emergence time of the bird flu, t denotes the time when the affected areas/sites are decontaminated, and t denotes the time that the tourism demand recovers to the standard (pre-stage) demand level. thus, the optimal policy (or policies) in this study refers to a policy or a combination of policies that can recover a tourism demand level immediately after t is closest to or over the standard demand level at t (t ). the tourism demand process was categorized into periods to . period almost corresponds to stages and of faulkner ( ) ; period , to stages and ; and periods and , to stages and , respectively. one of the aims of this study is to examine the recovery process by estimating the demand function after t in period . theoretically, tourism demand is determined by travel prices to tourism sites, individual, or household income, and site attributes data, such as nature, safety levels, and leisure amenities (dann, ; dwyer, forsyth, & dwyer, ) . tourism policy evaluations, which are based on demand function approaches, measure policy effects from these factor (policy variable) changes (e.g., discounting the prices and improving attributes). while micro (consumer behavior) data are frequently used for the demand analyses (fleming & cook, ; phaneuf, kling, & herriges, ) , the difficulty of researching such data from the time series of independent consumer behaviors sometimes dissuades researchers from analyzing the dynamic processes of demand functions, especially in disaster. macro data following the time series are also used for tourism demand analysis (song & witt, ; wang, ) . using data from the world health organization ( ), kuo, chen, tseng, ju, and huang ( , ) showed the negative impacts of the severe acute respiratory syndrome (better known as sars) and bird flu on tourism activities. page, song, and wu ( ) also showed that the outbreak of bird flu decreased the number of tourists to england. kuo et al. ( ) and chang et al.'s ( ) results indicated that analyses based on social statistics may not always be able to estimate the impacts of bird flu on tourism, given the influences of external macro-impact factors, such as economic trends, terrorism, and temperature. the cb method could overcome these micro and macro data issues, as it analyzes individual behaviors. this method enables researchers to analyze individual behaviors under (researcher -designed) hypothetical situations, and it is used in cases where observable data are limited. for instance, whitehead, johnson, mason, and walker ( ) used observable data by asking respondents the number of times they visited hockey games depending on game intervals and seat prices. phaneuf and earnhart ( ) measured recreational benefits of lakes using trip data under hypothetical trip time and prices. whitehead, dumas, herstine, hill, and buerger ( ) valued the benefits of improving the widths of beaches using actual and contingent trip data collected under different accessibility conditions to beaches of various widths. using the cb method, whitehead ( ) analyzed hurricane evacuation behaviors by asking respondents about their order of fleeing from their homes depending on the strength of the hurricane. overall, few previous studies have incorporated the time series factor into the cb method. 
this study, on the other hand, examined cb questionnaires from previous studies, such as phaneuf and earnhart ( ) , and developed a new cb questionnaire with time series factors for analyzing the policy timings and orderings. this study mainly examined the effects of information and pricing policies. three information policies are included in the cb questionnaire. the first is the provision of safety information, by the japanese government and kyoto prefectural governments, to ensure safety in traveling to kyoto prefecture, from the discussions of stage of faulkner ( ) and the mlitt manual. note that tourists sometimes may not visit a disaster site without safety information. the second is the provision of event information of the respondents' preferred events that have been performed and/or new tourism facilities that have been established in kyoto. the third is the provision of visitor information regarding the number of tourists who have already visited kyoto. the second and third information policies are designed for improving (respondents') destination images referred by beirman ( ) , chew and jahari ( ) , and ritchie ( ) . previous studies suggested that pricing policies, such as decrements of travel, hotel, and food costs, have a positive effect on tourism demand (garrod & fyall, ; laarman & gregersen, ) . h.i.s. ( ), a major japanese tourism company, also implemented a tourism campaign (price discounting) in collaboration with the japanese government to recover tourism demand from japan to france, following the terrorist attacks in france. thus, this study employed price discounting. thus, tourism demand recovery will be delayed if its effects are not revealed in the recovery process. thus, this study also examined the pricing policy effect through comparisons with information policies. previous studies employed the time series analysis (eilat & einav, ; gurudeo, ; song, li, witt, & fei, ) and the random utility model (baltas, ) in analyzing tourism demands. here, the cb questionnaires collect "yes/no" response data on individual decisions on travel, and thus, this study used the logit model for the estimations. let x be a vector of variables and b be a transported vector of parameters. pr, defined by equation ( ), is the probability of obtaining a negative response ("no") from the ith respondent. equation ( ) shows the log-likelihood function. the estimations were performed using the glm function in r ver . . . . survey questionnaires the following hypothetical site conditions were considered for the cb questionnaires: i) short distance from all respondents' homes to reduce the number of rejected responses typical with long distance travel, ii) actual bird flu experiences to add reality to the hypothetical situation described in the cb questionnaires, and iii) use of a famous site to avoid wrong answers resulting from respondents' unawareness. the questionnaires were in japanese. this study selected kyoto prefecture, one of japan's most famous tourism sites, as the site for the hypothetical case. fig. shows the location of kyoto prefecture with the hypothetical bird flu outbreak. kyoto prefecture is located in the central part of japan (e , n ), satisfying condition (i). its area is . km , and the japan sea lies towards its north, while nara prefecture lies to its south and mie prefecture is located towards its east. the population in was . 
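the binary logit used for the estimations is referenced in the text as equations ( ) and ( ); written out in the standard form implied by the definitions given (x_i the vector of explanatory variables for respondent i, β the parameter vector, and pr_i the probability of a negative response), they can be written as below. this is a conventional reconstruction rather than a transcription of the original equations, so the sign convention on β may differ from the paper's.

```latex
% eq. (1): probability that respondent i gives a negative ("no") response
\Pr\nolimits_i \;=\; \frac{\exp(\mathbf{x}_i'\boldsymbol{\beta})}{1 + \exp(\mathbf{x}_i'\boldsymbol{\beta})}

% eq. (2): log-likelihood over N respondents, with y_i = 1 for a "no" response
\ln L(\boldsymbol{\beta}) \;=\; \sum_{i=1}^{N}
  \Bigl[\, y_i \ln \Pr\nolimits_i \;+\; (1 - y_i)\ln\bigl(1 - \Pr\nolimits_i\bigr) \Bigr]
```

maximizing this log-likelihood with respect to β is exactly what a binomial glm with a logit link does, which is consistent with the text's note that the estimations were performed with the glm function in r.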
million (kyoto prefectural government, ) , and, in , approximately , thousand japanese and foreign tourists visited kyoto (kyoto city government, ). kyoto was the capital of japan nearly years from the eighth century onwards. many historical temples and shrines that were built during the period still exist. seventeen historic sites, such as kiyomizu-dera temple and nijo castle, are recognized as world heritage sites. kyoto has three major festivals, which attract tourists: the aoi-matsuri festival in early summer, the gion-matsuri festival in mid-summer, and the jidai-matsuri festival in fall. these events are hugely popular with both japanese and foreign travelers. the bird flu outbreaks (fig. ) in this study are categorized as domestic bird flu, wild bird flu, and other bird flu (ministry of agriculture, forestry, and fisheries, ) . kyoto prefecture is hypothesized to have a case of wild bird flu, satisfying condition (ii). condition (iii) was confirmed by the survey questionnairesdrespondents were asked whether they had heard of and ever been to kyoto. all respondents answered "yes" to the former question, while . % of them answered "yes" to the latter question, thus satisfying condition (iii). the cb questionnaire was implemented in four steps, as shown in appendix . the first step (appendix -a) was to research respondents' credible information source to avoid respondents' distrust in an information source, described in appendix -d. specifying respondents' credible information sources reduces the rate of rejection responses in the cb questionnaires. the second step was to provide explanations on bird flu (appendix -b); the third, to present a hypothetical bird flu outbreak (appendix -c); and the fourth, to implement the cb questionnaire (appendix -d). alternatives and results of respondents' credible information sources are shown in table and appendix -a. the most credible information source is public information ( . %). the contents of explanations on bird flu are almost same as those in the introduction. fig. a in appendix -b shows the actual and hypothetical areas with bird flu outbreak to the left and right, respectively. in the explanation, respondents were also explained that the bird flu outbreak in kyoto had not resulted in any damages to humans and food items. as shown in the right panel of the figure, designating all of kyoto prefecture as a hypothetical area with bird flu outbreak would avoid misunderstandings among the respondents (and hence, any wrong answers). otherwise, some respondents might respond that they visited areas affected by the bird flu outbreak against their will. in addition, the fact that the flu outbreak in kyoto prefecture was attributed to wild birds might have made the hypothetical situation more realistic. announcements made after disasters play an important role in tourism demand recovery. safety information would particularly alleviate tourists' anxiety and assure them that any danger from the situation has passed. this study designed three hypothetical information policies (see appendix -d): safety information (information ), event information (information ), and visitor information (information ), as mentioned in section . . combinations of the policies were presented to the respondents via the cb questionnaires. the combinations were mainly categorized as type a and b, based on the inclusion or exclusion of safety information, respectively. ip denotes the dummy variables for both types, respectively: for yes, for others. 
the subscript safe (e.g., ip safe ) means safety information (type a) is provided, while nosafe (e.g., ip nosafe ) indicates none is provided. thus, the ip nosafe variable is used to explore the natural recovery process of tourism demand over time. the superscripts event and visitor (e.g., ip event and ip visitor ) mean event and visitor information are provided, respectively. thus, ip event safe and ip visitor safe indicate mixed (simultaneous) policies of event and visitor information with safety information, respectively. ip event nosafe and ip visitor nosafe indicate single policies of event and visitor information without safety information, respectively. for example, ip event safe ¼ means the respondents selected a wtt value to a cb questionnaire when event and safety information are simultaneously provided; ip event nosafe ¼ means the respondents selected a wtt value when only event information is provided. estimating the travel cost variables (hereafter tc) allowed the analysis of the effects of price discounting policies on increasing tourism demand. firstly, a hypothetical tourism design is considered. two-day and one-night travel times were assumed as the hypothetical travel times. the japan tourism agency ( ) reported that the average total overnight stay days in trips per year and average number of trips per year by the japanese in were . and . , respectively. thus, the average overnight trip days per trip is . / . ¼ . in , . / . ¼ . in , and . / . ¼ . in . the average values of overnight trip days per trip indicate that almost half of the japanese do not (or would not like to) travel for over . days. two-day and one-night travel times were also assumed to avoid reject responses due to longer travel times and expensive cost. the website of jtb (a japanese major travel company, http:// www.jtb.co.jp/) showed that the maximum travel cost in january was ¥ , per trip. a sum of ¥ , (half the maximum value) was employed to help respondents understand the price difference. as a hypothetical campaign for price discounting, a sum of ¥ was presented as the lowest price level (such as a positive follow up campaign in ritchie, , p. ) . in examining the survey, it is expected that some respondents might consider ¥ too expensive for a travel trip to kyoto. for example, respondents living in takatsuki city in osaka prefecture could travel to and from kyoto within ¥ in march (west japan railway company, http://www.westjr.co.jp/global/en/; in english). the influence of low price design on tourism demand was confirmed through simulationda larger price discounting would lead to larger tourism demand recovery compared to information policies. respondents were asked about their willingness-to-travel time (hereafter wtt) after the bird flu outbreak was resolved under the hypothetical situation, given information policies and travel costs. the minimum wtt was designed as "a ( ) week" to ensure sufficient planning time for respondents. depending on respondents' annual income, the maximum wtt was "a year ( weeks)." for periods exceeding a year, respondents' different budget constraints would have resulted in varied consumption schemes. thus, the other alternatives were "a month ( weeks)," "three months ( weeks)," "six months ( weeks)," "nine months ( weeks)," and "never ( weeks)." the values within the parentheses denote the value of the wtt variable. appendix -d shows an example of the cb questionnaire. 
the matrix-type answer format was used in the questions (broberg & br€ annlund, ; evans, flores, & boyle, ; wang & heb, ) . each respondent was assigned to either group a or b (with or without safety information), after which they answered three questionnaires (one for each of the price levels). to avoid order effects, respondents were randomly assigned to either type a or b, and then asked to answer another type questionnaire (bateman & langford, ; halvorsen, ) . a dummy variable, odr, was used to examine the order effect, where odr took for the first questionnaire (type a or b), and for the other. respondents who selected "never" in type a (including safety information) with tc of ¥ (the lowest travel cost) were asked about their reasons for rejection using a free response format. the questionnaire showed four reasons for rejection: "anxiety about being infected by the bird flu (ranxiety)," "distrust safety information provided by central and local governments (rdtrust)," "not willing to travel to kyoto (rnwill)," and "others (rother)." finally, "yes/no" response data for the logit model were constructed from the wtt data (bishop & heberlein, ; habb & mcconell, ) . let wtt ji (wtt j f ; ; ; ; ; g) denote the time that respondent i is willing to travel, given the hypothetical situation. respondent i's positive ("yes") response to wtt ji conduces positive ("yes") responses for the periods wtt ki ! wtt ji (ksj); otherwise, it conduces negative ("no") if responses or the periods wtt hi < wtt ji (hsj). the "yes/no" response data for wtt were constructed accordingly, and the pooling data were employed for the estimations. table shows the questionnaires for the individual characteristics. as observed in previous studies, respondents' gender, age, income, jobs, and educational status were analyzed. respondents' travel experience (exkyoto), interest to tour kyoto (intkyoto), and anxiety about the bird flu (axbf) were also studied as impact factors for the tourism demand recovery process. note that questions on exkyoto, intkyoto, and axbf are asked to respondents before the questionnaire on the credible information in appendix -a; questions on other variables are asked after the cb questionnaire. this study estimated the following three models. model was used to estimate the parameters of the main variables (wtt, information policies from ip safe to ip visitor nosafe , tc, and icm) and reasons for rejection. model included the main reasons for rejection and individual characteristic variables to conduct statistical evaluations for all parameters. here, model also confirmed the signs and statistical evaluations of parameters by wtt periods; the pooling data of a period were used. thus, the wtt variable was not included. a part of the result is examined in the result and discussion section, and appendix shows all the results. model was formulated by eliminating the statistically insignificant variables from model . thus, model provided the final estimation results, and was used for the simulations. the robustness of the signs and statistical evaluations of the parameters were checked using the estimation models. impacts of individual characteristics on decisions were revealed by the results of model . model to do so, this study employed the criterion as less than % of the p-value. cont denotes a constant. in all models, the expected parameters xb the applicability of the cb method was examined using the expected signs of the parameters. 
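to make the estimation step concrete, the binary logit referenced above as equations ( ) and ( ) can be assumed to take the standard form (a hedged reconstruction; the exact sign convention of the original equations cannot be recovered here): with y_i = 1 for a "yes" response and x_i the covariate vector,

$$\Pr_i = \Pr(\text{no}_i) = \frac{1}{1 + \exp(x_i' b)}, \qquad \ln L = \sum_i \Big[ y_i \ln\big(1 - \Pr_i\big) + (1 - y_i) \ln \Pr_i \Big].$$

the sketch below illustrates, in r (the language named in the text), how the pooled "yes/no" observations could be built from the stated wtt answers and passed to glm; the variable names and the placeholder wtt coding are assumptions for illustration only, not the authors' code.

```r
# illustrative sketch only (variable names and the wtt coding are assumptions):
# expand each stated willingness-to-travel (wtt) answer into pooled yes/no rows,
# one row per offered period, then fit the binary logit with glm().

wtt_alternatives <- c(1, 4, 13, 26, 39, 52)   # weeks; placeholder coding of the answer options

covariate_names <- c("ip_safe", "ip_event_safe", "ip_visitor_safe",
                     "ip_event_nosafe", "ip_visitor_nosafe", "tc", "icm", "odr")

build_pooled <- function(answers) {
  # 'answers': one row per cb question answered; the stated wtt is NA for "never"
  do.call(rbind, lapply(seq_len(nrow(answers)), function(i) {
    a <- answers[i, ]
    block <- data.frame(
      wtt = wtt_alternatives,
      # "yes" for every offered period at least as long as the stated wtt, "no" otherwise;
      # a "never" answer gives "no" for all periods
      y = if (is.na(a$wtt)) 0L else as.integer(wtt_alternatives >= a$wtt)
    )
    cbind(block, a[rep(1, nrow(block)), covariate_names])
  }))
}

# pooled <- build_pooled(answers)
# fit <- glm(y ~ wtt + ip_safe + ip_event_safe + ip_visitor_safe +
#              ip_event_nosafe + ip_visitor_nosafe + tc + icm + odr,
#            family = binomial(link = "logit"), data = pooled)
```

given such a fit, the simulated "yes" probability for a scenario is simply the logistic function evaluated at the chosen covariate values, e.g. plogis(sum(coef(fit) * c(1, x_scenario))), which is how the week-by-week recovery curves discussed below can be traced.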
b visitor and b event were used to check whether preference ordering on information policies would be preserved. for example, b visitor estimation results were used to simulate policy effects into the tourism demand recovery process. determining a pre-disaster demand level could help to understand the policy effects through simulation analyses. this study designed the standard level as . % following intkyoyo, in table , because actual behavior data (exkyoto) would be inadequate for the stated preference-based simulations due to differences between revealed and stated behaviors (whitehead, ) . moreover, it was assumed that odr ¼ eliminated the order effects in all simulations. the probabilities for a "yes" response from a week to a year were simulated by applying the estimated parameters ( b b) in model and the mean values (x) in tables , and to equation ( ), that is tourism demand recovery processes by model were simulated under the following simulation conditions (scs). here, the interpretations for the superscripts and subscripts of scs are same as ipa and ipb variables, as mentioned earlier. a policy variable that equals (e.g., ip safe ¼ ) refers to an implementation of the policy, and (e.g., ip safe ¼ ) refers to no implementation. for simplicity, the policy effect was assumed to sustain from the starting point ( st week) to the terminal point ( nd week). for example, the effect of safety information beginning at the st week continues until the nd week. ten scs are examined. the sc nosafe showed a natural recovery process, indicating an increase in the number of tourists without policies (all information variables equal to zero). next, the single effect of safety information (ip safe ) is observed for the process under sc safe donly ip safe ¼ . the demand recovery processes by the mixed effects of the safety and event information (ip event safe ), and of safety and visitor information (ip visitor safe ), appear under sc event safe and sc visitor safe , respectively. the conditions under sc nosafe to sc full might be insufficient to allow tourism demand recovery to the pre-disaster demand level. hypothetical policies that overcome the reasons for rejection were designed from sc ranx full to sc allrr full , based on sc full . sc ranx full is a policy to overcome anxiety about getting infected with bird flu (the superscript notation, ranx) under sc full , sc ranx&rdtrust full is a policy to overcome both anxiety and distrust toward government information (the superscript notation, ranx&rdtrust) under sc full , and sc allrr full is a policy to overcome all reasons for rejection (the superscript notation, allrr) under sc full . simulating the travel cost (price) discount due to the simulated tourism demand from sc safe to sc allrr full would help compare effects of price discounting under different information policies. simulated tourism demands were calculated based on the nd week probability levels from sc safe to sc allrr full . let std be a simulated "yes"response (tourism demand) probability for weeks calculated from sc safe to sc allrr full , and let tc be the mean value presented to respondents in table below. then, the simulated discounting costs (sdc) were calculated by equation ( ). the simulated "pure" price discounts for overcoming ranxiety, ranxiety, and rdtrust, and all reasons for rejection, were calculated as the sdc values from sc ranx full , sc ranx&rdtrust full , and sc ranx full , minus the sdc value from sc full . 
here is an inner product of the vectors of the mean values and the parameters of the other variables. the survey was conducted by an internet research company for -to -year-old residents living in major cities of japan, in february . the respondents' average age and rates of numbers in the cities were designed to be as consistent as possible with the national survey data for (statistics bureau, ) . the company paid online reward points (available for shopping in registered stores) to respondents for motivating them to participate. the questionnaires were sent by e-mail to , respondents who had registered with the company. overall, of , respondents satisfied the above conditions. however, the exact response rate was unknown due to the lack of information collected by the company on the number of non-participants. data were collected from respondents. table presents the individual characteristics of the respondents. the proportion of male respondents was . %, with an average age of . years. the corresponding national survey of japan indicated that these values were . % and . years, respectively, in . the average household income was approximately ¥ . million, while the national survey recorded a value of ¥ . million in (ministry of health, labour and welfare, ). the data indicated that the values corresponding to the male respondents and households in this survey would be slightly higher. however, the average age was almost the same. the highest "yes" response rates for jobs and educational statuses were about . % for homemakers (jbhm) and . % for undergraduate university students (eduv). tourism experience at kyoto (exkyoto) was . %, but . % respondents were interested in visiting kyoto (intkyoto). finally, . % respondents felt anxious about the bird flu outbreak (axbf). table shows the results of the cb questionnaires, including ip safe and ip event nosafe . the second row shows the results for combinations of policies. the third, fourth, and fifth rows show the means, standard deviations, and number of respondents who answered "never," respectively, for ¥ to ¥ , . ip nosafe for ¥ , , respectively. the minimum and maximum numbers for the respondents who replied "never" were (the third row for ip event safe ) and (the fifth row for the ip nosafe ), respectively. finally, table shows the pooling data for the logistic simulations. table shows the reasons for rejection for respondents. one hundred and sixty-three respondents answered "anxiety about being infected by the bird flu." the second and third reasons for rejection were "distrust safety information provided by central and local governments" and "other," respectively. the estimation results in table showed that the parameters of all the models had the same signs, indicating the low probability of multi-collinearity. the obtained parameter signs satisfied expec- thus, the cb questionnaires in this study proved to be appropriate for studying individual preferences. the results of model showed that income (icm), part-time job (jbptj), high school education (edhs), vocational college education (edvc), and anxiety about being infected by the bird flu (axbf) were not statistically significant. the results of model showed that the parameter signs of gnd, age, jbts, jbfp, jbst, edjc, eduv, exkyoto, and intkyoto were positive, while those of mar, jbsom, jbhm, and edtc were negative. finally, the statistically significant and positive sign of b odr indicated that the probability of attaining a "yes" response was influenced by the order effect. 
the results of main variables by wtt periods from model are shown in table . the complete details of all results are shown in note: n ¼ . standard errors are in parentheses. number of respondents who answered "never" appears in brackets. appendix . the estimated parameters of main variables (from b safe to b tc ) without income parameters (b icm ) are statistically significant in all periods, while the b icm are statistically significant in first week and week, though not in other periods. the parameter signs from b safe to b tc are same in models to . furthermore, the signs of b icm are negative from the st week to the th week, and positive from the th week to the nd week. table shows the simulation results for sc nosafe to sc full . the minimum and maximum values were . and . for the st week, . and . for the th week, . and . for the th week, . and . for the th week, . and . for the th week, and . and . for the nd week, respectively. table shows the results of the price discounting (prices after the discount) policy simulations. the minimum and maximum values were ¥ for sc ranx full and ¥ , for sc full , respectively. the applicability of the cb method and information policy effects were evident from the results of model in table . first, the signs of the parameters were as expected. second, the results b visitor safe > b event safe and b visitor nosafe > b event nosafe indicated that the preference orderings were preserved. thus, the cb method can be used to analyze tourism demand recovery from disasters. the orderings indicated that providing visitor information could be more effective than event information. next, the ordering b safe > b visitor nosafe > b event nosafe indicated that safety information could have the highest effect among all information policies. most tourists were not willing to travel to disaster sites without safety information. the ordering b visitor safe > b event safe > b safe indicated that mixed policies could have a greater recovery effect over single policies. these results also supported the applicability of the cb method for estimating preferences. the findings confirmed the mixed effects of safety and other information. a pure effect of event information was that b event safe À b safe ¼ . >b event nosafe . furthermore, according to the visitor information, b visitor safe À b safe ¼ . >b visitor nosafe . these results indicated that mixed policies could generate synergetic effects. moreover, the fact that the value of b safe was larger than the pure effect values indicated that providing safety information could have the highest effect. table indicates that the parameters of information policies totally tend to increase in spending periods (although there are cases of increment and decrement by periods). for example, the estimated parameters of ip safe (b safe ) are . in the st week and . in the nd week, respectively. the estimated parameters of ip visitor safe (b visitor safe ) are . in the st week and . in the nd week, respectively. the reason could be that the respondents' anxiety toward the bird flu outbreak decreased as time passed; respondents might come to think the bird flu outbreak would not occur. thus, the travel cost parameters (b tc ) decrease from À . in the st week to À . in the nd week. this indicates that the price effects are enhanced over time due to an increase in the number of willing-to-travel respondents by reducing (overcoming) their anxiety. 
table also shows that the signs of icm parameters (b icm ) turned negative from the st week to the th week to positive from the th week to the nd week b icm was statistically significant for the st week and nd week, and insignificant from the th week to the th week. the results indicated that tourism at disaster sites was an inferior good (that decreases corresponding to increments of icm) in the period just after its occurrence, but changed to a normal good (that increases corresponding to increments of icm) as time passed. this can be attributed to the fact that a tourism site in disaster would be considered a low-quality good that was not be preferred to other non-disaster tourism sites (loomis & walsh, , p. ) . finally, the results for the individual characteristics indicated that the following persons were willing to travel provided information announcements were made: male respondents (because of their tolerance level, they typically suffer less anxiety about being infected by the bird flu compared to women); the elderly, temporary staff, freelance professionals, and university students (because they have ample free time to plan and travel); and those interested in traveling to kyoto. information policies could be effective at attracting these persons to travel. otherwise, persons who were married, operating self-owned businesses, homemakers, and those educated at technical colleges were not swayed by information policies, possibly because of the fear of catching the infection themselves and/or infecting their children (the negative influence of tourists' perceived risks for tourism sites described in law, and rittichainuwat & chakraborty, ). furthermore, they could also possibly have little time to travel because of their jobs or study commitments (the substitution between work and leisure time described in weiermair, ). table shows that information policies cannot help postdisaster tourism demand levels (the maximum was approximately . % for sc full ) recovery to reach the pre-disaster demand level ( . %). overcoming the reasons for the rejection in table makes it possible for the post-disaster demand level to come close to the pre-disaster demand level ( . % for sc allrr full ). thus, an issue in recovering tourism demand is how to compensate for demand losses by the reasons for rejection. the results of price discounting policies (prices after the discount) in table showed that the sdt values for sc safe , sc event safe , sc visitor safe , sc full , and sc allrr full exceed the average travel cost per trip (¥ , in table ). this indicated that the tourism demand recovery, to levels of information policy by price discounting, would not be feasible due to the high discounting rates. here, it might be feasible to discount ¥ , (a discount rate of about %) of sc ranx&rdtrust full that corresponds to overcome ranxiety (the anxiety for the bird flu) and rdtrust (the distrust toward the safety information from governments). for example, the japanese government has implemented a maximum rate of % as a price discount for travel to kyushu area in order to encourage economic recovery after the kumamoto earthquake of (kumamoto prefecture tourist federation, ) . thus, the % price discounting could be feasible as a policy for tourism demand recovery. early implementation of tourism demand recovery policies after decontaminating the disaster site could shorten recovery periods. 
however, a lack of information at the planning stage may sometimes act as an obstacle to implementing the policies before the disaster. this section reexamines the estimation and simulation models based on the above findings in order to consider an optimal timing of tourism demand recovery policies. the estimation and simulation results were used for determining the optimal policy orderings. first, the estimation model is reexamined. the estimation results of table indicate that the main variables would be influenced by wtt periods. thus, model is designed based on model as equations ( ) and ( ). here, including interaction terms of the icm variable with the st week and the nd week could help confirm the different parameter signs at one week and at weeks, as shown in appendix , and improve the estimation accuracy of the model. in equation ( ), d and d are dummy variables; d = for wtt = ( for others) and d = for wtt = ( for others); g icm and g icm are the parameters of the interaction terms of icm with d and d , respectively. similarly, different policy effects can be observed by taking the interaction term with the wtt variable. the notations from g safe to g tc in equation ( ) are the same as in model . from the results in tables and , the expected signs are positive for g safe , g event safe , g visitor safe , g event nosafe , g visitor nosafe , and g icm , and negative for g tc and g icm . next, the simulation procedure was examined. first, it is assumed that only a single policy can be implemented in a period; policymakers cannot implement multiple policies simultaneously. for example, event information could not be announced in a given week if safety information was announced at that time. in this case, the safety information (ip safe ) must necessarily be the first policy in order to permit visiting the site after solving the disaster. thus, an efficient ordering of event information, visitor information, and price discounting was examined. the ip event safe and ip visitor safe variables are used as event and visitor information due to the synergetic effects with the safety information, as discussed in section . thus, the ip event nosafe and ip visitor nosafe variables are set to zero in all simulations. the price discounting policy is designed as a % discount of the mean value of tc (tc × . ) from the result in section . the % discount cases (tc × . ; arbitrarily decided) are also simulated for comparison. note that odr = for all simulations. the terminal point is also the nd week. a policy effect is assumed to be sustained from the starting point at j weeks (j { , , , }, corresponding to the four recovery policies) to the terminal point. [table: the starting points and adjusted willingness-to-travel time (awtt), by starting week from the st to the nd week; note: the superscript a marks the starting points of the awtt j values. table: simulations for analyzing effective policy ordering.] for simplicity, the th and nd weeks are not selected as starting points. even if the th and nd weeks were considered as starting points, the orderings of the recovery levels of tourism demand and the optimal policy ordering would be the same, due to the linearity of the wtt variable in the interaction terms; only the recovery levels of tourism demand would differ. j = is reserved for implementing the safety information policy, as mentioned above. thus, j ∈ { , , } are the main starting points for analyzing the optimal ordering. an issue in simulation is how to treat the time delay.
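a minimal sketch of how the interaction-term specification just described could be written down is given below, assuming the pooled data frame and variable names from the earlier sketch; it illustrates the structure of model , not the authors' exact equations.

```r
# dummies for the first and the last offered period (assumed codings)
pooled$d_first <- as.integer(pooled$wtt == min(wtt_alternatives))
pooled$d_last  <- as.integer(pooled$wtt == max(wtt_alternatives))

# each information policy interacted with wtt, income interacted with the
# first/last-period dummies, plus travel cost; a constant is included by default
fit4 <- glm(y ~ wtt:ip_safe + wtt:ip_event_safe + wtt:ip_visitor_safe +
              wtt:ip_event_nosafe + wtt:ip_visitor_nosafe +
              icm:d_first + icm:d_last + tc,
            family = binomial(link = "logit"), data = pooled)
```

the time-delay issue raised above is then handled by the adjusted wtt values described next; in a sketch like this one, that roughly means replacing wtt in the interaction terms by the number of weeks elapsed since the policy's starting week.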
for example, the effect of event information policy with wtt variable (wtt  g event safe  ip event safe in equation ( )) beginning at the th week (wtt ¼ ) is calculated as  b g event safe  , even though the beginning time is j ¼ ; the exact calculation is thus, values of the adjusted wtt at jth week (awtt j ) for a policy shown in table are used by replacing the wtt value in the interaction terms (e.g., awtt event  g event safe  ip event safe ; the superscripts of awtt i indicate the policy name). for example, awtt event means the event information policy begins from the th week, and the awtt value at that time is (thus,  b g event safe  is calculated). in the next ( th) week, is assigned as the awtt value (i.e.,  b g event safe  ). ( ) and ( ) were adjusted as b b event g safe in order to eliminate the safety information effect. thus, the simulation model was redesigned as equation ( ) and ( ) . here, k, l, and m { , , } and kslsm. the mean values in table finally, the patterns of policy orderings listed in table are examined. case corresponds to sc nosafe (natural recovery) in table . cases to show the single effects of information policies as time passes by. case corresponds to sc safe in table , whereas cases and do not correspond to sc event safe and sc visitor safe due to the synergetic effects that overestimate tourism demand recovery levels than those calculated by a single policy. case corresponds to the price discounting policy. here, the % price discounting policy (that compensates for the demand losses from the reasons for rejections, ranxiety and rdtrust) was assumed from discussions in section . . cases e were designed to simulate the optimal policy order. for example, case shows that the safety information policy begins from the st week (att ), the event information policy from the th week (att ), the visitor information policy from the th week (att ), and the price discounting policy from the th week (att ). the estimation results in table showed that only g visitor nosafe is not statistically significant. the signs of estimated parameters are same as model . the ordering of degree of information policy parameters was also same as that of model db visitor otherwise, the results of interaction term variables showed that g event safe > g visitor safe > g safe >g event nosafe >g visitor nosafe ; the visitor information parameters were smaller than the event information parameters. similarly, the ordering of pure effects of event and the estimation results indicated that safety information would have the highest effect on demand recovery. the ordering of event and visitor information are reversed with and without the time series factor. the reason could be that the visitor information based on the actual behavior might give respondents a sense of trust regarding the safety of a disaster site regardless of the periods. deutsch and gerard ( ) showed that an individual's behavior is sometimes influenced by the information obtained from another as evidence about reality. furthermore, mcferran, dahl, fitzsimons, & morales. ( ) showed that a part of the purchase behavior was determined by other consumers' purchase quantities. the parameters of visitor information show little changes as time passes by (also see table ). second, combinations of these information policies would generate synergetic effects on tourism demand recovery with and without the time series factor. 
third, the parameters of icm  d and icm  d in table showed that tourism at disaster sites would be an inferior good in the period immediately after the disaster, and thereafter, changing to a normal good with time. the third result and the negative sign of price parameter might also indicate that researchers would be allowed to analyze the cost-benefit ratio using the consumer surplus for reconstructing infrastructures in tourism sites in a year after the disaster (johansson, ) . finally, the aic and r values of model indicate that it is more suitable for simulations due to the lower and higher values than the ones of models to , respectively. table shows the simulation results from cases e . as references, values in brackets show sc nosafe and sc safe values in table for cases and , respectively. the values of cases e in the " st week" column showed the recovery amounts by policies immediately after disaster site decontaminationdthe first is safety information, followed by visitor information, price discounting, and event information (policy effect order ; peo ). the " nd week" column showed that the order changes to safety information followed by visitor information, event information, and finally price discounting (peo ). thus, implementing safety information as the first step is valid. the result indicates the following: first, planning requires dynamic policy effect analyses because of the possibility of changing policies with time, in contrast to static analyses, due to lack of information. moreover, it would be necessary to consider applicable tourism management frameworks, in disaster, corresponding to policy effect changes with time. second, the pricing policy after solving a disaster will not have a significant effect, in contrast to previous studies, possibly due to respondents' anxiety matching income effects. table also shows that the b tc decreases from À . in week to À . in week . this result indicates that rapid and broad announcements of information polices could be more important. the simulation results of price discounting also indicated that the low price design in the cb questionnaire might not cause bias due to the small differ-enced . and . in weeks for % and % price discounting in table , respectively. the simulation results from cases e are shown in table . in the % price discounting case, the minimum and maximum values in weeks are . % in case and . % in case . thus, the ordering of case could recover tourism demand in weeks to near the standard demand level ( . %). the result showed that the optimal policy ordering is case , which implements safety information as the first step, followed by event information, visitor information, and price discounting, in contrast to the results of policy effect orderings in table dcase for peo and case for peo . the results indicated that the policy orderings based on the effects would not be optimal, that is, they may not achieve the highest recovery of tourism demand due to the time delay (in table ) and the policy effect changes by period. the optimal policy ordering would be valid. after announcing safety information, policymakers and/or companies could encourage tourists to visit the site by events. then, informing the situation, the price discounting could enhance the increment of the number of tourists. the validity of designing the price discounting in the final step is also supported from another viewpoint. 
canina, enz, and lomanno ( ) stated that price discounting helps increase tourism supply (e.g., hotel rooms), but this comes at a cost to revenues. to elaborate, (extreme) price discounting might have a positive effect on tourism demand recovery, whereas it might deteriorate the finances of the government and/or the tourism company. the measurement of price discount rate without deterioration of finances of other stakeholders should be considered in future research. finally, the optimal timing of implementing the policies is discussed based on table and drawn from table . table shows the changes of synergetic effects of mixed policies (b event safe À b safe and b visitor safe À b safe ) and b tc by wtt periods. here, we assume two conditions: i) the policy ordering of case is employed and ii) the policies were implemented one by one. announcing safety information in the first step as soon as possibledwithin week , immediately after decontaminating the disasterdwould result in faster tourism demand recovery at such sites. since the b event safe À b safe values decrease from the th to the nd week, it is preferable to time the second step (event information) within th to th weeks after the disaster. the increment of b visitor safe À b safe values from the th to the nd week and condition (ii) indicated that it would be timed until the nd week after the th week for the third step timing (visitor information). thus, it would be appropriate to implement the fourth step (price discounting) immediately after the third step due to the decrement of b tc values from the th to the nd week. in summary, it is preferable to implement announcing the safety information (the first step) within one week, the event information (the second step) within th to th week after the first step, visitor information (the third step) within th to nd week after the second step (e.g., th week), and the price discounting until the nd week immediately after the third step (e.g., th week). note that the timing relies on the framework of this study. more proper policy timing analyses are needed for the (optimal) recovery process of restructuring infrastructures of tourism sites as described in faulkner ( ) . disasters in tourism sites urge policymakers to implement effective tourism demand recovery policies through optimal timing. however, the difficulty of micro data collection in disasters and external factors in macro data make it difficult for policymakers and researchers to analyze policy effects. this study examined the cb method with the time series factor as an appreciable method for analyzing the policy effects and timing (orderings) after solving the disaster. a bird flu emergency in kyoto prefecture, japan, acts as the hypothetical disaster in the cb questionnaire. the respondents were asked about their wtt under this hypothetical situation, assuming combinations of three information policies (safety, event, and visitor information) and three travel costs (¥ , , ¥ , , and ¥ , ). the alternatives for wtt were designed to range from week to week . the estimation results indicated the following. first, safety information has the highest effect on demand recovery. the second is event information, if it relates to the time series factor (the willingness to travel time variable), and visitor information, if not. event information would be more important in considering the optimal timing due to its relation with the time series factor. 
second, combinations of these information policies could generate synergetic effects on tourism demand recovery. third, tourism at disaster sites would be an inferior good in the period immediately after the disaster, changing to a normal good with time. the simulation results indicate the following. first, the necessity to analyze the dynamic policy effect due to changes with time would require researchers to examine applicable management frameworks corresponding to the effect changes (e.g., announcing music festivals at a tourism site until the th week, or conveying, through the media, the sight of visitors enjoying the festivals to tourists who have not yet visited, until the th week). second, the pricing policy, after solving a disaster, would not have a significant effect due to tourists' anxiety. third, the optimal policy ordering and timings are determined as follows: the provision of safety information within a week, event information within the th to th week after the disaster, visitor information within the th week to the nd week, and price discounting until the nd week, immediately after the third step (e.g., the th week). here, the optimal policy ordering results indicated that policy orderings based on the effects alone would not be optimal, due to the time delay and the policy effect changes. the results indicate that the method of this study could be useful for analyzing dynamic tourism demand processes under recovery policies. the method aims to show policymakers the policy effects and the optimal policy ordering for recovery from disaster damages, especially through advance planning. moreover, the method of this study could be applicable to other disasters, such as earthquakes, hurricanes, crimes, and terrorism, by modifying the contents of the policies in the cb questionnaire (solutions for the corresponding issues are necessary as well). for example, the policy contents for recovering demand after terrorism would include the announcement of arresting terrorists, the establishment of security cameras to monitor terrorism, and police deployment. finally, this study has certain limitations. the first is the difficulty in reducing the number of cb questionnaires when the number of policies or periods increases; the order effect might occur owing to the answer format of this study. for example, choice experiments, which could possibly implement the same analysis as this study, were not employed owing to their difficulty. the second limitation is how to design and estimate the policy effects from the pre-event to the post-event stages, and the feedback effects from the post-event to the pre-event stages, for improving disaster planning, training and education, among others. the third is to research more realistic trip information (e.g., the type and quality of accommodation, food and beverage, and distances from respondents' homes to the tourist site) in order to estimate more realistic price effects, especially in low-price level situations (¥ in hypothetical prices). the simulation results could be improved by collecting these details, and thus further studies could take up these challenges. [table note: the values of the % price discounting cases are in parentheses; (−) means the % price discounting case was not simulated because it was outside the target of the simulations; the standard demand level is assumed to be . % at weeks; the value of sc ranx&rdtrust full at weeks is . in table .]
additionally, new research could also involve adjusting the difference between real and hypothetical behaviors, and applying the proposed method to varied disaster events, such as earthquakes and tsunamis. no other relationships, conditions, and circumstances present a potential conflict of interest. in general, the bird flu influenza virus infects many birds living in naturedmainly water birds, such as ducks (anas). bird flu does not frequently infect humans, but it rarely does so when a person touches or is in close contact with an infected bird. in recent years, cases of h n type bird flu influenza virus infection on humans has been reported and developed. the observed bird flu influenza type in japan is h n . in asian countries, it has also been reported that symptoms of seriously ill patients infected by the h n type are pneumonia, multiple organ dysfunction, among others, while the main symptom of mildly ill patients infected by h n type observed in the kingdom of the netherlands is conjunctivitis. the who reported that persons were infected, of whom persons were dead from to november . another reason that public organizations pay serious attention to the bird flu is the influenza pandemic caused by various bird flu influenza viruses through infections from birds to humans. hence, the pandemic might rapidly expand across the world. currently, the japanese government prevents the bird flu outbreak by euthanizing infected (or probably infected) birds. note that the bird flu shot (vaccination) has not been implemented in japan. please read the explanation below and answer the next questionnaire. in kyoto prefecture, a wild bird infected by the bird flu was observed and decontaminated (see the left panel in fig. a ). however, the spread of infections to humans and food items was not confirmed. it is expected to be difficult in future to undertake protective measures for whole infection cases of the bird flu due to wide action ranges of wild birds. here, the bird flu is assumed to be observed in the entire area of kyoto prefecture (see the right panel in fig. a ) in the next questionnaire. this bird flu outbreak is called an influenza pandemic, as described above. here, the outbreak is assumed to expand only to birds, and not to humans and food items, among others. note that the area of the outbreak is assumed as only the kyoto prefecture. this questionnaire requests you to provide your assumed travel behavior under the hypothetical situations described below. please read the information carefully. note that the travel costs (¥ , , ¥ , , and ¥ , ) differ by the questions presented to you. [hypothetical situation: travel to kyoto prefecture]. imagine that you plan to travel to kyoto prefecture alone. during your planning, you realize that a bird flu emergency has occurred throughout kyoto prefecture (see the table below) . you are also aware that the japanese local governments could decontaminate the bird flu-affected areas without causing human and physical damages. then, you are provided each of following three pieces of information by a credible information source. please answer what would be the earliest period you would be willing to travel to kyoto prefecture, depending on the information provided to you. (please provide this answer even if you live in kyoto in the present time). your travel period is assumed to be two days and one night, with the assumed travel cost being ¥x b (including accommodation and the travel fee). assume that there are no factors (job, homework, etc.) 
impeding your travel in each period. q. please answer your main reason for selecting "never" from information to . please write it in the box here [ ].

appendix . estimation results by periods.

answer format of the cb questionnaire. columns: information / information / information ; rows: after a year ( weeks), after nine months ( weeks), after six months ( weeks), after three months ( weeks), after a month ( weeks), after a week, never.
a: only the descriptions provided in the brackets were shown for the type b group.
b: x, yen = { (special price for campaign), , , , }.

references
econometric models for discrete choice analysis of travel and tourism demand
budget-constraint, temporal, and question-ordering effects in contingent valuation studies
restoring tourism destinations in crisis
crisis and post-crisis tourism destination recovery marketing strategies
measuring values of extra-market goods: are indirect measures biased?
avian influenza: economic and social impacts
an alternative interpretation of multiple bounded wtp data: certainty dependent payment card intervals
estimations of economic losses from the great east japan earthquake (higashi-nihon dai-shinsai niokeru higaigaku no suikei nitsuite)
on the effects of the great kumamoto earthquake in (heisei nen kumamoto-jisin no eikyou-sisan nitsuite)
why discounting doesn't work: a hotel pricing update
estimating price effects in an almost ideal demand model of outbound thai tourism to east asia
destination image as a mediator between perceived risks and revisit intention: a case of post-disaster japan
tourist motivation: an appraisal
a study of normative and informational social influences upon individual judgment
recovery marketing: what to do after a natural disaster. the cornell hotel and restaurant administration quarterly
tourism economics and policy (aspects of tourism texts)
determinants of international tourism: a three dimensional panel data analysis
multiple bounded uncertainty choice data as probabilistic intentions
towards a framework for tourism disaster management
the recreational value of lake mckenzie, fraser island: an application of the travel cost method
managing heritage tourism
modeling tourist arrivals using time series analysis: evidence from australia
we love paris campaign
valuing environmental and natural resources: the econometrics of non-market valuation
ordering effects in contingent valuation surveys. environmental and resource economics
earthquake devastation and recovery in tourism: the taiwan case
chapter section tourism trends in japan
the economic theory and measurement of environmental benefits
current status and issues on tourism in tohoku area: toward one trillion tourism industry market (tohoku-tiiki niokeru kanko no genjou to kadai: kankousangyou no icchouenka wo mezashite). bank of japan reports and research papers
kyushu-hukkou-wari
estimating the impact of avian flu on international tourism demand using panel data
assessing impacts of sars and avian flu on international tourism demand to asia
total research on tourism activity in kyoto (kyoto-kanko-sougou-chosa)
annual statistics of kyoto prefecture in (heisei kyoto-hu-toukei-hakusho), (chapter ) population and household
influences of the great kumamoto earthquake for kyushu's economy
pricing policy in nature-based tourism
the perceived impact of risks on travel decisions
recreation economic decisions: comparing benefits and costs
earthquake effects on tourism in central italy
status on the highly pathogenic bird flu in (heisei nen niokeru koubyougensei-tori-infuruenza no kakuninjoukyou)
critical events in miyazaki prefecture (miyazaki-ken no ugoki kikijishou). miyazaki no ugoki
tourism and disaster planning. geographical review
assessing the impacts of the global economic crisis and swine flu on inbound tourism demand in the united kingdom
combining multiple revealed and stated preference data sources: a recreation demand application
estimation and welfare calculations in a generalized corner solution model with an application to recreation demand
a framework for knowledge-based crisis management in the hospitality and tourism industry
crisis and disaster management for tourism (aspects of tourism)
perceived travel risks regarding terrorism and disease: the case of thailand
tourism demand modelling and forecasting: how should demand be measured?
tourism demand modelling and forecasting: modern econometric approaches
national survey at . website of the ministry of internal affairs and communications
the impact of crisis events and macroeconomic activity on taiwan's international inbound tourism demand. tourism management
estimating individual valuation distributions with multiple bounded discrete choice data
environmental risk and averting behavior: predictive validity of jointly estimated revealed and stated behavior data
valuing beach access and width with revealed and stated preference data
consumption benefits of national hockey league game trips estimated from revealed and stated preference demand data

dr tadahiro okuyama completed his phd in economics at tohoku university, and is an associate professor at regional design and development.
this work was supported by jsps kakenhi grant number jp k . the author thanks the participants at the workshop of economics and regional studies in at ryukoku university and the referees for their advice on improving this study.

appendix . explanations and questionnaires on contingent behavior (original sentences were written in japanese)
. no problem
. i feel anxiety*
. i cannot imagine.
*respondents who select alternative were assigned for ; for others.
we present the architecture we defined in order to support highly relevant semantic management and to provide adaptive services such as statistical information extraction technique for document summarization. in addition, this paper also carries out a case study on disease dispersion domain using the proposed framework. this paper gives an overview of a generic architecture we are currently building as part of the semantic tracking project in cooperation with the fao aos project [ ] . initiated by fao and ku, the semantic tracking project aims at providing a wide area collaborative semantic tracking portal for monitoring important events related to agriculture and environment, such as disease dispersion, flooding, or dryness. this implies to deal with any kind of multilingual internet news and other online articles (e.g. wiki-like knowledge and web logs); it describes the world around us rapidly by talking about the update events, states of affairs, knowledge, people and experts who participate in. therefore, the semantic tracking project targets to provide adaptive services to large group of users (e.g. operator, decision makers), depending on all the knowledge we have about the environment (users themselves, communities they are involved in, and device he's using). this vision requires defining an advanced model for the classification, the evaluation, and the distribution of multilingual semantic resources. our approach fully relies on state of the art knowledge management strategies. we define a global collaborative architecture that allows us to handle resources from the gathering to the dissemination. however, sources of these data are scattered across several locations and web sites with heterogeneous formats that offer a large volume of unstructured information. moreover, the needed knowledge was too difficult to find since the traditional search engines return ranked retrieval lists that offer little or no information on semantic relationships among those scattered information, and, even if it was found, the located information often overload since there was no content digestion. accordingly, the automatic extraction of information expressions, especially the spatial and temporal information of the events, in natural language text with question answering system has become more obvious as a system that strives for moving beyond information retrieval and simple database query. however, one major problem that needs to be solved is the recognition of events which attempts to capture the richness of event-related information with their temporal and spatial information from unstructured text. various advanced technologies including name entities recognition and related information extraction, which need natural language processing techniques, and other information technologies, such as geomedia processing, are utilized part of emerging methodologies for information extraction and aggregation with problem-solving solutions (e.g. "know-how" from livestock experts from countries with experiences in handling bird flu situation). furthermore, ontological topic maps are used for organizing related knowledge. in this paper, we present our proposal aiming to integrate and organize the data/information resources dispersed across web resources in a manner that makes them useful for tracking events such as natural disaster, and disease dispersion. 
the remainder of this paper is structured as follows: section describes the key issues in information tracking as nontrivial problems; in section we introduce the framework architecture and its related many-sorted algebra. section gives more details of the system process regarding the information extraction module. section discusses the personalized services (e.g. knowledge retrieval service and visualization service) provided for collaborative environments. finally, in section , we conclude and give some forthcoming issues. collecting and extracting data from the internet have two main nontrivial problems: overload and scattered information, and salient information and semantic extraction from unstructured text. many experiences [ , , and ] have been done regarding event tracking or special areas or areas related to events monitoring (e.g. the best practice for governments to handle bird flu situation), the collection of important events and their related information (e.g. virus transmission from one area to other locations and from livestock to humans). firstly, target data used for semantic extraction are organized and processed to convey understanding, experience, accumulated learning, and expertise. however, sources of these data are scattered across several locations and websites with heterogeneous formats. for example, the information about bird flu consisting of policy for controlling the events, disease infection management, and outbreak situation may appear in different websites as shown in fig. . egypt -update february the egyptian ministry of health and population has confirmed the country's th death from h n avian influenza. the -year-old female whose infection was announced on february, died today. consequently, collecting required information from scattered resources is very difficult since the semantic relations among those resources are not directly stated. although it is possible to gather those information, the collected information often overload since there is no content digestion. accordingly, solving those problems manually is impossible. it will consume a lot of time and cpu power. the system that can collect, extract and organize those information according to contextual dimensions automatically, is our research goal for knowledge construction and organization. secondly, only salient information must be extracted to reduce time consumption for users to consume the information. in many case, most of salient information (e.g. time of the event, location that event occurred, the detail of the event) are left implicitly in the texts. for example: in the text in fig. , the time expression " february" mentioned only "date and month" of the bird flu event but did not mention the 'year'. the patient and her condition (i.e. ' -year-old female', and 'died') was caused by bird flu which is written in the text as 'avian influenza' and 'h n avian influenza'. accordingly, the essential component of computational model for event information capturing is the recognition of interested entities including time expression, such as 'yesterday', 'last monday', and 'two days before', which becomes an important part in the development of more robust intelligent information system for event tracking. 
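to make the time-expression problem concrete, the toy sketch below (written in r for consistency with the other examples in this document) flags a few english relative time expressions with regular expressions and resolves the unambiguous ones against the document's publication date; it is an illustration only, not the project's extractor, which targets thai text and relies on the nlp components described below. the pattern names, rules and the example date are assumptions.

```r
# illustrative only: pattern names, rules and the example date are assumptions
resolve_time_expressions <- function(text, doc_date) {
  doc_date <- as.Date(doc_date)
  patterns <- c(yesterday   = "\\byesterday\\b",
                today       = "\\btoday\\b",
                last_monday = "\\blast monday\\b",
                days_before = "\\b(one|two|three|[0-9]+) days? before\\b")
  hits <- lapply(names(patterns), function(p) {
    m <- regmatches(text, gregexpr(patterns[[p]], text, ignore.case = TRUE))[[1]]
    if (length(m) == 0) NULL else data.frame(expression = m, type = p)
  })
  out <- do.call(rbind, hits)
  if (is.null(out)) return(out)
  # resolve only the unambiguous cases; 'last monday' and 'n days before' need an
  # explicit anchor event, so they are left unresolved here
  out$resolved <- ifelse(out$type == "yesterday", as.character(doc_date - 1),
                  ifelse(out$type == "today",     as.character(doc_date), NA))
  out
}

# resolve_time_expressions("the patient died yesterday, two days before the alert.",
#                          doc_date = "2007-02-20")   # hypothetical document date
```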
information extraction in the traditional way processes a set of related entities in a slot-and-filler format, but the description of information in thai text, such as locations, the patient's condition, and time expressions, cannot be limited to a set of related entities because of the problem of zero anaphora [ ] . moreover, to activate the frame for filling in the information, named entity classification must be robust, as has been shown in [ ] .

in this section, we give an overview of the modeling we are providing. preliminary parts of our framework have previously been introduced to the natural language processing and database community [ ] . in the following, we present our p p framework and the related many-sorted algebra modeling. let us first introduce our design approach for an ontological topic map for event semantic tracking. the ontological topic map [ ] helps to establish a standardized, formally and coherently defined classification for event tracking. one of our current focuses and challenges is to develop a comprehensive ontology that defines the terminology set, data structures, and operations for semantic tracking and monitoring in the field of agriculture and environment. the semantic tracking algebra is a formal and executable instantiation of the resulting event tracking ontology. our algebra has to achieve two tasks: ( ) first, it serves as a knowledge layer between the users (e.g. agriculture experts) and the system administration (e.g. it scientists and researchers).

let us recall the notion of a many-sorted algebra [ ] . such an algebra consists of several sets of values and a set of operations (functions) between these sets. our semantic tracking algebra is a domain-specific many-sorted algebra incorporating a type system for agriculture and environment data. it consists of two sets of symbols called sorts (e.g. topic, rss postings) and operators (e.g. tm_transcribe, semantic_similarity); the function sections constitute the signature of the algebra. its sorts, operators, sets, and functions are derived from our agriculture ontology. second-order signature [ ] is based on two coupled many-sorted signatures, where the top-level signature provides kinds (sets of types) as sorts (e.g. data, resource, semantic_data) and type constructors as operators (e.g. set). to illustrate the approach, we assume the following simplified many-sorted algebra:

kinds: data, resource, semantic_data, topic_maps, set

type constructors:
  -> data            topic
  -> resource        rss, htm                  // resource document types
  -> semantic_data   lsi_sm, rss_sm, htm_sm    // semantic and metadata vectors
  -> tm              tm (topic maps)

unary operations:
  ∀ resource in resource.   resource → sm: semantic_data, tm      tm_transcribe
  ∀ sm in semantic_data.    sm → set(tm)                          semantic_similarity

the notation sm: semantic_data is to be read as "some type sm in semantic_data" and means that a typing mapping is associated with the tm_transcribe operator. each operator determines its result type within the kind semantic_data, depending on the given operand resource types. the semantic merging operation takes two or more operands that are all topic maps values. the select operation takes an operand of type set(tm) and a predicate of type topic and returns the subset of the operand set fulfilling the predicate. from the implementation point of view, the resource algebra is an extensible library package providing a collection of resource data types and operations for agriculture and environment resource computation.
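to make the signature above more concrete, the following is a minimal sketch of how the sorts and operators could be represented programmatically. the class and function names (resource, semantic_data, tm, tm_transcribe, semantic_similarity) follow the paper's terminology, but the bodies are placeholders and simplifications, not the project's implementation.

```python
from dataclasses import dataclass, field

# sorts of the simplified algebra
@dataclass
class Resource:            # kind resource: rss, htm documents
    uri: str
    kind: str              # 'rss' or 'htm'
    text: str

@dataclass
class SemanticData:        # kind semantic_data: lsi_sm, rss_sm, htm_sm vectors
    vector: dict[str, float]

@dataclass
class TopicMap:            # kind tm
    topics: set[str] = field(default_factory=set)
    associations: list[tuple[str, str, str]] = field(default_factory=list)

# operators of the signature (placeholder bodies)
def tm_transcribe(res: Resource) -> tuple[SemanticData, TopicMap]:
    """resource -> (semantic_data, tm): derive a semantic/metadata vector
    and a topic map fragment from one resource."""
    terms = res.text.lower().split()
    if not terms:
        return SemanticData({}), TopicMap()
    sm = SemanticData({t: terms.count(t) / len(terms) for t in set(terms)})
    tm = TopicMap(topics=set(sm.vector))        # naive: one topic per term
    return sm, tm

def semantic_similarity(sm: SemanticData, maps: list[TopicMap]) -> list[TopicMap]:
    """sm -> set(tm): topic maps whose topics overlap the semantic vector
    (returned as a list here for simplicity)."""
    return [tm for tm in maps if set(sm.vector) & tm.topics]
```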
the major research challenge will be the formalization and standardization of cultural resource data types and semantic operations through iso standardization.

as shown in fig. , the proposed framework consists of six main services. the detail of each service is outlined as follows. to generate useful knowledge from the collected documents, two important modules, information extraction and knowledge extraction, are utilized. ontological topic maps and domain-related ontologies defined in owl [ ] are used as a knowledge base to facilitate the knowledge construction and storage process, as shown in garshol's review [ ] . the standard iso/iec topic maps (iso ) facilitates knowledge interoperability and composition. the information extraction and integration module is responsible for summarizing each document into a predefined frame-like, structured database. the knowledge extraction and generalization module is responsible for extracting useful knowledge (e.g. the general symptoms of a disease) from the collected documents. latent semantic analysis will be applied to find new knowledge or relationships that are not explicitly stored in the knowledge repository. language engineering and knowledge engineering techniques are the key methods for building the target platform. for language engineering, word segmentation [ ] , named entity recognition [ ] , shallow parsing [ ] , shallow anaphora resolution, and discourse processing [ , , and ] have been used. for knowledge engineering, ontological engineering, task-oriented ontology, ontology maintenance [ ] , and the topic maps [ ] model have been applied.

the information, both unstructured and semi-structured documents, is gathered from many sources. a periodic web crawler and an html parser [ ] are used to collect and organize related information. a domain-specific parser [ ] is used to extract and generate metadata (e.g. title, author, and date) for interoperability between disparate and distributed information. the output of this stage is stored in the document warehouse. to organize the information scattered across several locations and websites, textual semantics extraction [ ] is used to create semantic metadata for each document stored in the document warehouse. guided by domain-based ontologies associated with reasoning processes [ ] and the ontological topic map, the extraction process can be thought of as a process of assigning a topic to the considered documents or extracting contextual metadata from documents following xiao's approach [ ] .

knowledge retrieval service: this module is responsible for creating responses to users' queries. query processing based on tmql-like requests is used to interact with the knowledge management layer.

knowledge visualization: after obtaining all required information from the previous module, the last step is to provide the means to help users consume that information in an efficient way. to do this, many visualization functions are provided. for example, spatial visualization can be used to visualize the information extracted by the information extraction module, and graph-based visualization can be used to display the hierarchical categorization in the topic maps in an interactive way [ ] . due to page limitations, this paper focuses only on the information extraction module, the knowledge retrieval service module, and the knowledge visualization service module.
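the gathering stage described above (periodic crawler, html parsing, metadata extraction, storage in the document warehouse) can be illustrated with a small sketch. the function names and the in-memory "warehouse" are assumptions made for the example, and the requests/beautifulsoup libraries stand in for whichever crawler and parser the project actually uses.

```python
import time
import requests
from bs4 import BeautifulSoup

DOCUMENT_WAREHOUSE = []          # stand-in for the document warehouse store

def extract_metadata(html: str, url: str) -> dict:
    """Domain-independent part of the metadata extraction step:
    pull title/author/date-like fields out of the HTML head."""
    soup = BeautifulSoup(html, 'html.parser')
    meta = {'url': url,
            'title': soup.title.get_text(strip=True) if soup.title else None,
            'text': soup.get_text(' ', strip=True)}
    for name in ('author', 'date', 'description'):
        tag = soup.find('meta', attrs={'name': name})
        if tag and tag.get('content'):
            meta[name] = tag['content']
    return meta

def crawl_periodically(seed_urls: list[str], interval_seconds: int = 3600):
    """Periodic crawler: fetch each seed, extract metadata, and append
    the result to the warehouse. (No politeness/robots handling shown.)"""
    while True:
        for url in seed_urls:
            try:
                resp = requests.get(url, timeout=10)
                resp.raise_for_status()
                DOCUMENT_WAREHOUSE.append(extract_metadata(resp.text, url))
            except requests.RequestException:
                continue            # skip unreachable sources this round
        time.sleep(interval_seconds)
```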
the proposed model for extracting information from unstructured documents consists of three main components, namely entity recognition, relation extraction, and output generation, as illustrated in fig. . the entity recognition module is responsible for locating and classifying atomic elements in the text into predefined categories, such as the names of diseases, locations, and expressions of time. the relation extraction module is responsible for recognizing the relations between the entities recognized by the entity recognition module. the output of this step is a graph representing relations among entities, where a node in the graph represents an entity and a link between nodes represents the relationship between two entities. the output generation module is responsible for generating the n-tuple representing the extracted information from the relation graph. the details of each module are described as follows.

to recognize an entity in the text, the proposed system utilizes the work of h. chanlekha and a. kawtrakul [ ] , which extracts entities using maximum entropy [ ] , heuristic information, and a dictionary. the extraction process consists of three steps. firstly, candidate entity boundaries are generated by using heuristic rules, a dictionary, and word co-occurrence statistics. secondly, each generated candidate is tested against the probability distribution modeled with maximum entropy. the features used to model the probability distribution can be classified into four categories: word features, lexical features, dictionary features, and blank features, as described in [ ] . finally, undiscovered entities are extracted by matching the extracted entities against the rest of the document. the experiment with a , -word corpus ( , words for training and , words for testing) showed that the precision, recall and f-score of the proposed method are . %, . %, and . %, respectively.

to extract the relations amongst the extracted entities, the proposed system formulates the relation extraction problem as a classification problem. each pair of extracted entities is tested against the probability distribution modeled with maximum entropy to determine whether they are related or not. if they are related, the system creates an edge between the nodes representing those entities. the features used to model the probability distribution are based solely on the surface form of the words surrounding the considered entities; specifically, we use word n-grams and the location relative to the considered entities as features. the surrounding context is divided into three disjoint zones: prefix, infix, and suffix. the infix is further segmented into smaller chunks by limiting the number of words in each chunk. for example, to recognize the relation between victim and condition in the sentence "the [victim] whose [condition] was announced on ....", the prefix, infix, and suffix in this context are 'the', 'whose', and 'was announced on ....', respectively. to determine and assess the "best" n-gram parameter and number of words in each chunk, we conducted an experiment with documents ( documents for training and documents for testing). we varied the n-gram parameter from to and set the number of words in each chunk to , , , , , and . the result is illustrated in fig. . the evidence shows that the f-score is maximal when the n-gram is and the number of words in each chunk is . the precision, recall and f-score at the maximum f-score are . %, . %, and . %, respectively.
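the relation extraction step described above (surface n-gram features drawn from prefix, infix, and suffix zones, fed to a maximum entropy classifier) can be sketched as follows. the sketch uses scikit-learn's logistic regression as a stand-in for the maximum entropy model (the two are equivalent for this purpose); the feature names and helper functions are assumptions, not the authors' code.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def pair_features(tokens, i, j, n=2, chunk_size=3):
    """Surface features for a candidate entity pair at token positions
    i and j: n-grams drawn from the prefix, infix, and suffix zones,
    tagged with their zone and chunk index."""
    feats = {}
    zones = {'prefix': tokens[max(0, i - chunk_size):i],
             'infix':  tokens[i + 1:j],
             'suffix': tokens[j + 1:j + 1 + chunk_size]}
    for zone, words in zones.items():
        chunks = [words[k:k + chunk_size] for k in range(0, len(words), chunk_size)]
        for c_idx, chunk in enumerate(chunks):
            for k in range(len(chunk) - n + 1):
                gram = ' '.join(chunk[k:k + n])
                feats[f'{zone}:{c_idx}:{gram}'] = 1
    return feats

vectorizer = DictVectorizer()
maxent = LogisticRegression(max_iter=1000)   # logistic regression ~ maximum entropy

def train(feature_dicts, labels):
    """feature_dicts: list of pair_features(...) dicts; labels: 1 = related."""
    return maxent.fit(vectorizer.fit_transform(feature_dicts), labels)

def related_probability(feature_dict):
    """Probability that the candidate pair is related."""
    return maxent.predict_proba(vectorizer.transform([feature_dict]))[0, 1]
```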
after obtaining a graph representing the relations between extracted entities, the final step of information extraction is to transform the relation graph into the n-tuple representing the extracted information. heuristic information is employed to guide the transformation process. for example, to extract information about a disease outbreak (i.e. disease name, time, location, condition, and victim), the transformation process starts by analyzing the entities of type condition, since each n-tuple can contain only one piece of information about the condition. it then traverses the graph to obtain all entities that are related to the considered condition entity. after obtaining all related entities, the output n-tuple is generated by filtering all related entities using the constraints imposed by the properties of each slot. if a slot can contain only one entity, the entity with the maximum probability is chosen to fill the slot. in general, if a slot can contain up to n entities, the top-n entities are selected. in addition, if there is no entity to fill a required slot, the mode (most frequent value) of the entity for that slot is used instead. time expression normalization using a rule-based system and synonym resolution using the ontology are also performed in this step to generalize the output n-tuple. an example of the input and output of the system is illustrated in fig. .

distributed adaptive and automated services require exploiting all the environmental knowledge stored in the ontological topic maps that is available about the elements involved in the processes [ ] . an important category of this knowledge is related to devices' states; indeed, knowing whether a device is on, in sleep mode, or off, whether its battery still has an autonomy of five minutes or four days, or whether it has a wired or wireless connection, etc., helps adapt the services that can be delivered to this device. for each device, we consider a state control that is part of the device's profile. then, of course, we use the information contained in communities' and users' profiles. personalized services rely on user-related contexts such as localization, birth date, language abilities, professional activities, hobbies, community involvement, etc., which give the system clues about users' expectations and abilities. in the remainder of this section, we present the two main adaptive services based on our model: the knowledge retrieval service and the knowledge visualization service.

the knowledge retrieval service module is responsible for interacting with the topic maps repository to generate answers to users' tmql-like queries [ ] . the framework currently supports three types of query. the detail of each query type is summarized in table .

the knowledge visualization service is responsible for representing the extracted information and knowledge in an efficient way. users need access to a concise organization of the knowledge. shneiderman in [ ] pointed out that "the visual information-seeking mantra is overview first, zoom and filter, then details on demand". in order to locate relevant information quickly and explore the semantically related structure, our flexible approach to two kinds of visualization (spatial-based and graph-based visualization) is described in the following. the spatial-based visualization functions help users to visualize the extracted information (e.g. the bird flu outbreak situation extracted in fig. ) using a web-based geographical information system, such as google earth.
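the slot-filling procedure just described (one tuple per condition entity, probability-ranked candidates per slot, top-n selection, and a most-frequent-value fallback) can be sketched as below. the graph representation, slot names, and helper functions are assumptions chosen for the example; the outbreak frame follows the slots named in the text.

```python
from collections import defaultdict
from statistics import mode

# slot constraints for the outbreak frame: maximum entities per slot
SLOT_LIMITS = {'disease': 1, 'time': 1, 'location': 1, 'victim': 1, 'condition': 1}

def fill_outbreak_tuples(graph, slot_history):
    """graph = {'nodes': {id: {'type', 'text', 'prob'}}, 'edges': {(id, id)}}.
    slot_history: previously extracted values per slot, used for the
    mode (most frequent value) fallback when a required slot stays empty."""
    nodes, edges = graph['nodes'], graph['edges']
    tuples = []
    for cond_id, cond in nodes.items():
        if cond['type'] != 'condition':
            continue                           # one tuple per condition entity
        related = defaultdict(list)
        for a, b in edges:                     # entities linked to this condition
            if cond_id in (a, b):
                other = b if a == cond_id else a
                related[nodes[other]['type']].append(nodes[other])
        frame = {'condition': cond['text']}
        for slot, limit in SLOT_LIMITS.items():
            if slot == 'condition':
                continue
            candidates = sorted(related.get(slot, []), key=lambda e: -e['prob'])
            if candidates:
                frame[slot] = [e['text'] for e in candidates[:limit]]
            elif slot_history.get(slot):
                frame[slot] = [mode(slot_history[slot])]   # most frequent fallback
            else:
                frame[slot] = []
        tuples.append(frame)
    return tuples
```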
this kind of visualization allows users to click on the map to get the outbreak situation of an area according to their requests. in addition, by viewing the information on the map, users can see the spatial relations amongst the outbreak situations more easily than without the map. a usage example of the google earth integrated system for visualizing the extracted information about the bird flu situation is shown in fig. .

related works. we agree that distributed knowledge management has to assume two principles [ ] related to classification: ( ) autonomy of classification for each knowledge management unit (such as a community), and ( ) coordination of these units in order to ensure global consistency. as a decentralized peer-to-peer knowledge management system, the swap platform [ ] is designed to enable knowledge sharing in a distributed environment. pinto et al. provide interesting support for updates and changes between peers. however, vocabularies in swap have to be harmonized, which implies some loss of knowledge consistency. even though we share the approach of an expandable core knowledge structure, the vocabulary in our case is common and fully shared by the community, so knowledge evaluation and comparison can be more effective. moreover, swap provides some kind of personalization (mainly of the user interface) but does not go as far as semantic tracking does. from our point of view, swap definitely lacks the environmental knowledge management that is required to perform advanced services. on the other hand, dbglobe [ ] is a service-oriented peer-to-peer system where mobile peers carrying data provide the base for services to be performed. its knowledge structure is quite similar to our project, as it uses metadata about devices, users, and data within profiles; moreover, communities are also focused on one semantic concept. dbglobe relies on axml [ ] in order to perform embedded calls to web services within xml. thus, it provides very good support for performing services but does not focus on users' and environments' knowledge in order to offer optimized adaptive services. described as a p p dbms, ambientdb [ ] relies on the concept of ambient intelligence, which is very similar to our vision of adaptive services with automatic cooperation between devices and personalization. however, although ambientdb uses the effective chord distributed hash table to index the metadata related to resources, it lacks the environmental knowledge management provided inside our project, which is necessary to achieve adaptive collaborative distribution and personalized query optimization. the extraction framework described in this paper is closely related to promed-plus [ ] , a system for the automatic extraction of "facts" from plain-text reports about outbreaks of infectious epidemics around the world into a database, and mitap [ ] , a prototype sars detecting, monitoring, and analyzing system. the difference between our framework and those systems is that we also emphasize generating the semantic relations among the collected resources and organizing that information using the topic map model. the proposed information extraction model, which formulates the relation extraction problem as a classification problem, is motivated by the work of j. suzuki et al. [ ] . this innovative work proposed an hdag kernel that solves many problems in natural language processing. the use of classification methods in information extraction is not new.
intuitively, one can view the information extraction problem as a problem of classifying a fragment of text into a predefined category which results in a simple information extraction system such as a system for extracting information from job advertisements [ ] and business cards [ ] . however, those techniques require the assumption that there should be only one set of information in each document, while our model could support more than one set of information. as communities generate increasing amounts of transactions and deal with fast growing data, it is very important to provide new strategies for their collaborative management of knowledge. in this paper, we presented and described our proposal regarding information modeling for adaptive semantic management which aims at extracting information and knowledge from unstructured documents that spread throughout the internet by emphasizing on information extraction technique, event tracking and knowledge organizing. we first motivated the need for such modeling in order to provide personalized services to users who are involved in semantic tracking communities. the motivation for this work is definitely to improve user's access to semantic information and to reach high satisfaction levels for decision making. then, we gave an overview of our approach's algebra with its operators, focusing on update and consistency policies. we finally proposed and defined adaptive services that enable collaborative project to automatically dispatch semantic and to make the query results more relevant. this challenging work needs more complicate natural language processing with deeply semantic relations interpretation. know-what: a development of object property extraction from thai texts and query system a maximum entropy approach to natural language processing atomicity for p p based xml repositories the role of classification(s) in distributed knowledge management thai named entity extraction by incorporating maximum entropy model with simple heuristic information elementary discourse unit segmentation for thai using discourse cue and syntactic information mitap for sars detection owl web ontology language reference. w c recommendation ambientdb: p p data management middleware for ambient intelligence living with topic maps and rdf centering: a framework for modeling the local coherence of discourse proceedings of the th international conference on very large data bases. 
very large data bases second-order signature: a tool for specifying data models, query processing, and optimization a framework of nlp based information tracking and related knowledge organizing with topic maps automatic thai ontology construction and maintenance system a unified framework for automatic metadata extraction from electronic document know-what: a development of object property extraction from thai texts and query system information extraction by text classification profile-based event tracking event recognition with fragmented object tracks, icpr application framework based on topic maps a flexible ontology reasoning architecture for the semantic web on data management in pervasive computing environments ontoedit empowering swap: a case study in supporting distributed, loosely-controlled and evolving engineering of ontologies (diligent) dbglobe: a serviceoriented p p system for global computing topic management in spatial-temporal multimedia blog bootstrap cleaning and quality control for thai tree bank construction the eyes have it: a task by data type taxonomy for information visualizations kernels for structured natural language data thai word segmentation based on global and local unsupervised learning know-who: person information from web mining topic map query language (tmql) event recognition on news stories and semi-automatic population of an ontology, wi using categorial context-shoiq(d) dl to integrate context-aware web ontology metadata information extraction from epidemiological reports information extraction by text classification: corpus mining for features the work described in this paper has been supported by the grant of national electronics and computer technology center (nectec) no. nt-b- - - - - , under the project "a development of information and knowledge extraction from unstructured thai document". key: cord- -lx krl v authors: domínguez-salas, sara; gómez-salgado, juan; andrés-villas, montserrat; díaz-milanés, diego; romero-martín, macarena; ruiz-frutos, carlos title: psycho-emotional approach to the psychological distress related to the covid- pandemic in spain: a cross-sectional observational study date: - - journal: healthcare (basel) doi: . /healthcare sha: doc_id: cord_uid: lx krl v anxiety, depression, and stress are common and expected reactions to the coronavirus disease (covid- ) pandemic. the objective of this study is to analyze psychological distress in a sample of spanish population, identifying the predictive nature of the information received, the preventive measures taken, level of concern, beliefs, and knowledge about the infection. a cross-sectional observational study was conducted on a sample of participants. data were collected through a self-prepared questionnaire and the general health questionnaire (ghq- ). bivariate analyses and logistic regressions were performed. of the total participants, . % presented psychological distress. the study population actively sought information about coronavirus, expressed a high level of concern and knowledge, and the most frequent preventive behavior was hand washing. as predictive factors, the degree of concern for covid- was identified (odds ratio (or) = . , % confidence interval (ci) = [ . , . ]), the number of hours spent consulting information on covid- (or = . , % ci = [ . , . ]), or the need for psychological support (or = . , % ci = [ . , . ]), among others. 
these results could help design more effective strategies towards a psycho-emotional approach for the population when in similar health crisis situations. there is a need for interventions aimed at the psychological well-being of the population that meet the needs of their reality. the world health organization (who), on march, classified the health crisis triggered by coronavirus disease as the pandemic in the face of , reported cases and deaths in countries [ ] . in spain, the state of health alert was declared on march [ ] , most previous studies look at beliefs about the disease and protection and transmission measures by analyzing their relationship with protective behaviors [ , , , ] and, more rarely, this relationship is assessed regarding the psychological effects of an epidemic. because of all the above, when approaching the current pandemic situation by covid- , the information, knowledge, beliefs and concerns of the population should be taken into account given their influence on both the psychological and emotional impact this situation has on the population and on preventive behaviors. the objective of this study was to analyze psychological distress on a sample of the spanish population during the beginning of the contagion curve in the covid- pandemic, identifying the predictive nature that the information received, the preventive measures taken, the level of concern for transmitting the infection or being infected, the beliefs and the level of knowledge about the infection may have on psychological distress. cross-sectional observational study. this study initially included a total of participants. in order to participate, it was necessary to comply with the following conditions: (i) living in spain during the pandemic; (ii) being years of age or older; and (iii) accepting the informed consent. a strict selection criterion was adopted, eliminating all questionnaires with an answer percentage of less than % ( questionnaires), leaving questionnaires in the final sample. questionnaires were received from the spanish provinces and the small autonomous cities located in north africa. a specific questionnaire was developed for data collection, which included socio-demographic data, information received, prevention measures, beliefs, concerns, and population's knowledge about covid- . questions from similar previous studies [ ] were adapted and new ones were added to meet the objectives of the study and cover the characteristics of the population. as sociodemographic data, the variables collected were age, sex, level of studies, marital status, people with which they cohabited, and employment situation. the information received was assessed by evaluating the number of sources of information and the hours spent listening, reading, or watching news about the pandemic per day. items evaluating the accessibility, quantity, quality, and usefulness of information received through the media and official channels were included, with five categorized response options from very bad to very good. questions about the amount of information received on symptoms, prognosis, treatments, routes of transmission, and preventive measures were added. a dichotomous response question (yes/no) was included to assess whether the person contrasted the information received with official sources. 
prevention measures were evaluated through questions with five answer options categorized from never to always regarding how often the following behaviors were performed: covering your mouth using your elbow when coughing or sneezing; avoiding sharing utensils (e.g., fork) during meals; washing hands with soap and water; washing hands with hydro-alcoholic solution; washing hands immediately after coughing, touching your nose or sneezing; washing hands after touching potentially contaminated objects; wearing a mask regardless of the presence or absence of symptoms; leaving at least a meter and a half of separation from others. beliefs and concerns about covid- were assessed through likert-type answer questions from to , a higher score meaning higher agreement. to assess participants' knowledge, five basic questions on knowledge about covid- regarding its transmission, symptoms, and prevention measures were included with "yes", "no", and "i don't know" as answer options. the questionnaire was pre-piloted by a panel of experts formed by psychologists, occupational doctors and nurses, epidemiologists, and public health experts. subsequently, a piloting was carried out in which people from different professions, educational levels, sex, age, and geographical areas of spain participated. no comprehension issues or relevant incidents were identified. psychological adjustment was measured by the general health questionnaire (ghq- ) [ ] , a tool used to assess mental health and psychological well-being. this consists of items with four answer options; the first two are assigned a score of points, and the last two are assigned a score of point, so the total score ranges from to . the questionnaire has been adapted and validated for the spanish population, obtaining good internal consistency (cronbach's alpha coefficient of . ) and good psychometric properties [ ] . the cut-off point set for the general population was three, considering psychological distress those with scores greater than or equal to [ ] . cronbach's alpha amounted to . . data were collected through an online questionnaire, the qualtrics ® survey and storage platform. in this way, the confinement measures established during the pandemic did not interfere with the data collection process. for the sampling, the snowball method was chosen, involving professional colleges and associations, universities, and scientific societies in the process of disseminating the information, as well as through social networks and press. the questionnaires were collected between march and april. the health alert was decreed in spain thirteen days before the start of the study. the analyses were performed using the spss statistical software ( . ) (ibm, armonk, ny, usa). the presence or absence of psychological distress was assessed for each independent variable (information received, preventive measures taken, level of concern about transmitting the infection or getting infected, beliefs, and level of knowledge about the infection). subsequently, bivariate analyses were performed, including chi-squared test and student's t-test for the independent variables, depending on their type. crammer's v and cohen's d effect size indexes were also calculated with the following cut-off points: to . , negligible; . to . , small; . to . , medium; from . on, high. 
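the ghq- scoring rule and the effect-size measures described above can be written directly in code. the sketch below assumes item responses coded 0-3 and uses standard formulas for cohen's d (pooled standard deviation) and cramér's v; it is illustrative only and is not the spss procedure the authors used.

```python
import numpy as np
from scipy.stats import chi2_contingency

def ghq12_score(item_responses):
    """Bimodal GHQ-12 scoring: the first two response options score 0,
    the last two score 1, so totals range from 0 to 12.
    Assumes responses coded 0-3."""
    return sum(0 if r in (0, 1) else 1 for r in item_responses)

def has_distress(item_responses, cutoff=3):
    """Scores at or above the cut-off are classed as psychological distress."""
    return ghq12_score(item_responses) >= cutoff

def cohens_d(group_a, group_b):
    """Effect size for the t-test comparisons (pooled standard deviation)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                     / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled

def cramers_v(contingency_table):
    """Effect size for the chi-squared comparisons."""
    table = np.asarray(contingency_table, float)
    chi2 = chi2_contingency(table)[0]
    return np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))
```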
then, with the aim of studying the predictive ability for psychological distress of the different sets of variables, logistic regression analyses (controlled by sex and age) were carried out including variables with p value < . . finally, variables that manifested to have a predictive nature in each of the models were included in a global model (model ). odds ratios (ors) were calculated with a % confidence interval. the ethical principles set out in the helsinki declaration have been followed. the permission of the participants was obtained through an informed consent in which they expressed their voluntary desire to participate in the study. data were recorded anonymously and treated confidentially. the study was authorized by the research ethics committee of huelva, belonging to the andalusian ministry of health (pi / ). this study is integrated into a larger investigation that includes other variables on the psychological impact of the covid- pandemic on the general population and on healthcare professionals. some of the results that differ from the present study have already been published [ ] . the description of sociodemographic data is shown in table . the sample consisted of a greater number of women ( . %), most with university or higher education level ( . %), married ( . %), and a mean age of . . most of them were working away from home ( . %), . % at home via teleworking, and . % were not working. data on information received on covid- and its sources were analyzed. participants were identified as consulting a mean of . (sd = . ) different sources of information, being social networks the most widely used ( . %), followed by television ( . %), official bodies or scientific societies websites ( . %), friends or family ( . %), online or printed press ( . %), google or other search engines ( . %), radio ( . %), and official phone numbers or information apps ( . %). the results showed no statistically significant differences between this variable and the presence of psychological distress (t = . , p = . , cohen's d = . ). regarding the number of hours spent seeking information on covid- , the results were higher in the group that presented psychological distress, as compared to the group that did not present it (m = . , sd = . , and m = . , sd = . , respectively). statistically significant differences were found between both groups (t = . , p ≤ . , cohen's d = . , small effect size). taking into account the assessment made by the participants on the information provided by the media, participants with psychological distress rated the information provided by the media as more accessible (m (table ) . when analyzing the frequency of use of the recommended preventive measures (table ) , the most common ones reported by participants have been washing hands with soap and water (m = . , sd = . ), washing hands after touching potentially contaminated objects (m = . , sd = . ), leaving at least a meter and a half of separation from others (m = . , sd = . ), and avoiding sharing utensils during meals (m = . , sd = . ). the last most commonly adopted measure was "wearing a mask regardless of the presence or absence of symptoms" (m = . , sd = . ). statistically significant differences were found in terms of the use of preventive measures and the development of psychological distress. seven of the eight measures showed significant differences (p = . in all cases), with effect sizes ranging from negligible to small. 
in each of them, the mean score obtained was higher in the group of subjects who presented psychological distress ( table ). the only exception was in the preventive measure "leaving at least a meter and a half of separation from others", where this group of participants obtained a lower mean score (m = . , sd = . ), as compared to the group which did not present psychological distress (m = . , sd = . ). in response to concerns about covid- (table ), participants expressed that being a transmitter of the infection was their main concern (m = . , sd = . ), followed by the degree of general concern about covid- (m = . , sd = . ), and the degree of concern about becoming infected was in the last place (m = . , sd = . ). the results showed statistically significant differences between both groups of subjects for all the variables (p < . in all cases), with small effect sizes. in this regard, the group of patients with psychological distress had higher scores (m = . information on the relationship between beliefs and knowledge about covid- and the presence of psychological distress is presented in table . in view of the participants' beliefs on covid- , those who presented a higher score have been related with the need to provide a psychological support service to both the persons and families similarly, when subjects were asked whether they felt it necessary to offer psychological support to professionals and volunteers who are directly involved in the health crisis, to individuals and families affected by covid- , as well as to the general population, the group with psychological distress showed significantly higher scores (table ) . finally, most participants showed a high level of knowledge about covid- . thus, most correctly answered questions were related with the need to isolate infected people ( . %), transmission routes ( . %), the incubation period ( . %), and on the infective capacity of asymptomatic people ( . %). however, only . % correctly answered questions related to the symptoms of the virus. no statistically significant association was found between any variables on the level of knowledge about covid- and the presence of psychological distress (p > . in all cases). logistic regression models are displayed in table . degree of concern about covid- model , which is related with concerns about covid- , showed a predictive ability of %, higher than the previous models (χ = . , p < . ), correctly classifying . % of participants ( . % sensitivity and . % specificity). those subjects with a higher degree of concern about covid- were . times more likely to suffer psychological distress ( % ci = . , . ). similarly, participants with a higher degree of concern about becoming infected with the virus were . times more likely to develop psychological distress ( % ci = . , . ). with finally, model (global model), where variables that showed a predictive ability in previous models were included, presented a predictive ability of . %, correctly classifying . % of participants ( . % sensitivity and . % specificity). 
the variables that showed a predictive ability were sex, age, number of hours consulting information on covid- , assessment of the information provided by the media in terms of accessibility, assessment of the information available on the prognosis of the disease, washing hands with hydroalcoholic solution, degree of concern about covid- , degree of concern to become infected, belief about the likelihood of survival if infected, level of confidence in the diagnostic ability of the health system, risk of getting infected, the belief about the effectiveness of preventive measures, and the need to offer psychological support to the general population ( table ). the variables that showed a higher weight, with ors greater than , were being female (or = . , % ci = [ . , . ]), degree of concern about covid- (or = . , ci % = the results indicate that the population is actively looking for information on covid- . participants consulted several sources of information, with social media being the most common one. people with psychological distress spent more hours a day looking for information, and considered it more accessible, albeit of worse quality and usefulness. in addition, the information provided by the official channels in terms of quantity and usefulness was valued with lower scores. choosing the internet as the main source of information is consistent with the results of previous studies [ ] . the lack or inadequacy of the information has been identified as a stressor during this pandemic, which leads the population to find answers to their concerns [ ] . the internet is currently the leading source of information worldwide, and users approach it as the first means of communication and information for health-related issues [ ] . abd-alrazaq et al. analyzed the contents of the social network twitter that were related to covid- and identified the topics that most affected users: the origin of the virus; routes of transmission; impact on people and countries (death toll, stress and fear, travels, economy, and xenophobia), and risk and spread control measures [ ] . another similar analysis of the content on social networks related to covid- grouped the topics of interest into five categories: ( ) update of new cases and their impact; ( ) first-line reports on the epidemic and its prevention measures; ( ) expert opinions on the outbreaks of the infection; ( ) frontline health services; and ( ) global reach of the epidemic and identification of suspected cases [ ] . the concern and need for information of the population is reflected in the use of social networks. the study conducted by li et al. revealed that, following the outbreak of covid- , the expression of negative feelings on social media such as anxiety, depression or outrage increased significantly. users expressed greater concern for their health and that of their families, and less interest in leisure and friends [ ] . on the other hand, zhao et al. identified an evolution of the content on social networks from the beginning of the health crisis, being it from negative to neutral, and a progressive increase in the expression of positive emotions [ ] . the use of the internet as a source of health-related information also implies a risk. as cuan-baltazar et al. state, the quality of the information available on the internet on covid- does not meet the quality criteria and may lead to a worrying situation of misinformation to the non-healthcare related population who do not have criteria to discriminate [ ] . 
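the logistic regression models reported above adjust for sex and age and report odds ratios with 95% confidence intervals. a minimal sketch of that computation, using statsmodels and hypothetical column names for the survey data, is shown below; it illustrates the general procedure rather than the authors' exact model specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def odds_ratios(df: pd.DataFrame, predictors: list[str]) -> pd.DataFrame:
    """Fit a logistic regression for psychological distress (0/1),
    always adjusting for sex and age, and report ORs with 95% CIs."""
    formula = 'distress ~ sex + age + ' + ' + '.join(predictors)
    fit = smf.logit(formula, data=df).fit(disp=False)
    ci = fit.conf_int()                      # columns 0/1: lower/upper log-odds
    out = pd.DataFrame({'OR': np.exp(fit.params),
                        'CI_low': np.exp(ci[0]),
                        'CI_high': np.exp(ci[1]),
                        'p': fit.pvalues})
    return out.drop(index='Intercept')

# usage with hypothetical column names:
# odds_ratios(survey_df, ['concern_covid', 'hours_information', 'need_support'])
```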
a recent critical analysis of the contents of the websites that disclosed the preventive measures before covid- revealed that, in most cases, the information was ambiguous and not in line with who recommendations. less than half of participants reported on the proper use of masks and that the most correct information was provided by official bodies' websites [ ] . regarding adherence to preventive measures, the behavior that participants stated most often was hand washing. participants with psychological distress performed preventive measures more frequently than those without distress, except for leaving a meter and a half of separation from others. the high adhesion obtained to hand washing and respiratory hygiene measures is consistent with results from previous studies [ , [ ] [ ] [ ] . these measures are in line with who recommendations [ ] and are among the most suggested ones to deal with the covid- pandemic [ , ] . the practice of preventive measures was associated with the perception of risk of coronavirus infection [ ] . the results of this study coincide with wang et al. by identifying the flattering influence of psychological distress on preventive behaviors regarding the spread of covid- [ ] . in relation to depressive symptomatology, studies show that the implementation of more precautionary behaviors and greater social distance is associated with a higher level of anxiety [ , ] . still, authors like cowling et al. found that a lower use of hygiene measures and greater social distancing have been associated with increased anxiety [ ] . it seems clear that social distancing is related to the psychological impact, leading to greater symptomatology. what does not seem to be so clear is the role of individual protective measures, which may be mediated by other variables such as the perceived risk or vulnerability of getting infected. the results of the present study indicate a high level of public concern regarding covid- , especially for those participants who presented psychological distress. these results are supported by findings from similar studies that reveal a high public concern about the covid- pandemic [ , , [ ] [ ] [ ] , calling it terrifying [ ] . the cause of most concern among participants was the possibility of being a transmitter. however, in similar studies, the main concerns were the infection of a family member [ ] or getting infected with coronavirus [ , ] . according to cori et al., an individual's risk perception is modulated by four elements: voluntariness, knowledge, visibility, and trust; regarding the latter, the unknown risks are perceived as more threatening [ ] . however, wolf et al. identified that people with less health knowledge considered themselves less likely to get infected with coronavirus [ ] . the uncertainty expressed by the population to this new disease manifests itself with situations of anxiety, depression, and sleep disorders [ ] . people in confinement, as a measure of containment in the face of the spread of the covid- epidemic, reported having low sleep quality aggravated by anxiety and stress [ ] . faced with the situation of concern and uncertainty generated by the health crisis, studies have described the level of public confidence regarding the measures put in place by their governments. 
some authors identified that most participants felt that the country could win the battle against coronavirus [ ] , were satisfied with the epidemic control measures taken [ ] and were motivated to follow the government's recommendations on quarantine and social distancing [ ] . on the contrary, the study of wolf et al. revealed that half of the participants did not trust their government's ability to contain the covid- outbreak, and people with less health knowledge were more likely to rely on the government's actions [ ] . mcfadden's results point to health workers as the better able to lead the covid- pandemic response strategy, according to the population's assessment [ ] . in order to face the concerns about covid- , coping strategies such as focusing on the problem and seeking alternatives, receiving emotional support and positive assessment of the situation [ ] , and doing physical exercise are recommended [ ] . participants in this study showed a high level of knowledge about covid- , except for their symptoms. these results support those obtained in previous similar studies that describe a good degree of knowledge on the part of the population, albeit disparately. on the one hand, there are authors who reported that participants were generally aware of coronavirus [ ] , its symptoms [ , ] , routes of transmission [ , ] , and preventive measures [ , ] . on the other hand, some authors identified knowledge gaps related to symptoms [ ] and preventive measures [ ] . regarding university students, a good level of knowledge about the covid- pandemic and its preventive measures has been described, especially among students attending life sciences degree courses [ ] . knowledge of the covid- pandemic has been associated with willingness towards preventive measures and less confidence in the success of the fight against the virus [ ] . abdelhafiz et al. found that older people with low education, lower income, and living in rural areas tend to have less knowledge about the covid- pandemic [ ] . however, the profile of the person with little knowledge described by zhong et al. is young women with low level of education, who are unemployed or students [ ] . in this study, sex, accessibility to information, hours spent looking for information about coronavirus, degree of concern, belief of becoming infected, washing hands with hydroalcoholic solution, and perceived need for psychological help have been identified as factors with higher predictive weight of psychological distress. these results are in line with previous studies which have identified an association between female gender, negative affect, and detachment and higher levels of depression, anxiety, and stress [ ] . quarantine as a measure of containment has negative psychosocial consequences such as symptoms of depression, anxiety, anger, stress, post-traumatic stress, social isolation, loneliness, and stigmatization [ ] . psychological support interventions are needed to approach the situation, as the absence of psychological support is associated with higher levels of anxiety and depression [ ] . bäuerle et al. proposed a self-guided tool to promote psychological well-being based on mindfulness to reduce stress in the face of the covid- crisis, to enhance coping strategies, perceive self-effectiveness, and mobilize personal resources [ ] . several community care initiatives have been described, which have been managed by mental health professionals who act as counsellors and by volunteer staff. 
phone calls and apps provide support, advice, and training to address the psycho-emotional impact of the pandemic [ , ] . the cross-sectional observational design of the study can be considered a limitation as it offers a photograph of what is happening at a precise time and does not allow inferring that such levels of psychological distress occur equally throughout the pandemic period. however, being able to obtain data at the time of the rise of the contagion curve is precisely what gives greater value to the study. the sample collection was not randomized and there were more women than men, factors that were compensated with a large sample and a representation of all the provinces and autonomous cities. it is difficult to compare the results between countries because confinement measures or cessation of labor activities differ greatly among them. further study is planned to check for the effects at different stages of the pandemic. this study revealed a strong psychological impact on the population as a result of the covid- pandemic. the results describe a population profile that searches for information about the coronavirus by consulting various sources of information, although social media was the most widely used. with regard to adherence to preventive measures, the behavior that participants most often reported was hand washing and respiratory hygiene. the results of our study indicate that the population has a high level of concern and knowledge in relation to covid- , and this is especially true for those who presented psychological distress. logistic regression analyses, on the other hand, have shown an adequate adjustment for the most part and an explained variance that exceeds % in the global model, being sex, degree of concern about the virus, getting infected, accessibility to information, number of hours looking for information, hand washing with hydroalcoholic solution, amount of information available on the prognosis of the disease, beliefs about the risk of infection, or need for psychological care for the population, among others, the predictors with greatest weight for psychological distress. these results could help design more effective strategies for a psycho-emotional approach of the population in similar health crisis situations. interventions aimed at the psychological well-being of the population are necessary to meet the needs of their reality. who director-general's opening remarks at the media briefing on covid- - por el que se declara el estado de alarma para la gestión de la situación de crisis sanitaria ocasionada por el covid- (royal decree-law / , of march , declaring the state of alarm for the health crisis situation's management caused by covid- ) por el que se regula un permiso retribuido recuperable para las personas trabajadoras por cuenta ajena que no presten servicios esenciales, con el fin de reducir la movilidad de la población en el contexto de la lucha contra el covid- (royal decree-law / , of march , which regulates a recoverable paid leave for self-employed persons who do not provide essential services, in order to reduce population mobility in the fight against covid- context). available online enfermedad por coronavirus, covid- (scientific-technical information: coronavirus disease, covid- ). 
available online coronavirus/pdf/ _informe_cientifico_sanidad_covid- .pdf (accessed on the sars epidemic in hong kong consistent detection of novel coronavirus in saliva q&as on covid- and related health topics epidemiology, causes, clinical manifestation and diagnosis, prevention and control of coronavirus disease (covid- ) during the early outbreak period: a scoping review available online covid- and mental health: a review of the existing literature the factors affecting household transmission dynamics and community compliance with ebola control measures: a mixed-methods study in a rural village in sierra leone the experience of quarantine for individuals affected by sars in toronto avoidance behaviors and negative psychological responses in the general population in the initial stage of the h n pandemic in hong kong factors influencing compliance with quarantine in toronto during the sars outbreak ebola and healthcare worker stigma accepted monitoring or endured quarantine? ebola contacts' perceptions in senegal knowledge and risk perceptions of the ebola virus in the united states immediate psychological responses and associated factors during the initial stage of the coronavirus disease (covid- ) epidemic among the general population in china association of self-perceptions of aging, personal and family resources and loneliness with psychological distress during the lock-down period of covid- generalized anxiety disorder, depressive symptoms and sleep quality during covid- outbreak in china: a web-based cross-sectional survey a review of coronavirus disease- (covid- ) prediction for the spread of covid- in india and effectiveness of preventive measures lockdown contained the spread of novel coronavirus disease in huangshi city, china: early epidemiological findings effectiveness of the measures to flatten the epidemic curve of covid- . the case of spain the positive impact of lockdown in wuhan on containing the covid- outbreak in china adoption of preventive measures during and after the influenza a (h n ) virus pandemic peak in spain knowledge, attitudes and practices (kap) related to the pandemic (h n ) among chinese general population: a telephone survey association between knowledge of zika transmission and preventative measures among latinas of childbearing age in farm-working communities in south florida effect of knowledge and perceptions of risks on ebola-preventive behaviours in ghana community psychological and behavioral responses through the first wave of the influenza a(h n ) pandemic in hong kong public perceptions, anxiety, and behaviour change in relation to the swine flu outbreak: cross sectional telephone survey the validity of two versions of the ghq in the who study of mental illness in general health care propiedades psicométricas y valores normativos del general health questionnaire (ghq- ) en población general española the general health questionnaire related health factors of psychological distress during the covid- pandemic in spain the psychological impact of quarantine and how to reduce it: rapid review of the evidence misinformation of covid- on the internet: infodemiology study top concerns of tweeters during the covid- pandemic: infoveillance study chinese public's attention to the covid- epidemic on social media: observational descriptive study the impact of covid- epidemic declaration on psychological consequences: a study on active weibo users assessment of health information about covid- prevention on the internet: infodemiological study. 
jmir public health surveill. , , e long-term effect of the influenza a/h n pandemic: attitudes and preventive behaviours one year after the pandemic perceptions and behaviors related to hand hygiene for the prevention of h n influenza transmission among korean university students during the peak pandemic period the influenza a (h n ) pandemic in reunion island: knowledge, perceived risk and precautionary behaviour coronavirus diseases (covid- ) current status and future perspectives: a narrative review psychological and behavioral responses in south korea during the early stages of coronavirus disease (covid- ) demographic and attitudinal determinants of protective behaviours during a pandemic: a review awareness, attitudes, and actions related to covid- among adults with chronic conditions at the onset of the u.s. outbreak: a cross-sectional survey covid- and iranian medical students; a survey on their related-knowledge, preventive behaviors and risk perception study of knowledge, attitude, anxiety & perceived mental healthcare need in indian population during covid- pandemic the network investigation on knowledge, attitude and practice about covid- of the residents in anhui province risk perception and covid- social capital and sleep quality in individuals who self-isolated for days during the coronavirus disease (covid- ) outbreak in january in china knowledge, attitudes, and practices towards covid- among chinese residents during the rapid rise period of the covid- outbreak: a quick online cross-sectional survey perceptions of the adult us population regarding the novel coronavirus outbreak narrative synthesis of psychological and coping responses towards emerging infectious disease outbreaks in the general population: practical considerations for the covid- pandemic ensuring mental health care during the sars-cov- epidemic in france: a narrative review use of rapid online surveys to assess people's perceptions during infectious disease outbreaks: a cross-sectional survey on covid- understanding knowledge and behaviors related to covid- epidemic in italian undergraduate students: the epico study a nationwide survey of psychological distress among italian people during the covid- pandemic: immediate psychological responses and associated factors psychosocial impact of quarantine measures during serious coronavirus outbreaks: a rapid review comparison of prevalence and associated factors of anxiety and depression among people affected by versus people unaffected by quarantine during the covid- epidemic in southwestern china an e-mental health intervention to support burdened people in times of the covid- pandemic: cope it psychological crisis intervention response to the covid pandemic: a tunisian centralised protocol psychological assistance during the coronavirus disease outbreak in china this article is an open access article distributed under the terms and conditions of the creative commons attribution (cc by) license funding: this research received no external funding. the authors declare no conflicts of interest. key: cord- -vgs w b authors: ma, rongyang; deng, zhaohua; wu, manli title: effects of health information dissemination on user follows and likes during covid- outbreak in china: data and content analysis date: - - journal: int j environ res public health doi: . /ijerph sha: doc_id: cord_uid: vgs w b background: covid- has greatly attacked china, spreading in the whole world. 
articles were posted on many official wechat accounts to transmit health information about this pandemic. the public also sought related information via social media more frequently. however, little is known about what kinds of information satisfy the public better. this study aimed to explore the characteristics of health information dissemination that affected users' information behavior on wechat. methods: two-wave data were collected from the top wechat official accounts on the xigua website. the data included the change in the number of followers and the total number of likes on each account in a -day period, as well as the number of each type of article and of headlines about the coronavirus. these data were used to develop regression models and to conduct a content analysis in order to identify information characteristics in terms of quantity and content. results: for nonmedical institution accounts in the model, report and story types of articles had positive effects on users' following behaviors, and the number of headlines on the coronavirus positively impacted liking behaviors. for medical institution accounts, report and science types had a positive effect, too. in the content analysis, several common characteristics were identified. conclusions: characteristics in terms of the quantity and content of health information dissemination contribute to users' information behavior. in terms of the content of the headlines, coding and word frequency analysis showed that organizational structure, multimedia applications, and instructions - the common dimension in different articles - composed the common features of information that impacted users' liking behaviors.
since the outbreak of the novel coronavirus (covid- )-infected pneumonia (ncp) in december , it has quickly spread across the world. the world health organization declared the outbreak of covid- a global public health emergency. more than million cases have been confirmed as of june [ ]. covid- has attracted attention worldwide, and information and discussions about it have been spreading on the internet, especially on social media. the ubiquity and ease of access make social media a powerful complement to traditional methods of information dissemination [ ]. according to a report released by csm media research, . % of chinese people purposefully seek pandemic information online, and approximately . % use wechat more frequently during the outbreak than before it occurred [ ]. social media is widely used for disseminating health information [ ]. as one of the most popular social platforms in china, wechat has more than . billion monthly active users [ ] and has become a frequently used information dissemination platform [ ]. wechat contains a specific module called the wechat official account [ ], which is a platform operated by institutions, communities, or individuals. wechat official accounts are widely used to share stories, report news, or disseminate various types of information. information on these accounts can be posted by anyone, including experts, novices, and even saboteurs [ ]. wechat has changed the channels of health information dissemination and the ways of obtaining feedback [ ]. for instance, evidence-based clinical practice guidelines in medical fields are traditionally spread by publishing in peer-reviewed journals, sending emails or paper notices to physicians, and advertising through news media outlets [ ].
however, wechat official accounts now enable guidelines for covid- to be shared on the platform at the same time as they are published by health authorities. in response to the pandemic, people prefer to receive real-time news and instructions on personal protection [ ]. as such, many wechat official accounts have posted articles about ncp. however, different account operators tend to post various types of articles in different numbers. for example, some accounts have reported the number of infected cases every day to keep people informed about the state of the pandemic, some accounts have instructed the public on how to protect themselves, and some accounts have refuted fake news to avoid confusion and inappropriate interventions. health information posted on these accounts can have a great impact on receivers' behavior because of its real-time nature and various forms [ ]. receivers can express their appreciation and interest by liking an article or following an account [ ]. we found that the number of followers of many official accounts changed dramatically within a week; meanwhile, the number of likes differs greatly among articles. in this work, we aimed to determine whether and how health information dissemination affected users' information behavior in terms of following an account and liking a post.
researchers have studied the influence of health information on information behaviors on different social media platforms, such as facebook and microblogging sites. the findings are summarized in table :
• information searching and posting behavior: zika, mers, and chikungunya messages motivate the public to search for related information frequently and to post actively (bragazzi et al. [ ]; mahroum et al. [ ]; jang et al. [ ]); mommy blog users with a personal connection to a health issue tend to post articles about it (burke-garcia et al. [ ]).
• adoption and sharing behavior: pregnancy-related information influences expectant mothers to adopt and share it, from the perspective of perceived influence and prenatal attachment (zhu et al. [ ]; harpel [ ]; lupton [ ]).
• commenting behavior: microblog information correlated with the vaccine event or with environmental health in china can significantly influence users' comments (an et al. [ ]; wang et al. [ ]).
• prevention behavior: instagram and facebook intervention messages on breast cancer can effectively affect prevention behavior and lead to high exposure scores, in consideration of the influence of opinion leaders (wright et al. [ ]).
however, we found that few researchers have concentrated on wechat in china, and the studies above did not examine detailed characteristics of the information; they mainly focused on the overall effect of health information on information behavior. during the pandemic, however, users may concentrate on different types of information, and their reactions to given information may vary. for example, more people have followed wechat official accounts to give continuous attention to the pandemic [ ]. users' behavior on social networks is the inclination to use social network services and the various related activities [ ]. researchers have studied users' information behavior on some popular social media platforms. for example, bian et al. [ ] found that the tendency to discuss promotional lynch syndrome-related health information on twitter shows users' prevention awareness. iftikhar et al. [ ] clarified that health information on whatsapp, facebook, and twitter can urge users to verify it on google.
meanwhile, gan [ ] summarized three factors, namely hedonic, social, and utilitarian gratifications, which affect the tendency of wechat users to like a post. users' information behavior is manifested everywhere on the internet [ ]. their behavior on wechat official accounts includes acquiring information, liking a post [ ], and following an account. different behaviors may reflect different inclinations. for example, reading an article shows users' interest in a certain health theme [ ]. liking a post reflects their preference and appreciation [ , ]; after reading an article, users can like it to show their appreciation of an important message [ ]. following an account may indicate that users want to know what is being posted and are willing to pay continuous attention [ ]. however, to the best of our knowledge, few studies have focused on analyzing the influence of information on users' information behavior in order to explore the specific characteristics that satisfy wechat users. thus, this study aimed to address this issue.
for this purpose, we developed multiple and simple linear regression models. we chose the number of articles of each type and the aggregated number of headlines on ncp posted on the selected accounts in a -day period as independent variables (a total of seven) to denote the health information source and reflect the dissemination state. we also chose the number of new followers and the number of likes in this period as dependent variables to represent users' information behavior. we then analyzed the relationship between information and behavior in terms of quantity. we selected the number of related articles because it is a critical indicator in evaluating information dissemination [ ]. in addition, to study the impact of content on liking behavior, we chose all of the headlines on ncp that received more than , likes for our content analysis. information can affect users' information behavior on other media [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]. we wanted to explore whether the information conveyed in each type of article posted on wechat plays a similar role in shaping users' following and liking behavior. we therefore drew up the following hypotheses, h to h , for wechat official accounts; the articles concerned are classified into different types in a later part of this paper. h : the headlines with a great number of likes may possess common characteristics in content that can impact users' liking behavior.
we collected data from the xigua website (data.xiguaji.com) in china. it is a big data platform that provides operational data on wechat official accounts. the data include the number of articles posted in the last days, comments, the number of likes, and other information on each account. xigua is an open-access website for researchers, and official accounts on this website can be classified into different fields, such as economy, sports, and health. we focused on health and used data on monthly rankings. we used bazhuayu, a chinese web crawler software, to collect data on the top accounts, as shown in figure . the outbreak of the disease in china occurred on january . since then, information regarding the pandemic has attracted considerable attention. at that time, the online reaction of the public was likely to be particularly intense, as people were suddenly confronted with this severe situation; in such a short period, their behaviors were easier and more obvious to observe than before.
thus, we selected january , and january , as two time nodes to collect and classify account information, including the name, rank, operator, and number of followers. these accounts can be classified into three types based on their operators: nonmedical institution, medical institution, and individual accounts. different types of accounts are operated by different stakeholders: nonmedical institution accounts are operated by companies and governments; medical institution accounts are administered by hospitals, including maternal and child care service centers; and individual accounts are managed by individuals. table presents the number of accounts. we then calculated the change in followers in - january . because we intended to study information on ncp, we filtered the accounts in order to isolate the influence of information on the pandemic and deleted those that did not post any article related to ncp. of the remaining accounts, . % ( / ) were nonmedical institution accounts, . % ( / ) were medical institution accounts, and . % ( / ) were individual accounts. figures and show screenshots of the rank list and the data collection page, respectively.
following an account and liking a post can represent users' activity. the change in the number of followers in the -day period and the aggregated number of likes on the headlines that are correlated with ncp can reflect users' information behavior; thus, we used them as the two dependent variables. we recorded the state of the articles on every account and counted the number of posts on ncp. we classified them into six types, namely counter-rumor, report, science, story, instruction, and others. we classified articles that countered sensationalism or misinformation and clarified facts as counter-rumor articles. we classified articles on news about the state of the pandemic, particular facts, or a press conference conducted by the national health commission or other governmental institutions as reports; we also grouped interviews with professionals under reports. we categorized posts on scientific outcomes about ncp, explanations of this new virus, or information about psychology under science. we identified shared self- or public-description articles about how physicians resisted the pandemic in hospitals as stories. we identified posts that instructed the public on how to protect themselves or that published a diagnosis and treatment guideline as instruction. other posts, such as commentary, appeals for aid, advocating, and encouraging articles, were grouped under others. we classified posts that integrated more than one type of topic based on their titles and main contents. this classification standard was approved by all the authors. we defined six independent variables for the six article types.
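as a purely illustrative aid (not the authors' actual pipeline, which relied on manual coding and spss), the kind of data preparation described above - computing the follower change between the two time nodes and counting the coded article types, headlines and likes per account - could be assembled as in the following python sketch; all file and column names are hypothetical placeholders, and the headline variables anticipate the definitions given in the next paragraphs.

import pandas as pd

# two-wave follower counts exported for the same set of accounts (hypothetical files)
wave1 = pd.read_csv("followers_t1.csv")   # columns: account, followers
wave2 = pd.read_csv("followers_t2.csv")   # columns: account, followers
panel = wave1.merge(wave2, on="account", suffixes=("_t1", "_t2"))
panel["follower_change"] = panel["followers_t2"] - panel["followers_t1"]

# manually coded ncp-related articles, one row per article (hypothetical file)
articles = pd.read_csv("articles_coded.csv")  # columns: account, article_type, is_headline, likes

# number of articles of each of the six types per account
type_counts = pd.crosstab(articles["account"], articles["article_type"])
type_counts = type_counts.rename(columns=lambda c: str(c).replace("-", "_"))

# number of ncp headlines and their aggregated likes per account
headlines = articles[articles["is_headline"] == 1]
headline_stats = headlines.groupby("account").agg(
    n_headlines=("is_headline", "size"),
    headline_likes=("likes", "sum"))

dataset = (panel.join(type_counts, on="account")
                .join(headline_stats, on="account")
                .fillna(0))
print(dataset.head())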
moreover, we counted the number of headlines on ncp to explore its correlation with likes. each account may post many articles on ncp every day, and the headline is the first one, with a conspicuous title and illustration. we recorded and counted the total number of likes on each article in this period, and we defined the number of headlines as another independent variable. table presents the collected and processed sample data. the scale of the change in the number of followers was , ; for the number of likes, the scale was .
before estimating the models, we tested whether the variables were normal. we used a one-sample kolmogorov-smirnov test to examine the normality of the variables. table shows the results. the sample size was (n = ). all p values were below . . therefore, all variables were normal and could be estimated in the linear regression models. we developed a multiple linear regression model to explore the relationship between the change in the number of followers and the six types of articles. meanwhile, we developed a simple linear regression model for the aggregated number of likes. the models are shown in the following two equations. y_i represents the change in the number of followers in the -day period. counter-rumor_i, report_i, science_i, story_i, instruction_i, and others_i denote the number of counter-rumor, report, science, story, instruction, and other types of articles, respectively. y'_i represents the aggregated number of likes on headlines in this period. headlines_i indicates the total number of headlines related to the pandemic.
y_i = α_0 + α_1 counter-rumor_i + α_2 report_i + α_3 science_i + α_4 story_i + α_5 instruction_i + α_6 others_i + ε_i (1)
where i = 1, 2, ..., n indexes all accounts; α_0 to α_6 are the parameters to be estimated; ε_i is the corresponding residual.
y'_i = β_0 + β_1 headlines_i + ε_i (2)
where i = 1, 2, ..., n indexes all accounts; β_0 and β_1 are the parameters to be estimated; ε_i is the corresponding residual.
we designed our research to examine the effect of information quantity on users' information behaviors, but we were also interested in the effect of content. we found an interesting phenomenon: among accounts whose articles were usually unpopular, a single article sometimes received a large number of likes that did not correspond to the popularity of the account. for example, west china hospital lagged in the rank list, and most of its posted headlines were plain; nevertheless, on january , it posted a headline that received , likes, which might be a crucial factor affecting our regression results. articles receiving such an unexpected number of likes are rare.
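purely as an illustration of how models (1) and (2) could be estimated (the authors used spss; this is an assumed re-implementation using the hypothetical dataframe dataset and column names from the previous sketch):

import statsmodels.formula.api as smf
from scipy import stats

# rough analogue of the one-sample kolmogorov-smirnov normality check described above:
# standardize each variable and test it against a standard normal distribution
for col in ["follower_change", "counter_rumor", "report", "science",
            "story", "instruction", "others", "n_headlines", "headline_likes"]:
    z = (dataset[col] - dataset[col].mean()) / dataset[col].std(ddof=0)
    ks = stats.kstest(z, "norm")
    print(col, round(ks.statistic, 3), round(ks.pvalue, 3))

# model (1): follower change regressed on the six article-type counts
m1 = smf.ols("follower_change ~ counter_rumor + report + science + story"
             " + instruction + others", data=dataset).fit()
print(m1.summary())

# model (2): aggregated headline likes regressed on the number of ncp headlines
m2 = smf.ols("headline_likes ~ n_headlines", data=dataset).fit()
print(m2.summary())

the stepwise selection reported in the results below could be approximated, in this sketch, by refitting each model after dropping non-significant terms.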
we therefore wanted to determine why these articles were exceptional in terms of users' liking behavior. to take the study further, we reviewed the selected headlines that received more than , likes in this period and explored the characteristics of their content that could affect liking behavior. we examined a total of headlines. we coded them from the following perspectives: the account group that an article comes from, original/non-original articles, the article type, and the length of the article. we also recorded the forms of multimedia applied in each article to present information (including the number of videos, pictures and graphics). the codebook is presented in tables a -a in appendix a. the intercoder reliability was tested and proved to be ideal. the coding results and some statistics are shown in table .
we used spss . to analyze the data. table shows the estimation results based on the least squares method and stepwise regression. we developed models - to represent nonmedical institution, medical institution, and individual accounts, respectively. however, not all models could fit well. for nonmedical institution accounts in model , the variables of the report and story types had a significant effect (b = . , p = . ; b = . , p = . ) and played a positive role; the remaining variables were insignificant. for the medical institution accounts in model , the variables of the report and science types were significant (b = . , p = . ; b = . , p < . ) and positive. however, for individual accounts in model , we did not obtain any result. models and had adjusted r² values of . and . , respectively, denoting an acceptable fit. we were unable to obtain a satisfactory result for model . thus, we partially confirmed h .
this section explored h . table shows the simple linear regression results based on the least squares method and stepwise regression. among the three groups, only the nonmedical institution accounts in model showed significance. the variable of headlines played a positive role (b = . , p < . ). the adjusted r² was . , denoting an acceptable fit. we did not discover significance for medical institution and individual accounts. thus, h was partially confirmed. we found some factors of information dissemination that affect behavior, but we did not obtain significant results in models and when we analyzed headlines and likes. this may be because some exceptional articles with a large number of likes, recorded in table , led to insignificance when we analyzed model ; when we discarded this datum, the analysis result was significant. thus, this factor could remarkably affect our results.
some findings of the content analysis are as follows. of the articles, ( %) were posted by nonmedical institution accounts. of these articles, were posted by dingxiang doctor, which was the most active account; dingxiang doctor was also the second-most popular in the rank list. in addition, one account named dingxiang yuan posted article. these two accounts are affiliated with the same company, hangzhou lianke meixun biomedical technology corporation. of the remaining articles, ( %) were posted by medical institution accounts, and ( %) were posted by personal accounts. however, the only article posted by west china hospital received the largest number of likes, reaching , . the reason why articles from the medical institution group accounted for the smallest proportion may be that these accounts usually post about the affairs of their affiliated hospitals, which the public may find less interesting.
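the intercoder reliability check mentioned above is not described in detail in the text; one common statistic for such a check is cohen's kappa, which could be computed roughly as in the following sketch (the coder labels are made up for illustration and are not the study's data):

from sklearn.metrics import cohen_kappa_score

# hypothetical article-type labels assigned independently by two coders
coder_a = ["instruction", "story", "report", "counter-rumor", "instruction", "science"]
coder_b = ["instruction", "story", "report", "instruction", "instruction", "science"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"cohen's kappa = {kappa:.2f}")
# values above roughly 0.8 are conventionally read as very good agreement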
compared with these, the public tends to prefer articles from nonmedical institution accounts. these accounts usually post various types of articles about common-sense matters or short stories, which are easy for the public to understand and receive. this may be the reason why the public pays more attention to these accounts and their articles. among the articles, ( %) were original, and ( %) were not; we did not identify an evident preference for originality. among these headlines, the instruction, story, others, and counter-rumor types accounted for % ( / ), % ( / ), % ( / ), and % ( / ), respectively. report articles had the same proportion, accounting for % ( / ). of the two articles in the others category, one presented a timeline since the pandemic broke out, whereas the other revealed several latent dangers after the city was locked down. story-type articles were confirmed to have a positive effect in the regression analysis. an instruction-type article can provide suggestions during a public health emergency; this article type might be the most popular because it met users' demand for prevention information. wu [ ] argued that perceived usefulness is a precondition of users' overall satisfaction. perceived usefulness can affect users' attitudes and determine the continued use of an information system [ ]. reading behavior can reveal users' perceived usefulness [ ], and liking an article may denote their gratification [ ]. instruction articles are therefore likely to evoke perceived usefulness and promote an account's popularity.
we studied the length of each article and the method of transmitting information in table . the length varied among the headlines; ( %) of the articles were coded as " ", possessing - characters. many articles limited the content length by using visual aids. for example, all articles applied infographics, including images and graphics. the post "can you go out without a mask? experts recommended the proper wearing of masks." by west china hospital had the most images (up to ). infographics and other visual aids, such as videos, can promote health information communication [ ]. adding visuals to conventional text yields good outcomes from the perspective of health information promotion [ ]. infographics and videos can help users visualize information and facilitate a straightforward understanding of it. as a result, the content is concise and clear, and this may help account operators improve their performance in dissemination, making it easier for the public to receive information. although the types of articles varied, most of them integrated different types. for example, the article "can you go out without a mask? experts recommended the proper wearing of masks." not only provided an instruction but also appended a report on the pandemic. the counter-rumor article "novel coronavirus fears alcohol and high temperature, but vinegar, saline, and smoke are useless: rumors you need to know." also taught several prevention methods. none of the articles had only one type of content. our coders classified the articles according to their titles and main content. diversity in types could simultaneously enhance the practicability of the content and meet users' different demands; however, the main part of an article should remain specific to prevent it from being misinterpreted. we counted and recorded the high-frequency words in these articles. they are shown in table ; words occurring more than five times were listed in it.
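the word-frequency count described above could be reproduced, for illustration only, by segmenting the chinese article text and keeping words that occur more than five times; jieba is one commonly used segmentation library, and the text variable and stop-word list below are placeholders:

from collections import Counter
import jieba  # widely used chinese word-segmentation library

article_text = "..."  # full text of one headline article (placeholder)
stopwords = {"的", "了", "和", "是", "在"}  # illustrative, not exhaustive

words = [w for w in jieba.lcut(article_text)
         if len(w) > 1 and w not in stopwords]
freq = Counter(words)

high_freq = {w: n for w, n in freq.items() if n > 5}
print(sorted(high_freq.items(), key=lambda kv: kv[1], reverse=True))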
all the articles introduced general features of covid- , mentioning some of the same words, such as pandemic, doctor, and infection. along with this, we found that the different types of articles all referred to one common dimension: instruction. for example, words including mask, wash hands, and isolation indicated instructions on how to protect the public, and they appeared in most of these articles. besides, arbidol hydrochloride capsules is a drug used to relieve the condition of infected cases; this term also showed up in different articles, introducing an instruction on selecting drugs. most of the articles thus introduced instructions on prevention. it may be the usefulness perceived by readers that helped these articles win a large number of likes.
table . high-frequency words in each article (entries include article titles such as "a wuhan doctor was suspected of being infected. he recovered after days' isolation at home! please spread his life-saving strategy to everyone!" and "a prevention guideline against the new pneumonia. scientific prevention, we should not believe and transmit rumors.", high-frequency words such as coronavirus, pneumonia, protection, prevention, infection and transmission, and notes on the effects on readers, which are frequently positive but whose limitations should be considered).
these articles usually have positive effects on readers. for example, figure shows a screenshot of reviews from readers of the article with the most likes. most of the readers admired and appreciated the usefulness of the article. the comments suggested that the article could facilitate the timely acquisition of knowledge about prevention during a health crisis. furthermore, a story about the heroic contributions of doctors and other people may inspire readers. a counter-rumor type of article may help users identify inaccurate information and prevent them from adopting inappropriate prevention methods. however, popularity is accompanied by limitations, and this issue should be considered. given the severity reported in these articles, the information may lead to unnecessary public panic; some people even protect themselves excessively, as the disease is devastating if uncontrolled. however, with the contributions of physicians, we must be hopeful about the future situation. therefore, account operators should also consider how to reduce the negative effects of articles on readers.
the organizational structure, location, and description of information on social media affect public attention [ ]. multimedia applications, such as infographics and videos, make the structure clear and concise.
certain types of articles can satisfy the public's demand for information and improve the popularity of articles. diversity in types can promote users' liking behavior and health information dissemination because of the superior content of such articles. besides, these articles always referred to a common dimension which introduced some instructions. h was thus proven to be true.
this study aimed to explore the effects of health information dissemination on users' information behavior on wechat official accounts. our hypotheses were tested using two-wave data collected from the xigua website over a period of days. meanwhile, we further explored the content characteristics based on the headlines with an unexpected number of likes to answer our research question. first, our results suggested that not all types of articles significantly affected users' tendency to follow an account. for nonmedical institution accounts, report and story types positively influenced the change in the number of followers. for medical institution accounts, report and science types exerted positive effects. however, we did not find significant relationships for individual accounts. second, the number of headlines on the pandemic contributed to likes for nonmedical institutions. however, we did not obtain the same result for medical institution and individual accounts. for medical institution accounts, we found an article with an unexpected number of likes of up to , ; when we excluded this observation, the analysis result was significant. thus, several articles with an unexpected number of likes could determine the regression result to a great extent. third, we reviewed headlines to further explore the characteristics of information in articles that could affect users' liking behavior. in the headlines, the organizational structure, the manner of description, and the application of multimedia contributed to unexpected likes. users showed an inclination towards instruction and story types, especially those from nonmedical institution accounts. health authorities should take advantage of these accounts to enhance health information dissemination and reduce public panic. meanwhile, paying attention to the methods of delivering messages is crucial, for the application of multimedia such as graphics, videos or pictures may make it easier for the public to understand and receive information. besides, diversity in types is also crucial in encouraging likes, and the role of instructions should not be left out. these dimensions composed the common features of information that can impact users' likes.
this study has theoretical implications. on facebook, twitter, and other social media, information dissemination can affect users' information behavior [ ]. in the present study, we expanded the research scope to wechat in china, especially in the health field. we identified several factors that affected users' information behavior on this platform. for nonmedical institution accounts, report and story types of information should be emphasized. likewise, report and science types should be promoted for medical institution accounts.
particular account groups, multiform transmission, and diversity in types, including instruction and story, are essential for promoting popularity. this study also has practical implications. first, on social media, account operators can promote information dissemination. analyzing users' information behavior may allow them to determine the kind of information that satisfies the public. they should fully utilize the superiority of headlines to enhance diffusion. second, the above conclusions could be explored further and in depth by analyzing twitter, facebook, or youtube trends in other countries, contributing to the worldwide campaign in the medical informatics and health dissemination domains. this strategy might help authorities determine what kind of information the public needs. if dissemination is efficient, the public will receive accurate information and useful prevention suggestions in a timely manner. this method can help health authorities successfully manage this public health emergency [ ] . this research has several limitations. during the data analysis, we did not identify the significance of some variables because the sample size was small. for example, we only assessed individual accounts among accounts. for this reason, we could not ensure that such insignificant factors would not contribute to users' information behavior. in our future work, we will involve more comments from users on different groups of accounts and expand the sample size to conduct further analyses. we considered that likes mainly denote users' appreciation. however, some readers may like an article without a valid reason or for conformity [ ] . other users can express their positive feelings by commenting. thus, in further studies, we will assess the implications of users' behavior, such as liking, following, commenting, and even sharing, based on quantitative methods. interviews that reflect users' actual feelings are also essential. we will also explore the effect of content on readers by analyzing their comments. the effects of information dissemination on users' information behavior during the covid- pandemic were examined. two models were developed to test our hypotheses that were partially confirmed. in content analysis, some common characteristics that contributed to users' tendency to like a post were identified. however, this study has some limitations. in our future work, we will include more accounts and adopt measures such as developing a synthetic model and quantitatively assessing these behaviors to solve the problems. who. novel coronavirus ( -ncov) situation report- importance of social media alongside traditional medical publications expectation survey report on users' media consumption and use during the epidemic csm media research: hongkong, china, . 
available online disseminating research findings preparing for generation y wechat annual data a study on influential factors of wechat public accounts information transmission hotness understanding the function constitution and influence factors on communication for the wechat official account of top tertiary hospitals in china: cross-sectional study exploring knowledge filtering processes in electronic networks of practice social media-promoted weight loss among an occupational population: cohort study using a wechat mobile phone app-based campaign twelve years of clinical practice guideline development, dissemination and evaluation in canada how social media exposure to health information influences chinese people's health protective behaviour during air pollution: a theory of planned behaviour perspective the research on the influencing factors of users' liking behavior in wechat global reaction to the recent outbreaks of zika virus: insights from a big data analysis public reaction to chikungunya outbreaks in italy-insights from an extensive novel data streams-based structural equation modeling analysis when information from public health officials is untrustworthy: the use of online news, interpersonal networks, and social media during the mers outbreak in south korea perceptions about disseminating health information among mommy bloggers: quantitative study pregnancy-related information seeking and sharing in the social media era among expectant mothers in china: qualitative study pregnant women sharing pregnancy-related information on facebook: web-based survey study the use and value of digital media for information about pregnancy and early motherhood: a focus group study selection of users behaviors towards different topics of microblog on public health emergencies empirical study on recogniti on and influence of opinion leaders in emergency partnering with mommy bloggers to disseminate breast cancer risk information: social media intervention research on influential factors of thumbs-up of interior advertorial of wechat official accounts an approach to the study of communicative acts uncertainty in times of medical emergency: knowledge gaps and structural ignorance during the brazilian zika crisis from concerned citizens to activists: a case study of south korean mers outbreak and the role of dialogic government communication and citizens' emotions on public activism when ignorance is bliss the role of motivation to reduce uncertainty in uncertainty reduction theory medicine: before covid- , and after a new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication mothers' perceptions of the internet and social media as sources of parenting and health information: qualitative study health care professionals' social media behaviour and the underlying factors of social media adoption and use: quantitative study professional use of social media by pharmacists: a qualitative study engaging the family medicine community through social media mapping physician twitter networks: describing how they work as a first step in understanding connectivity, information flow, and message diffusion medicine . 
: social networking, collaboration, participation, apomediation, and openness instrumental utilities and information seeking state of the art in social network user behaviours and its future using social media data to understand the impact of promotional information on laypeople's discussions: a case study of lynch syndrome health-seeking influence reflected by online health-related messages received on social media: cross-sectional survey understanding wechat users' liking behaviour: an empirical study in china what makes us click "like" on facebook? examining psychological, technological, and motivational factors on virtual endorsement traffic in social media i: paths through information networks understanding likes on facebook: an exploratory study an empirical study on influencing factors of continuous attention to wechat public accounts: an information characteristics perspective the research on the factors influencing the dissemination of media official micro-blog in the event of emergency patient continued use of online health care communities: web mining of patient-doctor communication information technology adoption and continuance: a longitudinal study of individuals' behavioural intentions study on multi-channel reading behavior choice in the all-media age sometimes more is more: iterative participatory design of infographics for engagement of community members with varying levels of health literacy impact of game-inspired infographics on user engagement and information processing in an ehealth program creating collective attention in the public domain: human interest narratives and the rescue of floyd collins impact of communication measures implemented during a school tuberculosis outbreak on risk perception among parents and school staff the emotional request and subject alienation of
we thank the school of medicine and health management, huazhong university of science and technology, for its funding support. the authors declare no conflict of interest.
key: cord- - rgz t authors: radandt, siegfried; rantanen, jorma; renn, ortwin title: governance of occupational safety and health and environmental risks date: journal: risks in modern society doi: . / - - - - _ sha: doc_id: cord_uid: rgz t
occupational safety and health (osh) activities were started in the industrialized countries already years ago. separate and specific actions were directed at accident prevention and at the diagnosis, treatment and prevention of occupational diseases. as industrialization has advanced, the complexity of safety and health problems and challenges has grown substantially, calling for more comprehensive approaches. this development has expanded the scope of osh and blurred the borders between specific activities. in the modern world of work, occupational safety and health are part of a complex system that involves innumerable interdependencies and interactions. these include, for instance, safety, health, well-being, aspects of the occupational and general environment, corporate policies and social responsibility, community policies and services, the community social environment, workers' families, their civil life, lifestyles and social networks, cultural and religious environments, and political and media environments.
a well-functioning and economically stable company generates resources for the workers and for the community, which is consequently able to maintain a positive cycle of development. a high standard of safety and health brings benefits for everyone: the company, the workers and the whole community. these few above-mentioned interactions elucidate the need for an integrated approach and for modelling of this complex entity. if we picture osh as a house, this integrated approach could be the roof, but in order to build a stable house it is also necessary to construct a solid basement as a foundation for the house. these basement "stones" are connected to each other, and are described in more detail in sections . - . . section . focuses on the existing hazards, while section . mainly considers the exposure of workers to health hazards. health, due to its complexity, however, is not only influenced and impaired by work-related hazards, but also by hazards arising from the environment. these two sub-chapters are thus linked to section . . in addition, the safety levels of companies may affect the environment. the strategies and measures needed for effective risk management, as described in section . , therefore also contribute to reducing the risks to the environment. in the case of work that is done outdoors, the hazards arising from the environment understandably have to be given special attention. here, the methods applied to tackle the usual hazards at workplaces are less effective, and it is necessary to develop protective measures to avoid or minimize hazards present in the environment. agriculture, forestry and construction, in particular, involve these types of hazards and affect high numbers of workers on a global scale. finally, hazards in the environment or in leisure-time activities can lead to strain and injuries which - combined with hazards at work - may result in more severe health consequences. as an example one can mention hazardous substances in the air causing allergies or other illnesses; another example is the strain on the musculoskeletal system from sports and leisure-time activities causing low back pain and other musculoskeletal disorders. depending on the type of hazard, the three topics, namely safety, health and the environment, may share the common trait that the proper handling of risks, i.e. how to reduce the probabilities and/or consequences of unwanted events, is not always possible within a risk management system. this is true when one moves into the realm of uncertainty, i.e. when there is uncertain, insufficient or no knowledge of the consequences and/or probabilities (see chapter ).
1. integrated multi-sectorial bodies for policy design and planning (national safety and health committee).
2. comprehensive approach in osh activities.
3. multi-disciplinary expert resources in inspection and services.
4. multi-professional participation of employers' and workers' representatives.
5. joint training in integrated activities.
6. information support facilitating multi-professional collaboration.
international labour office (ilo) ( ) international labour conference, st session, report vi, ilo standards-related activities in the area of occupational safety and health: an in-depth study for discussion with a view to the elaboration of a plan of action for such activities. sixth item on the agenda. international labour office, geneva.
what are the main challenges arising from the major societal changes for business/companies and workers/employees?
how can these challenges be met in order to succeed in the growing international competition? what is the role of occupational safety and health (osh) in this context? the above-mentioned changes create new possibilities, new tasks and new risks for businesses in particular, and for workers as well. in order to optimize the relation between the possibilities and the risks (maximize possibilities - minimize risks), there is a growing need for risk management. risk management includes all measures required for the target-oriented structuring of the risk situation and safety situation of a company. it is the systematic application of management strategies for the detection, assessment, evaluation, mastering and monitoring of risks. risk management was first considered exclusively from the point of view of providing insurance coverage for entrepreneurial risks. gradually, the demands of jurisdiction grew, and the expectations of users and consumers increased with regard to the quality and safety of products. furthermore, the ever more complex problems of modern technology and ultimately the socioeconomic conditions have led to the development of risk management into an independent interdisciplinary field of work. risks can be regarded as potential failures which may undermine confidence in achieving a company's goals. the aim of risk management is to identify these potential failures qualitatively and quantitatively, and to reduce them to the level of a non-hazardous and acceptable residual risk potential. the development and formulation of a company's risk policy is regarded as the basis of effective risk management. this includes, first and foremost, the management's target concept with respect to the organization of work, the distribution of labour, and the competence of the departments and persons in charge of risk management. risk issues are important as far as the acceptance of technology is concerned. it is not enough to reduce the problem to the question of which risks are tolerable or acceptable. it appears more and more that, although the risks themselves are not accepted, the measures or technologies causing them are. value aspects have an important role in this consideration: a positive or negative view of measures and technologies is influenced strongly by value expectations that are new, contradictory and even disputed. comparing risks and benefits has become a current topic of discussion, and the relation between risks and benefits remains an unanswered question. the general public has a far broader understanding of the risks and benefits of a given technology than the usual understanding professed by the engineering sciences, which is limited to probability × harm. the damage or catastrophe potential, and qualitative attributes such as voluntariness and controllability, also play an important role in the risk assessment of a technology. a normative stipulation of a single, universally accepted engineering definition of risk is therefore hardly capable of gaining consensus at the moment. the balanced management of risks and possibilities (benefits) is capable of increasing the value of a company, and it may go far beyond the extent of legal obligations: for example, in germany, there is a law on control and transparency for companies (kontrag). the respective parameters may be defined accurately as follows: • strategic decisions aim to offer opportunities for acquiring benefit, taking risks into consideration.
• risks that can have negative consequences to the technological capacities, the profitability potential, the asset values and the reputation of a company, as well as the confidence of shareholders are identified and measured. • the management focuses on important possibilities and risks, and addresses them adequately or reduces them to a tolerable level. the aim is not to avoid risks altogether, but to create opportunities for promoting proactive treatment of all important risks. the traditional occupational health hazards, such as physical, chemical and biological risks, as well as accidents, will not totally disappear as a consequence of change, nor will heavy physical work. about - % of workers are still exposed to such hazards. there is thus need to still develop risk assessment, prevention and control methods and programmes for these often well-known hazards. in many industrialized countries, prevention and control programmes have had a positive impact by reducing the trends of occupational diseases and accidents, particularly in big industries. some developing countries, however, show an increase in traditional hazards. international comparisons, however, are difficult to make because of poor coverage, underreporting, and poor harmonization of concepts, definitions and registration criteria. statistics on occupational accidents are difficult to compare, and therefore data on their total numbers in europe should be viewed with caution. the majority of countries, however, have shown declining trends in accident rates irrespective of the absolute numbers of accidents. some exceptions to this general trend have nevertheless been seen. the accident risk also seems to shift somewhat as regards location, so that instead of risks related to machines and tools, particularly the risks in internal transportation and traffic within the workplace grow in relative importance. this trend may increase in future, particularly as the work place, as well as the speed and volume of material flows are increasing. a threat is caused by lengthened working hours, which tend to affect the vigilance of workers and increase the risk of errors. small-scale enterprises and micro-enterprises are known to have a lower capacity for occupational health and safety than larger ones. in fact, a higher accident risk has been noted in medium-sized companies, and a lower risk in very small and very large enterprises. we can conclude that this is due to the higher mechanization level and energy use in small and medium-sized enterprises (sme) compared with micro-enterprises, which usually produce services. on the other hand, the better capacity of very large enterprises in safety management is demonstrated by their low accident rates. the production of chemicals in the world is growing steadily. the average growth has been between - % a year during the past - decades. the total value of european chemical production in was about usd billion, i.e. % of the world's total chemical production, and it has increased % in the -year period of - . the european union (eu) is the largest chemical producer in the world, the usa the second, and japan the third. there are some , different chemical entities in industrial use, but only about , are so-called high-production volume (hpv) chemicals produced (and consumed) in amounts exceeding , tons a year. the number of chemicals produced in amounts of - , tons a year is about , . but volume is not necessarily the most important aspect in chemical safety at work. 
reactivity, toxicological properties, and how the chemicals are used are more important. the european chemical companies number some , , and in addition there are , plants producing plastics and rubber. surprisingly, as many as % of these are smes employing fewer than workers, and % are micro-enterprises employing fewer than workers. thus, the common belief that the chemical industry constitutes only large firms is not true. small enterprises and self-employed people have much less competence to deal with chemical risk assessment and management than the large companies. guidance and support in chemical safety are therefore crucial for them. the number of workers dealing with chemicals in european work life is difficult to estimate. the chemical industry alone employs some . million workers in europe, i.e. about % of the workforce of the manufacturing industries. about %, i.e. over , work in chemical smes. but a much higher number of workers are exposed in other sectors of the economy. there is a distinct trend showing that the use of chemicals is spreading to all sectors, and thus exposures are found in all types of activities: agriculture, forestry, manufacturing, services and even high-tech production. the national and european surveys on chemical exposures in the work environment give very similar results. while about % of eu workers were exposed to hazardous chemicals, the corresponding figure in central and eastern european countries may be much higher. the workers are exposed simultaneously to traditional industrial chemicals, such as heavy metals, solvents and pyrolytic products, and to "new exposures", such as plastics monomers and oligomers, highly reactive additives, cross-linkers and hardeners, as well as to, for example, fungal spores or volatile compounds in contaminated buildings. this implies that some million people in the eu are exposed at work, and usually the level of exposure is one to three orders of magnitude higher than in any other environment. about the same proportion ( % of the workforce, i.e. , ) of finnish workers in the national survey reported exposure. the chemicals to which the largest numbers of workers are exposed occur typically in smes; they are e.g. detergents and other cleaning chemicals, carbon monoxide, solvents, environmental tobacco smoke, and vegetable dusts. european directives on occupational health and safety require a high level of protection in terms of chemical safety in all workplaces and for all workers. risk assessment and risk management are key elements in achieving these requirements. the risk assessment of chemicals takes place at two levels:
a) systems-level risk assessment, providing a dose-response relationship for a particular chemical and serving as a basis for standard setting. risk assessment at the systems level is carried out in the pre-marketing stage through testing. this consequently leads to the actions stipulated in the regulations concerning standards and exposure limits, labelling and marking of hazardous chemicals, and limitations in marketing, trade and use. in this respect, the level of safety aimed at remains to be decided: is it the reasonably achievable level or, for example, the level achieved by the best available technology? the impact is expected to be system-wide, covering all enterprises and all workers in the country. this type of risk assessment is an interactive practice between the scientific community and politically controlled decision making. a high level of competence in toxicology, occupational hygiene and epidemiology is needed in the scientific community, and the decision makers must have the ability to put the risk of concern into perspective. in most countries the social partners also take part in the political decision making regarding occupational risks.
b) workplace risk assessment, directed at identifying the hazards at an individual workplace and utilizing standards as a guide for making decisions on risk management.
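as a purely numerical illustration of the comparison described under (b) - and elaborated in the next paragraph - measured task exposures can be combined into an 8-hour time-weighted average and compared with a limit value; the concentrations and the limit value below are invented examples, not actual standards.

def eight_hour_twa(measurements):
    """measurements: list of (concentration in mg/m3, duration in hours)."""
    # the unmeasured remainder of the 8-hour shift is treated as zero exposure
    return sum(c * t for c, t in measurements) / 8.0

# hypothetical task measurements for one worker and one substance
tasks = [(12.0, 2.0), (4.0, 3.0), (0.5, 2.0)]   # 7 h measured, 1 h assumed unexposed
oel = 10.0                                       # hypothetical 8-h occupational exposure limit, mg/m3

twa = eight_hour_twa(tasks)
print(f"8-h twa = {twa:.1f} mg/m3, exposure index = {twa / oel:.2f}")
# an exposure index above 1 means the limit value is exceeded and control measures
# (substitution, enclosure, ventilation, safe working practices, ppe) are required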
a high level of competence in toxicology, occupational hygiene and epidemiology is needed in the scientific community. and the decision makers must have the ability to put the risk in concern into perspective. in most countries the social partners also take part in the political decision making regarding occupational risks. b) workplace risk assessment directed at identifying the hazards at an individual workplace and utilizing standards as a guide for making decisions on risk management. risk assessment at workplace level leads to practical actions in the company and usually ensures compliance with regulations and standards. risk assessment is done by looking at the local exposure levels and comparing these with standards produced in the type a) risk assessment. risk management is done through preventive and control actions by selecting the safest chemicals, by controlling emissions at their source, by general and local ventilation, and by introducing safe working practices. if none of the above is effective, personal protective devices must be taken into use. noise is a nearly universal problem. the products of technology, which have freed us from the day-to-day survival struggle, have been accompanied by noise as the price of progress. however, noise can no longer be regarded as an inevitable by-product of today's society. not only is noise an undesirable contaminant of our living environment, but high levels of noise are frequently present in a variety of work situations. many problems arise from noise: annoyance, interference with conversation, leisure or sleep, effects on work efficiency, and potentially harmful effects, particularly on hearing. in short, noise may affect health, productivity, and well-being. the selection of appropriate noise criteria for the industry depends on knowledge of the effects of noise on people, as well as on the activities in which they are engaged. many of the effects are dependent on the level, and the magnitude of the effects varies with this level. hearing damage is not the only criterion for assessing excessive noise. it is also important to consider the ability and ease of people to communicate with each other. criteria have therefore been developed to relate the existing noise environment to the ability of the typical individual to communicate in areas that are likely to be noisy. the effects of noise on job performance are difficult to evaluate. in general, one can say that sudden, intermittent, high-intensity noise impedes efficient work more than low-intensity and steady-state noise. the complexity of the task with which noise interferes plays a major role in determining how much noise actually degrades performance. two common ways in which noise can interfere with sleep are: delaying the onset of sleep, and shifting sleep stages. one effect of noise that does not seem to depend strongly on its level is annoyance. under some circumstances, a dripping water faucet can be as annoying as a jackhammer. there are no generally accepted criteria for noise levels associated with annoyance. if the noise consists of pure tones, or if it is impulsive in nature, serious complaints may arise. new information and communication technologies (ict) are being rapidly implemented in modern work life. about % of workers have computers at work, and about % are e-mail and internet users. there are three main problem areas in the use of new ict at work. 
these are: 1) the visual sensory system, 2) the cognitive processes, and 3) the psychomotor responses needed for employing hand-arm systems. all three have been found to present special occupational health and even safety problems, which are not yet fully solved. the design of new, more user-friendly technology is highly desirable, and the criteria for such technology need to be generated by experts in neurophysiology, cognitive psychology and ergonomics. it is important to note that the productivity and quality of information-intensive work requiring the use of ict depends crucially on the user-friendliness of the new technology interface, both the hardware and software. communication and information technologies will change job contents, organization of work, working methods and competence demands substantially in all sectors of the economy in all countries. a number of new occupational health and safety hazards have already arisen or are foreseen, including problems with the ergonomics of video display units, and musculoskeletal disorders in shoulder-neck and arm-hand systems, information overload, psychological stress, and pressure to learn new skills. the challenge to occupational health and safety professionals is to provide health-based criteria for new technologies and new types of work organization. it is also important to contribute to the establishment of healthy and safe work environments for people. in the approved and draft standards of the international standardization organization, iso, there are altogether about different targets dealing with the standardization of eyesight-related aspects. vision is the most important channel of information in information-intensive work. from the point of view of seeing and eye fatigue, the commonly used visual display units (vdu) are not optimal solutions. poor stability of the image, poor lighting conditions, reflections and glare, as well as invisible flicker, are frequent problems affecting vision. the displays have, however, developed enormously in the s, and there is evidence that the so-called flat displays have gradually gained ground. information-intensive work may increasingly load the vision and sense of hearing, particularly of older workers. even relatively minor limitations in vision or hearing associated with ageing have a negative effect on receiving and comprehending messages. this affects the working capacity in information-intensive work. the growing haste in information-intensive work causes concern among workers and occupational health professionals. older workers in particular experience stress, learning difficulties and the threat of exclusion. corrective measures are needed to adjust the technology to the worker. the most important extension of the man-technology interface has taken place in the interaction of two information-processing elements: the central nervous system and the microprocessor. the contact is transmitted visually and by the hands, but also by the software, which has developed during the s even more than the technology itself. many problems are still associated with the immaturity of the software, even though its user-friendliness has recently greatly improved. the logic and structure of the software and the user systems, visual ergonomics, information ergonomics, the speed needed, and the forgiving characteristics of programs, as well as the possibility to correct the commands at any stage of processing, are the most important features of such improvements. 
the user's skills and knowledge of information technology and the software also have a direct effect on how the work is managed and how much workload it causes. the user-friendliness and ergonomics of the technology, the disturbing factors in the environment, haste and time pressure, the work climate, and the age and professional skills of the individual user, even his or her physical fitness, all have an impact on the cognitive capacity of a person. this capacity can to a certain extent be improved by training, exercise and regulating the working conditions, as well as with expert support provided for the users when difficulties do occur. the use of new technologies has been found to be associated with high information overload and psychological stress. the problem is typical not only of older workers or those with less training, but also of the super-experts in ict, who have shown an elevated risk of psychological exhaustion. there are four main types of ergonomic work loads: heavy dynamic work that may overload both the musculoskeletal and cardiovascular system; repetitive tasks which may cause strain injuries; static work that may result in muscular pain; and lifting and moving heavy loads, which may result in overexertion, low back injury, or accidental injuries. visual ergonomics is gaining in importance in modern work life. the overload of the visual sensory system and unsatisfactory working conditions may strain the eye musculature, but can also cause muscle tension in the upper part of the body. this effect is aggravated if the worker is subjected to psychological stress or time pressure. in addition to being a biological threat, the risk of infections causes psychological stress to workers. the improved communication between health services and international organizations provides help in the early detection and control of previously unknown hazards. nevertheless, the danger related to drug abusers, for example, continues to grow and presents a serious risk to workers in health services and the police force. some new viral or re-emerging bacterial infections also affect health care staff in their work. the increase in the cases of drug-resistant tuberculosis is an example of such a hazard. the goal of preventive approaches is to exert control on the cause of unwelcome events, the course of such negative events or their outcome. in this context, one has to decide whether the harmful process is acute (an accident) or dependent on impact duration and stimulus (short-, medium-, and long-term). naturally, the prevention approaches depend on the phases of the harmful process, i.e. whether the harm is reversible, or whether it is possible only to maintain its status, or to slow down the process. it is assumed here that a stressful factor generates an inter-individual or intra-individual strain. thus the effects and consequences of stress are dependent on the situation, individual characteristics, capabilities, skills and the regulation of actions, and other factors. the overall consideration is related to work systems characterized by work contents, working conditions, activities, and actions. system performance is expected of this work system, and this system performance is characterized by a performance structure and its conditions and requirements (figure . ). the performance of the biopsychosocial unit, i.e. the human being, plays an important role within the human performance structure (see figures . and . ). 
the human being is characterized by external and internal features, which are closely related to stress compatibility, and thus to strain. in this respect, preventive measures serve to optimize and ensure performance, on the one hand, and to control stress and strain, on the other. preventive measures aim to prevent bionegative effects and to facilitate and promote biopositive responses. • the internal factors affecting performance are described by performance capacity and performance readiness. • performance capacity is determined by the individual's physiological and psychological capacity. • performance readiness is characterized by physiological fitness and psychological willingness. • the external factors affecting performance are described by organizational preconditions/requirements and technical preconditions/requirements. • regarding the organizational requirements, the organizational structure and organizational dynamics are of significance. • in the case of technical requirements, the difficulties of the task, characterized by machines, the entire plant and its constructions, task content, task design, technical and situation-related factors, such as work layout, anthropometrics, and quality of the environment, are decisive (table . ). mental stress plays an increasing role in the routine activities of enterprises. through interactive models of mental stress and strain, it is possible to represent the development of mental strain and its impairing effects (e.g. tension, fatigue, monotony, lack of mental satisfaction). it is important to distinguish the above-mentioned impairing effects from each other, since they can arise from different origins and can be prevented or eliminated by different means. activities that strain optimally enhance health and promote safe execution of work tasks. stress essentially results from the design parameters of the work system or workplace. these design parameters are characterized by, e.g.: • technology, such as work processes, work equipment, work materials, work objects; • anthropometric design; • work techniques, working hours, sharing of work, cycle dependence, job rotation; • physiological design that causes strain, fatigue; • psychological design that either motivates or frustrates; • information technology, e.g. information processing, cognitive ergonomic design; • sociological conditions; and • environmental conditions, e.g. noise, dust, heat, cold. the stress structure is very complex, and we therefore need to look at the individual parameters carefully, taking into account the interactions and links between the parameters at the conceptual level. the design parameters impact people as stress factors. as a result, they also turn into conditions affecting performance. such conditions can basically be classified into two types: a person's internal conditions, characterized in particular by predisposition and personality traits, and a person's external conditions, determined mainly by the design parameters. when we look at performance as resulting from regulated or reactive action, we find three essential approaches for prevention: • the first approach identifies strain. it is related to anatomical, biochemical, histological, physiological characteristic values, typical curves of organ systems, the degree of utilization of skills through stress, and thus the degree of utilization of the dynamics of physiological variables in the course of stress. • the second approach is related to the control of strain. 
the aim is to identify performance limits, the limits of training and practice, and to put them into positive use. adaptation and fatigue are the central elements here. • the third approach for prevention is related to reducing strain. the aim is to avoid harm, using known limits as guidelines (e.g. maximum workplace concentration limit values for harmful substances, maximum organspecific concentration values, biological tolerance values for substances, limit values for noise and physical loads). however, the use of guideline values can only be an auxiliary approach, because the stress-strain concept is characterized by highly complex connections between the exogenous stress and the resulting strain. an objectively identical stress will not always cause the same level of strain in an individual. due to action regulation and individual characteristic values and curves of the organ systems (properties and capabilities), differences in strain may occur. seemingly identical stress can cause differing strain due to the superposition of partial stress. combinations of partial stress can lead to compensatory differences (e.g. physiological stress can compensate for psychological stress) or accumulation effects. partial stress is determined by the intensity and duration of the stress, and can therefore appear in differing dimensions and have varying effects. in assessing overall stress, the composition of the partial stress according to type, intensity, course and time is decisive. partial stress can occur simultaneously and successively. in our considerations, the principle of homeostasis plays an important role. however, optimizing performance is only a means to an end in a prevention programme. the actual purpose is to avoid harm, and thus to control strain. harm is a bionegative effect of stress. the causative stress is part of complex conditions in a causal connection. causal relationships can act as dose-effect relationships or without any relation to the dose. in this respect, the causative stress condition can form a chain with a fixed or variable sequence; it can add up, multiply, intensify or have an effect in only specific combinations, and generate different effects (e.g. diseases). we are thus dealing with a multicausal model or a multi-factor genesis. low back pain is an example of a complex phenomenon. the incidence of musculoskeletal disorders, especially low back pain, is rapidly increasing. several occupational factors have been found to increase the risk for low back pain. some studies indicate that psychosocial and work-related conditions are far more accurate in the prognosis of disability than are physical conditions. chronic low back pain is perceived as a phenomenon which encompasses biological, social and psychological variables. according to the model of adaptation, the goal of reducing risks is to increase a person's physical abilities (i.e. flexibility, strength, endurance), the use of body mechanics, techniques to protect the back (following the rules of biomechanics), to improve positive coping skills and emotional control. the following unfavourable factors leading to back pain have been identified at workplaces: • the lifting of too heavy loads. • working in a twisted or bent-down position. • work causing whole-body vibration. • working predominantly in a sitting position. • carrying heavy loads on the shoulders. the prevention of acute back pain and the prevention of work disability must entail several features. 
one important element is work safety, which can be maximized by screening a worker's physical and intellectual capacities, by ensuring ergonomic performance of the work procedures, and by increasing awareness of proper working techniques that do not strain the back. the use of adaptation programmes makes it possible to attain a higher performance level and to be able to withstand more strain (figure . ). research-based methods of training optimize and improve performance. they are a means for controlling stress and strain with the aim of preventing bionegative effects and facilitating and promoting biopositive responses. the stress (load) and strain model and human performance can be described as follows: • causative stress generates an inter-individual or intra-individual strain. • the effects and consequences depend on a person's properties, capabilities, skills and the regulation of actions, individual characteristics of the organ systems, and similar factors. • within the performance structure, the performance of the biopsychosocial unit, i.e. the human being, plays an important role. the human being is characterized by external and internal factors, which in turn are closely related to stress compatibility and thus to strain. the connection between stress and harm plays a significant role in the research on occupational health hazards. how should this connection be explored? different hypotheses exist in replying to this question, but none of them have been definitively proven. the three most common hypotheses today are: 1. stress occurring in connection with a person's life events, where the number and extent of such events is decisive. 2. problem-coping behaviour and/or social conditions acting as variables that explain the connection between stress and harm. 3. the additive stress hypothesis: the ability to cope with problems and the social conditions has an effect on harm which is independent of the stress resulting from life events. when we refer to the complexity of risks in this context of occupational safety, our focus is on the enterprise. there are different kinds of risks to be found in enterprises. many of them are of general importance, i.e. they are in principle rather independent of an enterprise's size or its type of activity. how to deal with such risks is outlined to some extent here. in order to treat such risks at work successfully, resources are needed whose availability often depends on the enterprise's situation. the situation of an enterprise usually determines the resources available for controlling and developing safety and health, and thus the performance of the enterprise and its workers and employees, through appropriate preventive measures. this situation has been described to some extent in chapter . big companies usually have well-developed safety and health resources, and they often transfer appropriate policies and practices to the less developed areas where they operate. even in big enterprises, however, there is fragmentation of local workplaces into ever smaller units. many of the formerly in-house activities of enterprises are outsourced. new types of work organizations are introduced, such as flat and lean organizations, an increase of telework and call centres, many kinds of mobile jobs, and network organizations. former in-company occupational health services are frequently transferred to service providers. 
this leads to the establishment of high numbers of micro-enterprises, small-scale enterprises (sses), small and medium-sized enterprises (smes) and self-employed people. sses and smes are thus becoming the most important employers of the future. a number of studies provide evidence that awareness of osh risks is low in at least a part of sses and smes. managers and workers alike often do not see the need to improve occupational safety and health or ergonomics, nor the possibilities and benefits of reducing or eliminating risks at work. as these types of enterprises, and even more so the self-employed, do not have sufficient resources or expertise for implementing preventive measures, the need for external advisory support, services and incentives is evident and growing. interpersonal relations in sses and smes are generally very good, which provides a strong opportunity for supporting them effectively. other special features in the structure of small and medium-sized enterprises to be considered are: • direct participation of the management in the daily activities; • a management structure designed to meet the requirements of the manager; • less formal and standardized work processes, organizational structures and decision processes; • no clear-cut division of work: -wide range of tasks; and -less specialization of the employees; • unsystematic ways of obtaining and processing information; • great importance of direct personal communication; • less interest in external cooperation and advice; • a small range of internal, especially long-term and strategic, planning; and • a stronger inclination of individual staff members to represent their own interests. the role of occupational health services (ohs) in smes is an interdisciplinary task, consisting of: • risk assessment: -investigation of occupational health problems according to type of technology, organization, work environment, working conditions, social relationships. • surveillance of employees' health: -medical examinations to assess employees' state of health; and -offering advice, information, training. • advice, information, training: -measures to optimize safety and health protection; and -safe behaviour, safe working procedures, first aid preparedness. different kinds of risks are found in enterprises (see table . ). these different types of risks need to be handled by an interlinked system to control the risks and to find compromises between the solutions. figure . illustrates these linkages. the promotion of safety and health is linked to several areas and activities. all of these areas influence the risk management process. the results of risk treatment not only solve occupational health and safety problems, but they also give added value to the linked fields. specific risk management methods are needed to reach the set goal. one needs to know what a risk is. the definition of risk is essential: a risk is a combination of the probability (not the frequency) of occurrence and the associated unwelcome outcome or impact of a risk element (the consequence). risk management is recognized as an integral part of good management practice. it is a recurring process consisting of steps which, when carried out in sequence, allow decision making to be improved continuously. 
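to make this definition concrete, the following sketch in python combines a probability class with a consequence class into a single ordinal risk score; the class names, the multiplication rule and the acceptance thresholds are illustrative assumptions, not values taken from the text or from any standard.

# minimal sketch: risk as a combination of probability and consequence.
# scales and thresholds are illustrative assumptions only.

PROBABILITY = {"rare": 1, "unlikely": 2, "moderate": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"negligible": 1, "minor": 2, "serious": 3, "severe": 4, "catastrophic": 5}

def risk_score(probability: str, consequence: str) -> int:
    """combine the two elements of risk into one ordinal score."""
    return PROBABILITY[probability] * CONSEQUENCE[consequence]

def risk_class(score: int) -> str:
    """map the score to a decision category (illustrative thresholds)."""
    if score <= 4:
        return "acceptable"
    if score <= 12:
        return "to be reduced as far as reasonably practicable"
    return "not acceptable - risk treatment required"

score = risk_score("unlikely", "severe")
print(score, risk_class(score))  # 8 -> to be reduced as far as reasonably practicable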
risk management is a logical and systematic method of identifying, analyzing, evaluating, treating, monitoring and communicating risks arising during any activity, function, or process in a manner enabling the organization to minimize losses and maximize productive opportunities. different methods are available for analyzing problems. each method is especially suited to respond to certain questions and less suited for others. a complex "thinking scheme" is necessary for arranging the different analyses correctly within the system review. such a scheme includes the following steps: 1. defining the unit under review: the actual tasks and boundaries of the system (a fictitious or a real system) must be specified: time, space and state. 2. problem analysis: all problems existing in the defined system, including problems which do not originate from the system itself, are detected and described. 3. causes of problems: all possible or probable causes of the problems are identified and listed. 4. identifying interaction: the dependencies of the effect mechanisms are described, and the links between the causes are determined. 5. establishing priorities and formulating targets: to carry out this step, it is necessary to evaluate the effects of the causes. 6. solutions to the problems: all measures needed for solving the individual problems are listed. the known lists usually include technical as well as non-technical measures. since several measures are often appropriate for solving one problem, a pre-selection of measures has to be done already at this stage. however, this can only be an approach to the solution; the actual selection of measures has to be completed in steps 7 and 8. 7. clarifying inconsistencies and setting priorities: as the measures required for solving individual problems may be inconsistent in part, or may even have to be excluded as a whole, any inconsistencies need to be clarified. a decision should then be made in favour of or against a measure, or a compromise may be sought. 8. determining measures for the unit under review: the measures applicable to the defined overall system are now selected from the measures for the individual problems. 9. list of questions regarding solutions selected for the overall system: checking whether the selected measures are implementable and applicable for solving the problems of the overall system. 10. controlling for possible new problems: this step consists of checking whether new problems are created by the selected solution. the close link between cause and effect demands that the processes and sub-processes must be evaluated uniformly, and risks must be dealt with according to a coordinated procedure. the analysis is started by orientation to the problem. this is done in the following steps: 1. recognizing and analyzing the problem according to its causes and extent, by means of a diagnosis and prediction, and comparison with the goals aimed at. 2. description and division of the overall problem into individual problem areas, and specifying their dependencies. 3. defining the problem and structuring it according to the objectives, time relation, degree of difficulty, and relevance to the goal. 4. detailed analysis of the causes, and classification according to the possible solution. the analysis of the problem should be integrated into the overall analytical process in accordance with the thinking schemes described earlier. the relevance and priorities related to the process determine the starting point for the remaining steps of the analysis. 
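as a rough illustration of steps 6 to 8 of the thinking scheme, the sketch below lists candidate measures per problem and skips measures that conflict with ones already selected; the data model, the priority rule and the example data are assumptions made for demonstration only.

# sketch of steps 6-8: pre-select measures per problem, clarify inconsistencies
# by skipping conflicting measures, and keep the remainder for the overall system.
# data structures, priority rule and example data are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Problem:
    name: str
    causes: list
    candidate_measures: list

@dataclass
class SystemReview:
    unit: str                                      # step 1: the defined unit under review
    problems: list = field(default_factory=list)   # steps 2-4: problems, causes, interactions
    conflicts: set = field(default_factory=set)    # pairs of mutually exclusive measures

    def select_measures(self) -> list:
        selected = []
        # step 5: crude prioritization - problems with more identified causes first
        for problem in sorted(self.problems, key=lambda p: len(p.causes), reverse=True):
            for measure in problem.candidate_measures:            # step 6: candidate measures
                if any(frozenset({measure, s}) in self.conflicts for s in selected):
                    continue                                       # step 7: drop inconsistent measures
                selected.append(measure)                           # step 8: measure kept for the unit
                break
        return selected

review = SystemReview(
    unit="packaging line",
    problems=[Problem("noise at press", ["worn bearings"], ["enclosure", "relocate press"]),
              Problem("solvent vapour", ["open containers"], ["local exhaust ventilation"])],
    conflicts={frozenset({"enclosure", "relocate press"})},
)
print(review.select_measures())  # ['enclosure', 'local exhaust ventilation']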
analyses are divided into quantitative and qualitative ones. quantitative analyses include risk analyses, that is, theoretical safety-related analysis methods and safety analyses, e.g. classical accident analyses. qualitative analyses include failure mode and effect analyses, hazard analyses, failure hazard analyses, operating hazard analyses, human error code and effect analyses, information error and effect analyses. the theoretical safety-related analysis methods include inductive and deductive analyses based on boolean models. inductive analyses are, e.g. fault process analyses. deductive analyses are fault tree analyses, analytical processes and simulation methods. theoretical safety-related analysis methods which are not based on boolean models are stochastic processes, such as markow's model, risk analyses and accident analyses which, as a rule, are statistical or probability-related analyses. a possible scheme to begin with is shown in figure . . since absolute safety, entailing freedom from all risks, does not exist in any sphere of life, the task of those dealing with safety issues is to avert hazards and to achieve a sustainable reduction of the residual risk, so that it does not exceed a tolerable limit. the extent of this rationally acceptable risk is also influenced by the level of risk which society intuitively considers as being acceptable. those who propose definitions of safety are neither authorized nor capable of evaluating the general benefit of technical products, processes and services. risk assessment is therefore focused at the potential harm caused by the use or non-use of the technology. the guidelines given in "a new approach to technical harmonization and standards" by the council resolution of may are valid in the european union. the legal system of a state describes the protective goals, such as protection of life, health, etc., in its constitution, as well as in individual laws and regulations. as a rule, these do not provide an exact limit as to what is still a tolerable risk. this limit can only be established indirectly and unclearly on the basis of the goals and conceptions set down by the authorities and laws of a state. in the european union, the limits are expressed primarily in the "basic safety and health requirements". these requirements are then put into more concrete terms in the safety-related definitions issued by the bodies responsible for preparing industrial standards. compliance with the standards is voluntary, but it is presumed that the basic requirements of the directives are met. the term which is opposite to "safety" is "hazard". both safe and hazardous situations are founded on the intended use of the technical products, processes and services. unintended use is taken into account only to the extent that it can be reasonably foreseen. the risks present in certain events are, in a more narrow sense, unwelcome and unwanted outcomes with negative effects (which exceed the range of acceptance). unwelcome events are • source conditions of processes and states; • processes and states themselves; and • effects of processes and states which can result in harm to persons or property. an unwelcome event can be defined as a single event or an event within a sequence of events. possible unwelcome events are identified for a unit under review. the causes may be inherent in the unit itself, or outside of it. in order to determine the risks involved in unwelcome events, it is necessary to identify probabilities and consequences. 
the question arises: are the extent and probability of the risk known? information is needed to answer this question. defining risk requires information concerning the probability of occurrence and the extent of the harm of the consequences. uncertainty is given if the effects are known but the probability is unknown. ignorance is given if both the effects and the probability are unknown. figure . shows the risk analysis procedure according to the type of information available. since risk analyses are not possible without practical, usable information, it is necessary to consider the nature of the information. the information is characterized by its content, truth and objectivity, degree of confirmation, the possibility of being tested, and the age of the information. the factors determining the content of the information are generality, precision and conditionality. the higher the conditionality, the smaller is the generality, and thus the smaller the information content of the statement. truth is understood as conformity of the statement with the real state of affairs. the closer that the information is to reality, the higher is its information content, and the smaller its logical margin. the degree of controllability is directly dependent on the information content: the bigger the logical margin, the smaller the information content, and thus the higher the probability that the information content will prove its worth. in this respect, probability plays a role in the information content: the greatest significance is attributed to the logical hypothetical probability and statistical probability of an event. objectivity and age are additional criteria for any information. the age and time relation of information play a particularly important role, because consideration of the time period is an important feature of analysis. as a rule, information and thus the data input in the risk analysis consist of figures and facts based on experience, materials, technical design, the organization and the environment. in this regard, most figures are based on statistics on incidents and their occurrences. factual information reveals something about the actual state of affairs. it consists of statements related to past conditions, incidents, etc. forecast-type predictions are related to real future conditions, foretelling that certain events will occur in the future. explanatory information replies to questions about the causes of phenomena, and provides explanations and reasons. it establishes links between different states based on presumed cause-effect relationships. subjunctive information expresses possibilities, implying that certain situations might occur at present or in the future, thus giving information about conceivable conditions, events and relationships. normative information expresses goals, standards, evaluations and similar matters; it formulates what is desirable or necessary. the main problem with risk analyses is incomplete information, in particular regarding the area of "uncertainty". in the eu commission's view, recourse to the so-called precautionary principle presupposes that potentially dangerous effects deriving from a phenomenon, product or process have been identified via objective scientific evaluation, and that scientific evaluation does not allow the risk to be determined with sufficient certainty. recourse to the precautionary principle thus takes place in the framework of general risk management that is concretely connected to the decision-making process. 
if application of the precautionary principle results in the decision that action is the appropriate response to a risk, and that further scientific information is not needed, it is still necessary to decide how to proceed. apart from adopting legal provisions which are subject to judicial control, a whole range of actions is available to the decision-makers (e.g. funding research, or deciding to inform the public about the possible adverse effects of a product or procedure). however, the measures may not be selected arbitrarily. in conclusion, the assessment of various risks and risk types which may be related to different types of hazards requires a variety of specific risk assessment methods. if one has dependable information about the probability and consequences of a serious risk or risky event, one should use the risk assessment procedure shown in figure . . • major industrial accidents; • damage caused by dangerous substances; • nuclear accidents; • major accidents at sea; • disasters due to forces of nature; and • acts of terrorism. • dangerous substances discharged (fire, explosion); • injury to people and damage to property; • immediate damage to the environment; • permanent or long-term damage to terrestrial habitats, to fresh water, to marine habitats, to aquifers or underground water supplies; and • cross-border damage. • technical failure: devices, mountings, containers, flanges, mechanical damage, corrosion of pipes, etc.; • human failure: operating error, organizational failure, during repair work; • chemical reaction; • physical reaction; and • environmental cause. system analysis is the basis of all hazard analyses, and thus needs to be done with special care. system analysis includes the examination of the system functions, particularly the performance goals and admissible deviations in the ambient conditions not influenced by the system, the auxiliary sources of the system (e.g. energy supply), the components of the system, and the organization and behaviour of the system. geographical arrangements, block diagrams, material flow charts, information flow charts, energy flow charts, etc. are used to depict technical systems. the objective is to ensure the safe behaviour of the technical systems by design methods, at least during the required service life and during intended use. qualitative analyses are particularly important in practice. as a rule, they are form sheet analyses and include failure mode and effect analyses, which examine and determine failure modes and their effects on systems. the preliminary hazard analysis looks for the hazard potentials of a system. the failure hazard analysis examines the causes of failures and their effects. the operating hazard analysis determines the hazards which may occur during operation, maintenance, repair, etc. the human error mode and effect analysis examines error modes and their effects which occur because of wrong behaviour of humans. the information error mode and effect analysis examines operating, maintenance and repair errors, fault elimination errors and the effects caused by errors in instructions and faulty information. theoretical analysis methods include the fault tree analysis, which is a deductive analysis. an unwelcome incident is provided to the system under review. then all logical links and/or failure combinations of components or partial system failures which might lead to this unwelcome incident are assembled, forming the fault tree. 
the fault tree analysis is suited for simple as well as for complex systems. the objective is to identify failures which might lead to an unwelcome incident. the prerequisite is exact knowledge about the functioning of the system under review. knowledge of the process and of the functioning of the components and partial systems therefore needs to be present. it is possible to focus on the flow of force, of energy, of materials and of signals. the fault process analysis has a structure similar to that of the fault tree analysis. in this case, however, we are looking for all unwelcome incidents as well as their combinations which have the same fault trigger. analysis of the functioning of the system under review is also necessary for this. analyses can also be used to identify possible, probable and actual risk components. the phases of the analysis are the phases of design, including the preparation of a concept, project and construction, and the phases of use, which are production, operation and maintenance. in order to identify the fault potential as completely as possible, different types of analyses are usually combined. documentation of the sufficient safety of a system can be achieved at a reasonable cost only for a small system. in the case of complex systems, it is therefore recommended to document individual unwelcome incidents. if solutions are sought and found for individual unwelcome incidents, care should be taken to ensure that no target conflicts arise with regard to other detail solutions. with the help of the fault tree, it is possible to analyse the causes of an unwelcome incident and the probability of its occurrence. decisions on whether and which redundancies are necessary can in most cases be reached by simple estimates. four results can be expected by using a fault tree: 1. the failure combination of inputs leading to the unwelcome event; 2. the probability of their occurrence; 3. the probability of occurrence of the unwelcome event; and 4. the critical path that this incident took from the failure combination through the fault tree. a systematic evaluation of the fault tree model can be done by an analytical evaluation (calculation) or by simulation of the model (monte-carlo method). a graphic analysis of the failure process is especially suited to demonstrate the safety risk of previously defined failure combinations in the system. turning to the failure mode and effect analysis and the preliminary hazard analysis: as mentioned previously, no method can disclose all potential faults in a system with any degree of certainty. however, if one starts with the preliminary hazard analysis, then at least the essential components with hazard potential will be defined. the essential components are always similar, namely, kinetic energy, potential energy, sources of thermal energy, radioactive material, biological material, and chemically reactive substances. with the fault tree method, any possible failure combinations (causes) leading to an unwelcome outcome can then be identified additionally. these analyses are especially suited for identifying failures in a system which pose a risk, and reliability parameters can be determined in the process, e.g. the frequency of occurrence of failure combinations, the frequency of occurrence of unwelcome events, non-availability of the system upon request, etc. the failure effect analysis is a supplementary method. it is able to depict the effects of mistakes made by the operating personnel, e.g. 
when a task is not performed, or is performed according to inappropriate instructions, or performed too early, too late, unintentionally or with errors. it can pinpoint also effects resulting from operating conditions and errors in the functional process or its elements. an important aspect of all hazard analyses is that they are only valid for the respective case under review. every change in a parameter basically requires new analyses. this applies to changes in the personnel structure and the qualification of persons, as well as to technical specifications. for this reason, it is necessary to document the parameters on which each analysis is based. the results of the hazard analyses form the basis for the selection of protective measures and measures to combat the hazards. if the system is modified, the hazards inherent in the system may change, and the measures to combat the hazards may have to be changed as well. this may also mean that the protective measures or equipment which existed at time x for the system or partial system in certain operating conditions (e.g. normal operation, set-up operation, and maintenance phase) may no longer be compatible. different protective measures, equipment or strategies may then be needed. however, hazard analyses do not merely serve to detect and solve potential failures. they form the basis for the selection of protective measures and protective equipment, and they can also test the success of the safety strategies specified. a selection of methods used for hazard analysis is given in annex to section . . risk assessment is a series of logical steps enabling the systematic examination of the hazards associated with machinery. risk assessment is followed, whenever necessary, by actions to reduce the existing risks and by implementing safety measures. when this process is repeated, it eliminates hazards as far as possible. risk assessment includes: • risk analysis: -determining the limits of machinery; -identifying hazards; and -estimating risks. • risk evaluation. risk analysis provides the information required for evaluating risks, and this in turn allows judgements to be made on the safety of e.g. the machinery or plant under review. risk assessment relies on decisions based on judgement. these decisions are to be supported by qualitative methods, complemented, as far as possible, by quantitative methods. quantitative methods are particularly appropriate when the foreseeable harm is very severe or extensive. quantitative methods are useful for assessing alternative safety measures and for determining which measure gives best protection. the application of quantitative methods is restricted to the amount of useful data which is available, and in many cases only qualitative risk assessment will be possible. risk assessment should be conducted so that it is possible to document the used procedure and the results that have been achieved. risk assessment shall take into account: • the life cycle of machinery or the life span of the plant. • the limitations of the machinery or plant, including the intended use (correct use and operation of the machinery or plant, as well as the consequences of reasonably foreseeable misuse or malfunction). • the full range of foreseeable uses of the machinery (e.g. industrial, nonindustrial and domestic) by persons identified by sex, age, dominant hand usage, or limiting physical abilities (e.g. visual or hearing impairment, stature, strength). 
• the anticipated level of training, experience or ability of the anticipated users, such as: -operators including maintenance personnel or technicians; -trainees and juniors; and -general public. • exposure of other persons to the machine hazards, whenever they can be reasonably foreseen. having identified the various hazards that can originate from the machine (permanent hazards and ones that can appear unexpectedly), the machine designer shall estimate the risk for each hazard, as far as possible, on the basis of quantifiable factors. he or she must finally decide, based on the risk evaluation, whether risk reduction is required. for this purpose, the designer has to take into account the different operating modes and intervention procedures, as well as human interaction during the entire life cycle of the machine. the following aspects in particular must be considered: • construction; transport; • assembly, installation, commissioning; • adjusting settings, programming or process changeover; • instructions for users; • operating, cleaning, maintenance, servicing; and • checking for faults, de-commissioning, dismantling and safe disposal. malfunctioning of the machine due to, e.g. • variation in a characteristic or dimension of the processed material or workpiece; • failure of a part or function; • external disturbance (e.g. shock, vibration, electromagnetic interference); • design error or deficiency (e.g. software errors); • disturbance in power supply; and • flaw in surrounding conditions (e.g. damaged floor surface). unintentional behaviour of the operator or foreseeable misuse of the machine, e.g.: • loss of control of the machine by the operator (especially in the case of hand-held devices or moving parts); • automatic (reflexive) behaviour of a person in case of a machine malfunction or failure during operation; • the operator's carelessness or lack of concentration; • the operator taking the "line of least resistance" in carrying out a task; • behaviour resulting from pressure to keep the machine running in all circumstances; and • unexpected behaviour of certain persons (e.g. children, disabled persons). when carrying out a risk assessment, the risk of the most severe harm that is likely to occur from each identified hazard must be considered, but the greatest foreseeable severity must also be taken into account, even if the probability of such an occurrence is not high. this objective may be met by eliminating the hazards, or by reducing, separately or simultaneously, each of the two elements which determine the risk, i.e. the severity of the harm from the hazard in question, and the probability of occurrence of that harm. all protective measures intended to reach this goal shall be applied according to the following steps: 1. inherently safe design measures: this stage is the only one at which hazards can be eliminated, thus avoiding the need for additional protective measures, such as safeguarding machines or implementing complementary protective measures. 2. safeguarding and, where necessary, complementary protective measures. 3. information about the residual risk: information for use on the residual risk is not to be a substitute for inherently safe design, or for safeguarding or complementary protective measures. risk estimation and evaluation must be carried out after each of the above three steps of risk reduction. adequate protective measures associated with each of the operating modes and intervention procedures prevent operators from being prone to use hazardous intervention techniques in case of technical difficulties. the aim is to achieve the lowest possible level of risk. 
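a minimal sketch of such per-hazard risk estimation is given below: the severity of the possible harm is combined with exposure, event probability and the possibility of avoidance, and the result is compared with a tolerable level to decide whether risk reduction is required. the factor scales, the combination rule and the threshold are assumptions chosen for illustration, not values prescribed by the text or by any standard.

# sketch of per-hazard risk estimation and evaluation for a machine.
# scales, combination rule and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Hazard:
    description: str
    severity: int           # 1 = slight injury ... 4 = fatal injury
    exposure: int           # 1 = seldom ... 3 = intervention during every work cycle
    event_probability: int  # 1 = unlikely ... 3 = very likely
    avoidance: int          # 1 = avoidance possible, 2 = scarcely possible

    def risk_index(self) -> int:
        return self.severity * (self.exposure + self.event_probability + self.avoidance)

def reduction_required(hazard: Hazard, tolerable_index: int = 12) -> bool:
    """risk evaluation: is risk reduction required for this hazard?"""
    return hazard.risk_index() > tolerable_index

hazards = [
    Hazard("crushing at the closing tool", severity=4, exposure=3, event_probability=2, avoidance=2),
    Hazard("noise during idle running", severity=1, exposure=3, event_probability=3, avoidance=1),
]
for h in hazards:
    print(h.description, h.risk_index(), "reduce risk" if reduction_required(h) else "tolerable")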
the design process is an iterative cycle, and several successive applications may be necessary to reduce the risk, making the best use of available technology. four aspects should be considered, preferably in the following order: . the safety of the machine during all the phases of its life cycle; . the ability of the machine to perform its function; . the usability of the machine; and . the costs of manufacturing, operating and dismantling the machine. the following principles apply to technical design: service life, safe machine life, fail-safe and tamper-proof design. a design which ensures the safety of service life has to be chosen when neither the technical system nor any of its safety-relevant partial functions can be allowed to fail during the service life envisaged. this means that the components of the partial functions need to be exchanged at previously defined time intervals (preventive maintenance). in the case of a fail-safe design, the technical system or its partial functions allow for faults, but none of these faults, alone or in combination, may lead to a hazardous state. it is necessary to specify just which faults in one or several partial systems can be allowed to occur simultaneously without the overall system being transferred into a hazardous state (maximum admissible number of simultaneous faults). a failure or a reduction in the performance of the technical system is accepted in this case. tamper-proof means that it is impossible to intentionally induce a hazardous state of the system. this is often required of technical systems with a high hazard potential. strategies involving secrecy play a special role in this regard. in the safety principles described here, redundant design should also be mentioned. the probability of occurrence and the consequences of damage are reduced by multiple arrangements, allowing both for subsystems or elements to be arranged in a row or in parallel. it is possible to reduce the fault potential of a technical system by the diversification of principles: several different principles are used in redundant arrangements. the spatial distribution of the function carriers allows the possibilities to influence faults to be reduced to one function. in the redundant arrangements, important functions, e.g. information transmission, are therefore designed in a redundant manner at different locations. the measures to eliminate or avoid hazards have to meet the following basic requirements: their effect must be reliable and compulsory, and they cannot be circumvented. reliable effect means that the effect principle and construction design of the planned measure guarantee an unambiguous effect, that the components have been designed according to regulations, that production and assembly are performed in a controlled manner, and that the measure has been tested. compulsory effect includes the demand for a protective effect which is active at the start of a hazardous state and during it, and which is deactivated only when the hazardous state is no longer present, or stops when the protective effect is not active. technical systems are planned as determined systems. only predictable and intended system behaviour is taken into account when the system is designed. experience has shown, however, that technical systems also display stochastic behaviour. that is, external influences and/or internal modifications not taken into consideration in the design result in unintended changes in the system's behaviour and properties. 
the period of time until the unintended changes in behaviour and/or in properties occur, cannot be accurately determined; it is a random variable. we have to presume that there will be a fault in every technical system. we simply do not know in advance when it will take place. the same is true for repairs. we know that it is generally possible in systems requiring re-pair to complete a repair operation successfully, but we cannot determine the exact time in advance. using statistical evaluations, we can establish a timedependent probability at which a "fault event" or "completion of a repair operation" occurs. the frequency of these events determines the availability of the system requiring repair. technical systems are intended to perform numerous functions and, at the same time, to be safe. the influence of human action on safety has to be taken into account in safety considerations as well (i.e. human factor). a system is safe when there are no functions or action sequences resulting in hazardous effects for people and/or property. risks of unwelcome events (in the following called "risk of an event") are determined on the basis of the experience (e.g. catalogue of measures) with technical systems. in addition to this, safety analyses are used (e.g. failure mode and effect analysis, hazard analysis, failure hazard analysis, operating hazard analysis, information error analysis), as well as mathematical models (e.g. worst-case analysis, monte-carlo procedure, markow's models). unwelcome events are examined for their effects. this is followed by considerations about which design modifications or additional protective measures might provide sufficient safety against these unwelcome events. the explanations below present the basic procedure for developing safety-relevant arrangements and solutions, i.e. the thinking and decision-making processes, as well as selecting criteria that are significant for the identification of unwelcome events, the risk of an event, the acceptance limits and the adoption of measures. before preparing the final documentation, it is essential to verify that the limit risk has not been exceeded, and that no new unwelcome events have occurred. the sequence scheme describes the procedure for developing safety arrangements and for finding solutions aiming to avoid the occurrence of unwelcome events which exceed the acceptance limits, by selecting suitable measures. in this context, it is assumed that: • an unwelcome event is initially identified as a single event within a comprehensive event sequence (e.g. start-up of a plant), and the risk of an event and limit risk are determined. • the selection of technical and/or non-technical measures is subject to a review of the content and the system, and the decision regarding a solution is then made. • the number of applicable measures is limited, and therefore it may not be possible to immediately find a measure with an acceptable risk for a preliminary determination of the unwelcome event. • implementation of the selected solution can result in the occurrence of a new unwelcome event. • in the above cases, a more concrete, new determination of the unwelcome event and/or the unit under review, or the state of the unit under review, and another attempt at deciding upon measures may lead to the desired result, although this may have to be repeated several times before it is successful. 
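the monte-carlo procedure mentioned above as one of the mathematical models can be illustrated with a small fault-tree simulation: random component failures are drawn from assumed probabilities and propagated through and/or logic to the top (unwelcome) event. the tree structure and the failure probabilities are invented for illustration only.

# sketch of a monte-carlo evaluation of a fault tree model.
# component probabilities and the gate structure are illustrative assumptions.

import random

FAILURE_PROB = {"pump_fails": 0.02, "valve_sticks": 0.05,
                "sensor_fails": 0.01, "operator_error": 0.03}

def top_event(state: dict) -> bool:
    # unwelcome event = pump fails OR (valve sticks AND (sensor fails OR operator error))
    return state["pump_fails"] or (
        state["valve_sticks"] and (state["sensor_fails"] or state["operator_error"]))

def simulate(trials: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        state = {name: rng.random() < p for name, p in FAILURE_PROB.items()}
        hits += top_event(state)
    return hits / trials

print("estimated probability of the unwelcome event:", round(simulate(), 4))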
in the case of complex event sequences, several unwelcome events may become apparent which have to be tackled by the respective set of measures. in accordance with the sequence scheme, the unit under review and its state have to be determined first. this determination includes information on, e.g., • product type, dimension, product parts/elements distinguished according to functional or construction aspects, if applicable; • intended use; • work system or field of application; • target group; • supply energy, transformed in the product, transmitted, distributed, output; • other parameters essential to safety assessment according to type and size of the product; • known or assumed effects on the product or its parts (e.g. due to transport, assembly, conditions at the assembly site, operation, maintenance); • weight, centre of gravity; • materials, consumables; • operating states (e.g. start-up, standstill, test run, normal operation); • condition (new, condition after a period in storage/shutdown, after repair, in case of modified operating conditions and/or other significant changes); and • known or suspected effects on humans. the next step is the identification of unwelcome events. they are source conditions of processes and states, or processes and states themselves. they can be the effects of processes and states which can cause harm to people or property. an unwelcome event can be a single event or part of a sequence of events. one should look for unwelcome events in sequences of processes and functions, in work activities and organizational procedures, or in the work environment. care has to be taken that the respective interfaces are included in the considerations. deviations and time-dependent changes in regard to the planned sequences and conditions have to be taken into account as well. the risk of an unwanted event results from the probability statement which takes into account both • the expected frequency of occurrence of the event; and • the anticipated extent of harm of the event. the expected frequency of occurrence of an event leading to harm is determined by, e.g., • the probability of the occurrence itself; • the duration and frequency of exposure of people (or of objects) in the danger zone, e.g. -extremely seldom (e.g. during repair), -seldom (e.g. during installation, maintenance and inspection), -frequently, and -very frequently (e.g. constant intervention during every work cycle); • the influence of users or third parties on the risk of an event. the extent of harm is determined by, e.g., • the type of harm (harm to people and/or property); • the severity of the harm (slight/severe/fatal injury of persons, or corresponding damage to property); and • number of people or objects affected. in principle, the safety requirements depend on the ratio of the risk of an event to the limit risk. criteria for determining the limit risk are, e.g., • personal and social acceptance of hazards; • people possibly affected (e.g. layman, trained person, specialized worker); • participation of those affected in the process; and • possibilities of averting hazards. the safety of various technical equipment with comparable risk can, for instance, be achieved • primarily by technical measures, in some cases; and • mainly by non-technical measures, in other cases. this means that several acceptable solutions with varying proportions of technical and non-technical measures may be found for a specific risk. 
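the sketch below turns this into a small calculation: the risk of an event is formed from the expected frequency of occurrence (the probability of the occurrence itself times an exposure class) and the anticipated extent of harm (a severity class times the number of people affected), and is then compared with a limit risk. all numeric classes and the limit are assumptions chosen for illustration.

# sketch: risk of an event = expected frequency of occurrence x anticipated extent of harm,
# compared with a limit risk. all classes and the limit are illustrative assumptions.

EXPOSURE = {"extremely seldom": 1, "seldom": 2, "frequently": 3, "very frequently": 4}
HARM = {"slight injury": 1, "severe injury": 3, "fatal injury": 6}

def event_risk(occurrence_probability: float, exposure: str, harm: str, people_affected: int) -> float:
    expected_frequency = occurrence_probability * EXPOSURE[exposure]
    extent_of_harm = HARM[harm] * people_affected
    return expected_frequency * extent_of_harm

LIMIT_RISK = 2.0  # illustrative acceptance limit

risk = event_risk(0.1, "frequently", "severe injury", people_affected=3)
print(risk, "measures required" if risk > LIMIT_RISK else "within the limit risk")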
in this context, the responsibility of those involved should be taken into consideration. technical measures are developed on the basis of e.g. the following principles: • avoiding hazardous interfaces (e.g. risk of crushing, shearing); hazard sources (e.g. radiation sources, flying parts, hazardous states and actions as well as inappropriate processes); • limiting hazardous energy (e.g. by rupture disks, temperature controllers, safety valves, rated break points); • using suitable construction and other materials (e.g. solid, sufficiently resistant against corrosion and ageing, glare-free, break-proof, non-toxic, non-inflammable, non-combustible, non-sliding); • designing equipment in accordance with its function, material, load, and ergonomics principles; • using fail-safe control devices employing technical means; • employing technical means of informing (e.g. danger signal); • protective equipment for separating, attaching, rejecting, catching, etc.; • suction equipment, exhaust hoods, when needed; • protection and emergency rooms; and • couplings or locks. technical measures refer to, e.g., • physical, chemical or biological processes; • energy, material and information flow in connection with the applied processes; • properties of materials and changes in the properties; and • function and design of technical products, parts and connections. the iterative (repeated) risk reduction process can be concluded after achieving adequate risk reduction and, if applicable, a favourable outcome of risk comparison. adequate risk reduction can be considered to have been achieved when one is able to answer each of the following questions positively: • have all operating conditions and all intervention procedures been taken into account? • have hazards been eliminated or their risks been reduced to the lowest practicable level? • is it certain that the measures undertaken do not generate new hazards? • are the users sufficiently informed and warned about the residual risks? • is it certain that the operator's working conditions are not jeopardized by the protective measures taken? • are the protective measures compatible with each other? • has sufficient consideration been given to the consequences that can arise from the use of a machine designed for professional/industrial use when it is used in a non-professional/non-industrial context? • is it certain that the measures undertaken do not excessively reduce the ability of the machine to perform its intended function? there are still many potential risks connected with hazardous substances about which more information is needed. because the knowledge about the relation between their dose and mode of action is not sufficient for controlling such risks, more research is needed. 
the following list highlights the themes of the numerous questions related to such risks: • potentially harmful organisms; • toxicants, carcinogens; • pesticides, pollutants, poisonous substances; • genetically engineered substances; • relation between chemical and structural properties and toxicity; • chemical structure and chemical properties and the relation to reactivity and reaction possibilities of organic compounds to metabolic reaction and living systems; • modes of action, genotoxicity, carcinogenicity, effects on humans/animals; • potentially harmful organisms in feedstuffs and animal faeces; • viruses and pathogens; • bacteria in feedstuffs and faeces; • parasites in feedstuffs and animal faeces; • pests in stored feedstuffs; • probiotics as feed additives; and • preservatives in feedstuffs. violent actions damaging society, property or people have increased, and they seem to spread both internationally as well as within countries. these new risks are difficult to predict and manage, as the very strategy of the actors is to create unexpected chaotic events. certain possibilities to predict the potential types of hazards do exist, and comprehensive predictive analyses have been done (meyerson, reaser ) . new methodologies are needed to predict the risk of terrorist actions, and also the strategies for risk management need to be developed. due to the numerous background factors, the preparedness of societies against these risks needs to be strengthened. table . lists important societal systems which are vulnerable to acts of terrorism. the situation in the developing countries needs to be tackled with specific methods. one has to answer the following questions: • what specific examples of prevention instruments can be offered? • what are the prerequisites for success? • how can industrialized countries assist the developing countries in carrying out preventive actions? • how should priorities be set according to the available resources? one possibility is to start a first-step programme, the goal of which is higher productivity and better workplaces. it can be carried out by improving • storage and handling of materials: -provide storage racks for tools, materials, etc.; -put stores, racks etc. on wheels, whenever possible; -use carts, conveyers or other aids when moving heavy loads; -use jigs, clamps or other fixtures to hold items in place. • work sites: -keep the working area clear of everything that is not in frequent use. • machine safety: -install proper guards to dangerous tools/machines; -use safety devices; -maintain machines properly. • control of hazardous substances: -substitute hazardous chemicals with less hazardous substances; -make sure that all organic solvents, paints, glues, etc., are kept in covered containers; -install or improve local exhaust ventilation; -provide adequate protective goggles, face shields, earplugs, safety footwear, gloves, etc.; -instruct and train workers; -make sure that workers wash their hands before eating and drinking, and change their clothes before going home. • lighting: -make sure that lighting is adequate. • social and sanitary facilities: -provide a supply of cool, safe drinking water; -have the sanitary facilities cleaned regularly; -provide a hygienic place for meals; -provide storage for clothing or other belongings; -provide first aid equipment and train a qualified first-aider. 
• premises: -increase natural ventilation by having more roof and wall openings, windows or open doorways; -move sources of heat, noise, fumes, arc welding, etc., out of the workshop, or install exhaust ventilation, noise barriers, or other solutions; -provide fire extinguishers and train the workers to use them; -clear passageways, provide signs and markings. • work organization: -keep the workers alert and reduce fatigue through frequent changes in tasks, opportunities to change work postures, short breaks, etc.; -have buffer stocks of materials to keep work flow constant; -use quality circles to improve productivity and quality.
risk: combination of the probability of an event and its consequences. the term "risk" is generally used only when there is at least a possibility of negative consequences. in some situations, risk arises from the possibility of deviation from the expected outcome or event.
consequence: outcome of an event or a situation, expressed in quality and in quantity. it may result in a loss or an injury, or may be linked to one. the result can be a disadvantage or a gain; in this case the event or the situation is the source. in connection with every analysis it has to be checked whether the cause is given empirically, or follows a set pattern, and whether there is scientific agreement regarding these circumstances. note: there can be more than one consequence from one event. note: consequences can range from positive to negative; from the viewpoint of safety, the consequences are always negative.
probability: extent to which an event is likely to occur. note: iso - : gives the mathematical definition of probability as "a real number in the interval 0 to 1 attached to a random event. it can be related to a long-run relative frequency of occurrence or to a degree of belief that an event will occur. for a high degree of belief the probability is near 1". note: frequency rather than probability may be used in describing risk. degrees of belief about probability can be expressed as classes or ranks such as rare/unlikely/moderate/likely/almost certain, or incredible/improbable/remote/occasional/probable/frequent. remark: informal language often confuses frequency and probability; this can lead to wrong conclusions in safety technology. probability is the degree of coincidence of the time frequency of coincidental realization of a fact from a certain possibility. coincidence is an event which basically can happen and may be cause-related, but does not occur necessarily or according to a set pattern; it may also not occur (a yes-or-no alternative). data on the probability of occurrence, for specific kinds of occurrence and weights of consequences, can be: in a statistical sense empirical, retrospective, real; in a prognostic sense speculative, prospective, probabilistic.
event: occurrence of a particular set of circumstances regarding place and time. an event can be the source of certain consequences (empirically to be expected with a certain regularity). the event can be certain or uncertain. the event can be a single occurrence or a series of occurrences. the probability associated with the event can be estimated for a given period of time.
risk criteria: terms of reference by which the significance of risk is assessed. note: risk criteria can include associated costs and benefits, legal and statutory requirements, socio-economic and environmental aspects, the concerns of stakeholders, priorities and other inputs to the assessment.
risk perception: the way in which a stakeholder views a risk, based on a set of values or concerns.
note: risk perception depends on the stakeholder's needs, issues and knowledge. note: risk perception can differ from objective data.
risk communication: exchange or sharing of information about risk between the decision-makers and other stakeholders.
risk assessment: overall process of risk analysis and risk evaluation.
risk analysis: systematic use of information to identify sources and to estimate the risk. note: risk analysis provides a basis for risk evaluation, risk treatment, and risk acceptance. note: information can include historical data, theoretical analyses, informal opinions, and the concerns of stakeholders.
risk estimation: process used to assign values to the probability and consequences of a risk. note: risk estimation can consider cost, benefits, the concerns of stakeholders, and other variables, as appropriate for risk evaluation.
risk evaluation: process of comparing the estimated risk against given risk criteria to determine the significance of the risk.
risk treatment: process of selection and implementation of measures to modify risk. note: risk treatment measures can include avoiding, optimizing, transferring or retaining risk.
risk control: actions implementing risk management decisions. note: risk control may involve monitoring, re-evaluation, and compliance with decisions.
risk optimization: process, related to a risk, to minimize the negative and to maximize the positive consequences (and their respective probabilities).
risk reduction: actions taken to lessen the probability, negative consequences, or both, associated with a risk.
risk mitigation: limitation of any negative consequences of a particular event.
risk avoidance: decision not to become involved in, or action to withdraw from, a risk situation.
risk transfer: sharing with another party the burden of loss or benefit of gain, for a risk. note: legal or statutory requirements can limit, prohibit or mandate the transfer of a certain risk. note: risk transfer can be carried out through insurance or other agreements. note: risk transfer can create new risks or modify existing ones. note: relocation of the source is not risk transfer.
risk retention: acceptance of the burden of loss, or benefit of gain, from a particular risk. note: risk retention includes the acceptance of risks that have not been identified. note: risk retention does not include means involving insurance, or transfer in other ways.
risk management: includes risk assessment, risk treatment, risk acceptance and risk communication. risk assessment is risk analysis, with identification of sources and risk estimation, and risk evaluation. risk treatment includes avoiding, optimizing, transferring and retaining risk. → risk acceptance, → risk communication.
harm: physical injury or damage to the health of people or damage to property or the environment [iso/iec guide ]. note: harm includes any disadvantage which is causally related to the infringement of the object of legal protection brought about by the harmful event. note: in the individual safety-relevant definitions, harm to people, property and the environment may be included separately, in combination, or it may be excluded; this has to be stated in the respective scope.
hazard: potential source of harm [iso/iec guide ]. the term "hazard" can be supplemented to define its origin or the nature of the possible harm, e.g., hazard of electric shock, crushing, cutting, dangerous substances, fire, drowning. in everyday informal language, there is insufficient differentiation between source of harm, hazardous situation, hazardous event and risk.
hazardous situation: circumstance in which people, property or the environment are exposed to one or more hazards [iso/iec guide ]. note: the circumstance can last for a shorter or longer period of time.
hazardous event: event that can cause harm [din en ]. the hazardous event can be preceded by a latent hazardous situation or by a critical event.
risk: combination of the probability of occurrence of harm and the severity of that harm [iso/iec guide ]. note: in many cases, only a uniform extent of harm (e.g. leading to death) is taken into account, or the occurrence of harm may be independent of the extent of harm, as in a lottery game. in these cases it is easier to make a probability statement; risk assessment by risk comparison [din en ] thus becomes much simpler. note: risks can be grouped in relation to different variables, e.g. to all people or only those affected by the incident, to different periods of time, or to performance. the probabilistic expectation value of the extent of harm is suitable for combining the two probability variables. note: risks which arise as a consequence of continuous emission, e.g. noise, vibration, pollutants, are affected by the duration and level of exposure of those affected.
tolerable risk: risk which is accepted in a given context based on the current values of society [iso/iec guide ]. the acceptable risk has to be taken into account in this context, too. note: safety-relevant definitions are oriented to the maximum tolerable risk, which is also referred to as the limit risk. note: tolerability is also based on the assumption that the intended use, as well as reasonably predictable misuse, of the products, processes and services is complied with.
safety: freedom from unacceptable risk [iso/iec guide ]. note: safety is indivisible; it cannot be split into classes or levels. note: safety is achieved by risk reduction, so that the residual risk in no case exceeds the maximum tolerable risk.
danger: existence of an unacceptable risk. note: safety and danger exclude one another; a technical product, process or service cannot be safe and dangerous at the same time.
protective measure: means used to reduce risk [iso/iec guide ]. note: protective measures at the product level have priority over protective measures at the workplace level.
preventive measure: means assumed, but not proven, to reduce risk.
residual risk: risk remaining after safety measures have been taken [din en ]. note: residual risk may be related to the use of technical products, processes and services.
risk analysis: systematic use of available information to identify hazards and to estimate their risks [iso/iec guide ]. determination of the connected risk elements of all hazards as a basis for risk assessment.
risk evaluation: decision based on the analysis of whether the tolerable risk has been exceeded [iso/iec guide ].
risk assessment: overall process of risk analysis and risk evaluation [iso/iec guide ].
intended use: use of a product, process or service in accordance with information provided by the supplier [iso/iec guide ]. note: information provided by the supplier also includes descriptions issued for advertising purposes.
reasonably foreseeable misuse: use of a product, process or service in a way not intended by the supplier, but which may result from readily predictable human behaviour [iso/iec guide ].
safety-related provision: safety-related formulation of contents of a normative document in the form of a declaration, instructions, recommendations or requirements [compare en , safety related]. the information set down in technical rules is normally restricted to certain technical relations and situations; in this context, it is presumed that the general safety-relevant principles are followed.
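the note on risk above mentions the probabilistic expectation value of the extent of harm as a way of combining the probability of occurrence with the extent of harm. the following python sketch shows the idea with an invented harm scale, invented scenario probabilities and a hypothetical maximum tolerable value (limit risk).

scenarios = [
    # (probability of occurrence per year, extent of harm on an assumed scale)
    (1e-2, 1),    # e.g. slight injury
    (1e-4, 10),   # e.g. severe injury
    (1e-6, 100),  # e.g. fatality
]

# expectation value of harm = sum of probability x extent of harm
expected_harm = sum(p * harm for p, harm in scenarios)
print(f"expectation value of harm per year: {expected_harm:.4f}")

# comparison against a hypothetical maximum tolerable value (limit risk)
tolerable = 0.05
print("within tolerable risk" if expected_harm <= tolerable else "risk reduction required")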
a procedure with the aim to reduce risk of a (technical) product, process or service according to the following steps serves to reach the safety goals in the design stage: • safety-related layout; • protective measures; • safety-related information for users. function inevitable to maintain safety. a function which, in case of failure, allows the tolerable risk to be immediately exceeded. depending on the situation, it is possible to use one method or a combination of several methods. intuitive hazard detection spontaneous, uncritical listing of possible hazards as a result of brainstorming by experts. group work which is as creative as possible. writing ideas down (on a flip chart) first, then evaluating them. technical documentation (instructions, requirements) is available for many industrial plant and work processes, describing the hazards and safety measures. this documentation has to be obtained before continuing with the risk analysis. the deviations between the set point and the actual situation of individual components are examined. information on the probability of failure of these elements may be found in technical literature. examining the safety aspects in unusual situations (emergency, repair, starting and stopping) when plans are made to modify the plant or processes. a systematic check of the processes and plant parts for effects in normal operation and in case of set point deviations, using selected question words (and -or -not -too much -too little?). this is used in particular for measurement, control units, programming of computer controls, robots. all possible causes and combinations of causes are identified for an unwanted operating state or an event, and represented in the graphic format of a tree. the probability of occurrence of an event can be estimated from the context. the fault tree analysis can also be used retrospectively to clarify the causes of events. additional methods may be: human reliability analysis a frequency analysis technique which deals with the behaviour of human beings affecting the performance of the system, and estimates the influence of human error on reliability. a hazard identification and frequency analysis technique which can be used at an early stage in the design phase to identify and critically evaluate hazards. operating safety block program a frequency analysis technique which utilizes a model of the system and its redundancies to evaluate the operating safety of the entire system. classifying risks into categories, to establish the main risk groups. all typical hazardous substances and/or possible accident sources which have to be taken into account are listed. the checklist may be used to evaluate the conformity with codes and standards. this method is used to estimate whether coincidental failures of an entire series of different parts or modules within a system are possible and what the probable effects would be. estimate the influence of an event on humans, property or the environment. simplified analytical approaches, as well as complex computer models can be used. a large circle of experts is questioned in several steps; the result of the previous step together with additional information is communicated to all participants. during the third or fourth step the anonymous questioning concentrates on aspects on which no agreement is reached so far. basically this technique is used for making predictions, but is also used for the development of new ideas. this method is particularly efficient due to its limitation to experts. 
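as a small numerical illustration of the fault tree analysis described above, the following python sketch propagates probabilities through AND and OR gates, assuming the basic events are independent; the event names and probabilities are invented for the example.

def p_and(*probs):
    # AND gate: the output event requires all input events (independence assumed)
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    # OR gate: the output event occurs if any input event occurs (independence assumed)
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

p_sensor_fails = 1e-3
p_operator_error = 1e-2
p_valve_sticks = 5e-4

# unwanted top event: overpressure, requiring the safety valve to stick AND
# either a sensor failure OR an operator error to occur first
p_top = p_and(p_valve_sticks, p_or(p_sensor_fails, p_operator_error))
print(f"estimated probability of the unwanted event: {p_top:.2e}")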
a hazard identification and evaluation technique used to establish a ranking of the different system options and to identify the less hazardous options. a frequency analysis technique in which a model of the system is used to evaluate variations of the input conditions and assumptions. a means to estimate and list risk groups; reviews risk pairs and evaluates only one risk pair at a time. overview of data from the past a technique used to identify possible problem areas; can also be used for frequency analysis, based on accident and operation safety data, etc. a method to identify latent risks which can cause unforeseeable incidents. cssr differs from the other methods in that it is not conducted by a team, and can be conducted by a single person. the overview points out essential safety and health requirements related to a machine and simultaneously to all relevant (national, european, international) standards. this information ensures that the design of the machine complies with the issued "state of the art" for that particular type of machine. the "what-if" method is an inductive procedure. the design and operation of the machine in question are examined for fairly simple applications. at every step "what-if" questions are asked and answered to evaluate the effect of a failure of the machine elements or of process faults in view of the hazards caused by the machine. for more complex applications, the "what-if" method is most useful with the aid of a "checklist" and the corresponding work division to allocate specific features of the process to persons who have the greatest experience and practice in evaluating the respective feature. the operator's behaviour and professional knowledge are assessed. the suitability of the equipment and design of the machine, its control unit and protective devices are evaluated. the influence of the materials processed is examined, and the operating and maintenance records are checked. the checklist evaluation of the machine generally precedes the more detailed methods described below. fmea is an inductive method for evaluating the frequency and consequences of component failure. when operating procedures or operator errors are investigated, then other methods may be more suitable. fmea can be more time-consuming than the fault tree analysis, because every mode of failure is considered for every component. some failures have a very low probability of occurrence. if these failures are not analyzed in depth this decision should be recorded in the documentation. the method is specified in iec "analysis techniques for system reliability -procedure for failure mode and effects analysis (fmea)". in this inductive method, the test procedures are based on two criteria: technology and complexity of the control system. mainly, the following methods are applicable: • practical tests of the actual circuit and fault simulation on certain components, particularly in suspected areas of performance identified during the theoretical check and analysis. • simulation of control behaviour (e.g. by means of hardware and/or software models). whenever complex safety-related parts of control systems are tested, it may be necessary to divide the system into several functional sub-systems, and to exclusively submit the interface to fault simulation tests. this technique can also be applied to other parts of machinery. mosar is a complete approach in steps. the system to be analyzed (machinery, process, installation, etc.) is examined as a number of sub-systems which interact. 
a table is used to identify hazards and hazardous situations and events. the adequacy of the safety measures is studied with a second table, and a third table is used to look at their interdependency. a study using known tools (e.g. fmea) identifies the possible dangerous failures. this leads to the elaboration of accident scenarios. by consensus, the scenarios are sorted in a severity table. a further table, again by consensus, links the severity with the targets of the safety measures, and specifies the performance levels of the technical and organizational measures. the safety measures are then incorporated into the logic trees and the residual risks are analyzed via an acceptability table defined by consensus.
ilo ( ).
the risks to health at work are numerous and originate from several sources. their origins vary greatly and they cause vast numbers of diseases, injuries and other adverse conditions, such as symptoms of overexertion or overload. traditional occupational health risk factors and their approximate numbers are given in table . . the exposure of workers to hazards or other adverse conditions of work may lead to health problems, manifested in the workers' physical health, physical workload, psychological disturbances or social aspects of life. workers may be exposed to various factors alone or in different types of combinations, which may or may not show interaction. the assessment of interacting risk factors is complex and may lead to substantial differences in the final risk estimates when compared with estimates of solitary factors. examples of interaction between different risk factors in the work environment are given in table . . the who estimate of the total number of occupational diseases among the billion workers of the world is million a year. this is likely to be an underestimate due to the lack of diagnostic services, limited legislative coverage of both workers and diseases, and variation in diagnostic criteria between different parts of the world. the mortality from occupational diseases is substantial, comparable with other major diseases of the world population such as malaria or tuberculosis. the recent ilo estimate discloses . million deaths a year from work-related causes in the world, including deaths from accidents, dangerous substances, and occupational diseases. eighty-five percent ( %) of these deaths take place in developing countries, where diagnostic services, social security for families and compensation to workers are less developed. although the risk is decreasing in the industrialized world, the trend is increasing in the rapidly industrializing and transition countries. a single hazard alone, such as asbestos exposure, is calculated to cause , cancers a year with a fatal outcome in less than two years after diagnosis (takala ). the incidence rates of occupational diseases in well-registered industrialized countries are at the level of - cases/ , active employees/year, i.e. the incidence levels are comparable with major public health problems, such as cardiovascular diseases, respiratory disorders, etc. in the industrialized countries, the rate of morbidity from traditional occupational diseases, such as chemical poisonings, is declining, while musculoskeletal and allergic diseases are on the increase. about biological factors that are hazardous to workers' health have been identified in various work environments.
some of the new diseases recognized are blood-borne infections, such as hepatitis c and hiv, and exotic bacterial or viral infections transmitted through increasing mobility, international travelling and migration of working people. also some hospital infections and, e.g., drug-resistant tuberculosis are being contracted increasingly by health care personnel. in the developing countries the morbidity picture of occupational diseases is much less clear, for several reasons: low recognition rates, rotation and turnover of workers, shorter life expectancy which hides morbidity with a long latency period, and the work-relatedness of several common epidemic diseases, such as malaria and hiv/aids (rantanen ). the estimation of so-called work-related diseases is even more difficult than that of occupational diseases. they may be about -fold more prevalent than the definite occupational diseases. several studies suggest that work-related allergies, musculoskeletal disorders and stress disorders are showing a growing trend at the moment. the prevention of work-related diseases is important in view of maintaining work ability and reducing economic loss from absenteeism and premature retirement. the proportion of work-relatedness out of the total morbidity figures has been estimated and found surprisingly high (nurminen and karjalainen , who ) (see table . ). the public health impact of work-related diseases is great, due to their high prevalence in the population. musculoskeletal disorders are among the three most common chronic diseases in every country, which implies that the attribution of work is very high. similarly, cardiovascular diseases in most industrialized countries contribute to % of the total mortality. even a small attributable fraction of work-relatedness implies high rates of morbidity and mortality related to work. the concept of disease is in general not a simple one. when discussing morbidity one has to recognize three different concepts: 1. illness = an individual's perception of a health problem resulting from either external or internal causes. 2. disease = an adverse health condition diagnosed by a doctor or other health professional. 3. sickness = a socially recognized disease which is related to, for example, social security actions or prescription of sick leave, etc. when dealing with occupational and work-related morbidity, one may need to consider any of the above three aspects of morbidity. a recognized occupational disease, however, belongs to group 3, i.e. it is a sickness defined by legal criteria. medical evidence is required to show that the condition meets the criteria of an occupational disease before recognition can be made. there are dozens of definitions for occupational disease. the content of the concept varies, depending on the context: a) the medical concept of occupational disease is based on a biomedical or other health-related etiological relationship between work and health, and is used in occupational health practice and clinical occupational medicine. b) the legal concept of occupational disease defines the disease or conditions which are legally recognized as conditions caused by work, and which lead to liabilities for recognition, compensation and often also prevention. the legal concept of occupational disease has a different background in different countries, often declared in the form of an official list of occupational diseases.
there is a universal discrepancy between the legal and the medical concept, so that in nearly all countries the official list of recognized occupational diseases is shorter than the medically established list. this automatically implies that a substantial proportion of medically established occupational diseases remain unrecognized, unregistered, and consequently also uncompensated. the definition of occupational disease, as used in this chapter, summarizes various statements generated during the history of occupational medicine: an occupational disease is any disease contracted as a result of exposures at work or other conditions of work. the general criteria for the diagnosis and recognition of an occupational disease are derived from the core statements of various definitions: 1. evidence on exposure(s) or condition(s) in work or the work environment which, on the basis of scientific knowledge, is (are) able to generate disease or some other adverse health condition. 2. evidence of symptoms and clinical findings which, on the basis of scientific knowledge, can be associated with the exposure(s) or condition(s) in concern. 3. exclusion of non-occupational factors or conditions as a main cause of the disease or adverse health condition. point 3 often creates problems, as several occupationally generated clinical conditions can also be caused by non-occupational factors. on the other hand, several factors from different sources and environments are involved in virtually every disease. therefore the wordings "main cause" or "principal cause" are used. the practical solution in many countries is that the attribution of work needs to be more than %. usually the necessary generalizable scientific evidence is obtained from epidemiological studies, but other types of evidence, e.g. well documented clinical experience combined with information on working conditions, may also be acceptable. in some countries, like finland, any disease of the worker which meets the above criteria can be recognized as an occupational disease. in most other countries, however, there are official lists of occupational diseases which determine the conditions and criteria on which the disease is considered to be of occupational origin. in who launched a new concept: work-related disease (who ). the concept is wider than that of an occupational disease. it includes: a) diseases in which the work or working conditions constitute the principal causal factor. b) diseases for which the occupational factor may be one of several causal agents, or the occupational factor may trigger, aggravate or worsen the disease. c) diseases for which the risk may be increased by work or work-determined lifestyles. the diseases in category (a) are typically recognized as legally determined occupational diseases. categories (b) and (c) are important regarding the morbidity of working populations, and they are often considered as important targets for prevention. in general, categories (b) and (c) cover greater numbers of people, as the diseases in question are often common noncommunicable diseases of the population, such as cardiovascular diseases, musculoskeletal disorders, and allergies and, to a growing extent, stress-related disorders (see table . ). the concept of work-related disease is very important from the viewpoint of occupational health risk assessment and the use of its results for preventive purposes and for promoting health and safety at work.
this is because preventive actions in occupational health practice cannot be limited only to legally recognized morbidity. the lists of occupational diseases contain great numbers of agents that show evidence on occupational morbidity. according to the ilo recommendation r ( ): list of occupational diseases, the occupational diseases are divided into four main categories: . diseases resulting from single causes following the categories listed in table . . the most common categories are physical factors, chemical agents, biological factors and physical work, including repetitive tasks, poor ergonomic conditions, and static and dynamic work. . diseases of the various organs: respiratory system, nervous system, sensory organs, internal organs, particularly liver and kidneys, musculoskeletal system, and the skin. . occupational cancers. . diseases caused by other conditions of work. research on risk perception shows differences in how different types of risks are viewed. instant, visible, dramatic risk events, particularly ones that cause numerous fatalities or severe visible injuries in a single event generally arouse much attention, and are given high priority. on the other hand, even great numbers of smaller events, such as fatal accidents of single workers, arouse less attention in both the media and among regulators, even though the total number of single fatal accidents in a year may exceed the number of fatalities in major events by several orders of magnitude. occupational diseases, with the exception of a few acute cases, are silent, develop slowly, and concern only one or a few individuals at a time. furthermore, the diseases take months or years to develop, in extreme cases even decades, after the exposure or as a consequence of accumulation of exposure during several years. as occupational health problems are difficult to detect and seriously under-diagnosed and under-reported, they tend to be given less priority than accidents. the perception of occupational disease risk remains low in spite of their severity and relatively high incidence. particularly in industrialized countries, the extent of occupational health problems is substantially greater than that of occupational accidents. on a global scale, the estimated number of fatalities due to occupational accidents is , and the respective estimate for fatalities due to work-related diseases is . million a year, giving a fatal accident/fatal disease ratio of to . the corresponding ratio in the eu- is to (takala ). the risk distribution of ods is principally determined by the nature of the work in question and the characteristics of the work environment. there is great variation in the risk of ods between the lowest and highest risk occupations. in the finnish workforce, the risk between the highest risk and the lowest risk occupations varies by a factor of . the highest risk occupations carry a risk which is - times higher than the average for all occupations. the risk of an occupational disease can be estimated on the basis of epidemiological studies, if they do exist in the case of the condition in question. on the other hand, various types of economic activity, work and occupations carry different types of risks, and each activity may have its own risk profile. by examining the available epidemiological evidence, we can recognize high-risk occupations and characterize the typical risks connected with them ( figure . , table . ). 
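the variation in occupational disease risk between occupations, described above as a ratio between the highest-risk and the lowest-risk occupations, can be illustrated with incidence rates per person-years. the occupations, case counts and follow-up times in this python sketch are invented for illustration only.

cases_and_person_years = {
    # occupation: (new cases observed, person-years of follow-up)
    "bakers": (45, 120_000),
    "office clerks": (3, 300_000),
    "metal workers": (30, 150_000),
}

rates = {
    occupation: cases / person_years * 10_000   # cases per 10,000 person-years
    for occupation, (cases, person_years) in cases_and_person_years.items()
}

for occupation, rate in sorted(rates.items(), key=lambda item: item[1], reverse=True):
    print(f"{occupation:15s} {rate:5.1f} cases per 10,000 person-years")

# ratio between the highest-risk and the lowest-risk occupation
print(f"highest-to-lowest risk ratio: {max(rates.values()) / min(rates.values()):.0f}")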
as an example, occupational asthma, dermatosis and musculoskeletal disorders are common risks in several occupations, but not in all. there may be huge differences in risks between different occupations. the occupations carrying the highest risk for occupational asthma, occupational skin diseases and work-related tenosynovitis, in - , are shown in table . . assessment of the risk of occupational diseases has an impact on research priorities. table . shows the priorities for research in four countries. the similarity of the priorities is striking, revealing that the problems related to the risks of occupational diseases are universal. the diagnosis of occupational diseases is important for the treatment of the disease, and for prevention, registration and compensation. the diagnosis is based on information obtained from: a) data on the work and the work environment, usually provided by the employer, occupational health services, the occupational safety committee, or expert bodies carrying out hygienic and other services for the workplace. b) information on the health examination of individual workers. the authorities in many countries have stipulated legal obligations for high-risk sectors to follow up the workers' health and promote early detection of changes in their health. occupational health services keep records on examinations. c) workers with special symptoms (for example, asthmatic reactions), who are taken into the diagnostic process as early as possible. epidemiological evidence is a critical prerequisite for recognizing a causal relationship between work and disease. epidemiology is dependent on three basic sources of information on work and the work environment: (a) exposure assessment, which helps to define the "dose" of a risk factor at work, (b) the outcome assumed to occur as a biological (or psychological) response to the exposures involved, and (c) time, which has a complex role in various aspects of epidemiology. all these sources are affected by the current dynamics of work life, which have a major impact on epidemiological research and its results. exposure assessment is the critical initial step in risk assessment. as discussed in this chapter, accurate exposure assessment will become more difficult and cumbersome than before, in spite of remarkable achievements in measurement, analysis and monitoring methods in occupational hygiene, toxicology and ergonomics. great variations in working hours and individualization of exposures, growing fragmentation and mobility increase the uncertainties, which are multiplied: structural uncertainty, measurement uncertainty, modelling uncertainty, input data uncertainty and natural uncertainty amplify each other. as a rule, variation in any direction in exposure assessment tends to lead to underestimation of risk, and this has severe consequences for health. personal monitoring of exposures, considering variations in individual doses, and monitoring internal doses using biological monitoring methods help in the control of such variation. in the past, a monofactorial exposure situation was ideal for assessment because of its manageability; such an exposure usually occurs as a constant determinant for long periods of time and can be regularly and continuously measured and monitored.
this is very seldom the case today, and exposure assessment in modern work life is affected by discontinuities of the enterprise, of technologies and production methods, and turnover of the workforce, as well as the growing mobility and internationalization of both work and workers. company files that were earlier an important source of exposure and health data no longer necessarily fulfil that function. in addition, the standard -h time-weighted average for exposure assessment can no longer be taken as a standard, as working hours are becoming extremely heterogeneous. assessment of accurate exposure is thus more and more complex and cumbersome, and new strategies and methods for the quantification of exposure are needed. three challenges in particular can be recognized: a) the challenge arising from numerous discontinuities, fragmentation and changes in the company, employment and technology. although in the past company data were collected from all sources that were available, collective workroom measurements were the most valuable source of data. due to the high mobility of workers and variation in the work tasks, personal exposure monitoring is needed that follows the worker wherever he or she works. special smart cards for recording all personal exposures over years have been proposed, but so far no system-wide action has been possible. in radiation protection, however, such a personal monitoring system has long been a routine procedure. b) the complex nature of exposures where dozens of different factors may be involved (such as those in indoor air problems) and acting in combinations. table . gives a list of exposing factors in modern work life, many of which are difficult to monitor. c) new, rapidly spreading and often unexpected exposures that are not well characterized. often their mechanisms of action are not known, or the fast spread of problems calls for urgent action, as in the case of bovine spongiform encephalopathy (bse) in the s, sars outbreak in , and in the new epidemics of psychological stress or musculoskeletal disorders in modern manufacturing. the causes of occupational diseases are grouped into several categories by the type of factor (see table . ). a typical grouping is the one used in ilo recommendation no. . the lists of occupational diseases contain diseases caused by one single factor only, but also diseases which may have been caused by multifactorial exposures. exposure assessment is a crucial step in the overall risk assessment. the growing complexity of exposure situations has led to the development of new methods for assessing such complex exposure situations. these methods are based on construction of model matrices for jobs which have been studied thoroughly for their typical exposures. the exposure profiles are illustrated in job exposure matrices (jem) which are available for dozens of occupations (heikkilä et al. , guo . several factors can cause occupational diseases. the jem is a tool used to convert information on job titles into information on occupational risk factors. jem-based analysis is economical, systematic, and often the only reasonable choice in large retrospective studies in which exposure assessment at the individual level is not feasible. but the matrices can also be used in the practical work for getting information on typical exposure profiles of various jobs. the finnish national job-exposure matrix (finjem) is the first and so far the only general jem that is able to give a quantitative estimation of cumulative exposure. 
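the job-exposure matrix idea just described can be sketched in a few lines of python. this is not the finjem data structure or interface, only an illustration of the principle: job titles are mapped to the prevalence and mean level of exposure per agent, and a cumulative exposure estimate is derived from an employment history. all values are invented.

JEM = {
    # job title: {agent: (proportion of workers exposed, mean exposure level in arbitrary units)}
    "welder": {"welding fumes": (0.9, 2.5), "noise": (0.8, 88.0)},
    "carpenter": {"wood dust": (0.7, 1.1), "noise": (0.6, 85.0)},
}

def estimated_exposure(job_title, agent):
    # expected exposure level for a job title = prevalence x mean level
    prevalence, level = JEM.get(job_title, {}).get(agent, (0.0, 0.0))
    return prevalence * level

def cumulative_exposure(history, agent):
    # history: list of (job title, years in the job); returns exposure-years
    return sum(estimated_exposure(job, agent) * years for job, years in history)

history = [("carpenter", 5), ("welder", 12)]
print(f"welding fume exposure-years: {cumulative_exposure(history, 'welding fumes'):.1f}")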
for example, finjem estimates were used for exposure profiling on chemical exposures and several other cancer-risk factors for occupational categories. the jem analysis has been further developed into task-specific exposure matrices charting the exposure panorama of various tasks (benke et al. ). as the previous mono-causal, mono-mechanism, mono-outcome setting has shifted in the direction of multicausality, multiple mechanisms and multiple outcomes, the assessment of risks has become more complex. some outcomes, as mentioned above, are difficult to define and measure with objective methods, and some of them may be difficult for the exposed groups themselves, or even for experts and researchers, to recognize. for example, the objective measurement of stress reactions is still imprecise in spite of improvements in the analysis of some indicator hormones, such as adrenalin, noradrenalin, cortisol and prolactin, or in physical measurements, such as galvanic skin resistance and heart rate variability. questionnaires monitoring perceived stress symptoms are still the most common method for measuring stress outcomes. thanks to well-organized registries, particularly in germany and the nordic countries, data on many of the relevant outcomes of exposure, such as cancer, pneumoconiosis, reproductive health disturbances and cardiovascular diseases, can be accumulated, and long-term follow-up of outcomes at the group level is therefore possible. on the other hand, several common diseases, such as cardiovascular diseases, may have a work-related aetiology which is difficult to show at the individual level. the long-term data show that, due to changes in the structure of economies, types of employment, occupational structures and conditions of work, many of the traditional occupational diseases, such as pneumoconiosis and acute intoxications, have almost disappeared. several new outcomes have appeared, however, such as symptoms of physical or psychological overload, psychological stress, problems of adapting to a high pace of work, and uncertainty related to rapid organizational changes and the risk of unemployment. in addition, age-related and work-related diseases among the ageing workforce are on the increase (kivimäki et al. , ilmarinen ). these new outcomes may have a somatic, psychosomatic or psychosocial phenotype, and they often appear in the form of symptoms or groups of symptoms instead of well-defined diagnoses. practising physicians or clinics are not able to set an icd (international statistical classification of diseases and related health problems)-coded diagnosis for them. in spite of their diffuse nature, they are still problems for both the worker and the enterprise, and their consequences may be seen as sickness absenteeism, premature retirement, loss of job satisfaction, or lowered productivity. thus, they may have an even greater impact on the quality of work life and the economy than on clinical health. many such outcomes have been investigated using questionnaire surveys, either among representative samples of the whole workforce or by focusing the survey on a specific sector or occupational group. the combination of data from surveys of "exposing factors", such as organizational changes, with questionnaire surveys of "outcomes", such as sickness absenteeism, provides epidemiological information on the association between the new exposures and the new outcomes.
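the association between a surveyed exposure and an outcome of this kind is commonly summarized as a relative risk, and the share of cases attributable to work can then be estimated with the standard attributable-fraction formula. the cohort counts in the following python sketch are invented for illustration.

# invented cohort counts: outcome cases among exposed and unexposed workers
exposed_cases, exposed_total = 40, 1_000
unexposed_cases, unexposed_total = 10, 1_000

risk_exposed = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
relative_risk = risk_exposed / risk_unexposed

# attributable fraction among the exposed: (RR - 1) / RR
attributable_fraction = (relative_risk - 1.0) / relative_risk

print(f"relative risk: {relative_risk:.1f}")
print(f"attributable fraction among the exposed: {attributable_fraction:.0%}")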
there are, however, major problems in both the accurate measurement of the exposures and outcomes, and also, the information available on the mechanisms of action is very scarce. epidemiology has expanded the focus of our observations from crosssectional descriptions to longitudinal perspectives, by focussing attention on the occurrence of diseases and finding associations between exposure and morbidity. such an extension of vision is both horizontal and vertical, looking at the causes of diseases. time is not only a temporal parameter in epidemiology, but has also been used for the quantification of exposure, measurement of latencies, and the detection of acceleration or slowing of the course of biological processes. as the time dimension in epidemiology is very important, the changes in temporal parameters of the new work life also affect the methods of epidemiological research. the time dimension is affected in several ways. first, the fragmentation and discontinuities of employment contracts, as described above, break the accumulation of exposure time into smaller fragments, and continuities are thus difficult to maintain. collecting data on cumulative exposures over time becomes more difficult. the time needed for exposure factors to cause an effect becomes more complex, as the discontinuities typical to modern work life allow time for biological repair and elimination processes, thus diluting the risk which would get manifested from continuous exposure. the dosage patterns become more pulse-type, rather than being continuous, stable level exposures. this may affect the multi-staged mechanisms of action in several biological processes. the breaking up of time also increases the likelihood of memory bias of respondents in questionnaire studies among exposed workers, and thus affects the estimation of total exposures. probably the most intensive effect, however, will be seen as a consequence of the variation in working hours. for example, instead of regular work of hours per day, hours per week and months per year, new time schedules and total time budgets are introduced for the majority of workers in the industrial society. the present distribution of weekly working hours in finland is less than hours per week for one third of workers, regular - hours per week for one third, and - hours per week for the remaining third. thus the real exposure times may vary substantially even among workers in the same jobs and same occupations, depending on the working hours and the employment contract (temporary, seasonal, part-time, full-time) (härmä , piirainen et al. . such variation in time distribution in "new work life" has numerous consequences for epidemiological studies, which in the past "industrial society" effectively utilized the constant time patterns at work for the assessment of exposures and outcomes and their interdependencies. the time dimension also has new structural aspects. as biological processes are highly deterministic in terms of time, the rapid changes in work life cannot wait for the maturation of results in longitudinal follow-up studies. the data are needed rapidly in order to be useful in the management of working conditions. this calls for the development of rapid epidemiological methods which enable rapid collection of the data and the making of analyses in a very short time, in order to provide information on the effects of potential causal factors before the emergence of a new change. 
often these methods imply compromising accuracy and reliability for the benefit of timeliness and actuality. as occupational epidemiology is not only interested in acute and short-term events, but looks at the health of workers over a - -year perspective, the introduction of such new quick methods should not jeopardize the interest and efforts to carry out long-term studies. epidemiology has traditionally been a key tool in making a reliable risk assessment of the likelihood of adverse outcomes from certain levels of exposure. the new developments in work life bring numerous new challenges to risk assessment. as discussed above, the new developments in work life have eliminated a number of possibilities for risk assessment which prevailed in the stable industrial society. on the other hand, several new methods and new information technologies provide new opportunities for the collection and analysis of data. traditionally, the relationship between exposure and outcome has been judged on the basis of the classical criteria set by hill ( ). höfler ( ) crystallizes the criteria and their explanations as follows: 1. strength of association: a strong association is more likely to have a causal component than is a modest association. 2. consistency: a relationship is observed repeatedly. 3. specificity: a factor influences specifically a particular outcome or population. 4. temporality: the factor must precede the outcome it is assumed to affect. 5. biological gradient: the outcome increases with increasing dose of exposure or according to a function predicted by a substantive theory. 6. plausibility: the observed association can be plausibly explained by substantive (e.g. biological) explanations. 7. coherence: a causal conclusion should not fundamentally contradict present substantive knowledge. 8. experiment: causation is more likely if evidence is based on randomized experiments. 9. analogy: for analogous exposures and outcomes an effect has already been shown. the hill criteria have been subjected to scrutiny, and sven hernberg has analyzed them in detail from the viewpoint of occupational health epidemiology. virtually all the hill criteria are affected by the changes in the new work life, and therefore methodological development is now needed. a few comments on causal inference are made here in view of the critiques by rothman ( ), hernberg ( ) and höfler ( ): the strength of association will be more difficult to demonstrate due to the growing fragmentation that tends to diminish sample sizes. the structural change that moves workers from high-level exposures to lower and shorter-term exposures may dilute the strength of the effect, which may still prevail, but at a lower level. consistency of evidence may also be affected by the higher variation in conditions of work, study groups, and the multicultural and multiethnic composition of the workforce, etc. similarly, in the multifactorial, multi-mechanism, multi-outcome setting, the specificity criterion is not always relevant. the temporal dimension has already been discussed: in rapidly changing work life the follow-up times before the next change and before turnover in the workforce may be too short. the outcomes may also be determined by exposures that took place long ago but have not been considered in the study design because historical data are not available. it may be possible to demonstrate the biological gradient in a relatively simple exposure-outcome relationship.
however, the more complex and multifactorial the setting becomes, the more difficult it may be to show the dose-response relationship. the dose-response relationship may also be difficult to demonstrate in the case of relatively ill-defined outcomes which are difficult to measure, but which can be detected as qualitative changes. biological plausibility is an important criterion which, in a multi-mechanism setting, may at least in part be difficult to demonstrate. on the other hand, the mechanisms of numerous psychological and psychosocial outcomes lack explanations, even though they are undoubtedly work-related. the missing knowledge of the mechanism of action did not prevent the establishment of causality between asbestos and cancer in the pleural sac or the lung. as many of the new outcomes may be context-dependent, the coherence criterion may be irrelevant. similarly, many of the psychosocial outcomes are difficult to put into an experimental setting, and it can be difficult to make inferences based on analogy. all of the foregoing implies that the new dynamic trends in work life challenge epidemiology in a new way, particularly in the establishment of causality. knowledge of causality is required for the prevention and management of problems. the hill criteria nevertheless need to be supplemented with new ones to meet the conditions of the new work life. similarly, more definitive and specific criteria and indicators need to be developed for the new exposures and outcomes. many of the challenges faced in the struggle to improve health and safety in modern work life can only be solved with the help of research. research on occupational health in the rapidly changing work life is needed more than ever. epidemiology is, and will remain, a key producer of information needed for prevention policies and for ensuring healthy and safe working conditions. the role of epidemiology is, however, expanding from the analysis of the occurrence of well-defined clinical diseases to studies on the occurrence of several other types of exposure and outcome, and their increasingly complex associations. as the baseline in modern work life is shifting in a more dynamic direction, and many parameters in work and the workers' situation are becoming more fragmented, discontinuous and complex, new approaches are needed to tackle the uncertainties in exposure assessment. the rapid pace of change in work life calls for the development of assessment methods that provide up-to-date data quickly, so that they can be used to manage these changes and their consequences. many new outcomes which cannot be registered as clinical icd diagnoses constitute problems for today's work life. this is particularly true in the case of psychological, psychosocial and many musculoskeletal outcomes which need to be managed by occupational health physicians. methods for the identification and measurement of such outcomes need to be improved. the traditional hill criteria for causal inference are not always met even in cases where a true association does exist. new criteria suitable for the new situation should be established without jeopardizing the original objective of ascertaining the true association. developing bayesian inference further, through the utilization of a priori knowledge and a holistic approach, may provide responses to the new challenges. new neural network software may help in the management of the growing complexity.
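the reference above to developing bayesian inference through the utilization of a priori knowledge can be illustrated with a minimal python sketch of bayesian updating: a prior belief that an observed association is causal is updated with new study evidence. the prior and the likelihoods are invented for illustration and carry no substantive meaning.

def bayes_update(prior, p_data_if_causal, p_data_if_not):
    # posterior probability that the association is causal, given the observed data
    numerator = p_data_if_causal * prior
    return numerator / (numerator + p_data_if_not * (1.0 - prior))

posterior = bayes_update(
    prior=0.3,              # a priori belief that the association is causal
    p_data_if_causal=0.8,   # probability of the observed positive finding if causal
    p_data_if_not=0.2,      # probability of the same finding through bias or chance alone
)
print(f"posterior probability of a causal association: {posterior:.2f}")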
the glory of science does not lie in the perfection of a scientific method but rather in the recognition of its limitations. we must keep in mind the old saying: "absence of evidence is not evidence of absence". instead, it is merely a consequence of our ignorance that should be reduced through further efforts in systematic research, and particularly through epidemiology. and secondly, the ultimate value of occupational health research will be determined on the basis of its impact on practice in the improvement of the working conditions, safety and health of working people. changing conditions of work, new technologies, new substances, new work organizations and working practices are associated with new morbidity patterns and even with new occupational and work-related diseases. the new risk factors, such as rapidly transforming microbials and certain social and behavioural "exposures" may follow totally new dynamics when compared with the traditional industrial exposures (self-replicating nature of microbials and spreading of certain behaviours, such as terrorism) (smolinski et al. , loza . several social conditions, such as massive rural-urban migration, increased international mobility of working people, new work organizations and mobile work may cause totally new types of morbidity. examples of such development are, among others, the following: • mobile transboundary transportation work leading to the spread of hiv/aids. • increased risk of metabolic syndrome, diabetes and cardiovascular diseases aggravated by unconventional working hours. • increased risk of psychological burnout in jobs with a high level of longterm stress. • virtually a global epidemic of musculoskeletal disorders among vdu workers with high work load, psychological stress and poor ergonomics. the incidences of occupational diseases may not decline in the future, but the type of morbidity may change. the direction of trend in industrialized countries is the prominence of work-related morbidity and new diseases, while the traditional occupational diseases such as noise injury, pneumoconiosis, repetitive strain and chemical intoxications may continue to be prevalent in developing countries for long periods in the future. the new ergonomics problems are related to light physical work with a considerable proportion of static and repetitive workload. recent research points to an interesting interaction between unergonomic working conditions and psychological stress, leading to a combined risk of musculoskeletal disorders of the neck, shoulders and upper arms, including carpal tunnel syndrome in the wrist. the muscle tension in static work is amplified by the uncontrolled muscular tension caused by psychological stress. furthermore, there seems to be wide inter-individual variation in the tendency to respond with spasm, particularly in the trapezius muscle of neck, under psychological stress. about % of the health complaints of working-aged people are related to musculoskeletal disorders, of which a substantial part is work-related. the epidemics have been resistant against preventive measures. new regulatory and management strategies may be needed for effective prevention and control measures (westgaard et al. , paoli and merllié ) . the st century will be the era of the brain at work and consequently of psychological stress. between % and % of eu workers in certain occupations report psychological stress due to high time pressure at work (parent-thirion et al. ). 
the occurrence of work-related stress is most prevalent in occupations with tight deadlines, pressure from clients, or the high level of responsibility for productivity and quality given to the workers. undoubtedly, the threat of unemployment increases the perception of stress as well. as a consequence, for example, in finland some % of workers report symptoms of psychological overload and about % show clinical signs of burn out. these are not the problems of low-paid manual workers only, but also, for example, highly educated and well-paid computer super-experts have an elevated risk of burnout as a consequence of often self-committed workload (kalimo and toppinen ) . unconventional and ever longer working hours are causing similar problems. for example, one third of finns work over hours a week, and of these % work over hours, and % often work - hours per week. it is important to have flexibility in the work time schedules, but it is counterproductive if the biologically determined physiological time rhythms of the worker are seriously offended. over % have a sleep deficit of at least one hour each day, and % are tired and somnolent at work (härmä et al. ) . the toughening global competition, growing productivity demands and continuous changes of work, together with job insecurity, are associated with increased stress. up to - % of workers in different countries and different sectors of the economy report high time pressure and tight deadlines. this prevents them from doing their job as well as they would like to, and causes psychological stress. psychological stress is particularly likely to occur if the high demands are associated with a low degree of self-regulation by the workers (houtman ) . stress, if continuous, has been found to be detrimental to physical health (cardiovascular diseases), mental health (psychological burnout), safety (accident risks), and musculoskeletal disorders (particularly hand-arm and shoulder-neck disorders). it also has a negative impact on productivity, sickness absenteeism, and the quality of products and services. the resulting economic losses due to sickness absenteeism, work disability and lower quality of products and services are substantial. the prevention of stress consists not only of actions targeted at the individual worker. there is also a need for measures directed at the work organization, moderation of the total workload, competence building and collaboration within the workplace (theorell ). the support from foremen and supervisors is of crucial importance in stress management programmes. another type of psychological burden is the stress arising from the threat of physical violence or aggressive behaviour from the part of clients. in finland some % of workers have been subjected to insults or the threat of physical violence, % have experienced sexual harassment, and % mental violence or bullying at work. the risk is substantially higher for female workers than for men. stress has been found to be associated with somatic health, cardiovascular diseases, mental disorders and depression. one of the new and partly re-emerging challenges of occupational health services is associated with the new trends in microbial hazards. there are several reasons for these developments, for instance, the generation of new microbial strains, structural changes in human habitations with high population densities, growing international travel, and changes possibly in our microbiological environment as a consequence of global warming. 
of the to million species in the world, about million are microbes. the vast majority of them are not pathogenic to man, and we live in harmony and symbiosis with many of them. we also use bacteria in numerous ways to produce food, medicines, proteins, etc. pathogenic bacteria were well controlled in the 20th century; this control had an enormous positive impact on human health, including occupational health. but now the microbial world is challenging us in many ways. new or re-emerging biological hazards are possible due to the transformation of viruses, the increased resistance of some microbial strains (e.g. tuberculosis and some other bacterial agents) and the rapid spread of contaminants through extensive overseas travelling (smolinski et al. ). the scenarios of health hazards from the use of genetically manipulated organisms have not been realized, but biotechnological products have brought along new risks of allergies. a major indoor air problem is caused by fungi, moulds and chemical emissions from contaminated construction materials. new allergies are encountered as a consequence of the increasingly allergic constitution of the population and of the introduction of new allergens into the work environment. health care personnel are increasingly exposed to new microbial hazards due to the growing mobility of people. evidence of high rates of hepatitis b antigen positivity has been shown among health care workers who are in contact with migrants from endemic areas. along with the growing international interactions and mobility, a number of viral and re-emerging bacterial infections also affect the health of people engaged in health care and the care of the elderly, as well as personnel in migrant and refugee services, in social services and other public services. this section applies the general framework for risk governance (chapter ) to the area of environmental risks. why should we include this topic in a book that deals predominantly with occupational health risks and safety issues? there are two major reasons for this decision: (1) most risks that affect the health and safety of human beings also affect the natural environment. it is therefore necessary for risk managers to reflect on the consequences of risk-taking activities with respect to workers, the public and the environment. these risk consequences are all interconnected, and our aim of fostering an integral approach to risk and risk management requires the integration of all of them. (2) environmental risks are characterized by many features and properties that highlight exemplary issues for many generic risk assessment and management questions and challenges. for example, the question of how to balance benefits and risks becomes more accentuated when it is not human life but damage to environmental quality that is at stake. while most people agree that saving human lives takes priority over economic benefits, it remains an open question how much environmental change and potential damage one is willing to trade off against certain economic benefits. this section is divided into two major parts. the first part will introduce the essentials of environmental ethics and the application of ethical principles to judging the acceptability of human interventions into the environment. the second part addresses the procedures for an analytic-deliberative process of decision making when using the risk governance framework developed in chapter .
it should be noted that this section draws from material that the author has compiled for the german scientific council for global environmental change and that has been partially published in german in a special report of the council (wbgu ). the last section on decision making has borrowed material from an unpublished background document on decision making and risk management that dr. warner north and the author had prepared for the us national academy of sciences. should people be allowed to do everything that they are capable of doing? this question is posed in connection with new technologies, such as nanotubes, or with human interventions in nature, such as the clearance of primaeval forests so that the land can be used for agriculture. intuitively everyone answers this question with a definitive "no": by no means should people be allowed to do everything that they are capable of doing. this also applies to everyday actions. many options in daily life, from lying and minor deception to breaking a promise or going behind a friend's back, are obviously actions that are seen by all well-intentioned observers as unacceptable. however, it is much more difficult to assess those actions where the valuation is not so obvious. is it justified to break a promise when keeping the promise could harm many other people? actions where there are conflicts between positive and negative consequences, or where a judgement could be made one way or the other with equally good justification, are especially common in risk management. there is hardly anyone who wilfully and without reason pollutes the environment, releases toxic pollutants or damages the health of individuals. people who pursue their own selfish goals at the cost and risk of others are obviously acting wrongly, and every legislator will sanction this behaviour with the threat of punishment or a penalty. but there is a need for clarification where people bring about a benefit to society with the best intentions and for plausible reasons and, in the process, risk negative impacts on others. in ethics we talk about "conflicting values" here. most decisions involving risks to oneself or others are made for some reason: the actors who make such interventions want to provide goods or services to consumers, for example, to ensure long-term jobs and adequate incomes, to use natural resources for products and services or to use nature for recycling waste materials from production and consumption that are no longer needed. none of this is done for reasons of brotherly love, but to maintain social interests. even improving one's own financial resources is not immoral merely for this reason. the list of human activities that pose risks to others for existential or economic reasons could be continued indefinitely. human existence is bound up with taking opportunities and risks. here are just a few figures: around , years ago about million people lived on the earth. under the production conditions of those days (hunter-gatherer culture) this population level was the limit for the human species within the framework of an economic form that only interfered slightly with man's natural environment. the neolithic revolution brought a dramatic change: the carrying capacity of the world for human beings increased by a factor of and more. this agrarian pre-industrial cultural form was still characterized by a tightly limited carrying capacity; in around the earth was capable of feeding approx. million people.
today the world supports billion people, and this figure is rising. the carrying capacity in comparison to the neolithic age has thus increased thousand-fold and continues to grow in parallel with new changes in production conditions (fritsch ; kesselring ; mohr ). the five "promethean innovations" are behind this tremendous achievement of human culture: mastering fire, using the natural environment for agriculture, transforming fossil fuels into thermal and mechanical energy, industrial production and substituting material with information (renn ). with today's settlement densities and the predominantly industrial way of life, the human race is therefore dependent on the technical remodelling of nature. without doubt, it needs this remodelling for survival, and especially for the well-being of the innumerable people who depend on goods and services that draw down the stock of natural resources. with regard to the responsibility of human interventions in nature, the question therefore cannot be about "whether" but rather about "how much", because it is an anthropological necessity to adapt and shape existing nature to human needs. for example, the philosopher klaus michael meyer-abich sees the situation as follows: ". . . we humans are not there to leave the world as though we had never been there. as with all other life forms, it is also part of our nature and our lives to bring about changes in the world. of course, this does not legitimise the destructive ways of life that we have fallen into. but only when we basically approve of the changes in the world can we turn to the decisive question of which changes are appropriate for human existence and which are not" (meyer-abich ). therefore, to be able to make a sensible judgement about the balance between necessary interventions into the environment and the risks posed by these interventions to human health and environmental quality, the range of products and services created by the consumption of nature has to be considered in relation to the losses that are inflicted on the environment and nature. with this comparison, it can be seen that even serious interventions in nature and the environment did not occur without reflection, but in order to provide the growing number of people with goods and services; these people need them to survive or as a prerequisite for a "good" life. however, at the same time it must be kept in mind that these interventions often inflict irreversible damage on the environment and destroy usage potentials for future generations. above and beyond this, for the human race, nature is a cradle of social, cultural, aesthetic and religious values, the infringement of which, in turn, has a major influence on people's well-being. on both sides of the equation, there are therefore important goods that have to be appreciated when interventions in nature occur. but what form should such an appreciation take? if the pros and cons of an intervention in nature have to be weighed against each other, criteria are needed that can be used as yardsticks. who can and may draw up such criteria, according to which standards should the interventions be assessed, and how can the various options for action be compared with each other for each criterion? taking risks always involves two major components. the first is an assessment of what we can expect from an intervention into the environment (be it the use of resources or the use of environments as a sink for our waste). this is the risk and benefit assessment side of the risk analysis.
secondly, we need to decide whether the assessed consequences are desirable. whereas the estimate of consequences broadly falls in the domain of scientific research and expertise, with uncertainties and ambiguities in particular having to be taken into account (irgc , klinke and renn ) , the question about the foundations for evaluating various options for action and about drawing up standards guiding action is a central function of ethics (taylor ) . ethics can provide an answer to the question posed in the beginning ("should people be allowed to do everything that they are capable of doing?") in a consistent and transparent manner. in section . . , environmental ethics will be briefly introduced. this review is inspired by the need for a pragmatic and policy-oriented approach. it is not a replacement for a comprehensive and theoretically driven compendium of environmental ethics. environmental ethics will then be applied to evaluate environmental assets. in this process, a simple distinction is made between categorical principles -that must under no circumstances be exceeded or violated -and compensatory principles, where compensation with other competing principles is allowed. this distinction consequently leads to a classification of environmental values, which, in turn, can be broken down into criteria to appreciate options for designing environmental policies. in section . . , these ideas of valuation will be taken up and used to translate the value categories into risk handling guidelines. at the heart of the considerations here is the issue of how the aims of ethically founded considerations can be used to support and implement risk-based balancing of costs and benefits. for this purpose, we will develop an integrative risk governance framework. the concept of risk governance comprises a broad picture of risk: not only does it include what has been termed "risk management" or "risk analysis", it also looks at how risk-related decision making unfolds when a range of actors is involved, requiring co-ordination and possibly reconciliation between a profusion of roles, perspectives, goals and activities. indeed, the problem-solving capacities of individual actors, be they government, the scientific community, business players, ngos or civil society as a whole, are limited and often unequal to the major challenges facing society today. then the ideas of the operational implementation of normative and factual valuations are continued and a procedure is described that is capable of integrating ethical, risk-based and work-related criteria into a proposed procedural orientation. this procedure is heavily inspired by decision analysis. answering the question about the right action is the field of practical philosophy, ethics. following the usual view in philosophy, ethics describes the theory of the justification of normative statements, i.e. those that guide action (gethmann , mittelstraß , nida-rümelin a , revermann . a system of normative statements is called "morals". ethical judgements therefore refer to the justifiability of moral instructions for action that may vary from individual to individual and from culture to culture (ott ) . basically, humans are purpose-oriented and self-determined beings who act not only instinctively, but also with foresight, and are subject to the moral standards to carry out only those actions that they can classify as good and justifiable (honnefelder ) . 
obviously, not all people act according to the standards that they themselves see as necessary, but they are capable of doing so. in this context, it is possible for people to act morally because, on the one hand, they are capable of distinguishing between moral and immoral action and, on the other, are largely free to choose between different options for action. whether pursuing a particular instruction for action should be considered as moral or immoral is based on whether the action concerned can be felt and justified to be "reasonable" in a particular situation. standards that cross over situations and that demand universal applicability are referred to as principles here. conflicts may arise between competing standards (in a specific situation), as well as between competing principles, the solution of which, in turn, needs justification (szejnwald-brown et al. ) . providing yardsticks for such justification or examining moral systems with respect to their justifiability is one of the key tasks of practical ethics (gethmann ) . in ethics a distinction is made between descriptive (experienced morality) and prescriptive approaches, i.e. justifiable principles of individual and collective behaviour (frankena , hansen . all descriptive approaches are, generally speaking, a "stock-taking" of actually experienced standards. initially, it is irrelevant whether these standards are justified or not. they gain their normative force solely from the fact that they exist and instigate human action (normative force of actual action). most ethicists agree that no conclusions about general validity can be drawn from the actual existence of standards. this would be a naturalistic fallacy (akademie der wissenschaften , ott ) . nevertheless, experienced morality can be an important indicator of different, equally justifiable moral systems, especially where guidance for cross-cultural behaviour is concerned. this means that the actual behaviour of many people with regard to their natural environment reveals which elements of this environment they value in particular and which they do not. however, in this case, too, the validity of the standards is not derived from their factuality, but merely used as a heurism in order to find an adequate (possibly culture-immanent) justification. but given the variety of cultures and beliefs, how can standards be justified inter-subjectively, i.e. in a way that is equally valid to all? is it not the case that science can only prove or disprove factual statements (and this only to a certain extent), but not normative statements? a brief discourse on the various approaches in ethics is needed to answer this question. first of all, ethics is concerned with two different target aspects: on the one hand, it is concerned with the question of the "success" of one's own "good life", i.e. with the standards and principles that enable a person to have a happy and fulfilled life. this is called eudemonistic ethics. on the other hand, it is concerned with the standards and principles of living together, i.e. with binding regulations that create the conditions for a happy life: the common good. this is called normative ethics (galert , ott . within normative ethics a distinction is made between deontological and teleological approaches when justifying normative statements (höffe ) . deontological approaches are principles and standards of behaviour that apply to the behaviour itself on the basis of an external valuation criterion. 
it is not the consequences of an action that are the yardstick of the valuation; rather, it is adherence to yardsticks that are held up against the action itself. such external yardsticks of valuation are derived from religion, nature, intuition or common sense, depending on the basic philosophical direction. thus, protection of the biosphere can be seen as a divine order to protect creation (rock , schmitz ), as an innate tendency for the emotional attachment of people to an environment with biodiversity (wilson ), as a directly understandable source of inspiration and joy (ehrenfeld ) or as an educational means of practising responsibility and maintaining social stability (gowdy ). by contrast, teleological approaches refer to the consequences of action. here, too, external standards of valuation are needed, since the ethical quality of the consequences of action also has to be evaluated against a yardstick of some kind. in most utilitarian approaches (a subset of the teleological approaches) this yardstick is defined as an increase in individual or social benefit. in other schools of ethics, intuition (can the consequence still be desirable?) or the aspect of reciprocity (the so-called "golden rule": "do as you would be done by") plays a key role. in the approaches based on logical reasoning (especially in kant), the yardstick is derived from the logic of the ability to generalize or universalize. kant himself is in the tradition of deontological approaches ("good will is not good as a result of what it does or achieves, but solely as a result of the intention"). according to kant, every principle that, if followed generally, makes it impossible for a happy life to be conducted is ethically impermissible. in this connection, it is not the desirability of the consequences that concerns kant, but the logical inconsistency that results from the fact that the conditions of the actions of individuals would be undermined if everyone were to act according to the same maxims (höffe ). a number of contemporary ethicists have taken up kant's generalization formula, but do not judge the maxims according to their internal contradictions; rather, they judge them according to the desirability of the consequences to be feared from the generalization (jonas or zimmerli should be mentioned here). these approaches can be seen as a middle course between deontological and teleological forms of justification. in addition to deontological and teleological approaches, there is also the simple solution of consensual ethics, which, however, comprises more than just actually experienced morality. consensual ethics presupposes the explicit agreement of the people involved in an action. everything is allowed provided that all affected (for whatever reason) voluntarily agree. in sexual ethics, a change from deontological ethics to a consensual moral code can currently be observed. the three forms of normative ethics are shown in figure . . the comparison of the basic justification paths for normative moral systems already clearly shows that professional ethicists cannot create any standards or designate any as clearly right, even if they play a role in people's actual lives. rather, it is the prime task of ethics to ensure, on the basis of generally recognized principles (for example, human rights), that all associated standards and behaviour regulations do not contradict each other or a higher-order principle.
above and beyond this, ethics can identify possible solutions when a conflict arises between standards and principles of equal standing. ethics may also reveal interconnections of justification that have proved themselves as examination criteria for moral action in the course of its disciplinary history. finally, many ethicists see their task as providing methods and procedures, primarily of an intellectual nature, by means of which the compatibility or incompatibility of standards within the framework of one or more moral systems can be examined. unlike the law, the wealth of standards of ethics is not bound to codified rules that can be used as a basis for such compatibility examinations. every normative discussion therefore starts with the general issues that are needed in order to allow individuals a "good life" and, at the same time, to give validity to the principles required to regulate a community life built on the common good. but how can generally binding and inter-subjectively valid criteria be established for the valuation of "the common good"? in modern pluralistic societies, it is increasingly difficult for individuals and groups in society to draw up or recognize collectively binding principles that are perceived by all equally as justifiable and as self-obliging (hartwich and wewer , zilleßen ). the variety of lifestyle options and the subjective specification of meaning (individualization) are accompanying features of modernization. with increasing technical and organizational means of shaping the future, the range of behaviour options available to people also expands. with the increasing plurality of lifestyles, group-specific rationalities emerge that create their own worldviews and moral standards, which demand a binding nature and validity only within a social group or subculture. the fewer cross-society guiding principles or behaviour orientations are available, the more difficult is the process of agreement on collectively binding orientations for action. however, these are vital for the maintenance of economic cooperation, for the protection of the natural foundations of life and for the maintenance of cohesion in a society. no society can exist without the binding specification of minimum canons of principles and standards. but how can agreement be reached on such collectively binding principles and standards? what criteria can be used to judge standards? the answers to these questions depend on whether the primary principles, in other words the starting point of all moral systems, or secondary principles or standards, i.e. follow-on standards that can be derived from the primary principles, are subjected to an ethical examination. primary principles can be categorical or compensatory (capable of being compensated). categorical principles are those that must not be infringed under any circumstances, even if other principles would be infringed as a result. the human right to the integrity of life could be named here as an example. compensatory principles are those where temporary or partial infringement is acceptable, provided that as a result the infringement of a principle of equal or higher ranking is avoided or can be avoided. in this way certain freedom rights can be restricted in times of emergency. in the literature on ethical rules, one can find more complex and sophisticated classifications of normative rules.
for our purpose to provide a simple and pragmatic framework, the distinction in four categories (principles and standards; categorical and compensatory) may suffice. this distinction has been developed from a decision-analytical perspective. but how can primary principles be justified as equally valid for all people? although many philosophers have made proposals here, there is a broad consensus today that neither philosophy nor any other human facility is capable of stating binding metacriteria without any doubt and for all people, according to which such primary principles should be derived or examined (mittelstraß ) . a final justification of normative judgements cannot be achieved by logical means either, since all attempts of this kind automatically end either in a logical circle, in an unending regression (vicious cycle) or in a termination of the procedure and none of these alternatives is a satisfactory solution for final justification (albert ). the problem of not being able to derive finally valid principles definitively, however, seems to be less serious than would appear at first glance. because, regardless of whether the basic axioms of moral rules are taken from intuition, observations of nature, religion, tradition reasoning or common sense, they have broadly similar contents. thus, there is broad consensus that each human individual has a right to life, that human freedom is a high-value good and that social justice should be aimed at. but there are obviously many different opinions about what these principles mean in detail and how they should be implemented. in spite of this plurality, however, discerning and well-intentioned observers can usually quickly agree, whether one of the basic principles has clearly been infringed. it is more difficult to decide whether they have clearly been fulfilled or whether the behaviour to be judged should clearly be assigned to one or several principles. since there is no finally binding body in a secular society that can specify primary principles or standards ex cathedra, in this case consensus among equally defendable standards or principles can be used (or pragmatically under certain conditions also majority decisions). ethical considerations are still useful in this case as they allow the test of generalization and the enhancement of awareness raising capabilities. in particular, they help to reveal the implications of such primary principles and standards. provided that primary principles are not concerned (such as human rights), the ethical discussion largely consists of examining the compatibility of each of the available standards and options for action with the primary principles. in this connection, the main concerns are a lack of contradictions (consistency), logical consistency (deductive validity), coherence (agreement with other principles that have been recognized as correct) and other, broadly logical criteria (gethmann ) . as the result of such an examination it is entirely possible to reach completely different conclusions that all correspond to the laws of logic and thus justify new plurality. in order to reach binding statements or valuations here the evaluator can either conduct a discussion in his "mind" and let the arguments for various standards compete with each other (rather like a platonic dialogue) or conduct a real discussion with the people affected by the action. 
in both cases the main concern is to use the consensually agreed primary principles to derive secondary principles of general action and standards of specific action that should be preferred over alternatives that can be equally justified. a plurality of solutions should be expected especially because most of the concrete options for action comprise only a gradual fulfilment and infringement of primary principles and therefore also include conflicting values. for value conflicts at the same level of abstraction there are, by definition, no clear rules for solution. there are therefore frequently conflicts between conserving life through economic development and destroying life through environmental damage. since the principle of conserving life can be used for both options a conflict is unavoidable in this case. to solve the conflicts, ethical considerations, such as the avoidance of extremes, staggering priorities over time or the search for third solutions can help without, however, being able to convincingly solve this conflict in principle to the same degree for all (szejnwald-brown et al. ) . these considerations lead to some important conclusions for the matter of the application of ethical principles to the issue of human action with regard to the natural environment. first of all, it contradicts the way ethics sees itself to develop ethics of its own for different action contexts. just as there can be no different rules for the logic of deduction and induction in nomological science, depending on which object is concerned, it does not make any sense to postulate an independent set of ethics for the environment (galert ) . justifications for principles and moral systems have to satisfy universal validity (nida-rümelin b). furthermore, it is not very helpful to call for a special moral system for the environment since this -like every other moral system -has to be traceable to primary principles. instead, it makes sense to specify the generally valid principles that are also relevant with regard to the issue of how to deal with the natural environment. at the same time standards should be specified that are appropriate to environmental goods and that reflect those principles that are valid beyond their application to the environment. as implied above, it does not make much sense to talk about an independent set of environmental ethics. much rather, general ethics should be transferred to issues relating to the use of the environment (hargrove ) . three areas are usually dealt with within the context of environmental ethics (galert ): • environmental protection, i.e. the avoidance or alleviation of direct or indirect, current or future damage and pollution resulting from anthropogenic emissions, waste or changes to the landscape, including land use, as well as the long-term securing of the natural foundations of life for people and other living creatures (birnbacher a ). • animal protection, i.e. the search for reasonable and enforceable standards to avoid or reduce pain and suffering in sentient beings (krebs , vischer ). • nature conservation, i.e. the protection of nature against the transforming intervention of human use, especially all measures to conserve, care for, promote and recreate components of nature deemed to be valuable, including species of flora and fauna, biotic communities, landscapes and the foundations of life required there (birnbacher a) . regardless which of these three areas are addressed we need to explore which primary principles be applied to them. 
when dealing with the environment, the traditional basic and human rights, as well as the civil rights that have been derived from them, should be just as much a foundation of the consideration as in other areas of application in ethics. however, with regard to the primary principles there is a special transfer problem when addressing human interventions into nature and the environment: does the basic postulate of the conservation of life apply only to human beings, to all other creatures or to all elements of nature, too? this question does not lead to a new primary principle, as one may suspect at first glance. rather, it is concerned with the delineation of the universally recognized principle of the conservation of life that has already been specified in the basic rights canon. are only people included in this principle (this is the codified version valid in most legal constitutions today) or other living creatures, too? and if yes, which ones? should non-living elements be included as well? when answering this question, two at first sight contradictory positions can be derived: anthropocentrism and physiocentrism (taylor , ott , galert ). the anthropocentric view places humans and their needs at the fore. nature's own original demands are alien to this view. interventions in nature are allowed if they are useful to human society. a duty to make provisions for the future and to conserve nature exists in the anthropocentric world only to the extent that natural systems are classed as valuable to people today and to subsequent generations, and that nature can be classed as a means and guarantor of human life and survival (norton , birnbacher b). in the physiocentric concept, which forms an opposite pole to the anthropocentric view, the needs of human beings are not placed above those of nature. here, every living creature, whether human, animal or plant, has intrinsic rights with regard to the chance to develop its own life within the framework of a natural order. worthiness of protection is justified in the physiocentric view by an inner value that is unique to each living creature or to the environment in general. nature has a value of its own that does not depend on the functions that it fulfils today or may fulfil later from a human society's point of view (devall and sessions , callicott , rolston , meyer-abich ). each of these prevailing understandings of the human-nature relationship has implications that are decisive for the form and extent of nature use by humans (elliot , krebs ). strictly speaking, it could be concluded from the physiocentric idea that all human interventions in nature have to be stopped so that the rights of other creatures are not endangered. yet not even extreme representatives of a physiocentric view would go so far as to reject all human interventions in nature, because animals, too, change the environment by their ways of life (e.g. the elephant prevents the greening of the savannah). the central postulate of a physiocentric view is the gradual minimization of the depth of interventions in the human use of nature. the only interventions that are permitted are those that contribute to directly securing human existence and do not change the fundamental composition of the surrounding natural environment. if these two criteria were taken to the extreme, neither population development beyond the boundaries of biological carrying capacity nor a transformation of natural land into pure agricultural land would be allowed.
such a strict interpretation of physiocentrism would lead to a radical reversal of human history so far and is not compatible with the values and expectations of most people. the same is true for the unlimited transfer of anthropocentrism to dealings with nature. in this view, the use of natural services is subjected solely to the individual cost-benefit calculation. this can lead to unscrupulous exploitation of nature by humans with the aim of expanding human civilization. both extremes quickly lead to counter-intuitive implications. where the issue of environmental design and policy is concerned, anthropocentric and physiocentric approaches in their pure form are found only rarely; rather, they occur in different mixtures and slants. the transitions between the concepts are fluid. moderate approaches certainly take on elements from the opposite position. it can thus be in line with a fundamentally physiocentric perspective if the priority of human interests is not questioned in the use of natural resources. it is also true that the conclusions of a moderate form of anthropocentrism can approach the implications of the physiocentric view. table . provides an overview of various types of anthropocentric and physiocentric perspectives on nature (adapted from renn and goble). if we look at the behaviour patterns of people in different cultures, physiocentric or anthropocentric basic positions are rarely maintained consistently (bargatzky and kuschel ; on the convergence theory: birnbacher ). in the strongly anthropocentric countries of the west, people spend more money on the welfare and health of their own pets than on saving human lives in other countries. in the countries of the far east that are characterized by physiocentrism, nature is frequently exploited even more radically than in the industrialized countries of the west. this inconsistent action is not a justification for one view or the other; it is just a reason for caution when laying down further rules for use, so that no extreme, and thus untenable, demands are made. also from an ethical point of view, radical anthropocentrism should be rejected just as much as radical physiocentrism. if, to take up just one argument, the right to human integrity is largely justified by the fact that the infliction of pain by others should be seen as something to avoid, this consideration without a doubt has to be applied to other creatures that are also capable of feeling pain (referred to as pathocentrism). here, therefore, pure anthropocentrism cannot convince. in turn, with a purely physiocentric approach the primary principles of freedom, equality and human dignity could not be maintained at all if every part of living nature were equally entitled to use the natural environment. under these circumstances people would have to do without agriculture, the conversion of natural land into agricultural land and the breeding of farm animals and pets in line with human needs. as soon as physiocentrism is related to species, and not to individuals as is done in some biocentric perspectives, human priority is automatically implied; because where human beings are concerned, nearly all schools of ethics share the fundamental moral principle of an individual right to life from birth. if this right is not granted to individual animals or plants, a superiority of the human race is implicitly assumed. moderate versions of physiocentrism acknowledge a gradual de-escalation with respect to the claim of individual life protection. the extreme forms of both physiocentrism and anthropocentrism are therefore not very convincing and are hardly capable of achieving a global consensus. this means that only moderate anthropocentrism or moderate biocentrism should be considered. the image of nature that is used as a basis for the considerations in this section emphasizes the uniqueness of human beings vis-à-vis physiocentric views, but does not imply carte blanche for wasteful and careless dealings with nature. this moderate concept derives society's duty to conserve nature, also for future generations, from the life-preserving and life-enhancing meaning of nature for society. this is not just concerned with the instrumental value of nature as a "store of resources"; it is also a matter of the function of nature as a provider of inspiration, spiritual experience, beauty and peace (birnbacher and schicha ). in this context it is important that human beings, as the addressees of the moral standard, do not regard nature merely as material and as a way towards their own self-realization, but can also assume responsibility for the conservation of its cultural and social function, as well as its existential value above and beyond the objective and technically available benefits (honnefelder ). one of the first people to express this responsibility of human stewardship of nature in an almost poetic way was the american ecologist aldo leopold, who pointed out people's special responsibility for the existence of nature and land as early as the 1930s with the essay "the conservation ethics". his most well-known work, "a sand county almanac", is sustained by the attempt to observe and assess human activities from the viewpoint of the land (a mountain or an animal). this perspective was clearly physiocentric and revealed fundamental insights about the relationship between humans and nature on the basis of empathy and shifting perspectives. his point of view had a strong influence on american environmental ethics and the stance of conservationists. although this physiocentric perspective raises many concerns, the idea of stewardship has been one of the guiding ideas for the arguments used in this section (pickett et al. ). we are morally required to exercise a sort of stewardship over living nature, because nature cannot claim any rights for itself, but nevertheless has exceptional value that is important to man above and beyond its economic utility value (hösle ). since contemporary society and the generations to come certainly use, or will use, more natural resources than would be compatible with a lifestyle in harmony with the given natural conditions, the conversion of natural land into anthropogenically determined agricultural land cannot be avoided (mohr ). many people have criticized human interventions into natural cycles as infringements of the applicable moral standards of nature conservation (fastening, for example, onto the postulate of sustainability). but we should avoid premature conclusions here, as can be seen with the example of species protection. for example, where natural objects or phenomena are concerned that turn out to be a risk to human or non-human living creatures, the general call for nature conservation is already thrown into doubt (gale and cordray ).
not many people would call the eradication of cholera bacteria, hiv viruses and other pathogens morally bad (mittelstraß ) if remaining samples were kept under lock and key in laboratories. also, combating highly evolved creatures, such as cockroaches or rats, meets with broad support if we ignore the call for the complete eradication of these species for the time being. an environmental initiative to save cockroaches would not be likely to gain supporters. if we look at the situation carefully, the valuation of human behaviour in these examples results from a conflict. because the conservation of a species competes with the objective of maintaining human health or the objective of a hygienic place to live, two principles, possibly of equal ranking, come face to face. in this case the options for action, which may all involve a gradual infringement of one or more principles, would have to be weighed up against each other. a general ban on eradicating a species can thus not be justified ethically, in the sense of a categorical principle, unless the maintenance of human health were to be given lower priority than the conservation of a species. with regard to the issue of species conservation, therefore, different goods have to be weighed up against each other. nature itself cannot show society what it is essential to conserve and how much nature can be traded for valuable commodities. humans alone are responsible for such decisions and the resulting conflicts between competing objectives. appreciation and negotiation processes are therefore at the core of the considerations about the ethical justification of rules for interventions. but this does not mean that there is no room for categorical judgements along the lines of "this or that absolutely must be prohibited" in the matter of human interventions into the natural environment. it follows from the basic principle of conserving human life that all human interventions that threaten the ability of the human race as a whole, or of a significant number of individuals alive today or in the future, to exist should be categorically prohibited. this refers to interventions that threaten the systemic functions of the biosphere. avoiding such threats is a guiding principle that must not be overridden under any circumstances, even if overriding it were to be associated with high benefits. in the language of ethics this is a categorical principle, in the language of economics a good that is not capable of being traded. the "club" of categorical prohibitions should, however, be used very sparingly, because plausible trade-offs can be thought up for most principles, the partial exceeding of which appears intuitively acceptable. in the case of threats to existence, however, the categorical rejection of the behaviour that leads to them is obvious. but what does the adoption of categorical principles specifically mean for the political moulding of environmental protection? in the past, a number of authors have tried to specify the minimum requirements for an ethically responsible moral system with respect to biosphere use. these so-called "safe minimum standards" specify thresholds on the open-ended measurement scale of the consequences of human interventions that may not be exceeded even if there is a prospect of great benefits (randall , randall and farmer ). in order to be able to specify these thresholds in more detail, the breakdown into three levels proposed by the german scientific council for global environmental change is helpful (wbgu ).
these levels are: • the global bio-geochemical cycles in which the biosphere is involved as one of the causes, modulator or "beneficiary"; • the diversity of ecosystems and landscapes that have key functions as bearers of diversity in the biosphere; and • the genetic diversity and the species diversity that are both "the modelling clay of evolution" and basic elements of ecosystem functions and dynamics. where the first level is concerned, in which the functioning of the global ecosystem is at stake, categorical principles are obviously necessary and sensible, provided that no one wants to shake the primary principle of the permanent preservation of the human race. accordingly, all interventions in which important substance or energy cycles are significantly influenced at a global level and where globally effective negative impacts are to be expected are categorically prohibited. usually no stringently causal evidence of the harmful nature of globally relevant information is needed; justified suspicion of such harmfulness should suffice. later in this chapter we will make a proposal for risk valuation and management how the problem of uncertainty in the event of possible catastrophic damage potential should be dealt with (risk type cassandra). on the second level, the protection of ecosystems and landscapes, it is much more difficult to draw up categorical rules. initially, it is obvious that all interventions in landscapes in which the global functions mentioned on the first level are endangered must be avoided. above and beyond this, it is wise from a precautionary point of view to maintain as much ecosystem diversity as possible in order to keep the degree of vulnerability to the unforeseen or even unforeseeable consequences of anthropogenic and nonanthropogenic interventions as low as possible. even though it is difficult to derive findings for human behaviour from observations of evolution, the empirically proven statement "he who places everything on one card, always loses in the long run" seems to demonstrate a universally valid insight into the functioning of systemically organized interactions. for this reason, the conservation of the natural diversity of ecosystems and landscape forms is a categorical principle, whereas the depth of intervention allowed should be specified on the basis of principles and standards capable of compensation. the same can be said for the third level, genetic and species protection. here too, initially the causal chain should be laid down: species conservation, landscape conservation, maintaining global functions. wherever this chain is unbroken, a categorical order of conservation should apply. these species could be termed primary key species. this includes such species that are not only essential for the specific landscape type in which they occur, but also for the global cycles above and beyond this specific landscape type thanks to their special position in the ecosystem. probably, it will not be possible to organize all species under this functional contribution to the surrounding ecosystem, but we could also think of groups of species, for example, humus-forming bacteria. in second place there are the species that characterize certain ecosystems or landscapes. here they are referred to as secondary key species. they, too, are under special protection that is not necessarily under categorical reservations. their function value, however, is worthy of special attention. 
below these two types of species there are the remaining species that perform ecosystem functions to a greater or lesser extent. what this means for the worthiness for protection of these species and the point at which the precise limit for permitted intervention should be drawn, is a question that can no longer be solved with categorical principles and standards, but with the help of compensatory principles and standards. generally, here, too, as with the issue of ecosystem and landscape protection, the conservation of diversity as a strategy of "reinsurance" against ignorance, global risks and unforeseeable surprises is recommended. it remains to be said that from a systemic point of view, a categorical ban has to apply to all human interventions where global closed loops are demonstrably at risk. above and beyond this, it makes sense to recognize the conservation of landscape variety (also of ecosystem diversity within landscapes) and of genetic variety and species diversity as basic principles, without being able to make categorical judgements about individual landscape or species types as a result. in order to evaluate partial infringements of compensatory principles or standards, which are referred to in the issue of environmental protection, we need rules for decision making that facilitate the balancing process necessary to resolve compensatory conflicts. in the current debate about rules for using the environment and nature, it is mainly teleological valuation methods that are proposed (hubig , ott . these methods are aimed at: • estimating the possible consequences of various options for action at all dimensions relevant to potentially affected people; • recording the infringements or fulfilments of these expected consequences in the light of the guiding standards and principles; and • then weighing them according to an internal key so that they can be weighed up in a balanced way. on the positive side of the equation, there are the economic benefits of an intervention and the cultural values created by use, for example, in the form of income, subsistence (self-sufficiency) or an aesthetically attractive landscape (parks, ornamental gardens, etc.); on the negative side, there are the destruction of current or future usage potentials, the loss of unknown natural resources that may be needed in the future and the violation of aesthetic, cultural or religious attributes associated with the environment and nature. there are therefore related categories on both sides of the equation: current uses vs. possible uses in the future, development potentials of current uses vs. option values for future use, shaping the environment by use vs. impairments to the environment as a result of alternative use, etc. with the same or similar categories on the credit and debit side of the balance sheet the decision is easy when there is one option that performs better or worse than all the other options for all categories. although such a dominant (the best for all categories) or sub-dominant option (the worst for all categories) is rare in reality, there are examples of dominant or sub-dominant solutions. thus, for example, the overfelling of the forests of kalimantan on the island of borneo in indonesia can be classed as a sub-dominant option since the short-term benefit, even with extremely high discount rates, is in no proportion to the long-term losses of benefits associated with a barren area covered in imperata grass. 
the recultivation of a barren area of this kind requires sums many times the income from the sale of the wood, including interest. apparently there are no cultural, aesthetic or religious reasons for the conversion of primary or secondary woodland into grassland. this means that the option of deforestation should be classed as of less value than alternative options for all criteria, including economic and social criteria. at best, we can talk about a habit of leaving rainforests, as a "biotope not worthy of conservation", to short-term use. but habit is not a sound reason for the choice of any sub-optimum option. as mentioned at the start of this chapter, habit, as experienced morality, does not have any normative force, especially when it is based on the illusion of the marginality of one's own behaviour or on ignorance about sustainable usage forms. but if we disregard the dominant or sub-dominant solutions, a weighing up of options that violate or fulfil compensatory standards and principles depends on two preconditions: the best possible knowledge of the consequences (what happens if i choose option a instead of option b?) and a transparent, consistent rationale for weighing up these consequences as part of a legitimate political decision process (are the foreseeable consequences of option a more desirable or bearable than the consequences of option b?) (akademie der wissenschaften ). adequate knowledge of the consequences is needed in order to reveal the systemic connections between resource use, ecosystem reactions to human interventions and socio-cultural condition factors (wolters ). this requires interdisciplinary research and cooperation. the task of applied ecological research, for example, is to show the consequences of human intervention in the natural environment and how ecosystems are burdened by different interventions and practices. the economic approach provides a benefit-oriented valuation of natural and artificial resources within the context of production and consumption, as well as a valuation of transformation processes according to the criterion of efficiency. the cultural and social sciences examine the feedback effects between use, social development and cultural self-perception. they illustrate the dynamic interactions between usage forms, socio-cultural lifestyles and control forms. interdisciplinary, problem-oriented and system-related research contributes to forming a basic stock of findings and insights about functional links in the relationship between human interventions and the environment, and also to developing constructive proposals as to how the basic question of an ethically justified use of the natural environment can be answered in agreement with the actors concerned (wbgu ). accordingly, in order to ensure sufficient environmental protection, scientific research, and especially transdisciplinary systems research at the interface between the natural and social sciences, is essential. bringing together the results of interdisciplinary research, the policy-relevant choice of knowledge banks and balanced interpretation in an environment of uncertainty and ambivalence are difficult tasks that primarily have to be performed by the science system itself. how this can happen in a way that is methodologically sound, receptive to all reasonable aspects of interpretation and yet inter-subjectively valid will be the subject of section . . . but knowledge alone does not suffice.
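the dominance screening and trade-off weighting described above can be made concrete with a small decision-analytic sketch. this is only an illustration under assumed inputs: the option names, criteria, scores and weights below are hypothetical and do not come from the text; higher scores are taken to be better on every criterion.

```python
# hypothetical decision-analytic sketch: first screen out dominated options,
# then (if no option dominates) aggregate the compensatory criteria with
# explicitly agreed trade-off weights. all values are illustrative.

criteria = ["economic benefit", "future use options", "ecological integrity", "cultural value"]

options = {
    "conversion to plantation":  [0.9, 0.2, 0.1, 0.2],
    "sustainable selective use": [0.6, 0.7, 0.7, 0.6],
    "full protection":           [0.1, 0.8, 0.9, 0.8],
}

def dominates(a, b):
    """option a dominates b if it is at least as good on every criterion
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# step 1: dominance screening (a dominated option can be set aside outright)
dominated = {name for name, s in options.items()
             if any(dominates(o, s) for o in options.values() if o is not s)}
print("dominated options:", dominated or "none")

# step 2: weigh the compensatory criteria for the remaining options
weights = [0.3, 0.2, 0.3, 0.2]  # trade-off weights agreed in a deliberative process
for name, scores in options.items():
    if name in dominated:
        continue
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{name}: weighted score {total:.2f}")
```

the point of the sketch is the separation of the two steps: dominance needs no weights at all, while the weighted aggregation makes the value trade-offs explicit and therefore open to deliberation and justification.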
in order to be able to act effectively and efficiently while observing ethical principles, it is necessary to shape the weighing-up process between the various options for action according to rational criteria (gethmann ). to do this it is, first of all, necessary to identify the dimensions that should be used for a valuation. the discussion about the value dimensions to be used as a basis for valuation is one of the most popular subjects within environmental ethics. to apply these criteria in risk evaluation and to combine the knowledge about expected consequences of different behavioural options with the ethical principles is the task of what we have called risk governance. what contribution do ethics make towards clarifying the prospects and limits of human interventions into the natural environment? the use of environmental resources is an anthropological necessity. human consciousness works reflexively, and humans have developed a capacity for causal recognition that enables them to record cause and effect anticipatively and to incorporate assessed consequences productively in their own action. this knowledge is the motivating force behind cultural evolution and the development of technologies, agriculture and urbanization. with power over an ever-increasing potential for design and intervention in nature and social affairs over the course of human history, the potential for abuse and exploitation has also grown. whereas this potential was reflected in philosophical considerations and legal standards at a very early stage with regard to moral standards between people, the issue of human responsibility towards nature and the environment has only become the subject of intensive consideration in recent times. ethical considerations are paramount in this respect. on the one hand, they offer concrete standards for human conduct on the basis of criteria that can be generalized, and, on the other hand, they provide procedural advice for a rational decision- and policy-making process. a simple breakdown into categorical rules and prohibitions on the one hand, and rules that are capable of being compensated on the other, can assist decision makers in the justification of principles and standards on environmental protection. as soon as human activities exceed the guidelines of the categorical principles, there is an urgent need for action. how can we detect whether such an excess has happened, and how can it be prevented from the outset that these inviolable standards and principles are exceeded? three strategies of environmental protection can be helpful for the implementation of categorical guidelines. the first strategy is that of complete protection with severe restrictions on all use by humans (protection priority). the second strategy provides for a balanced relationship between protection and use, where extensive resource use should go hand in hand with the conservation of the ecosystems concerned (equal weight). the third strategy is based on optimum use involving assurance of continuous reproduction. the guiding principle here would be an intensive and, at the same time, sustainable, i.e. with a view to the long term, use of natural resources (use priority). the following section will present a framework for applying these principles to environmental decision making under risk. the main line of argument is that risk management requires an analytic-deliberative approach for dealing effectively and prudently with environmental risks.
assessing potential consequences of human interventions and evaluating their desirability on the basis of subsequent knowledge and transparent valuation criteria are two of the central tasks of a risk governance process. however, the plural values of a heterogeneous public and people's preferences have to be incorporated in this process. but how can this be done given the wealth of competing values and preferences? should we simply accept the results of opinion polls as the basis for making political decisions? can we rely on risk perception results to judge the seriousness of pending risks? or should we place all our faith in professional risk management? if we turn to professional help to deal with plural value input, economic theory might provide us with an answer to this problem. if environmental goods are made individual and suitable for the market by means of property rights, the price that forms on the market ensures an appropriate valuation of the environmental good. every user of this good can then weigh up whether he is willing to pay the price or would rather not use the good. with many environmental goods, however, this valuation has to be made by collective action, because the environmental good concerned is a collective or open access good. in this case a process is needed that safeguards the valuation and justifies it to the collective. however, this valuation cannot be determined with the help of survey results. although surveys are needed to be able to estimate the breadth of preferences and people's willingness to pay, they are insufficient for deriving concrete decision-making criteria and yardsticks for evaluating the tolerability of risks to human health and the environment. • firstly, the individual values are so widely scattered that there is little sense in finding an average value here. • secondly, the preferences expressed in surveys change considerably within a short time, whereas ethical valuations have to be valid for a long time. • thirdly, as outlined in the subsection on risk perception, preferences are frequently based on flawed knowledge or ad hoc assumptions, both of which should not be decisive according to rational considerations. what is needed, therefore, is a gradual process of assigning trade-offs in which existing empirical values are put into a coherent and logically consistent form. in political science and sociological literature, reference is mostly made to three strategies of incorporating social values and preferences in rational decision-making processes (renn ). firstly, a reference to social preferences is viewed solely as a question of legitimate procedure (luhmann , vollmer ). the decision is made on the basis of a formal decision-making process (such as majority voting). if all the rules have been kept, a decision is binding, regardless of whether the subject matter of the decision can be justified or whether the people affected by the decision can understand the justification. in this version, social consensus has to be found only about the structure of the procedures; the only people who are then involved in the decisions are those who are explicitly legitimated to do so within the framework of the procedure decided upon. the second strategy is to rely on the minimum consensuses that have developed in the political opinion-forming process (muddling through) (lindblom ). in this process, only those decisions that cause the least resistance in society are considered to be legitimate.
in this version of social pluralism groups in society have an influence on the process of the formation of will and decision making to the extent that they provide proposals capable of being absorbed, i.e. adapted to the processing style of the political system, and that they mobilize public pressure. the proposal that then establishes itself in politics is the one that stands up best in the competition of proposals, i.e. the one that entails the fewest losses of support for political decision makers by interest groups. the third strategy is based on the discussion between the groups involved (habermas , renn ). in the communicative exchange among the people involved in the discussion a form of communicative rationality that everyone can understand evolves that can serve as a justification for collectively binding decisions. at the same time, discursive methods claim to more appropriately reflect the holistic nature of human beings and also to provide fair access to designing and selecting solutions to problems. in principle, the justification of standards relevant to decisions is linked to two conditions: the agreement of all involved and substantial justification of the statements made in the discussion (habermas ) . all three strategies of political control are represented in modern societies to a different extent. legitimation conflicts mostly arise when the three versions are realized in their pure form. merely formally adhering to decisionmaking procedures without a justification of content encounters a lack of understanding and rejection among the groups affected especially when they have to endure negative side effects or risks. then acceptance is refused. if, however, we pursue the opposite path of least resistance and base ourselves on the route of muddling through we may be certain of the support of the influential groups, but, as in the first case, the disadvantaged groups will gradually withdraw their acceptance because of insufficient justification of the decision. at the same time, antipathy to politics without a line or guidance is growing, even the affected population. the consequence is political apathy. the third strategy of discursive control faces problems, too. although in an ideal situation it is suitable for providing transparent justifications for the decision-making methods and the decision itself, in real cases the conditions of ideal discourse can rarely be adhered to (wellmer ) . frequently, discussions among strategically operating players lead to a paralysis of practical politics by forcing endless marathon meetings with vast quantities of points of order and peripheral contributions to the discussion. the "dictatorship of patience" (weinrich ) ultimately determines which justifications are accepted by the participants. the public becomes uncertain and disappointed by such discussions that begin with major claims and end with trivial findings. in brief: none of the three ways out of the control dilemma can convince on its own; as so often in politics, everything depends on the right mixture. what should a mixture of the three elements (due process, pluralistic muddling through and discourse) look like so that a maximum degree of rationality can come about on the basis of social value priorities? 
a report by the american academy of sciences on the subject of "understanding environmental risks" (national research council ) comes to the conclusion that scientifically valid and ethically justified procedure for the collective valuation of options for risk handling can only be realized within the context of -what the authors coin -an analytic-deliberative process. analytic means that the best scientific findings about the possible consequences and conditions of collective action are incorporated in the negotiations; deliberative means that rationally and ethically transparent criteria for making trade-offs are used and documented externally. moreover, the authors consider that fair participation by all groups concerned is necessary to ensure that the different moral systems that can legitimately exist alongside each other should also be incorporated in the process. to illustrate the concept of analytic-deliberative decision making consider a set of alternative options or choices, from which follow consequences (see basic overview in dodgson et al. ) . the relationship between the choice made, and the consequences that follow from this choice, may be straightforward or complex. the science supporting environmental policy is often complicated, across many disciplines of science and engineering, and also involving human institutions and economic interactions. because of limitations in scientific understanding and predictive capabilities, the consequences following a choice are normally uncertain. finally, different individuals and groups within society may not agree on how to evaluate the consequences -which may involve a detailed characterization of what happens in ecological, economic, and human health terms. we shall describe consequences as ambiguous when there is this difficulty in getting agreement on how to interpret and evaluate them. this distinction has been further explained in chapter (see also klinke and renn ) . environmental assessment and environmental decision making inherently involve these difficulties of complexity, uncertainty, and ambiguity (klinke and renn ) . in some situations where there is lots of experience, these difficulties may be minimal. but in other situations these difficulties may constitute major impediments to the decision-making process. to understand how analysis and deliberation interact in an iterative process following the national research council (nrc) report, one must consider how these three areas of potential difficulty can be addressed. it is useful to separate questions of evidence with respect to the likelihood, magnitude of consequences and related characteristics (which can involve complexity and uncertainty) from valuation of the consequences (i.e. ambiguity). for each of the three areas there are analytical tools that can be helpful in identifying, characterizing and quantifying cause-effect relationships. some of these tools have been described in chapter . the integration of these tools of risk governance into a consistent procedure will be discussed in the next subsections. 
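a purely illustrative sketch of how the three difficulty dimensions could be screened when setting up such a process is given below; the scores, the threshold and the suggested steps are assumptions made here for illustration and do not reproduce the classification of the nrc report or of klinke and renn.

```python
# illustrative screening of a risk problem along the three difficulty dimensions
# discussed in the text: complexity, uncertainty and ambiguity (scores 0-1 assumed).

def recommend_process(complexity, uncertainty, ambiguity, threshold=0.5):
    """suggest the emphasis of an analytic-deliberative process (illustrative heuristic)."""
    steps = ["routine assessment and standard public review"]
    if complexity > threshold:
        steps.append("extended expert analysis / modelling of cause-effect chains")
    if uncertainty > threshold:
        steps.append("explicit uncertainty characterisation and precautionary margins")
    if ambiguity > threshold:
        steps.append("broad stakeholder and public deliberation on values and trade-offs")
    return steps

# hypothetical example: a management decision with high complexity and contested valuations
for step in recommend_process(complexity=0.7, uncertainty=0.4, ambiguity=0.8):
    print("-", step)
```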
the possibility to reach closure on evaluating risks to human health or the environment rests on two conditions: first, all participants need to achieve closure on the underlying goal (often legally prescribed, such as prevention of health detriments or guarantee of an undisturbed environmental quality, for example, purity laws for drinking water); secondly, they need to agree with the implications derived from the present state of knowledge (whether and to what degree the identified hazard impacts the desired goal). dissent can result from conflicting values as well as conflicting evidence. it is crucial in environmental risk management to investigate both sides of the coin: the values that govern the selection of the goal and the evidence that governs the selection of cause-effect claims. strong differences in both areas can be expected in most environmental decision-making contexts but also in occupational health and safety and public health risks. so for all risk areas it is necessary to explore why people disagree about what to do -that is, which decision alternative should be selected. as pointed out before, differences of opinion may be focused on the evidence of what is at stake or which option has what kind of consequences. for example: what is the evidence that an environmental management initiative will lead to an improvement, such as reducing losses of agricultural crops to insect pests -and what is the evidence that the management initiative could lead to ecological damage -loss of insects we value, such as bees or butterflies, damage to birds and other predators that feed on insects -and health impacts from the level of pesticides and important nutrients in the food crops we eat? other differences of opinion may be about values -value of food crops that contain less pesticide residue compared to those that contain more, value of having more bees or butterflies, value of maintaining indigenous species of bees or butterflies compared to other varieties not native to the local ecosystem, value ascribed to good health and nutrition, and maybe, value ascribed to having food in what is perceived to be a "natural" state as opposed to containing manufactured chemical pesticides or altered genetic material. separating the science issues of what will happen from the value issues of how to make appropriate trade-offs between ecological, economic, and human health goals can become very difficult. the separation of facts and values in decision making is difficult to accomplish in practical decision situations, since what is regarded as facts includes a preference dependent process of cognitive framing (tversky and kahneman ) and what is regarded as value includes a prior knowledge about the factual implica-tions of different value preferences (fischhoff ) . furthermore, there are serious objections against a clear-cut division from a sociological view on science and knowledge generation (jasanoff ) . particularly when calculating risk estimates, value-based conventions may enter the assessment process. for example, conservative assumptions may be built into the assessment process, so that some adverse effects (such as human cancer from pesticide exposure) are much less likely to be underestimated than overestimated (national research council ) . at the same time, ignoring major sources of uncertainty can evoke a sense of security and overconfidence that is not justified from the quality or extent of the data base (einhorn and hogarth ) . 
perceptions and world views may be very important, and difficult to sort out from matters of science, especially with large uncertainties about the causes of environmental damage. a combination of analytic and deliberative processes can help explore these differences of opinion relating to complexity, uncertainty, and ambiguity in order to examine the appropriate basis for a decision before the decision is made. most environmental agencies go through an environmental assessment process and provide opportunities for public review and comment. many controversial environmental decisions become the focus of large analytical efforts, in which mathematical models are used to predict the environmental, economic, and health consequences of environmental management alternatives. analysis should be seen as an indispensable complement to deliberative processes, regardless of whether this analysis is sophisticated or not. even simple questions need analytic input for making prudent decisions, especially in situations where there is controversy arising from complexity, uncertainty, and ambiguity. in many policy arenas in which problems of structuring human decisions are relevant, the tools of normative decision analysis (da) have been applied. especially in economics, sociology, philosophical ethics, and also many branches of engineering and science, these methods have been extended and refined during the past several decades (edwards , howard , north , howard et al. , north and merkhofer , behn and vaupel , pinkau and renn , van asselt , jaeger et al. ). da is a process for decomposing a decision problem into pieces, starting with the simple structure of alternatives, information, and preferences. it provides a formal framework for quantitative evaluation of alternative choices in terms of what is known about the consequences and how the consequences are valued (hammond et al. , skinner ). the procedures and analytical tools of da provide a number of possibilities to improve the precision and transparency of the decision procedure. however, they are also subject to a number of limitations. the opportunities refer to: • different action alternatives can be quantitatively evaluated to allow selection of a best choice. such evaluation relies both on a description of uncertain consequences for each action alternative, with uncertainty in the consequences described using probabilities, and on a description of the values and preferences assigned to consequences. (explicit characterization of uncertainty and values of consequences) • the opportunity to assure transparency, in that (1) models and data summarizing complexity (e.g., applicable and available scientific evidence), (2) probabilities characterizing judgement about uncertainty, and (3) values (utilities) on the consequences are made explicit and available. so the evaluation of risk handling alternatives can be viewed and checked for accuracy by outside observers. (outside audit of the basis for decision enabled) • a complex decision situation can be decomposed into smaller pieces in a formal analytical framework. the level of such decomposition can range from a decision tree of action alternatives and ensuing consequences that fits on a single piece of paper, to extremely large and complex computer-implemented models used in calculating environmental consequences and ascribing probabilities and values of the consequences. a more complex analysis is more expensive and is less transparent to observers.
in principle, with sufficient effort any formal analytical framework can be checked to assure that calculations are made in the way that is intended. (decomposition possible to include extensive detail) on the other hand, there are important limitations: • placing value judgements (utilities) on consequences may be difficult, especially in a political context where loss of life, impairment of health, ecological damage, or similar social consequences are involved. utility theory is essentially an extension of cost-benefit methods from economics to include attitude toward risk. the basic trade-off judgements needed for cost-benefit analysis remain difficult and controversial, and often inherently subjective. (difficulties in valuing consequences) • assessing uncertainty in the form of a numerical probability also poses difficulties, especially in situations where there is no statistical data base or agreed-on model as the basis for the assessment. (difficulty in quantifying uncertainty, assigning probabilities) • the analytical framework may not be complete. holistic or overarching considerations or important details may have been omitted. (analytical framework incomplete) • da is built upon an axiomatic structure, both for dealing with uncertainty (i.e., the axiomatic foundation of probability theory) and for valuing consequences (i.e., the axiomatic basis for von neumann-morgenstern utility theory). especially when the decision is to be made by a group rather than an individual decision maker, rational preferences for the group consistent with the axioms may not exist (the "impossibility" theorem of arrow). so in cases of strong disagreements on objectives or unwillingness to use a rational process, decision analysis methods may not be helpful. decision-analytic methods should not be regarded as inherently "mechanical" or "algorithmic", in which analysts obtain a set of "inputs" about uncertainty and valuing consequences, then feed these into a mathematical procedure (possibly implemented in a computer) that produces an "output" of the "best" decision. da can only offer coherent conclusions from the information which the decision maker provides through his/her preferences among consequences and his/her state of information on the occurrence of these consequences. where there is disagreement about the preferences or about the information, da may be used to explore the implications of such disagreement. so in application, there is often a great deal of iteration (sensitivity analysis) to explore how differences in judgement should affect the selection of the best action alternative. da thus merely offers a formal framework that can be effective in helping participants in a decision process to better understand the implications of differing information and judgement about complex and uncertain consequences from the choice among the available action alternatives. insight about which factors are most important in selecting among the alternatives is often the most important output of the process, and it is obtained through extensive and iterative exchange between analysts and the decision makers and stakeholders. the main advantage of the framework is that it is based on logic that is both explicit and checkable - usually facilitated by the use of mathematical models and probability calculations.
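the basic bookkeeping of such a framework can be written down in a few lines. in the sketch below, uncertain consequences are described by probabilities and valued by utilities, in the sense outlined above; all alternatives, probabilities and utilities are hypothetical and only show the structure, not any real assessment.

```python
# minimal decision-analytic structure: alternatives -> probabilistic consequences -> utilities.
# all probabilities and utilities are hypothetical.

alternatives = {
    "strict regulation": [
        # (probability, utility of the resulting consequence)
        (0.7, 0.9),   # e.g. ecosystem recovers, moderate economic cost
        (0.3, 0.4),   # e.g. recovery fails, economic cost incurred anyway
    ],
    "moderate regulation": [
        (0.5, 0.8),
        (0.5, 0.5),
    ],
    "no regulation": [
        (0.2, 1.0),   # e.g. no damage occurs and full economic benefit is kept
        (0.8, 0.1),   # e.g. damage occurs
    ],
}

def expected_utility(lottery):
    assert abs(sum(p for p, _ in lottery) - 1.0) < 1e-9, "probabilities must sum to one"
    return sum(p * u for p, u in lottery)

for name, lottery in alternatives.items():
    print(f"{name}: expected utility = {expected_utility(lottery):.2f}")

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
print("highest expected utility:", best)
```

in practice this calculation would be repeated with varied probabilities and utilities (sensitivity analysis) to see whether the preferred alternative is robust against disagreements in judgement.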
research on human judgement supports the superiority of such procedures for decomposing complex decision problems and using logic to integrate the pieces, rather than relying on holistic judgement on which of the alternatives is best (this is not only true for individual decisions, see heap et al. : ff., jungermann ; but also for collective decisions, see heap et al. : ff., pettit . one should keep in mind, however, that "superior" is measured in accordance with indicator of instrumental rationality, i.e. measuring means-ends effectiveness. if this rationality is appropriate, the sequence suggested by da is intrinsically plausible and obvious. even at the level of qualitative discussion and debate, groups often explore the rationale for different action alternatives. decision analysis simply uses formal quantitative methods for this traditional and common-sense process of exploring the rationale -using models to describe complexity, probability to describe uncertainty, and to deal with ambiguity, explicit valuation of consequences via utility theory and other balancing procedures, such as cost-benefit or cost-effectiveness analyses. by decomposing the problem in logical steps, the analysis permits better understanding of differences in the participants' perspective on evidence and values. da offers methods to overcome these differences, such as resolving questions about underlying science through data collection and research, and encouraging tradeoffs, compromise, and rethinking of values. based on this review of opportunities and shortcomings we conclude that decision analysis provides a suitable structure for guiding discussion and problem formulation, and offers a set of quantitative analytical tools that can be useful for environmental decisions, especially in conjunction with deliberative processes. da can assist decision makers and others involved in, and potentially affected by, the decision (i.e., participants, stakeholders) to deal with complexity and many components of uncertainty, and to address issues of remaining uncertainties and ambiguities. using these methods promises consistency from one decision situation to another, assurance of an appropriate use of evidence from scientific studies related to the environment, and explicit accountability and transparency with respect to those institutionally responsible for the value judgements that drive the evaluation of the alternative choices. collectively the analytical tools provide a framework for a systematic process of exploring and evaluating the decision alternatives -assembling and validating the applicable scientific evidence relevant to what will happen as the result of each possible choice, and valuing how bad or how good these consequences are based on an agreement of common objectives. yet, it does not replace the need for additional methods and processes for including other objectives, such as finding common goals, defining preferences, revisiting assumptions, sharing visions and exploring common grounds for values and normative positions. the value judgements motivating decisions are made explicit and can then be criticized by those who were not involved in the process. to the extent that uncertainty becomes important, it will be helpful to deal with uncertainty in an orderly and consistent way (morgan and henrion ). 
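one orderly and consistent way of carrying probability-quantifiable uncertainty through such an analysis is simple monte carlo propagation, sketched below. the toy consequence model and the parameter distributions are assumptions chosen only to show the mechanics; uncertainties that cannot be given a defensible probability distribution are not captured by such numbers and have to be flagged for deliberation.

```python
import random

# illustrative monte carlo propagation of parameter uncertainty through a toy
# consequence model (e.g. expected damage as dose * exposed area * sensitivity).
# all distributions are assumptions for illustration only.

def damage_model(dose, area, sensitivity):
    return dose * area * sensitivity

def simulate(n=10_000, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        dose = rng.uniform(0.5, 1.5)                 # measurement / data uncertainty
        area = max(rng.gauss(100.0, 15.0), 0.0)      # random error in the exposed area
        sensitivity = rng.lognormvariate(0.0, 0.4)   # model uncertainty about the response
        results.append(damage_model(dose, area, sensitivity))
    results.sort()
    return results

res = simulate()
mean = sum(res) / len(res)
p05, p95 = res[int(0.05 * len(res))], res[int(0.95 * len(res))]
print(f"mean damage estimate: {mean:.1f}")
print(f"90% uncertainty interval: [{p05:.1f}, {p95:.1f}]")
# remaining, non-quantifiable uncertainties (system boundaries, plain ignorance)
# are not contained in these numbers and must be made visible in the deliberation itself.
```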
those aspects of uncertainty that can be modelled by using probability theory (inter-target variation, systematic and random errors in applying inferential statistics, model and data uncertainties) will be spelled out, and those that remain in the form of indeterminacies, system boundaries or plain ignorance will become visible and can then be fed into the deliberation process (van asselt , klinke and renn ). the term deliberation refers to the style and procedure of decision making without specifying which participants are invited to deliberate (national research council (nrc) , rossi ). for a discussion to be called deliberative it is essential that it relies on mutual exchange of arguments and reflections rather than on decision making based on the status of the participants, sublime strategies of persuasion, or social-political pressure. deliberative processes should include a debate about the relative weight of each argument and a transparent procedure for balancing pros and cons (tuler and webler ). in addition, deliberative processes should be governed by the established rules of a rational discourse. in the theory of communicative action developed by the german philosopher jürgen habermas, the term discourse denotes a special form of dialogue, in which all affected parties have equal rights and duties to present claims and test their validity in a context free of social or political domination (habermas ). a discourse is called rational if it meets the following specific requirements (see mccarthy , habermas a , kemp , webler ). all participants are obliged: • to seek a consensus on the procedure that they want to employ in order to derive the final decision or compromise, such as voting, sorting of positions, consensual decision making or the involvement of a mediator or arbitrator; • to articulate and criticize factual claims on the basis of the "state of the art" of scientific knowledge and other forms of problem-adequate knowledge (in the case of dissent all relevant camps have the right to be represented); • to interpret factual evidence in accordance with the laws of formal logic and analytical reasoning; • to disclose their relevant values and preferences, thus avoiding hidden agendas and strategic game playing; and • to process data, arguments and evaluations in a structured format (for example a decision-analytic procedure) so that norms of procedural rationality are met and transparency can be created. the rules of deliberation do not necessarily include the demand for stakeholder or public involvement. deliberation can be organized in closed circles (such as conferences of catholic bishops, where the term has indeed been used since the council of nicaea), as well as in public forums. it may be wise to use the term "deliberative democracy" when one refers to the combination of deliberation and public or stakeholder involvement (see also cohen , rossi ). what needs to be deliberated? firstly, deliberative processes are needed to define the role and relevance of systematic and anecdotal knowledge for making far-reaching choices. secondly, deliberation is needed to find the most appropriate way to deal with uncertainty in environmental decision making and to set efficient and fair trade-offs between potential over- and under-protection. thirdly, deliberation needs to address the wider concerns of the affected groups and the public at large.
why can one expect that deliberative processes are better suited to deal with environmental challenges than using expert judgement, political majority votes or relying on public survey data? • deliberation can produce common understanding of the issues or the problems based on the joint learning experience of the participants with respect to systematic and anecdotal knowledge (webler and renn , pidgeon ). • deliberation can produce a common understanding of each party's position and argumentation and thus assist in a mental reconstruction of each actor's argumentation (warren , tuler . the main driver for gaining mutual understanding is empathy. the theory of communicative action provides further insights in how to mobilize empathy and how to use the mechanisms of empathy and normative reasoning to explore and generate common moral grounds (webler ). • deliberation can produce new options and novel solutions to a problem. this creative process can either be mobilized by finding win-win solutions or by discovering identical moral grounds on which new options can grow (renn ) . • deliberation has the potential to show and document the full scope of ambiguity associated with environmental problems. deliberation helps to make a society aware of the options, interpretations, and potential actions that are connected with the issue under investigation (wynne , de marchi and ravetz ) . each position within a deliberative discourse can only survive the crossfire of arguments and counter-arguments if it demonstrates internal consistency, compatibility with the legitimate range of knowledge claims and correspondence with the widely accepted norms and values of society. deliberation clarifies the problem, makes people aware of framing effects, and determines the limits of what could be called reasonable within the plurality of interpretations (skillington ) . • deliberations can also produce agreements. the minimal agreement may be a consensus about dissent (raiffa ) . if all arguments are exchanged, participants know why they disagree. they may not be convinced that the arguments of the other side are true or morally strong enough to change their own position; but they understand the reasons why the opponents came to their conclusion. in the end, the deliberative process produces several consistent and -in their own domain -optimized positions that can be offered as package options to legal decision makers or the public. once these options have been subjected to public discourse and debate, political bodies, such as agencies or parliaments can make the final selection in accordance with the legitimate rules and institutional arrangements such as majority vote or executive order. final selections could also be performed by popular vote or referendum. • deliberation may result in consensus. often deliberative processes are used synonymously with consensus-seeking activities (coglianese ). this is a major misunderstanding. consensus is a possible outcome of deliberation, but not a mandatory requirement. if all participants find a new option that they all value more than the one option that they preferred when entering the deliberation, a "true" consensus is reached (renn ) . it is clear that finding such a consensus is the exception rather than the rule. consensus is either based on a win-win solution or a solution that serves the "common good" and each participant's interests and values better than any other solution. less stringent is the requirement of a tolerated consensus. 
such a consensus rests on the recognition that the selected decision option might serve the "common good" best, but on the expense of some interest violations or additional costs. in a tolerated consensus some participants voluntarily accept personal or group-specific losses in exchange for providing benefits to all of society. case studies have provided sufficient evidence that deliberation has produced a tolerated consensus solution, particularly in siting conflicts (one example in schneider et al. ) . consensus and tolerated consensus should be distinguished from compromise. a compromise is a product of bargaining where each side gradually reduces its claim to the opposing party until they reach an agreement (raiffa ) . all parties involved would rather choose the option that they preferred before starting deliberations, but since they cannot find a win-win situation or a morally superior alternative they look for a solution that they can "live with" knowing that it is the second or third best solution for them. compromising on an issue relies on full representation of all vested interests. in summary, many desirable products and accomplishments are associated with deliberation (chess et al. ) . depending on the structure of the discourse and the underlying rationale deliberative processes can: • enhance understanding; • generate new options; • decrease hostility and aggressive attitudes among the participants; • explore new problem framing; • enlighten legal policy-makers; • produce competent, fair and optimized solution packages; and • facilitate consensus, tolerated consensus and compromise. in a deliberative setting, participants exchange arguments, provide evidence for their claims and develop common criteria for balancing pros and cons. this task can be facilitated and often guided by using decision analytic tools (overview in merkhofer ) . decision theory provides a logical framework distinguishing action alternatives or options, consequences, likelihood of consequences, and value of consequences, where the valuation can be over multiple attributes that are weighted based on tradeoffs in multi-attribute utility analysis (edwards ) . a sequence of decisions and consequences may be considered, and use of mathematical models for predicting the environmental consequences of options may or may not be part of the process (humphreys , bardach , arvai et al. ): a) the structuring potential of decision analysis has been used in many participatory processes. it helps the facilitator of such processes to focus on one element during the deliberation, to sort out the central from the peripheral elements, provide a consistent reference structure for ordering arguments and observations and to synthesize multiple impressions, observations and arguments into a coherent framework. the structuring power of decision analysis has often been used without expanding the analysis into quantitative modelling. b) the second potential, agenda setting and sequencing, is also frequently applied in participatory settings. it often makes sense to start with problem definition, then develop the criteria for evaluation, generate options, assess consequences of options, and so on. c) the third potential, quantifying consequences, probabilities and relative weights and calculating expected utilities, is more controversial than the other two. 
whether the deliberative process should include a numerical analysis of utilities or engage the participants in a quantitative elicitation process is contested among participation practitioners . one side claims that quantifying helps participants to be more precise about their judgements and to be aware of the often painful trade-offs they are forced to make. in addition, quantification can make judgements more transparent to outside observers. the other side claims that quantification restricts the participants to the logic of numbers and reduces the complexity of argumentation into a mere trade-off game. many philosophers argue that quantification supports the illusion that all values can be traded off against other values and that complex problems can be reduced to simple linear combinations of utilities. one possible compromise between the two camps may be to have participants go through the quantification exercise as a means to help them clarify their thoughts and preferences, but make the final decisions on the basis of holistic judgements (renn ) . in this application of decision analytic procedures, the numerical results (i.e. for each option the sum over the utilities of each dimension multiplied by the weight of each dimension) of the decision process are not used as expression of the final judgement of the participant, but as a structuring aid to improve the participant's holistic, intuitive judgement. by pointing out potential discrepancies between the numerical model and the holistic judgements, the participants are forced to reflect upon their opinions and search for potential hidden motives or values that might explain the discrepancy. in a situation of major value conflicts, the deliberation process may involve soliciting a diverse set of viewpoints, and judgements need to be made on what sources of information are viewed as responsible and reliable. publication in scientific journals and peer review from scientists outside the government agency are the two most popular methods by which managers or organizers of deliberative processes try to limit what will be considered as acceptable evidence. other methods are to reach a consensus among the participants up front which expertise should be included in the deliberation or to appoint representatives of opposing science camps to explain their differences in public. in many cases, participants have strong reasons for questioning scientific orthodoxy and would like to have different science camps represented. many stakeholders in environmental decisions have access to expert scientists, and often such scientists will take leading roles in criticizing agency science. such discussions need to be managed so that disagreements among the scientific experts can be evaluated in terms of the validity of the evidence presented and the importance to the decision. it is essential in these situations to have a process in place that distinguishes between those evidence claims that all parties agree on, those where the factual base is shared but not its meaning for some quality criterion (such as "healthy" environment), and those where even the factual base is contested (foster ) . in the course of practical risk management different conflicts arise in deliberative settings that have to be dealt with in different ways. 
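a minimal sketch of the quantification exercise described above: for each option the utilities on each dimension are multiplied by the dimension weights and summed, and the resulting ranking is compared with the participant's holistic ranking only to surface discrepancies worth reflecting on. the options, utilities, weights and the holistic ranking are all invented for illustration.

```python
# illustrative weighted-sum scoring used as a structuring aid, not as the final verdict.
# utilities (0-1) and weights are hypothetical participant inputs.

weights = {"ecology": 0.4, "economy": 0.35, "health": 0.25}   # weights sum to 1

options = {
    "option a": {"ecology": 0.9, "economy": 0.3, "health": 0.7},
    "option b": {"ecology": 0.5, "economy": 0.8, "health": 0.6},
    "option c": {"ecology": 0.4, "economy": 0.6, "health": 0.9},
}

def score(utilities):
    # sum over the utilities of each dimension multiplied by the weight of each dimension
    return sum(weights[d] * utilities[d] for d in weights)

numeric_ranking = sorted(options, key=lambda o: score(options[o]), reverse=True)

# the same participant's intuitive, holistic ranking (hypothetical)
holistic_ranking = ["option b", "option a", "option c"]

for o in numeric_ranking:
    print(f"{o}: weighted score = {score(options[o]):.2f}")

if numeric_ranking != holistic_ranking:
    print("numeric and holistic rankings differ -> reflect on hidden motives, values or")
    print("missing criteria that might explain the discrepancy before deciding.")
```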
the main conflicts occur at the process level (how should the negotiations be conducted?), on the cognitive level (what is factually correct?), the interest level (what benefits me?), the value level (what is needed for a "good" life?) and the normative level (what can i expect of all involved?). these different conflict levels are addressed in this subsection. first of all, negotiations begin by specifying the method that structures the dialogue and the rights and duties of all participants. it is the task of the chairman or organizer to present and justify the implicit rules of the talks and negotiations. above and beyond this, the participants have to specify joint rules for decisions, the agenda, the role of the chairman, the order of hearings, etc. this should always be done according to the consensus principle. all partners in the negotiations have to be able to agree to the method. if no agreement is reached here the negotiations have to be interrupted or reorganized. once the negotiation method has been determined and, in a first stage, the values, standards and objectives needed for judgement have been agreed jointly, then follows the exchange of arguments and counter arguments. in accordance with decision theory, four stages of validation occur: • in a first stage, the values and standards accepted by the participants are translated into criteria and then into indicators (measurement instructions). this translation needs the consensual agreement of all participants. experts are asked to assess the available options with regard to each indicator according to the best of their knowledge (factual correctness). in this context it makes more sense to specify a joint methodological procedure or a consensus about the experts to be questioned than to give each group the freedom to have the indicators answered by their own experts. often many potential consequences remain disputed as a result of this process, especially if they are uncertain. however, the bandwidth of possible opinions is more or less restricted depending on the level of certainty and clarity associated with the issue in question. consensus on dissent is also of help here in separating contentious factual claims from undisputed ones and thus promotes further discussion. • in a second stage, all participating parties are required to interpret bandwidths of impacts to be expected for each criterion. interpretation means linking factual statements with values and interests to form a balanced overall judgement (conflicts of interests and values). this judgement can and should be made separately for each indicator. in this way, each of the chains of causes for judgements can be understood better and criticized in the course of the negotiations. for example, the question of trustworthiness of the respective risk management agencies may play an important role in the interpretation of an expected risk value. then it is the duty of the participating parties to scrutinize the previous performance of the authority concerned and propose institutional changes where appropriate. • third stage: even if there were a joint assessment and interpretation for every indicator, this would by no means signify that agreement is at hand. much rather, the participants' different judgements about decisionmaking options may be a result of different value weightings for the indicators that are used as a basis for the values and standards. 
for example, a committed environmentalist may give much more weight to the indicator for conservation than to the indicator of efficiency. in the literature on game theory, this conflict is considered to be insoluble unless one of the participants can persuade the other to change his preference by means of compensation payments (for example, in the form of special benefits), transfer services (for example, in the form of a special service) or swap transactions (do, ut des). in reality, however, it can be seen that participants in negotiations are definitely open to the arguments of the other participants (i.e. they may renounce their first preference) if the loss of benefit is still tolerable for them and, at the same time, the proposed solution is considered to be "conducive to the common good", i.e. is seen as socially desirable in public perception. if no consensus is reached, a compromise solution can and should be reached, in which a 'fair' distribution of burdens and profits is accomplished. • fourth stage: when weighing up options for action formal methods of balancing assessment can be used. of these methods, the cost-benefit analysis and the multi-attribute or multi-criteria decision have proved their worth. the first method is largely based on the approach of revealed "preferences", i.e. on people's preferences shown in the past expressed in relative prices, the second on the approach of "expressed preferences", i.e. the explicit indication of relative weightings between the various cost and benefit dimensions (fischhoff et al. ) . but both methods are only aids in weighing up and cannot replace an ethical reflection of the advantages and disadvantages. normative conflicts pose special problems because different evaluative criteria can always be classified as equally justifiable or unjustifiable as explained earlier. for this reason, most ethicists assume that different types and schools of ethical justification can claim parallel validity, it therefore remains up to the groups involved to choose the type of ethically legitimate justification that they want to use (ropohl , renn . nevertheless, the limits of particular justifications are trespassed wherever primary principles accepted by all are infringed (such as human rights). otherwise, standards should be classed as legitimate if they can be defended within the framework of ethical reasoning and if they do not contradict universal standards that are seen as binding for all. in this process conflicts can and will arise, e.g. that legitimate derivations of standards from the perspective of group a contradict the equally legitimate derivations of group b (shrader-frechette ). in order to reach a jointly supported selection of standards, either a portfolio of standards that can claim parallel validity should be drawn up or compensation solutions will have to be created in which one party compensates the other for giving up its legitimate options for action in favour of a common option. when choosing possible options for action or standards, options that infringe categorical principles, for example, to endangering the systematic ability of the natural environment to function for human use in the future and thus exceeding the limits of tolerability are not tolerable even if they imply major benefits to society. at the same time, all sub-dominant options have to be excluded. frequently sub-dominant solutions, i.e. 
those that perform worse than all other options with regard to all criteria at least in the long term, are so attractive because they promise benefits in the short term although they entail losses in the long term, even if high interest rates are assumed. often people or groups have no choice other than to choose the sub-dominant solution because all other options are closed to them due to a lack of resources. if large numbers of groups or many individuals act in this way, global risks become unmanageable (beck ) . to avoid these risks intermediate financing or compensation by third parties should be considered. the objective of this last section of chapter was to address and discuss the use of decision analytic tools and structuring aids for participatory processes in environmental management. organizing and structuring discourses goes beyond the good intention to have all relevant stakeholders involved in decision making. the mere desire to initiate a two-way communication process and the willingness to listen to stakeholder concerns are not sufficient. discursive processes need a structure that assures the integration of technical expertise, regulatory requirements, and public values. these different inputs should be combined in such a fashion that they contribute to the deliberation process the type of expertise and knowledge that can claim legitimacy within a rational decision-making procedure (von schomberg ). it does not make sense to replace technical expertise with vague public perceptions, nor is it justified to have the experts insert their own value judgements into what ought to be a democratic process. decision analytic tools can be of great value for structuring participatory processes. they can provide assistance in problem structuring, in dealing with complex scientific issues and uncertainty, and in helping a diverse group to understand disagreements and ambiguity with respect to values and preferences. decision analysis tools should be used with care. they do not provide an algorithm to reach an answer as to what is the best decision. rather, decision analysis is a formal framework that can be used for environmental assessment and risk handling to explore difficult issues, to focus debate and further analysis on the factors most important to the decision, and to provide for increased transparency and more effective exchange of information and opinions among the process participants. the basic concepts are relatively simple and can be implemented with a minimum of mathematics (hammond et al. ) . many participation organizers have restricted the use of decision analytic tools to assist participants in structuring problems and ordering concerns and evaluations, and have refrained from going further into quantitative trade-off analysis. others have advocated quantitative modelling as a clarification tool for making value conflicts more transparent to the participants. the full power of decision analysis for complex environmental problem may require mathematical models and probability assessment. experienced analysts may be needed to guide the implementation of these analytical tools for aiding decisions. skilled communicators and facilitators may be needed to achieve effective interaction between analysts and participants in the deliberative process whose exposure to advanced analytical decision aids is much less, so that understanding of both process and substance, and therefore transparency and trust, can be achieved. 
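one way such tools can focus debate on the factors that actually matter is a simple sensitivity analysis over the value weights, sketched below. the two weight profiles (a conservation-oriented and an efficiency-oriented participant) and the option utilities are hypothetical, echoing the earlier example of the committed environmentalist who weights conservation more heavily than efficiency.

```python
# illustrative sensitivity analysis: does the preferred option change when the
# weight profile changes? utilities and weight profiles are hypothetical.

options = {
    "intensive use":   {"conservation": 0.2, "efficiency": 0.9},
    "balanced use":    {"conservation": 0.6, "efficiency": 0.6},
    "full protection": {"conservation": 0.9, "efficiency": 0.2},
}

weight_profiles = {
    "environmentalist": {"conservation": 0.8, "efficiency": 0.2},
    "economist":        {"conservation": 0.2, "efficiency": 0.8},
}

def scores_for(weights):
    return {o: round(sum(weights[c] * options[o][c] for c in weights), 2) for o in options}

for profile, weights in weight_profiles.items():
    scores = scores_for(weights)
    winner = max(scores, key=scores.get)
    print(f"{profile}: scores {scores} -> preferred option: {winner}")
```

if the preferred option flips between plausible weight profiles, the weighting itself, which is a value question, is what the deliberation should concentrate on.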
many risk management agencies are already making use of decision analysis tools. we urge them to use these tools within an iterative, deliberative process with broad participation by the interested and affected parties to the decision, as part of the risk governance framework. the analytical methods, the data and judgement, and the assumptions, as well as the analytical results, should be readily available to and understood by the participants. we believe that both the risk management agencies and the interested groups within the public that government agencies interact with on environmental decisions should all gain experience with these methods. improper or premature use of sophisticated analytical methods may be more destructive to trust and understanding than helpful in resolving the difficulties of complexity, uncertainty, and ambiguity.
bemerkungen eines philosophen zum umweltverträglichen wirtschaften committee on institutional means for assessing risk to public health ( ) risk assessment in the federal government: managing the process ethik des risikos angewandte ethik. die bereichsethiken und ihre theoretische fundierung a tutorial introduction to decision theory a methodology for analyzing emission control strategies why preserve natural variety? Ökologie und ethik. ein versuch praktischer philosophie. ethik in den wissenschaften zur ethischen bewertung von biodiversität. externes gutachten für den wbgu. unveröffentlichtes manuskript decision theory and folk psychology towards a comprehensive conservation theory the limits to safety? culture, politics, learning and manmade disasters environmental standards. scientific foundations and rational procedures of regulation with emphasis on radiological risk management the art and science of negotiation what mainstream economists have to say about the value of biodiversity benefits, costs, and the safe minimum standard of conservation decision analytic tools for resolving uncertainty in the energy debate Ökologisch denken -sozial handeln: die realisierbarkeit einer nachhaltigen entwicklung und die rolle der sozial-und kulturwissenschaften ein diskursives verfahren zur bildung und begründung kollektiv verbindlicher bewertungskriterien a model for an analytic-deliberative process in risk management the challenge of integrating deliberation and expertise: participation and discourse in risk management a regional concept of qualitative growth and sustainability -support for a case study in the german state of baden-württemberg was heißt hier bioethik? tab-brief theologie der natur und ihre anthriopologisch-ethischen konsequenzen values in nature and the nature of value ob man die ambivalenzen des technischen fortschritts mit einer neuen ethik meistern kann? participation run amok: the costs of mass participation for deliberative agency decisionmaking ist die schöpfung noch zu retten? umweltkrise und christliche verantwortung experiences from germany: application of a structured model of public participation in waste management planning environmental ethics politics and the struggle to define: a discourse analysis of the framing strategies of competing actors in a "new introduction to decision analysis corporate environmentalism in a global economy respect for nature. a theory of environmental ethics meanings, understandings, and interpersonal relationships in environmental policy discourse. doctoral dissertation designing an analytic deliberative process for environmental health policy making in the u.s. nuclear weapons complex the framing of decisions and the psychology of choice perspectives on uncertainty and risk akzeptanzbeschaffung: verfahren und verhandlungen the erosion of the valuespheres. the ways in which society copes with scientific, moral and ethical uncertainty can participatory democracy produce better selves? psychological dimensions of habermas discursive model of democracy welt im wandel: erhaltung und nachhaltige nutzung der biosphäre. jahresgutachten discourse in citizen participation. an evaluative yardstick the craft and theory of public participation: a dialectical process a brief primer on participation: philosophy and practice system, diskurs, didaktik und die diktatur des sitzfleisches konsens als telos der sprachlichen kommunikation? 
key: cord- -z nf authors: lins filho, p. c.; macedo, t. s. d.; ferreira, a. k. a.; melo, m. c. f. d.; araujo, m. m. s. d.; freitas, j. l. d. m.; caldas, t. u.; caldas, a. d. f. title: assessing the quality, readability and reliability of online information on covid-19: an infoveillance observational study date: - - journal: nan doi: . / . . . sha: doc_id: cord_uid: z nf objective: this study aimed to assess the quality, reliability and readability of internet-based information on covid-19 available through brazil's most used search engines. methods: a total of websites were selected through google, bing, and yahoo. the websites' content quality and reliability were evaluated using the discern questionnaire, the journal of the american medical association (jama) benchmark criteria, and the presence of the health on the net (hon) certification. readability was assessed by the flesch reading ease adapted to brazilian portuguese (fre-bp). results: the web contents were considered of moderate to low quality according to the discern and jama mean scores. most of the sample presented very difficult reading levels and only . % displayed the hon certification. websites of governmental and health-related authorship showed lower jama mean scores, and quality and readability measures did not correlate with the webpages' content type. conclusion: covid-19 related contents available online were considered of low to moderate quality and not accessible. health care is rapidly transitioning from a paternalistic approach to a person-centered model. the internet offers a large amount of information, although the quality of the health and sanitary information offered is highly variable, ranging from scientific and evidence-based data to home remedies or information of very questionable origin that can be dangerous to health (eysenbach et al. ). thus, monitoring the quality of information available to the population is of great importance to control the spread of the disease itself and to mitigate its socioeconomic impacts (hua and shaw ). therefore, the who is dedicating tremendous efforts to providing evidence-based information and advice to the population through its social media channels and a new information platform called the who information network for epidemics (zarocostas ). sites on the internet were identified using the three most accessed search engines by internet users in brazil: google (www.google.com), bing (www.bing.com) and yahoo (www.yahoo.com), with respectively . %, . % and . % of accesses in april (statcounter ). the searches were conducted in april in the portuguese language. duplicate sites were excluded, as were non-operative sites, sites with direct access denied through password requirements, book review sites, sites offering only journal abstracts, and sites that did not offer information on covid-19. websites that could be modified by the general population were also not considered in this investigation. the quality of website information was assessed by four evaluators, who were previously trained in the analysis tools used. concerning the scientific accuracy and reliability of website information, who official reports and technical guidelines were used as standards.
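the fre-bp score mentioned above can be illustrated with a minimal sketch (not the authors' code). the constants are those of a widely used brazilian-portuguese adaptation of the flesch reading ease formula (248.835 - 1.015*ASL - 84.6*ASW); both the constants and the crude syllable counter are assumptions for illustration only.

```python
import re

def fre_bp(text, count_syllables):
    """Flesch Reading Ease with constants of a common Brazilian-Portuguese
    adaptation (assumed here): 248.835 - 1.015*ASL - 84.6*ASW."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    if not sentences or not words:
        return None
    asl = len(words) / len(sentences)                           # average sentence length
    asw = sum(count_syllables(w) for w in words) / len(words)   # syllables per word
    return 248.835 - 1.015 * asl - 84.6 * asw

def naive_syllables(word):
    # very rough approximation: count groups of vowels (accented vowels included)
    return max(1, len(re.findall(r"[aeiouáéíóúâêôãõà]+", word.lower())))

# scores near the bottom of the scale are usually read as "very difficult"
print(fre_bp("lave as mãos com frequência. evite aglomerações.", naive_syllables))
```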
the websites that were divergently qualified by the examiners were reassessed until a consensus score was achieved. as this was a study of published information that involved no participants, no ethics approval was required. in order to avoid any changes that might be made to the eligible websites during the period of analysis, the sites were assessed on the same day by the evaluators. the sites were classified in terms of affiliation as commercial, news portal, non-profit organization, university or health center, and government. the type of content was classified as corresponding to medical facts, human experiences of interest, questions and answers, and socioeconomic related content. the quality of information of the selected websites was assessed using the journal of the american medical association (jama) benchmark criteria and the discern questionnaire, whose questions 1-8 address the reliability of the publication and help users to decide whether it can be trusted as a source of information relating to treatment choice. questions 9-15 address specific details of the information relating to treatment alternatives: in this context, some of them refer to the active treatments described in the publication (possibly including self-care), while the options without treatment are addressed separately in another question. in turn, question 16 corresponds to the global quality assessment at the end of the instrument. each question is scored on a scale of 1 to 5 (where 1 indicates that the publication is poor and 5 that the publication is of good quality). in the present study, only the first section of the questionnaire was used for the reliability assessment. the readability (re) of the websites was assessed by the flesch reading ease adapted to brazilian portuguese (fre-bp). data were submitted to statistical analysis; all tests were applied considering an error of 5% and a confidence interval of 95%, and the analyses were carried out using spss software version . (spss inc.). a total of websites fulfilled the inclusion and exclusion criteria. the search retrieval flow diagram is presented in figure . according to affiliation, most websites were from news portals ( . %), followed by government ( . %), commercial sites ( . %), university or health center ( . %) and non-profit organization ( . %). considering the type of content, the majority of the sites displayed medical facts ( . %). the correlation between the distinct measures assessed through the instruments was also analyzed. besides the quality of information, the amount of it provided to an individual is also a concern: previous research suggests that the vast amount of available information can be confusing, potentially resulting in over-concern and information overload (farooq et al. ). in addition to ensuring the quality and reliability of information, it is important that high-quality content also be readable and accessible. in the present study such a correlation was not found: despite the growing concern about the quality of health-related information available online (farooq et al. ), no positive correlation
was found, which demonstrates the need for further efforts on improving the accessibility of high-quality health-related information available online. the internet and search engines are dynamic processes that constantly change. the sites evaluated in this investigation may not necessarily reflect the information available to patients at another time. this was a limitation of this investigation. however, the search engines used for the consultation represent . % of the access of brazilian internet users (statcounter ). in addition, to cover a reasonable amount of data, the first consecutive websites of each search engine were accessed. regarding the present sample of brazilian websites, covid-19 contents were considered of low to moderate quality and low readability based on the parameters adopted.
during the covid-19 pandemic: cross-sectional study
influence of peer support on hiv/sti prevention and safety amongst international migrant sex workers: a qualitative study at the mexico-
mental health problems and social media exposure during covid-19 outbreak
health on the net (hon). hon foundation
"infodemic" and emerging issues through a data lens: the case of china
institute of medicine (us) committee on assuring the health of the public in the 21st century.
the future of the public's health in the 21st century
quality and readability of internet-based information on halitosis
coronavirus goes viral: quantifying the covid-19
evaluating the dental caries-related information on brazilian websites: qualitative study
person-centered care model in dentistry
the quality of internet sites providing information relating to oral cancer
relationship between internet health information and patient compliance based on trust: empirical study
impact of physician-patient communication in online health communities on patient compliance: cross-sectional questionnaire study
spanish in readability of online endodontic information for laypeople
quality of information about oral cancer in brazilian
the quality and readability of online consumer information about gynecologic cancer
search engine market share brazil
communicating risk in public health emergencies: a who guideline for emergency risk communication (erc) policy and practice. switzerland. isbn
combining point-of-care diagnostics and internet of medical things (iomt) to combat the covid-19 pandemic. diagnostics (basel)
how to fight an infodemic
key: cord- -ygmkul authors: khrennikov, andrei title: social laser model for the bandwagon effect: generation of coherent information waves date: - - journal: entropy (basel) doi: . /e sha: doc_id: cord_uid: ygmkul during recent years our society has often been exposed to coherent information waves of high amplitudes. these are waves of huge social energy. often they are of destructive character, a kind of information tsunami. however, they can also carry positive improvements in human society, as waves of decision-making matching rational recommendations of societal institutes. the main distinguishing features of these waves are their high amplitude, coherence (homogeneous character of the social actions generated by them), and the short time needed for their generation and relaxation. such waves can be treated as large-scale exhibitions of the bandwagon effect. we show that this socio-psychic phenomenon can be modeled based on the recently developed social laser theory. this theory can be used to model stimulated amplification of coherent social actions. "actions" are treated very generally, from mass protests to votes and other collective decisions, such as, e.g., acceptance (often unconscious) of some societal recommendations. in this paper, we concentrate on the theory of laser resonators, physical vs. social. for the latter, we analyze in detail the functioning of internet-based echo chambers. their main purpose is increasing the power of the quantum information field as well as its coherence. of course, the bandwagon effect is well known and well studied in social psychology. however, social laser theory gives the possibility to model it by using the general formalism of quantum field theory. the paper contains the minimum of mathematics and it can be read by researchers working in psychological, cognitive, social, and political sciences; it might also be interesting for experts in information theory and artificial intelligence. during recent years, the grounds of the modern world have been shocked by coherent information waves of very high amplitude. the basic distinguishing property of such waves is that they carry huge amounts of social energy. thus, they are not just waves widely distributing some special information content throughout human society. instead, their information content is very restricted.
typically, the content carried by a wave is reduced to one (or a few) labels, or "colors": one wave is "green", another is "yellow". at the same time, information waves carry a very big emotional charge, a lot of social energy. therefore, they can have a strong destructive as well as constructive impact on human society. in this paper, we present a model of the generation of very powerful and coherent information waves; a model based on the recently developed theory of the social laser [ ] [ ] [ ] [ ] [ ] [ ] . we stress that social laser theory is part of the extended project on applications of the formalism of quantum theory outside of physics, quantum-like modeling (see, e.g., monographs [ ] [ ] [ ] [ ] [ ] and some selection of papers [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] ). this terminology was invented by the author to distinguish this modeling from attempts to reduce human consciousness, cognition, and consequently behavior to genuine quantum physical processes in the brain (see, e.g., penrose [ ] or hameroff [ ] ). we do not criticize such attempts here. for social lasing, the following conditions on the human gain medium are basic:
• indistinguishability of people. the human gain medium, the population exposed to the information radiation, should be composed of social atoms, "creatures without tribe": the role of national, cultural, religious, and even gender differences should be reduced as much as possible.
• content ignorance. social atoms should process information communications without deeply analyzing their contents; they extract only the basic labels ("colors") encoding the communications.
of course, humans are still humans, not social atoms; thus, in contrast to quantum physics, it is impossible to create human gain mediums composed of completely indistinguishable creatures. people still have names, gender, nationality, but such characteristics are ignored in the regime of social lasing. one of the basic components of lasers, both physical and social, is a resonator [ ]. it plays a double role:
• amplification of the beam of (physical vs. information) radiation;
• improving the coherence of this beam.
social laser resonators play a crucial role in the generation of coherent information waves of high amplitude. they are established via internet-based echo chambers associated with social networks, blogs, and youtube channels. their functioning is based on the feedback process of posting and commenting, the process that exponentially amplifies the information waves that are initially induced by mass media. echo chambers improve the coherence of the information flow through the statistical elimination of communications that do not match the mainstream. this statistical elimination is a consequence of the bosonic nature of the quantum information field (sections . and . ). although this quantum process of coherence generation dominates in echo chambers, we should not ignore other technicalities increasing coherence (sections . and . ), such as the censorship of moderators and the dynamical evaluation systems of search engines of, e.g., google, youtube, or yandex. the latter system elevates the approachability of posts, comments, and videos depending on the history of their reading (seeing) and the reactions to them, say in the form of new comments. this is a good place to recall that the quantum-like hilbert space formalism is widely used for the modeling of information processing by internet search engines, and, in particular, for information retrieval [ ] [ ] [ ] [ ] [ ] [ ] . we compare the functioning of optical and information mirrors (section . ).
the latter represents the feedback process in internet systems such as, e.g., youtube. in contrast to the optical mirror, the information mirror not only reflects excitations of the quantum information field, but also multiplies them. thus, this is a kind of reflector-multiplier (section . ). as the result of this multiplication effect, social resonators are more effective than physical ones. however, as in physics, resonator efficiency depends on a variety of parameters. one of such parameters is the coefficient of reflection-multiplication (section . ). we analyze the multilayer structure of an information mirror and dependence of this coefficient on the layer (section . ). the main output of this paper is presented in section describing the quantum-like mechanism of the generation of big waves of coherent information excitations. we start the paper with compact recollection of the basics of social laser theory distilled from technical details and mathematical formulas. we present the basic notions of this theory such as social energy (section . )) and social atom, human gain medium (section . ), information field (section . ), the energy levels structure of social atoms (section . ), and spontaneous and stimulated emission of information excitations (section . ). finally, we conclude the introduction by the schematic presentation of the functioning of social laser theory (section ). the role of information overload in approaching indistinguishability of information communications, up to their basic labels, quasi-colors, is discussed in section . . this is a good place to mention studies on coupling indistinguishability and contextuality [ ] . finally, we point to coupling of the social laser project with foundations of quantum theory (appendix b). the basic component of a physical laser is a gain medium, an ensemble of atoms. energy is pumped into this medium aimed to approach the state of population inversion, i.e., the state where more than % of atoms are excited [ ] . then, a coherent bunch of photons is injected into the gain medium and this bunch stimulates the cascade process of emission of the coherent photon beam. if the power of pumping is very high, i.e., it is higher than the so-called lasing threshold, all energy of pumping is transferred into the output beam of coherent radiation. to make this beam essentially stronger, the laser is equipped by an additional component, the laser resonator (typically in the form of an optical cavity). the laser resonator also improves the coherence of the output beam, by eliminating from the beam photons that were generated via spontaneous emission in the gain medium [ ] . typically, in physics, coherence is formulated in physical waves terms, as electromagnetic waves going in phase with the same direction of propagation and frequency. for us it is convenient to reformulate this notion by excluding any reference to waves in the physical space, since we want to move to the information space. instead of the wave picture we can use the photon picture, so a propagating wave is represented as a cloud of energy quanta. (this is the fock representation in quantum field theory.) coherence means that they have the same energy (frequency) and the direction of propagation-photon's wave vector. we remark that a photon also has additional characteristics such as polarization, the quantum version of the ordinary polarization of light. 
for convenience of further considerations, let us call all characteristics of a photon additional to its energy quasi-color. we recall that the usual light's color is determined by photon energy (frequency). therefore, a photon has its color and quasi-color. the notion of social energy is the main novel component of our quantum-like modeling. to justify the use of a social analog of the physical energy, we use the quantum-mechanical interpretation of energy, not as an internal feature of a system, but as an observable quantity. thus, like in the case of an electron, we cannot assign to a human the concrete value of the social energy. there are mental states in the superposition of a few different values of the social energy. however, by designing proper measurement procedures we can measure human energy; see [ , ] for details. social energy is a special form of the psychic energy. we recall that at the end of th/beginning of th century psychology was strongly influenced by physics, classical statistical physics and thermodynamics (in works of james and freud), later by quantum physics (in works of jung). in particular, the leading psychologists of that time have actively operated with the notion of psychic energy [ ] [ ] [ ] [ ] . later psychologists essentially lost interest in the construction of general theories and, in particular, operating with the notion of the social energy. recently, the notion of social energy attracted a lot interest in economics and finance, multi-agent modeling, evolution theory and industrial dynamics [ ] [ ] [ ] . of course, these novel as well as old (freud-jung) studies support our model. however, we emphasize that the application of the quantum (copenhagen) methodology simplifies and clarifies essentially the issue of the social energy. we treat it operationally as an observable on a system, a human being. in contrast to, say, freud, we are not interested in psychic and neurophysiological processes of generation of psychic energy (see appendix a for a brief discussion). the basic component of social laser is a gain medium, an ensemble of people. as already mentioned, to initiate lasing, such a gain medium should consist of indistinguishable people, i.e., without tribe, without cultural, national, religious, and ideally sex differences. such beings are called social atoms. (it is not clear whether they still can be called humans). of course, people still have aforementioned characteristics, in some contexts they remember that they are men or women, or even christian, or swedish. we discuss contexts in which people behave as indistinguishable, as social atoms. creation of such behavioral contexts is the first step towards initiation of social lasing. we recall that in quantum physics the electromagnetic field is treated as a carrier of interactions. in the quantum framework, interaction cannot be represented as it was done classically, by force-functions. quantum interaction is of the information nature. in quantum information theory, excitations of the quantum electromagnetic field, photons, are carriers of information. at the same time, each excitation also carries a quantum of energy. this quantum picture is very useful for general modeling of information fields generated by mass media and the internet. communications emitted by newspapers, journals, tv, social networks, and blogs are modeled as excitations of a quantum information field, as quanta of information and social energy. 
as we know, the quantum description is operational; this is only the mathematical symbolism used for prediction of probabilities. even the quantum electromagnetic field cannot be imagined as a "real wave" propagating in spacetime. (in the formalism, this is a distribution, a generalized function, with operator values. hence, this is a very abstract mathematical structure. it is useful for accounting for the numbers of energy quanta and for description of the processes of their emission and absorption.) on the one hand, this impossibility of visualization is a disadvantage of the quantum description compared to the classical one (we remark that the visualization of the classical electromagnetic field is also not as straightforward as might be imagined. the electromagnetic waves were invented as waves propagating in a special medium, the aether, similarly to acoustic waves propagating in air. later, einstein removed the aether from physics. the picture of a vibrating medium became inapplicable. therefore, electromagnetic waves are vibrations of a vacuum; this is not so natural a picture for the visualization of this process.). on the other hand, this is a great advantage, since it provides the possibility for generalizations having no connection with physical spacetime. thus, we model the information field as a quantum field with communications (generated, e.g., by mass media) as quanta carrying social energy and some additional characteristics related to communication content. as was already emphasized, the quantum description is applicable to fields with indistinguishable excitations, where indistinguishability is considered with respect to observable characteristics. in addition, "observable" means those characteristics that people assign to communications. these are labels of communications, say "terrorism", "war in syria", "coronavirus" and so on. such labels we shall call quasi-colors of information excitations; these are analogs of the photon's wave vector and polarization. thus, each communication is endowed with a quasi-color. it also carries a quantum of energy; its value we consider as the communication's color. thus, allegorically we can speak about red, blue, or violet information. content ignorance (up to communication quasi-color and color) is the crucial feature of the applicability of the quantum formalism. why do social atoms compress the contents of communications to quasi-colors? the most important reason is information overload. the information flows generated by mass media and the internet are so powerful that people are not able to analyze communication content deeply; they just scan its quasi-color and absorb a quantum of the social energy carried by this communication. they simply do not have the computational and time resources for such an analysis. it is also crucial that people lose their identity, so they become social atoms. for a social atom, there are no reasons, say cultural or religious, to analyze news; he is fine with just absorbing the labels (quasi-color) and social energy (color) assigned to them. consider for simplicity social atoms with just two energy levels, excited and relaxed, with energies E_1 and E_0. the difference between these levels, E_a = E_1 − E_0, is the basic parameter of a social atom, its color. a social atom reacts only to a communication carrying energy E_c matching its color: if a communication carries too high an energy charge, E_c larger than E_a ("a social atom is yellow, but a communication is blue"), then the atom is not able to absorb it.
say a communication carrying social energy E_c is a call for an uprising against the government, and the atom is a bank clerk in moscow who has liberal views and hates the regime, but the energy of his excited state is too small to react to this call. if E_c is less than E_a ("an atom is blue, but a communication is yellow"), then the atom would not be excited by this communication; the communication would simply be ignored. as well as a physical atom, a social atom cannot collect social energy continuously from communications carrying small portions of energy (compared to E_a = E_1 − E_0): it either absorbs a communication (if the colors of the atom and the communication match each other) or it does not pay attention to it. in the same way, a social atom cannot "eat" just a portion of the energy carried by a too highly charged communication. in physics textbooks, the condition of absorption of an energy quantum by an atom is written as the precise equality: E_c = E_a = E_1 − E_0. (1) however, precise equalities are only mathematical idealizations of the real situation. the photon-absorption condition (1) is satisfied only approximately: spectral line broadening is always present. the difference between the energies of the atom's levels is the mean value (average) of a gaussian distribution, a bell centered at this point of the energy axis. the dispersion of the gaussian distribution depends on the ensemble of atoms. ensembles with small dispersion are better as gain mediums for lasing, but deviations from the exact law (1) are possible. it is natural to assume a gaussian realization of exact laws even for social systems; in particular, for the absorption of excitations of the quantum information field by social atoms. thus, deviations from (1) are possible. however, a good human gain medium should be energetically homogeneous; therefore, the corresponding gaussian distribution should have a very small dispersion. shock news, say a catastrophe, war, killed people, an epidemic, a terror attack, is very good for energy pumping into a social gain medium. the modern west is characterized by a high degree of excitation: the energy E_1 of the excited level is sufficiently high-otherwise one would not be able to survive: life in the megalopolis, long distances, the high intensity of the working day, and so on. on the other hand, the energy E_0 of the relaxation level is very low-one who is living on state support, say, in sweden, has practically zero excitement; often his state is depressive. hence, E_a = E_1 − E_0 is high and a social atom would absorb only communications carrying very high energy: as in the aforementioned shock news or, say, in tv shows, where people should cry loudly and express highly emotional psychic states. since E_a is high (blue), people would not pay attention to plain news (say, red colored). even scientific news attracts attention only if it is very energetic and carries a big emotional charge (blue or, even better, violet). however, shock news is very good for energy pumping not only because it carries a high charge of social energy, but also because it is very good at peeling communications from content. labels (quasi-colors) such as "coronavirus is a bio-weapon" lead to immediate absorption of communications; social atoms react immediately to the instinctive feeling of danger.
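the approximate character of the absorption law (1) and the spectral line broadening can be illustrated with a small simulation; this is a minimal sketch (not from the paper) in which each social atom's resonance energy is drawn from a gaussian distribution and absorption requires an approximate match with the communication energy. the tolerance and the numerical values are illustrative assumptions.

```python
import random

def excited_fraction(E_c, E_a_mean, sigma, tolerance, n_atoms=100_000):
    # each atom's resonance energy E_a is Gaussian (spectral line broadening);
    # the atom absorbs a communication of energy E_c only if E_c ~ E_a within a tolerance
    hits = sum(abs(E_c - random.gauss(E_a_mean, sigma)) < tolerance
               for _ in range(n_atoms))
    return hits / n_atoms

print(excited_fraction(E_c=1.00, E_a_mean=1.0, sigma=0.05, tolerance=0.02))  # matched colors
print(excited_fraction(E_c=1.30, E_a_mean=1.0, sigma=0.05, tolerance=0.02))  # "blue" news in a "yellow" medium
```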
in our quantum-like model (similarly to physical atoms), social atoms can both absorb and emit quanta of the social energy. as in physics, there are two types of emission: spontaneous and stimulated. the spontaneous emission happens without external interaction: a social atom spontaneously emits a quantum of social energy in the form of some social action. such spontaneous actions are not coherent; different atoms do different things, and the quasi-colors of the social energy quanta emitted spontaneously can be totally different. such emissions generate a social noise in the human medium, noise that is unwanted in social lasing. in particular, spontaneous emission noise disturbs the functioning of internet echo chambers. on the other hand, the emission of quanta of social energy can be stimulated by excitations of the information field. in the very simplified picture, it looks like this: an excited social atom, by interacting with an information excitation, emits (with some probability) a quantum of social energy. the most important feature of this process is that the quasi-color of the emitted quantum coincides with the quasi-color of the stimulating communication. this is the root of the coherence in the output beam of lasers, both social and physical. (the colors also coincide; see section . ). in reality, the process of stimulated emission is more complicated. it is important that the information field (similarly to the quantum electromagnetic field) satisfies bose-einstein statistics. this is a thermodynamic consequence [ ] of the indistinguishability of excitations: two excitations with the same social energy and quasi-color are indistinguishable. as was shown in [ ], by using the gibbs approach based on consideration of virtual ensembles of indistinguishable systems (of any origin) we obtain the standard quantum classification of possible statistics: bose-einstein, fermi-dirac, and parastatistics. indistinguishability is up to energy (for the fixed quasi-color). hence, by taking into account that the number of communications carrying the same charge of social energy can be arbitrary, we derive the bose-einstein statistics for the quantum information field (see [ ] for the derivation's details). the interaction of atomic-like structures with bosonic fields is characterized by the following property: the probability of stimulated emission from an atom increases very quickly with increasing power of the bosonic field. an excited social atom reacts rather weakly to the presence of a few information excitations. however, if there are many, then it cannot stay indifferent. in fact, this is just a socio-physical expression of the well-known bandwagon effect in human behavior [ ]. in contrast to psychology, we can provide a mathematical field-theoretical model for such an effect. we consider a fixed energy (frequency) mode of the quantum electromagnetic field. for the fixed quasi-color mode α, the n-photon state |n, α⟩ can be represented in the form of the action of the photon creation operator a†_α corresponding to this mode on the vacuum state |0⟩: |n, α⟩ = (a†_α)^n / √(n!) |0⟩. this representation gives the possibility to find that the transition probability amplitude from the state |n, α⟩ to the state |n + 1, α⟩ equals √(n + 1). on the other hand, it is well known that the reverse process of absorption, characterized by the transition probability amplitude from the state |n, α⟩ to the state |n − 1, α⟩, equals √n. generally, for a quantum bosonic field, increasing the number of its quanta leads to increasing the probability of generation of one more quantum in the same state. this constitutes one of the basic quantum advantages of laser-stimulated emission, showing that the emission of a coherent photon is more probable than the absorption.
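a small numerical illustration of this bosonic amplification (a toy sketch; the coupling constant g is a hypothetical parameter, not from the paper): emission into a mode already occupied by n quanta scales as (n + 1), absorption as n, so an almost indifferent atom becomes almost certain to emit once the mode is heavily populated.

```python
g = 1e-3   # hypothetical single-quantum coupling: emission probability into an empty mode

for n in (0, 10, 1_000, 100_000):
    p_emit = min(1.0, g * (n + 1))   # stimulated + spontaneous emission, rate ~ |sqrt(n+1)|^2
    p_abs  = min(1.0, g * n)         # absorption, rate ~ |sqrt(n)|^2
    print(f"n = {n:>6}: emission ≈ {p_emit:.3f}, absorption ≈ {p_abs:.3f}")
```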
since, as shown in [ ], indistinguishability, up to energy (color) and quasi-color, of information excitations leads to the bose-einstein statistics, we can use the quantum operational calculus for bosonic fields even for the quantum information field and formalize in this way the bandwagon effect in psychology [ ]. this is a good place to recall that in our considerations the notion of "social action" is treated very widely, from a purely information action, such as posting a communication on facebook or commenting on an already posted communication, to a real physical action, such as participating in a demonstration against putin or trump, or supporting the government's policy on "self-isolation". the previous works on the social laser [ ] [ ] [ ] [ ] emphasized the external representation of social actions, say in the well-known color revolutions. in this paper, we are more interested in their representation in information spaces, e.g., the spaces of social networks. however, we are even more interested in the internal representation of some social actions as decision-making. in addition, a decision can have different forms, not only "to do"-decisions, but also "not to do"-decisions. decisions of the latter type also consume energy, and social atoms transit from the excited state to the relaxed one. it is also important to point to the unconscious character of many (or maybe the majority) of our decisions. for example, people can support (or not support) societal policies totally unconsciously. to make such decisions, they consume social energy. mass media and the internet pump social energy into a gain medium composed of social atoms to approach the population inversion-to transfer most atoms into excited states. then a bunch of communications of the same quasi-color and energy (color) matching the resonant energy of the social atoms is injected into the gain medium. in the simplified picture, each communication stimulates a social atom to emit a quantum of social energy with the same quasi-color as its stimulator. the resulting two excitations stimulate two social atoms to emit two quanta, the latter two quanta generate four, and so on; after, say, 20 steps there are 2^20, approximately one million, information excitations of the same (quasi-)color. in reality, the process is probabilistic: an atom reacts to a stimulating information excitation only with some probability. the latter increases rapidly with increasing density of the quantum information field. now, we discuss the basic counterparts of social lasing in more detail:
• each information communication carries a quantum of social energy. the corresponding mathematical model is of the quantum field type, the information field. quanta of social energy are its excitations.
• each social atom is characterized by its social energy spectrum; in the simplest case of two levels, this is the difference between the energies of the excitation and relaxation states, E_a = E_1 − E_0.
• besides social energy, the excitations of the information field are characterized by other labels, quasi-colors. coherence corresponds to social color sharpness; the ideal social laser emits a single mode of quasi-color, denoted say by the symbol α. humans in the excited state interacting with α-colored excitations of the information field also emit α-colored excitations.
• the amount of social energy carried by the communications stimulating lasing should match the resonance energy E_a of the social atoms in the human gain medium.
• to approach the population inversion, the social energy is pumped into the gain medium.
this energy pumping is generated by the mass media and the internet sources. the gain medium should be homogeneous with respect to the social energy spectrum. in the ideal case, all social atoms in the gain medium should have the same spectrum, e a . however, in reality, it is impossible to create such a human gain medium. as in physics, the spectral line broadening must be taken into account. for example, a gain medium consisting of humans in the excited state and stimulated by the anti-corruption colored information field would "radiate" a wave of anti-corruption protests. the same gain medium stimulated by an information field carrying another social color would generate the wave of actions corresponding this last color. the general theory of resonators for social lasers is presented in [ ] . here we shall consider in more detail special, but at the same very important type of social resonators, namely internet-based echo chambers. we recall that an echo chamber is a system in that some ideas and behavioral patterns are amplified and sharped through their feedback propagation inside this system. in parallel to such amplification, communications carrying (as quasi-color) ideas and behavioral patterns different from those determined by the concrete echo chamber are suppressed. in our terms, an echo chamber is a device for transmission and reflection of excitations of the quantum information field. its main purpose is amplification of this field and increasing its coherence via distilling from "social noise". the latter function will be discussed later in more detail. the echo chamber is also characterized by the resonance social energy e a of its social atoms. for simplicity, it is assumed that all social atoms have the same resonance energy e a . (in reality, resonance energy of social atoms is a gaussian random variable with mean value e a .) we underline that in this paper an echo chamber is considered to be a component of the social laser, its resonator. compared to physics we can say that this is an analog of an optical cavity of the physical laser, not optical cavity by itself. the coherent output of an echo chamber, the quasi-color of this output, is determined not only by the internal characteristics of the echo chamber, but also by the quasi-color of stimulating emission. let us consider functioning of some internet-based echo chamber; for example, one that is based on some social group in facebook (or its russian version "vkontakte") and composed of social atoms. the degree of their indistinguishability can vary depending on the concrete echo chamber. say, names are still present in facebook, but they have some meaning only for the restricted circle of friends; in instagram or snapchat, even names disappear and social atoms operate just with nicknames. by a social group we understand some sub-network of say facebook, for example, social group "quantum physics". the main feature of a social group is that all posts and comments are visible for all members of this social group. thus, if one from the group puts a post, then it would be visible for all members of this social group, and they would be able to put their own comments or posts related to my initiation post. this is simplification of the general structure of posting in facebook, with constraints that are set by clustering into "friends" and "followers". we assume that the ensemble of social atoms of this echo chamber approached population inversion, so most of them are already excited. 
a bunch of communications of the same quasi-color α and carrying quanta of social energy e c = e a is injected in the echo chamber. excited social atoms interact with the stimulating communications and emit (with some probability) information excitations of the same quasi-color as the injected stimulators. these emitted quanta of social energy are represented in the form of new posts in echo chamber's social group. each post plays the role of a mirror, it reflects the information excitation that has generated this post. however, the analogy with the optics is not straightforward. in classical optics, each light ray is reflected by a mirror again as one ray. in quantum optics, each photon reflected by a mirror is again just one photon. an ideal mirror reflects all photons (the real one absorbs some of them). in contrast, "the mirror of an echo chamber", the information mirror, is a multiplier. a physical analog of such a multiplier mirror would work in the following way. each light ray is reflected as a bunch of rays or in the quantum picture (matching better the situation), each photon by interacting with such a mirror generates a bunch of photons. of course, the usual physical mirror cannot reflect more photons than the number of incoming ones, due to the energy conservation law. hence, the discussed device is hypothetical. this is a good place to remark that as mentioned, a photon should not be imagined as a metal ball reflecting from mirror's surface. a photon interacts with the macro-system, the mirror, and the latter emits a new photon that is identical to the incoming one, up to the direction of spatial propagation. it seems to be possible to create a kind of a mirror with the complex internal structure (composed of special materials) such that it would generate emission of a bunch of photons. of course, such a multiplier mirror cannot function without the energy supply. the internet-based system of posting news and communications works as a multiplier mirror. each posted news or communication emits a bunch of "information rays" directed to all possible receivers-the social atoms of echo chamber's social group. in the quantum model, each post works as an information analog of photon's emitter. it emits quanta of social energy; the power of the information field increases. consequently, excited social atoms emit their own posts and comments with higher probability. we repeat that new posts have the same quasi-color as the initiating information excitations that were injected in the echo chamber. it is also important to remind that the process of stimulated emission is probabilistic. members of the social group would react to newly posted message only with some probability. in addition, resulting from the bosonic nature of the quantum information field, this probability increases rapidly with increasing of field's power. by reaction we understood emission of a new message, say a comment. if a social atom simply reads a posted communication, but does not emit its own information excitation, then we do not consider such reading as a reaction. for the moment, we consider only the process of stimulated emission. later we shall consider absorption as well. in the latter, reaction means transition from the ground state to the excited state; so, not simply reading. (in principle, a relaxed atom can read a post or a comment without absorbing a quantum of social energy sufficient for approaching the state of excitement.) 
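a toy simulation of this feedback loop (an illustrative sketch, not from the paper): every post is "reflected" by the information mirror to all members of the group; each still-excited member reacts with some probability and thereby adds a new post of the same quasi-color. the parameters are assumptions; the point is only that the number of coherent excitations grows explosively while the pool of excited atoms lasts.

```python
import random

def echo_chamber(n_members, p_react, seed_posts, rounds):
    # each round, every new post is shown to all members; an excited member
    # reacts (emits a post of the same quasi-color) with probability p_react per post
    excited = set(range(n_members))          # population inversion: all members excited
    new_posts, history = seed_posts, [seed_posts]
    for _ in range(rounds):
        reacted = set()
        for member in list(excited):
            # probability of reacting to at least one of the new posts
            if random.random() < 1 - (1 - p_react) ** new_posts:
                reacted.add(member)
        excited -= reacted                    # emitting a post consumes the excitation
        new_posts = len(reacted)
        history.append(new_posts)
        if new_posts == 0:
            break
    return history

print(echo_chamber(n_members=10_000, p_react=0.001, seed_posts=5, rounds=15))
```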
the crucial difference from physics is an apparent violation of the energy conservation law (see appendix a for a discussion of this question). each post in a social group works as a social energy multiplier. thus, information excitations in the echo chamber generated by posted communications not only increase the probability of emission of new information excitations by excited atoms, but they also perform the function of additional energy pumping into the gain medium (the social group). relaxed social atoms can absorb social energy not only from externally pumped messages from mass media, tv and other social networks, but even from their own echo chamber. then they also emit new posts, and so on. the main distinguishing feature of the quantum information field is its bosonic nature. we now emphasize the impact of the bosonic structure on the coherence of the information field inside an echo chamber. as was already noted (section . ), the interaction of a social atom with the surrounding bosonic field depends crucially on the power of this field: the probability of emission of an energy quantum by an excited social atom increases very quickly with the field's power. now, we stress again that a social atom (as well as a physical atom) distinguishes the modes of the field corresponding to different quasi-colors. the probability of emission of a quantum of the fixed quasi-color α depends on the power of the field's mode colored by α. thus, if the power of the α-mode is essentially higher than the power of the mode colored by β, then with very high probability social atoms would emit α-colored energy quanta (in the form of posts, comments, and videos). social atoms would ignore the β-colored energy quanta; the probability of emission of such a quantum (and hence of an increase of the power of the β-mode) is practically zero. if a social atom emits a communication colored by β, then this information excitation would not attract the attention of social atoms who are busy with communications colored by α. as was already emphasized, the crucial role is played by indistinguishability, up to the quasi-colors, of the excitations of the information field. social atoms should process information in the regime of label scanning, without analyzing its content. as was discussed, the easiest way to establish the indistinguishability regime of information processing is to generate an information overload in the gain medium composed of social atoms. of course, the loss of individuality by social atoms is also very important: people "without tribe" are better accommodated to perceive information in the label-scanning regime. in this regime, one would never absorb the main information of the β-labeled communication, say statistical data. in this section, we considered the quantum-like nature of the coherence of the information waves generated in echo chambers. this coherence is rooted in the indistinguishability of information excitations, the label-scanning regime. the information overload and the loss of individuality by social atoms are the main socio-psychological factors leading to this regime. in the following sections . and . , we consider supplementary factors increasing the information field's coherence. now, we connect a social resonator, e.g., in the form of an internet-based echo chamber, to the social laser device described in section . as the result of the feedback processing of information in the echo chamber, the power and coherence of the information field increase enormously.
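the winner-take-all tendency of this mode competition can be shown with a toy two-mode simulation (an illustrative sketch with assumed parameters, not from the paper): at each step an excited atom emits into the α- or β-mode with probability proportional to (n_mode + 1). a modest initial advantage of the α-mode typically grows into a dominant share of all emitted quanta; adding losses for the weak mode would suppress it completely.

```python
import random

def mode_competition(n_alpha, n_beta, emissions):
    # bosonic enhancement: an emission joins a mode with probability ~ (n_mode + 1)
    for _ in range(emissions):
        w_alpha, w_beta = n_alpha + 1, n_beta + 1
        if random.random() < w_alpha / (w_alpha + w_beta):
            n_alpha += 1
        else:
            n_beta += 1
    return n_alpha, n_beta

# a modest initial advantage of the alpha mode (10 vs 2 excitations)
for trial in range(3):
    a, b = mode_competition(10, 2, emissions=100_000)
    print(f"alpha: {a:>7}  beta: {b:>7}  share of alpha: {a / (a + b):.3f}")
```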
one of the ways to consume the huge energy of this information field is to realize it in the form of physical social actions, mass protests, e.g., demonstrations or even a wave of violence. this is the main mechanism of color revolutions and other social tsunamis [ ] [ ] [ ] [ ] . however, in this paper we are more interested in the purely information consumption of the social energy of the coherent information field prepared in an echo chamber, namely for internal decision-making. decision-making on a question that important for society is also a social action; in particular, it consumes social energy. now, suppose that say a government needs some coherent and rational (from its viewpoint) decision on some question. it can use a powerful social laser. this is a good place to remark that an ensemble of echo chambers can be used coherently with stimulation by the same quasi-color α corresponding to the desired decision. by emitting the information excitation, a social atom confirms his-her support of the α-decision. such social action is realized in the mental space of social atoms, but, of course, it has consequences even for associated actions in the physical space. if the wave in the information space generated by a powerful social laser can approach the steady state, then social atoms live in the regime of the repeated confirmation of the internal α-decision: an atom emits and relaxes, then he/she again absorbs another α-excitation and moves to the state of excitement and so on. in this situation of surrounding by the information field of huge power concentrated on the same α-mode, the colors of the energy pumping and stimulated emission coincide. such repeating of the same α-decision is similar to concentration on the idea-fix and can lead to the state of psychosis and panic (see freud [ ] ). as in physical lasing, the above ideal scheme is complicated by a few factors related to losses of social energy in the echo chamber. as is known, not all photons are reflected by mirrors of the optical cavity, a part of them is absorbed by the mirrors. the coefficient of reflection plays the fundamental role. the same problem arises in social lasing. an essential part of posts is absorbed by the information mirror of the echo chamber: for some posts, the probability that they would be read by members of the social group is practically zero. additional (essential) loss of social energy is resulted from getting rid of communications carrying quasi-colors different from the quasi-color α of the bunch of the communications initiating the feedback dynamics in the echo chamber. such communications are generated by spontaneous emission of atoms in the social group. the real model is even more complex. the information mirror is not homogeneous, "areas of its surface" differ by the degree of readability and reaction. the areas can be either rigidly incorporated in the structure of the social group or be formed in the process of its functioning. for example, "quantum physics" group has a few layers that are rigidly incorporated in its structure. one of them is "foundations and interpretations". this sublayer of the information mirror "quantum physics" has rather low visibility, due to a variety of reasons. once, i posted in "quantum physics" a discussion on quantum information and quantum nonlocality. in addition, i discovered that the social group moderators control rigidly the layer structure. 
the message that my post should be immediately moved to this very special area of the information mirror, "foundations and interpretations", reached me in a few minutes. it looks as if, even in such a politically neutral social group, moderators work in the online regime. as an example of functionally created information layers, we can point to ones which are coupled to the names of some members of the social group: say, the "area" related to the posts of a nobel prize laureate has a high degree of readability and reaction. however, of course, one need not be such a big name to approach a high level of readability and reaction. for example, even in science the strategy of actively following the mainstream speculations can have a very good effect. top bloggers and youtubers create areas of the information mirror with high coefficients of reflection-multiplication (see the coefficient p(x) below) through collecting subscriptions to their blogs and youtube channels. it is clear that the probability of readability of and reaction to a post depends heavily on the area of its location in the information space of a social group or, generally, of facebook, youtube, or instagram. the reflection-multiplication coefficient of the information mirror varies essentially. consider first the physical mirror and the photons reflected by it. from the very beginning, it is convenient to consider an inhomogeneous mirror with the reflection coefficient depending on the mirror's layers. suppose that k photons are emitted to area x and n of them are reflected, i.e., (k − n) are absorbed. then the probability of reflection by this area is p(x) ≈ n/k, for large k. now, for the information mirror, consider a sequence of posts, j = 1, 2, ..., k, that were put in its area x. let n_j denote the number of the group's members who react to post j. each n_j varies between 0 and N, where N is the total number of the group's members. then the coefficient of reflection-multiplication is p(x) ≈ (n_1 + ... + n_k)/(kN), for large k and N. if practically all posts generate reactions from practically all members of the group, then n_j ≈ N and p(x) ≈ 1. we have already discussed in detail the multilayer structure of the information mirror of an echo chamber. this is one of the basic information structures giving the possibility to generate inside it an information field of a very high degree of coherence: a very big wave of information excitations of the same quasi-color, the quasi-color of the stimulating communications. it is sufficient to stimulate atoms with the potential of posting in the areas of the information surface with high coefficients of reflection-multiplication. these areas would generate a huge information wave directed to the rest of the social group. spontaneously emitted communications would be directed to areas with low coefficients of reflection-multiplication.
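a minimal computation of this coefficient from logged reaction counts (an illustrative sketch; the data below are made up):

```python
def reflection_multiplication(reactions, group_size):
    # reactions[j] = number of members who reacted to post j in a given area x
    k = len(reactions)
    return sum(reactions) / (k * group_size)   # p(x) ≈ (n_1 + ... + n_k)/(k*N)

# two hypothetical areas of the same group of N = 500 members
print(reflection_multiplication([480, 450, 495, 470], group_size=500))  # "hot" layer, p(x) close to 1
print(reflection_multiplication([3, 0, 7, 1], group_size=500))          # low-visibility layer
```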
for example, one wants that all photons would be of the same polarization. it can be easily done by putting the additional filter, the polarization filter that eliminates from the beam all photons with "wrong polarization". of course, the use of an additional filter would weaker the power of the output beam. the latter is the price for coherence increasing. in social lasing, the role of such polarization filters is played by say google, facebook, instagram, or yandex control filtering, e.g., with respect to the political correctness constraints. besides numerous moderators, this filtering system uses the keywords search engines as well as the rigid system of "self-control". in the latter, users report on "wrongly colored posts and comments" of each other; the reports are directed both to the provider and to social groups-to attract the attention to such posts and comments. the dynamical evaluation system used, e.g., by youtube, increases post's visibility based on its reading history, more readings imply higher visibility (at least theoretically). however, the multilayer structure of the information mirror of youtube should also be taken into account. the main internet platforms assign high visibility to biggest actors of the mass media, say bbc, euronews, rt, that started to use actively these platforms. then, and this is may be even more important, these internet platforms assigns high visibility to the most popular topics, say presently the coronavirus epidemic, videos, posts, and comments carrying this quasi-color are elevated automatically in the information mirrors of google, youtube, or yandex. of course, the real evaluation system of the main internet actors is more complicated and the aforementioned dynamical evaluation system is only one of its components, may be very important. we would never get the answer to the question so widely discussed in communities of bloggers and youtubers: how are the claims on unfair policy of internet platforms justified? by unfair policy they understand assigning additional readings and likes to some internet communications or withdraw some of them from other communications. (i can only appeal to my own rather unusual experience from the science field. once, i was a guest editor of a special issue (so a collection of papers about some topic). in particular, my own paper was published in the same issue. this is the open-access journal of v top ranking, a part of nature publishing group. presently, all open-access journals qualify papers by the number of downloads and readings. (therefore, this is a kind of youtubing of science.) my paper was rather highly estimated in these numbers. however, suddenly i got email from the editors that since i put so much efforts to prepare this issue, i shall get as a gift an additional downloads. of course, i was surprised, but i did not act in any way and really received this virtual gift... after this event, i am very suspicious of numbers of downloads and readings that i can see in the various internet systems. if such unfair behavior is possible even in science, then one can suspect that it is really not unusual.) starting with presentation of the basics of social lasing, we concentrated on functioning of one of the most important kinds of social resonators, namely internet-based echo chambers. we analyzed similarities and dissimilarities of optical and information mirrors. 
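as a small numerical illustration of the reflection-multiplication coefficient defined above, the following toy sketch (ours, not part of the original paper; all function names and numbers are hypothetical) estimates p(x) for two areas of an information mirror from observed reaction counts, and adds a simple quasi-color filter in the spirit of the polarization-filter analogy.

```python
# Toy sketch (not from the paper): estimating the reflection-multiplication
# coefficient p(x) of an area x of the information mirror from the reaction
# counts of the k posts placed there, plus a quasi-color filter analogous to
# a polarization filter. All names and numbers below are hypothetical.

def reflection_multiplication(reaction_counts, group_size):
    """p(x) ~ (sum_j n_j) / (k * N): n_j reactions to post j, N group members."""
    k = len(reaction_counts)
    if k == 0 or group_size == 0:
        return 0.0
    return sum(reaction_counts) / (k * group_size)

def quasi_color_filter(posts, stimulating_color):
    """Keep only communications carrying the stimulating quasi-color."""
    return [p for p in posts if p["quasi_color"] == stimulating_color]

if __name__ == "__main__":
    group_size = 1000
    visible_layer = [800, 950, 700, 990]   # e.g. a top blogger's area
    hidden_layer = [3, 0, 5, 1]            # e.g. a low-visibility sublayer
    print(reflection_multiplication(visible_layer, group_size))   # ~0.86
    print(reflection_multiplication(hidden_layer, group_size))    # ~0.002

    posts = [{"quasi_color": "alpha"}, {"quasi_color": "beta"}]
    print(len(quasi_color_filter(posts, "alpha")))                # 1
```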
the main distinguishing feature of the information mirror is its ability not only to reflect excitations of the quantum information field, but also to multiply them in number. the coefficient of reflection-multiplication is the basic characteristic of the information mirror. we point to the layer structure of the information mirror of an echo chamber; the coefficient of reflection-multiplication varies depending on the mirror's layer. we emphasized the bosonic nature of the quantum information field. this is a straightforward thermodynamic consequence [ ] of the indistinguishability of information excitations, up to their quasi-colors. being bosonic, the information field tremendously increases the speed and coherence of the stimulated emission of information excitations by excited social atoms. social atoms, "creatures without tribe", form the gain medium of a social laser. in contrast to quantum physics, we cannot treat real humans as totally indistinguishable. this is a good place to recall once again that our social lasing, as well as our decision-making modeling generally, is quantum-like. quantum features are satisfied only approximately. this point is often missed in the presentation of "quantum models" for cognition and decision-making. in sections . and . , we discuss some technicalities related to the functioning of internet-based social groups and, generally, of google and youtube. this discussion plays only a supplementary role in this paper. it would be fruitful to continue it and especially to discuss the exploitation of the quantum-like features of users and of the information supplied to them by the internet (cf., for example, studies on quantum-like modeling of information retrieval [ ] [ ] [ ] [ ] [ ] [ ] ). in appendix a, we very briefly discussed the interrelation between the psychic energy and the physical energy of cells' metabolism. it is very important to continue this study in cooperation with psychologists and neurophysiologists. the main output of this paper is a description of the mechanism of generation of big waves of coherent information carrying huge social energy, a kind of information tsunami. we especially emphasize the listing of the basic conditions on human gain media and on the information field generated by mass media and amplified in echo chambers that lead to the successful generation of such waves. the author recognizes very well that this study is still one of the first steps toward a well elaborated theory. 
towards information lasers social laser: action amplification by stimulated emission of social energy social laser model: from color revolutions to brexit and election of donald trump on interpretational questions for quantum-like modeling of social lasing concept of information laser: from quantum theory to behavioural dynamics phase transitions, collective emotions and decision-making problem in heterogeneous social systems information dynamics in cognitive, psychological, social, and anomalous phenomena; series: fundamental theories of physics ubiquitous quantum structure: from psychology to finances quantum models of cognition and decision quantum concepts in the social classical and quantum mechanics on information spaces with applications to cognitive, psychological, social and anomalous phenomena. found. phys on quantum-like probabilistic structure of mental information pilot-wave theory and financial option pricing quantum dynamics of human decision making a quantum probability explanation for violations of 'rational' decision theory a quantum theoretical explanation for probability judgment errors decision theory with prospect interference and entanglement mathematical structure of quantum decision theory quantum-like model of behavioral response computation using neural oscillators a quantum probability explanation in fock space for borderline contradictions an operator-like description of love affairs can quantum probability provide a new direction for cognitive modeling? context effects produced by question orders reveal quantum nature of human judgments are quantum-mechanical-like models possible, or necessary, outside quantum physics? the visualizable, the representable and the inconceivable: realist and non-realist mathematical models in physics and beyond the role of information in a two-traders market conjunction and negation of natural concepts: a quantum-theoretic modeling a generalized probability framework to model economic agents' decisions under uncertainty quantum dynamics applied to cognition: a consideration of available options new fundamental evidence of non-classical structure in the combination of natural concepts the emperor's new mind quantum coherence in microtubules. a neural basis for emergent consciousness? brain and physics of many-body problems dissipation and memory capacity in the quantum brain model my double unveiled: the dissipative quantum model of brain enough! electoral fraud, collective action problems, and post-communist coloured revolutions why it's kicking off everywhere: the new global revolutions the color revolutions mistrust we trust: can democracy survive when we don't trust our leaders? the new digital age: reshaping the future of people why did people vote for brexit? deep-seated grievances lie behind this vote why did people vote for donald trump? voters explain; the guardian the science of structure: synergetics synergetics: an introduction-nonequilibrium phase transitions and self-organisation in physics laser light dynamics lasers and synergetics. a colloquium on coherence and self-organization in nature subjective probability: a judgment of representativeness prospect theory: an analysis of decision under risk critique des postulats et axiomes de l' cole amricaine risk, ambiguity, and the savage axioms ambiguity, and the dark-dependence axioms the geometry of information retrieval what is quantum information retrieval? how quantum theory is developing the field of information retrieval? 
supporting polyrepresentation in a quantum-inspired geometrical retrieval framework introduction to information retrieval and quantum mechanics deriving a quantum information retrieval basis the principles of psychology the standard edition of complete psychological works of sigmund freud on the nature of the psyche atom and archetype: the pauli/jung letters - competition between collective and individual dynamics from the financial crisis to sustainability? potsdam, european climate forum. available online: www.europeanclimate-forum.net/index.php?id=ecfreports interaction ritual chains oxford dictionary of psychology interpreting quantum mechanics in terms of random discontinuous motion of particles "relative state" formulation of quantum mechanics many-worlds interpretation of quantum mechanics general quantum interference principle and duality computer modeling viral diffusion using quantum computational network simulation mechanics-an endless frontier quantum machine learning recent advances in quantum machine learning. quant. eng. [crossref]
funding: this research received no external funding. the authors declare no conflict of interest.
above we wrote about an "apparent violation" of the law of conservation of the social energy. we briefly discuss this point here. the social energy is a special form of the psychic energy. hence, in discussing the conservation law we cannot restrict consideration solely to the social energy. a detailed analysis of the transformation of different forms of the psychic energy, and of its origin in neurophysiological processes and finally in the physical energy generated by cells' metabolism, was presented by freud [ ] . we do not plan to discuss here freud's hydrodynamical model for psychic energy transformations. we want to highlight the crucial difference between the energy transfer from the information field to social atoms and the energy transfer from the electromagnetic field to physical atoms. in physics, energy is assigned to photons, the carriers of information; an atom, by absorbing a photon, receives its energy. in our social model, an excitation of the information field just carries the social energy label e_c. a social atom absorbs this label and generates the corresponding portion of energy by itself, by transforming its psychic energy into social energy. in addition, the former is generated by neurophysiological activity in the brain and the nervous system from the physical metabolic energy. thus, by taking the psychic energy into account, we understand that even for cognitive systems the law of energy conservation is not violated. we remark that the development of the social laser model also has some relevance to the interpretations of quantum mechanics. in this model, the quantum nature is apparent, because the smallest unit in a society is a human person. this is quite like the rdm (random discontinuous motion) interpretation of quantum mechanics [ ] , whereas it is quite different from other interpretations, such as the many-worlds interpretation [ , ] and the wise (wave function is the system entity) interpretation [ ] . it should also be pointed out that in recent years some physics-based social or network models have been studied [ , ] . some new intersections of quantum and operational research are emerging, such as quantum machine learning [ , ] .
key: cord- -gw ldhu authors: wang, bing; han, yuexing; tanaka, gouhei title: interplay between epidemic spread and information propagation on metapopulation networks date: - - journal: journal of theoretical biology doi: . /j.jtbi. . . sha: doc_id: cord_uid: gw ldhu abstract the spread of an infectious disease has been widely found to evolve with the propagation of information. many seminal works have demonstrated the impact of information propagation on the epidemic spreading, assuming that individuals are static and no mobility is involved. 
inspired by the recent observation of diverse mobility patterns, we incorporate the information propagation into a metapopulation model based on the mobility patterns and contagion process, which significantly alters the epidemic threshold. in more details, we find that both the information efficiency and the mobility patterns have essential impacts on the epidemic spread. we obtain different scenarios leading to the mitigation of the outbreak by appropriately integrating the mobility patterns and the information efficiency as well. the inclusion of the impacts of the information propagation into the epidemiological model is expected to provide an support to public health implications for the suppression of epidemics. infectious diseases are transmitted through social contacts between individuals. the modeling of epidemic spreading among human beings has been extensively studied in mathematical epidemiology and network science. the developments of transportation system have enabled people to travel more globally. consequently, epidemics starting from a local patch can spread to the entire network in a very short time. recently, the metapopulation modeling approach has been broadly applied to study infectious disease spreading among the spatial structure of populations with well-defined social units (colizza et al., ; colizza and vespignani, ; balcan et al., ) . then the metapopulation network model has been greatly developed by considering a number of factors such as the network structure (watts et al., ; wang et al., ) , human mobility patterns (belik et al., a (belik et al., , b , human behavior (meloni et al., ; wang et al., ) , and human contact patterns yang et al., ; iribarren, ) . it has been shown that the substrate network structure (watts et al., ; wang et al., ) plays an essential role in the spatial spread of epidemics. in real-world networks, human mobility patterns vary in a very complicated way, e.g., recurrent visits of patches (belik et al., a (belik et al., , b , diverse staying period in patches , etc. human behavioral responses to the epidemics have also been found to be able to delay the epidemic spread (meloni et al., ; wang et al., ) . with regard to human contact patterns, location-specific contact patterns have been investigated . recently, since human contact patterns are temporal, the nature of burstiness and heterogeneity in human activities has been found in empirical studies, and it has striking effects on the speed of spreading (yang et al., ; iribarren, ; masuda and holme, ) . for instance, heterogeneity of human activity is responsible for the slow dynamics of information propagation (iribarren, ) . human beings often react to the presence of an infectious disease by changing their behavior. the perception of the risk associated with the infection and countermeasures are usually accompanied with the behavior like cutting the connection with infectious contacts to form adaptive rewiring (gross et al., ; wang et al., ; belik et al., ) , accepting vaccination (bauch and earn, ) , wearing face-masks, reducing travel range (lima et al., ) , etc. within the epidemicrelated game, the change of human behavior such as tradeoff between cost and risk often results in the decision-making process like vaccination via a game-theoretic framework (basu et al., ; perisic and bauch, ; zhang et al., ) . many works have focused on the impact of information propagation on separating a non-epidemic state and an epidemic state. 
with the progression of the outbreak, messages on the epidemics, such as fears of the disease and self-initiated awareness, may be passed from one individual to another (epstein et al., ; perra et al., ) . following the seminal work in ref. (funk et al., ), the source of information (e.g., local or global awareness) and the pattern of information dissemination have been widely studied (funk et al., ; wu et al., ; granell et al., ; sahneh et al., ; yuan et al., ; zhang et al., ) . depending on the path of information propagation, there are several types of information. for instance, people obtain information from broadcasting and the internet, which could be taken as a kind of global information. people can also exchange information by face-to-face contacts, which is a kind of contact-based information (funk et al., ) . so far, the study of the impacts of information propagation on the epidemic spread has been restricted to individual-based networks, where one node corresponds to one individual (zhang et al., ) . under the framework of metapopulation model, people may get infected by contacting with infectious individuals within the same patch; they may exchange information related to the presence of an infectious disease through face-to-face contacts (funk et al., (funk et al., , . information carriers may pass the message of the epidemic situation to uninformed individuals, which may potentially alter their future mobility patterns, thus, affecting the epidemic spread. this has been observed in real-world situations, where people are usually reluctant to visit infected areas (camitz and liljeros, ; bajardi et al., ) . the diameter of human mobility during the h n epidemic has been found to reduce significantly with the progression of alert campaign, which verifies the fact that human beings indeed alter their movements when being exposed to the presence of the information during the outbreak of epidemics (bajardi et al., ) . in this paper, we present a metapopulation framework to explore the interplay between epidemic dynamics and information dynamics based on diverse mobility patterns. with a mean-field approximation of the metapopulation model, we find that both the information efficiency and the mobility patterns jointly affect the epidemic spread in terms of both the outbreak size and the epidemic threshold. when the information efficiency is low, mobility to the patch with more healthy individuals facilitates the epidemic spread with an increased outbreak size and an decreased epidemic threshold, even though more individuals get informed; on the contrary, when the information efficiency is high enough to cause people's attention, mobility to the patch with more healthy individuals, suppresses the epidemic outbreak by informing more individuals. in order to highlight the role of mobility, we apply a simplistic model of information dynamics to that of the disease dynamics; the incorporation of passing messages among mobile individuals in the metapopulation model gives us a new perspective on the countermeasure of epidemics, which is different from the previous studies on the contact-based networks. it suggests a possible way to suppress the epidemic spread by guiding individual mobility patterns in accordance with the evaluation of the risk perception and the information efficiency as well. before introducing the model, we briefly demonstrate the mobility patterns and information propagation, respectively. 
usually, a random mobility pattern is often used for the convenience of theoretical analysis. to be more realistic, we consider that the mobility pattern is driven by the safety level at destination. in facing the outbreak, people usually prefer to visit safer patches in order to avoid infection, i.e., the safer the destination is, the higher the probability that individuals move to it is . in the context of information, we regard it to be only accessible by contacting with information carriers as opposed to the general knowledge obtained through multi-media (global awareness) and selfinitiated awareness as well. thus, we focus on the interplay between mobility patterns and information propagation. the role of information in reducing the infection risk is described by the information efficiency, which is classified into two major types: (i) the information that is highly efficient to warn people to take measures in the face of a fatal flu, such as severe acute respiratory syndrome (sars); (ii) the information that cannot cause people's sufficient attention to take measures in the face of an infectious disease, such as seasonal influenza. as a result, the exchange of information on the risk perception may potentially alter human behavior, e.g., contact structures or travel patterns, which in turn influences the spreading process. the metapopulation approach describes the spatially structured interacting patches, which are connected by the movement of individuals. inside each patch, individuals are divided into classes that represent their states according to the infection dynamics. to demonstrate the role of information propagation in the epidemic dynamics, we couple a mathematical model similar to the susceptible-infectioussusceptible (sis) model for the epidemic dynamics with a model for the information propagation. due to the information propagation, susceptible individuals are further classified into two types: uninformed susceptible (s) and informed susceptible (a) individuals. uninformed susceptible individuals are those who have not yet received the information on the epidemic and may get infected by contacting with infectious individuals at transmission rate β, while informed susceptible individuals (a) may get infected with a reduced transmission rate β a with β β < a . this is supported by the fact that people may reduce the number of contacts as a defensive response (poletti et al., ) and they also may get infected with a reduced infection transmission rate by self-awareness, such as wearing face-masks or washing hands frequently (can et al., ; zhang et al., , . for convenience, we represent the reduced transmission rate as β α β where α denotes the information efficiency for reduction of infection risk. infectious individuals may get recovered at rate μ. here, we assume the nonlimited transmission, where the infection rate is not divided by the total population in the patch. the propagation of information is analogous to that of an infectious disease, often called "information contagion": information is passed from information carriers to uninformed individuals through contact at rate σ, and information carriers may lose the information at rate r as time goes by. we assume that information is passed by contact instead of selfawareness, and for simplicity, we also assume that infectious individuals are ignorant to the information. 
in fact, it is more realistic to assume that infectious individuals know their infectious status and may reduce their number of contacts by being detected or quarantined; however, this is out of the scope of this paper and may be investigated in the future. fig. illustrates the interplay between the information propagation and the disease spread on metapopulation networks. the coupling between connected patches, as a force of infection, results from the movement of individuals. next, let us consider the diffusion process (saldaña, ; juher et al., ). parallel to the contagion process, all the individuals simultaneously move from one patch to another at rate d. in more detail, uninformed, informed, and infectious individuals leave the patch at rates d_s, d_a, and d_i, respectively. considering the heterogeneity of real-world networks, individuals in state θ (θ = s, a, i) at patch k′ (a patch with degree k′ is briefly denoted by patch k′) move to the neighboring patch k with probability d^θ_{k,k′}. human mobility shows diverse patterns depending on an individual's gender, age, and native or non-native status (salon and gulyani, ; yan et al., ; yang et al., ). an explicit expression for d^θ_{k,k′} relies on knowledge of empirical data on the traveling patterns of human beings (brockmann et al., ; brockmann and theis, ; gonzález et al., ; song et al., a, b). in the following, we use d^θ_{k,k′} as a general expression for the mobility probability in the deterministic reaction-diffusion equations that describe the dynamics of the epidemic and of the information in the metapopulation system. by incorporating the contagion process and the information propagation into the diffusion processes, the dynamics of the subpopulations of uninformed, informed, and infectious individuals at patch k, ρ_{s,k}, ρ_{a,k}, and ρ_{i,k}, respectively, are approximated with the mean-field approximation as follows, for k_min ≤ k ≤ k_max, where k_min and k_max are the minimum and maximum degrees of the patches, respectively, and p(k′|k) is the conditional probability that a patch of degree k connects with a patch of degree k′. for simplicity of calculation, we assume that the patches connect in an uncorrelated way, in the sense that they connect at random, i.e., p(k′|k) = k′p(k′)/⟨k⟩, where ⟨k⟩ is the average degree of the network (newman, ) and p(k) is the degree distribution of the network. since the information propagation affects the epidemic spread by informing more individuals of the disease and thereby reducing their risk of infection, the mobility patterns of the informed susceptible individuals play a fundamental role in the effectiveness of the information propagation and thus affect the epidemic spread. to understand the role of mobility patterns in both the dynamics of the epidemic spread and that of the information propagation, we investigate the mobility probability from patch k′ to patch k by assuming a detailed functional form of d^θ_{k,k′} for θ = s, a, i. intuitively, the more healthy individuals a patch contains, the safer the patch is, and individuals usually prefer to move to safer patches; in other words, in order to prevent infection they attempt to avoid visiting infected patches. to reflect this effect, following ref. , we assume that all the individuals move in accordance with the safety level at the destination, which is mathematically expressed by eq. ( ) for k_min ≤ k, k′ ≤ k_max, where the parameter γ_θ controls the dependency on the safety level at the destination patch. by tuning γ_θ, diverse mobility patterns can be observed. 
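the mean-field equations referred to above did not survive extraction, so the following is only a schematic numerical sketch (our construction, not the authors' code) of the dynamics just described, written for an explicit patch network rather than for degree classes. two ingredients are our assumptions: the reduced transmission rate for informed susceptibles is taken as β_a = (1 − α)β, consistent with the limits α = 0 (same risk) and α = 1 (full immunity) discussed later, and the destination weights of the mobility kernel are taken proportional to (density of healthy individuals at the destination)^γ_θ, a guess at the elided safety-level expression.

```python
# Schematic sketch (ours, not the paper's code) of the metapopulation dynamics
# described above, integrated with a simple Euler scheme on an explicit patch
# network. Assumptions flagged as ours: beta_a = (1 - alpha) * beta, and
# mobility weights ~ (healthy density at destination) ** gamma_theta.
import numpy as np

def step(adj, S, A, I, beta, alpha, sigma, r, mu, d, gammas, dt=0.01):
    """One Euler step; S, A, I are per-patch densities, adj is a 0/1 adjacency."""
    beta_a = (1.0 - alpha) * beta
    # within-patch contagion and information exchange (non-limited transmission)
    dS = -beta * S * I - sigma * S * A + r * A
    dA = -beta_a * A * I + sigma * S * A - r * A
    dI = beta * S * I + beta_a * A * I - mu * I
    S, A, I = S + dt * dS, A + dt * dA, I + dt * dI
    # movement at rate d, with destinations weighted by their safety level
    safety = S + A
    out = []
    for dens, g in ((S, gammas[0]), (A, gammas[1]), (I, gammas[2])):
        w = adj * np.power(np.maximum(safety, 1e-12), g)[None, :]
        w = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
        moved = dt * d * dens
        out.append(dens - moved + moved @ w)
    return out[0], out[1], out[2]

# toy usage: 5 patches on a ring, seeding small infected and informed fractions
if __name__ == "__main__":
    n = 5
    adj = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
    S = np.full(n, 5.0)
    A = np.zeros(n); A[0] = 0.05
    I = np.zeros(n); I[0] = 0.05
    for _ in range(20000):
        S, A, I = step(adj, S, A, I, beta=0.06, alpha=0.5, sigma=0.05,
                       r=0.02, mu=0.1, d=1.0, gammas=(0.5, 0.5, 0.0))
    print(I.sum())   # final prevalence of infection across patches
```

varying α and the γ values in this sketch only makes the described trade-off concrete; it is not meant to reproduce the paper's quantitative results.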
for instance, if γ > θ , the safer the patch is, the more likely it is that individuals move to the patch; this is consistent with the phenomenon that people attempt to bypass infected areas; in order to deploy a systematic study, we also consider the opposite case with γ < θ , which means that the safer the patch is, the less likely it is that people travel to the patch. this may correspond to the situation that people receive incorrect information. if γ = θ , the model is reduced to the case of random mobility and eq. ( ) is simplified as follows: for k k k ≤ ≤ min max . in the following, we will explore how mobility patterns influence the information propagation and the epidemic process as well. . the invasion thresholds for disease dynamics and information dynamics we investigate the ability that a disease or information can survive in the network by analyzing the stability of the disease-free and informationfree state at the equilibrium point ρ ρ ρ ρ ( , , . by inserting eq. ( ) into eq. ( ), the uninformed susceptible individuals at patch k, ρ s k , , at the equilibrium state is given by as follows: ., s denotes an arbitrary order moment of ρ ( ) s γ s and ρ is the total population density in the network and the parameter γ s controls the susceptible population at the equilibrium. the exact solution of ρ s k , can be numerically solved with the fixed-point iteration with eq. ( ). the linearized matrix of eq. ( ) around the disease-free and information-free equilibrium is given by where each block is a k k ( − ) max min matrix; is the null matrix; i is the identity matrix; diag x ( ) k is a diagonal matrix with kth component as x k ; the matrix c is given by since c is a rank-one matrix, it has an eigenvalue λ = with multiplicity k − . therefore, the sufficient condition for the disease-free equilibrium to be unstable is given by or the sufficient condition for the information-free equilibrium to be unstable is given by: except for the equilibrium point of the disease-and informationfree state ρ ρ ρ ρ ( , , ) = ( , , ) , we have to note that there exists the second equilibrium point ρ ρ ρ ρ ρ ( , , . let us start with a simplistic assumption that individuals in all the states move at random, i.e., γ γ γ = = = s a i and d s =d a . by solving eq. ( ), the population density at patch k, ρ k , at the equilibrium is given by with eq. ( ), the informed populations at the equilibrium state, ρ a k , , is obtained by solving the following equation: where ρ p k ρ = ∑ ( ) , is the total informed population at the equilibrium state in the network. by setting ω ρ = − k k k r d σ 〈 〉 + a , the informed and uninformed susceptible populations at patch k, ρ a k , and ρ s k , , respectively, at the equilibrium are given by where the equilibrium solution ρ a k , can be solved with the fixed point iteration. the equilibrium becomes unstable if the uninformed susceptible and informed susceptible individuals become infected before the infected individuals get recovered, that is, which is rewritten as where α ≤ ≤ , ρ a can be obtained from eq. ( ). in order to trigger an epidemic outbreak, there exists a third invasion threshold, ρ c ai , which is consistent with eq. ( ); when , , it indicates that the population density ρ is large enough to inform a quantitative size of susceptible individuals but it is not large enough to infect them, whereas it is possible that the outbreak occurs by infecting the informed individuals if ρ ρ > c ai , . 
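the equilibrium densities above are obtained by fixed-point iteration; since the exact right-hand sides are not reproduced here, the following is only a generic damped fixed-point solver of the kind typically used for such self-consistent equations, with the `update` callable standing in for the relevant equilibrium map (this is our sketch, not the authors' code).

```python
# Generic damped fixed-point iteration (our sketch) for self-consistent
# equilibrium equations such as those for rho_{s,k} and rho_{a,k} above.
# `update` should implement the right-hand side of the equilibrium relation.
import numpy as np

def fixed_point(update, x0, damping=0.5, tol=1e-10, max_iter=100000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1.0 - damping) * x + damping * update(x)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x  # caller should check convergence if max_iter is reached
```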
in order to take the safety movement patterns into account, in the following, we investigate three representative types of mobility patterns: (i) γ > θ : individuals in state θ (θ s a i = , , ) prefer to move to safer patches; (ii) γ < θ : individuals in state θ prefer to move to less safe patches; (iii) γ = θ : individuals in state θ move at random. we take case (iii) as a standard criterion for comparison with cases (i) and (ii). in a more detailed way, we investigate the situation that both the uninformed and informed individuals follow the same mobility patterns with γ γ = s a and the situation that they follow opposite mobility patterns with γ > s and γ < a or vice versa. networks of patches are generated with the configuration network model (molloy and reed ) with size n= following the degree distribution p k k ( ) ∼ − . with k = min . simulation results are based on averaging over more than results for different initial conditions and network structures. without specification, all the infectious individuals are assumed to move at random with γ = i . we randomly seed . % of the total population for each of the dynamics of infectious disease and information propagation. this condition ensures that the outbreak for each dynamics is started separately. time courses of the uninformed, informed, and infectious populations for different combinations of α and mobility patterns γ s (γ a ) are shown in fig. . when α is at a medium level, that is, the informed individuals get infected with a half risk of infection (α = . , fig. (a) ), the informed population (the blue curves) firstly grows and then reduces to zero due to the infection by infectious individuals, leaving the uninformed and infectious populations at the stable state. the final prevalence of infection also depends on mobility patterns. for instance, moving to safer patches (γ γ = = . > s a , the dotted curve) causes a relatively higher prevalence, while, on the contrary, moving to less safe patches (γ γ = = − . < s a , the solid curve) causes a lower prevalence. in the case of an extremely perfect information efficiency with α = . ( fig. (b) ), where the informed individuals become totally immune to the infectious disease, the informed population sustains a non-zero value only if people move to safer patches (γ = . s , the blue dotted curve). in this case, mobility patterns play a role different from that at a lower α (fig. (a) ). since informed individuals get full immunity to the infection, the more individuals get informed, the less individuals get infected. hence, moving to the patch that contains more susceptible individuals will inform more and make them immune to the infectious disease. as a result, mobility patterns with γ γ = = . > s a inform individuals most (the blue dotted curve) and yield the lowest prevalence of infection (the red dotted curve). the final prevalence of infection and that of the informed individuals for different choices of α can be further observed in fig. . the final prevalence depends on the combined role of the information efficiency and mobility patterns. for instance, when α = , the in-formed individuals get the same infection rate as the uninformed individuals do. we firstly observe that moving to the less safe patch can cause a higher epidemic threshold and a lower prevalence irrespective of the information efficiency α (fig. (a ) , the black squares). conversely, moving to the safer patches causes a smaller epidemic threshold and a higher prevalence (fig. (a ) , the red triangles). 
the informed population firstly grows by informing susceptible individuals, and then it ceases to grow and reduces to zero due to the infection by contacting with infectious individuals (fig. (b ) ). with the increase of the information efficiency such as α = . , the informed individuals get a reduced risk of infection. we find that even if the infection rate is reduced to half, the final prevalence of infection does not obviously decrease for all the mobility patterns that we tested ( fig. (a ) ). with further increase of α, such as α = , the informed individuals get full immunity to the infectious disease. when individuals prefer to move to safer patches with γ = . s , we find that the more susceptible individuals get informed (fig. (b ) ), the smaller the outbreak size will be ( fig. (a ) ). for instance, with γ = . s , infection disappears from the network while keeping a non-zero quantity of individuals being informed. from the above analysis, we find that a medium value of α cannot change the contagion process in terms of both the final prevalence and the epidemic threshold. although moving to the safer patches can inform more individuals, it increases the probability of infection as well. the interaction of individuals' mobility patterns and the information efficiency can be further verified by observing the final prevalences of the infectious and informed individuals at patches k, ρ i k , and ρ a k , , as shown in fig. . it shows that the larger the patch degree k is, the more infectious individuals are contained in it, roughly following a linear increase form. moreover, for a medium level of α, moving to the patches with more healthy population will infect more; while moving to the patches with less healthy population, infect less due to the dissemination of individuals at high degree patches (figs. (a ) and (a )). with the increase in α, moving to the patch with high degrees can inform more individuals (fig. (b ) , the red triangles), thus, less individuals at the patch get infected (fig. (b ) ). the detailed interplay between α and mobility patterns for the epidemic spread is shown in fig. . we find that for all possible values of α, moving to the patches that contain more susceptible individuals, increases the risk of outbreak, except for an extremely information efficiency α (α > . ), where the prevalence of infection can be significantly reduced by increasing the contact probability between information carriers and uninformed susceptible individuals (γ > s ). when α is a medium value, reducing the contact probability between information carries and uninformed susceptible individuals (γ < s ) can efficiently prevent the epidemic spread. in the above analysis, we have assumed that both the informed and uninformed individuals follow the same types of mobility patterns with "γ > s and γ > a " or "γ < s and γ < a ". in order to make the analysis consistent, in the following, we investigate the case that the informed and uninformed susceptible individuals take different types of mobility patterns by tuning the parameters γ > s and γ < a or vice versa, and we explore their impacts on the final prevalence of infection (fig. ) . for a medium value of α, the more uninformed individuals move to the safer patches (γ > s ), the higher the prevalence is. this is irrelevant to the mobility patterns of the informed individuals and is independent of whether they approach the safer patches or not (γ > a or γ < a ). with an extremely high efficiency (α = , fig. 
(b) ), we find two opposite results. the highest contact probability between information carriers and uninformed susceptible individuals yields the lowest prevalence ("γ > a and γ > s " or "γ < a and γ < s "), while the separation of them promotes the epidemic spread (γ > s and γ < a ). from the above results, we conclude that information propagation is vital to the epidemic spread. the role of information propagation has to be evaluated by taking the information efficiency α into account. on the one hand, information may help mitigate the epidemic spread as long as it is highly efficient enough to reduce the risk of infection (a high value of α). under this circumstance, mobility to safer patches may strengthen the role of information by informing more individuals. on the other hand, when the information on the disease cannot cause people's attention to reduce the risk of infection (a medium or lower value of α), informing more individuals by moving to the safer patches can only promote the epidemic spread by gathering more susceptible individuals in one patch. next, we explore how mobility patterns affect the epidemic threshold ρ c i , with eq. ( ) as shown in fig. , where we assume that the informed individuals follow the same mobility patterns as the uninformed susceptible individuals do. it shows that ρ c i , decreases with γ s , indicating that the more susceptible individuals move to the safer patches, the smaller the epidemic threshold will be. in other words, moving to the safer patches promotes the epidemic spread. to further reveal the impacts of the information efficiency and mobility patterns on the epidemic process, we show the dependence of the epidemic threshold ρ c i , on α and γ s in fig. . it shows that for a lower α, moving to safer patches promotes the disease spread with a reduced epidemic threshold (bottom-right); while for a higher α, moving to the safer patches informs more individuals and thus protects them from infection yielding a higher invasion threshold (top-right). this result is consistent with the analysis of the epidemic prevalence as shown in fig. . the dependence of the third epidemic threshold ρ c ai , on the information efficiency α can be found in fig. . we find that with the increase of information efficiency, α, the informed individuals get immunity to infection and it mitigates the disease spread in the network, yielding a higher critical invasion threshold ρ c ai , . whenever the outbreak of an infectious disease occurs, it is inevitably accompanied with the propagation of the information that is related to the progression of the infectious disease. in this work, we have investigated the interplay between the disease spread and the information propagation by focusing on the role of the information efficiency in reducing the risk of infection, and that of mobility patterns. the mobility pattern is mainly driven by the risk perception expressed by the safety level at the destination patch. the more healthy individuals a patch contains, the safer it is. although the model we have proposed is simplistic and more realistic scenarios with detailed . the other parameters are the same as in fig. . b. wang et al. journal of theoretical biology ( ) - mobility patterns are determined by the availability of data, our model captures basic characteristics of the dynamics of information propagation and that of epidemic spread. 
we find that appropriately incorporating the knowledge of the information efficiency with the guidance of human mobility may effectively mitigate the epidemic outbreak by decreasing the outbreak size and increasing the epidemic threshold. changing mobility patterns in accordance with the evaluation of the information efficiency could strengthen the role of information propagation in preventing the outbreak. information carriers play a role of double sides of swords. our results suggest that mobility to the patches that contain more healthy individuals can mitigate the epidemic outbreak only if the information can efficiently appeal people's attention to reduce the risk of infection; otherwise, informing more individuals can promote the epidemic spread with a larger outbreak size. thus, in addition to the usual intervention measurements (e.g., vaccinations), guiding mobility patterns or controlling the traffic flow between patches based on the proper evaluation of the information efficiency may be useful in preventing an epidemic. plos one , e proc. natl. acad. sci. usa proc. natl. acad. sci. usa interface interface interface , random struct. algor , physca a , this work is partly supported by the national natural science foundation of china (grant no. ) and science and technology commission of shanghai municipality (grant no. ). key: cord- -nl gsr c authors: tan, chunyang; yang, kaijia; dai, xinyu; huang, shujian; chen, jiajun title: msge: a multi-step gated model for knowledge graph completion date: - - journal: advances in knowledge discovery and data mining doi: . / - - - - _ sha: doc_id: cord_uid: nl gsr c knowledge graph embedding models aim to represent entities and relations in continuous low-dimensional vector space, benefiting many research areas such as knowledge graph completion and web searching. however, previous works do not consider controlling information flow, which makes them hard to obtain useful latent information and limits model performance. specifically, as human beings, predictions are usually made in multiple steps with every step filtering out irrelevant information and targeting at helpful information. in this paper, we first integrate iterative mechanism into knowledge graph embedding and propose a multi-step gated model which utilizes relations as queries to extract useful information from coarse to fine in multiple steps. first gate mechanism is adopted to control information flow by the interaction between entity and relation with multiple steps. then we repeat the gate cell for several times to refine the information incrementally. our model achieves state-of-the-art performance on most benchmark datasets compared to strong baselines. further analyses demonstrate the effectiveness of our model and its scalability on large knowledge graphs. large-scale knowledge graphs(kgs), such as freebase [ ] , yago [ ] and dbpedia [ ] , have attracted extensive interests with progress in artificial intelligence. real-world facts are stored in kgs with the form of (subject entity, relation, object entity), denoted as (s, r, o), benefiting many applications and research areas such as question answering and semantic searching. meanwhile, kgs are still far from complete with missing a lot of valid triplets. as a consequence, many researches have been devoted to knowledge graph completion task which aims to predict missing links in knowledge graphs. knowledge graph embedding models try to represent entities and relations in low-dimensional continuous vector space. 
benefiting from these embedding models, we can do complicated computations on kg facts and better tackle the kg completion task. translation distance based models [ ] [ ] [ ] [ ] [ ] regard predicting a relation between two entities as a translation from subject entity to tail entity with the relation as a media. while plenty of bilinear models [ ] [ ] [ ] [ ] [ ] propose different energy functions representing the score of its validity rather than measure the distance between entities. apart from these shallow models, recently, deeper models [ , ] are proposed to extract information at deep level. though effective, these models do not consider: . controlling information flow specifically, which means keeping relevant information and filtering out useless ones, as a result restricting the performance of models. . the multi-step reasoning nature of a prediction process. an entity in a knowledge graph contains rich latent information in its representation. as illustrated in fig. , the entity michael jordon has much latent information embedded in the knowledge graph and will be learned into the representation implicitly. however, when given a relation, not all latent semantics are helpful for the prediction of object entity. intuitively, it is more reasonable to design a module that can capture useful latent information and filter out useless ones. at the meantime, for a complex graph, an entity may contain much latent information entailed in an entity, one-step predicting is not enough for complicated predictions, while almost all previous models ignore this nature. multi-step architecture [ , ] allows the model to refine the information from coarse to fine in multiple steps and has been proved to benefit a lot for the feature extraction procedure. in this paper, we propose a multi-step gated embedding (msge) model for link prediction in kgs. during every step, gate mechanism is applied several times, which is used to decide what features are retained and what are excluded at the dimension level, corresponding to the multi-step reasoning procedure. for partial dataset, gate cells are repeated for several times iteratively for more finegrained information. all parameters are shared among the repeating cells, which allows our model to target the right features in multi-steps with high parameter efficiency. we do link prediction experiments on public available benchmark datasets and achieve better performance compared to strong baselines on most datasets. we further analyse the influence of gate mechanism and the length of steps to demonstrate our motivation. link prediction in knowledge graphs aims to predict correct object entities given a pair of subject entity and relation. in a knowledge graph, there are a huge amount of entities and relations, which inspires previous work to transform the prediction task as a scoring and ranking task. given a known pair of subject entity and relation (s, r), a model needs to design a scoring function for a triple (s, r, o), where o belongs to all entities in a knowledge graph. then model ranks all these triples in order to find the position of the valid one. the goal of a model is to rank all valid triples before the false ones. knowledge graph embedding models aim to represent entities and relations in knowledge graphs with low-dimensional vectors (e s , e r , e t ). transe [ ] is a typical distance-based model with constraint formula e s + e r − e t ≈ . 
many other models extend transe by projecting subject and object entities into relation-specific vector spaces, such as transh [ ] , transr [ ] and transd [ ] . toruse [ ] and rotate [ ] are also extensions of distance-based models. instead of measuring the distance among entities, bilinear models such as rescal [ ] , distmult [ ] and complex [ ] are proposed with multiplication operations to score a triplet. tensor decomposition methods such as simple [ ] , cp-n [ ] and tucker [ ] can also be seen as bilinear models with extra constraints. apart from the above shallow models, several deeper non-linear models have been proposed to further capture more underlying features. for example, r-gcns [ ] apply a specific convolution operator to model locality information in accordance with the topology of knowledge graphs. conve [ ] first applies 2-d convolution to knowledge graph embedding and achieves competitive performance. the main idea of our model is to control information flow in a multi-step way. to the best of our knowledge, the most related work to ours is transat [ ] , which also mentioned the two-step reasoning nature of link prediction. however, in transat, the first step is categorizing entities with kmeans, and then it adopts a distance-based scoring function to measure the validity. this architecture is not an end-to-end structure, which makes it inflexible. besides, error propagation will happen due to the usage of the kmeans algorithm. we denote a knowledge graph as g = {(s, r, o)} ⊆ e × r × e, where e and r are the sets of entities and relations respectively. the number of entities in g is n_e, the number of relations in g is n_r, and we allocate the same dimension d to entities and relations for simplicity. e ∈ r^{n_e × d} is the embedding matrix for entities and r ∈ r^{n_r × d} is the embedding matrix for relations. (fig. gives the schematic diagram of our model with length of step : e_s and e_r represent the embeddings of the subject entity and the relation respectively; e_r^i is the query relation fed into the i-th step to refine the information; ẽ_s is the final output information; matrix multiplication is then operated between ẽ_s and the entity embedding matrix e, and at last a logistic sigmoid function is applied to restrict the final score between 0 and 1.) e_s, e_r and e_o are used to represent the embeddings of the subject entity, the relation and the object entity respectively. besides, we denote a gate cell in our model as c. in order to obtain useful information, we need a specific module to extract the needed information from the subject entity with respect to the given relation, which can be regarded as a control of the information flow guided by the relation. to model this process, we introduce the gate mechanism, which is widely used in data mining and natural language processing models to guide the transmission of information, e.g., long short-term memory (lstm) [ ] and the gated recurrent unit (gru) [ ] . here we adopt the gating mechanism at the dimension level to control the information entailed in the embedding. to make the entity interact with the relation specifically, we rewrite the gate cell in multi-steps with two gates as below: the two gates z and r are called the update gate and the reset gate respectively, and control the information flow. the reset gate is designed for generating new information ẽ_s (in other words, a new e_s) as follows; the update gate decides how much of the generated information is kept, according to formula ( ). the hadamard product is performed to control the information at a dimension level. 
the values of these two gates are generated by the interaction between subject entity and relation. σ-logistic sigmoid function is performed to project results between and . here means totally excluded while means totally kept, which is the core module to control the flow of information. we denote the gate cell as c. besides, to verify the effectiveness of gate mechanism, we also list the formula of a cell that exclude gates as below for ablation study: with the gate cell containing several gating operations, the overall architecture in one gate cell is indeed a multi-step information controlling way. in fact, a single gate cell can generate useful information since the two gating operations already hold great power for information controlling. however, for a complex dataset, more fine and precise features are needed for prediction. the iterative multi-step architecture allows the model to refine the representations incrementally. during each step, a query is fed into the model to interact with given features from previous step to obtain relevant information for next step. as illustrated in fig. , to generate the sequence as the input for multi-step training, we first feed relation embedding into a fully connected layer: we reshape the output as a sequence [e r , e r , ..., e k r ] = reshape(e r ) which are named query relations. this projection aims to obtain query relations of different latent aspects such that we can utilize them to extract diverse information across multiple steps. information of diversity can increase the robustness of a model, which further benefits the performance. query relations are fed sequentially into the gate cell to interact with subject entity and generate information from coarse to fine. parameters are shared across all steps so multi-step training are performed in an iterative way indeed. our score function for a given triple can be summarized as: where c k means repeating gate cell for k steps and during each step only the corresponding e i r is fed to interact with output information from last step. see fig. for better understanding. after we extract the final information, it is interacted with object entity with a dot product operation to produce final score. in previous rnn-like models, a cell is repeated several times to produce information of an input sequence, where the repeating times are decided by the length of the input sequence. differently, we have two inputs e s and e r with totally different properties, which are embeddings of subject entity and relation respectively, which should not be seen as a sequence as usual. as a result, a gate cell is used for capturing interactive information among entities and relations iteratively in our model, rather than extracting information of just one input sequence. see fig. for differences more clearly. training. at last, matrix multiplication is applied between the final output information and embedding matrix e, which can be called -n scoring [ ] to score all triples in one time for efficiency and better performance. we also add reciprocal triple for every instance in the dataset which means for a given (s, r, t), we add a reverse triple (t, r − , s) as the previous work. we use binary crossentropy loss as our loss function: we add batch normalization to regularise our model and dropout is also used after layers. for optimization, we use adam for a stable and fast training process. embedding matrices are initialized with xavier normalization. label smoothing [ ] is also used to lessen overfitting. 
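since the gate and scoring equations above were lost in extraction, the following pytorch sketch is only our reconstruction of the described architecture, not the authors' implementation: a gru-style gate cell with update and reset gates, the projection of the relation embedding into k query relations, iterative refinement with shared parameters, 1-n scoring through a sigmoid, and a binary cross-entropy training step with label smoothing. all class, function and parameter names are ours, and the exact parameterization in the paper may differ.

```python
# Our reconstruction (a sketch, not the authors' code) of the multi-step gated
# model described above. The gate cell follows the GRU-style description: the
# reset gate controls what part of the entity state feeds the candidate
# information, the update gate controls how much of it is kept.
import torch
import torch.nn as nn

class GateCell(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W_z = nn.Linear(2 * dim, dim)   # update gate
        self.W_r = nn.Linear(2 * dim, dim)   # reset gate
        self.W_h = nn.Linear(2 * dim, dim)   # candidate information

    def forward(self, e_s, e_r):
        x = torch.cat([e_s, e_r], dim=-1)
        z = torch.sigmoid(self.W_z(x))                                # how much to keep
        r = torch.sigmoid(self.W_r(x))                                # what to reset
        h = torch.tanh(self.W_h(torch.cat([r * e_s, e_r], dim=-1)))   # hadamard products
        return (1.0 - z) * e_s + z * h

class MSGEScorer(nn.Module):
    def __init__(self, n_entities, n_relations, dim, steps):
        super().__init__()
        self.E = nn.Embedding(n_entities, dim)
        self.R = nn.Embedding(n_relations, dim)
        self.project = nn.Linear(dim, steps * dim)   # yields the k query relations
        self.cell = GateCell(dim)                    # parameters shared across steps
        self.steps, self.dim = steps, dim

    def forward(self, s_idx, r_idx):
        state, e_r = self.E(s_idx), self.R(r_idx)
        queries = self.project(e_r).view(-1, self.steps, self.dim)
        for i in range(self.steps):                  # iterative refinement
            state = self.cell(state, queries[:, i, :])
        return torch.sigmoid(state @ self.E.weight.t())   # 1-N scoring over all entities

def train_step(model, optimizer, s_idx, r_idx, targets, smoothing=0.1):
    """Binary cross-entropy against multi-hot targets with simple label smoothing."""
    scores = model(s_idx, r_idx)
    soft = (1.0 - smoothing) * targets + smoothing / targets.size(1)
    loss = nn.functional.binary_cross_entropy(scores, soft)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

reciprocal triples (t, r^{-1}, s), batch normalization and dropout would be added along the lines described above; those details are omitted from this sketch.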
in this section we first introduce the benchmark datasets used in this paper, then we report the empirical results to demonstrate the effectiveness of our model. analyses and ablation study are further reported to strengthen our motivation. language system) are biomedical concepts such as disease and antibiotic. • kinship [ ] contains kinship relationships among members of the alyawarra tribe from central australia. the details of these datasets are reported in table . the evaluation metric we use in our paper includes mean reciprocal rank(mrr) and hit@k. mrr represents the reciprocal rank of the right triple, the higher the better of the model. hit@k reflects the proportion of gold triples ranked in the top k. here we select k among { , , }, consistent with previous work. when hit@k is higher, the model can be considered as better. all results are reported with 'filter' setting which removes all gold triples that have existed in train, valid and test data during ranking. we report the test results according to the best performance of mrr on validation data as the same with previous works. table . link prediction results on umls and kinship. for different datasets, the best setting of the number of iterations varies a lot. for fb k and umls the number at provides the best performance, however for other datasets, iterative mechanism is helpful for boosting the performance. the best number of iterations is set to for wn , for wn rr, for fb k- and for kinship. we do link prediction task on benchmark datasets, comparing with several classical baselines such as transe [ ] , distmult [ ] and some sota strong baselines such as conve [ ] , rotate [ ] and tucker [ ] . for smaller datasets umls and kinship, we also compare with some non-embedding methods such as ntp [ ] and neurallp [ ] which learn logic rules for predicting, as well as minerva [ ] which utilizes reinforcement learning for reasoning over paths in knowledge graphs. the results are reported in table and table . overall, from the results we can conclude that our model achieves comparable or better performance than sota models on datasets. even with datasets without inverse relations such as wn rr, fb k- which are more difficult datasets, our model can still achieve comparable performance. to study the effectiveness of the iterative multi-step architecture, we list the performance of different number of steps on fb k- in table . the model settings are all exactly the same except for length of steps. from the results on fb k- we can conclude that the multi-step mechanism indeed boosts the performance for a complex knowledge graph like fb k- , which verify our motivation that refining information for several steps can obtain more helpful information for some complex datasets. we report the convergence process of tucker and msge on fb k- dataset and wn rr dataset in fig. . we re-run tucker with exactly the same settings in table , we report the parameter counts of conve, tucker and our model for comparison. our model can achieve better performance on most datasets with much less parameters, which means our model can be more easily migrated to large knowledge graphs. as for tucker, which is the current sota method, the parameter count is mainly due to the core interaction tensor w , whose size is d e * d r * d e . as the grow of embedding dimension, this core tensor will lead to a large increasing on parameter size. 
In the parameter comparison table, we report the parameter counts of ConvE, TuckER, and our model. Our model achieves better performance on most datasets with far fewer parameters, which means it can be migrated more easily to large knowledge graphs. For TuckER, the current state-of-the-art method, the parameter count is dominated by the core interaction tensor W, whose size is d_e × d_r × d_e; as the embedding dimension grows, this core tensor leads to a large increase in parameter size. However, note that our model has an iterative architecture, so only very few parameters are needed apart from the embeddings; the complexity is O(n_e·d + n_r·d). To evaluate time efficiency, we re-ran TuckER and our model on the same Tesla GPU; MSGE needs less time per epoch on both FB15k-237 and WN18RR, which demonstrates the time efficiency resulting from the small number of operations in our model.

To further support our motivation that the gate mechanism and multi-step reasoning are beneficial for extracting information, we conduct an ablation study with the following settings:
• No gate: remove the gates from our model, to verify the necessity of controlling the information flow.
• Concat: concatenate the information extracted at every step and feed it into a fully connected layer to obtain an alternative final representation; this verifies that the multi-step procedure itself produces more useful information.
• Replicate: replicate the relation to obtain k identical query relations for training; this tests whether extracting diverse information from multi-view query relations is more helpful than using the same relation k times.

The experimental results are reported in the ablation table. All results support our motivation that controlling the information flow in a multi-step way is beneficial for the link prediction task in knowledge graphs; in particular, the gated cell is of considerable benefit for information extraction.

In this paper, we propose MSGE, a multi-step gated model for the link prediction task in knowledge graph completion. We use a gate mechanism to control the information flow generated by the interaction between the subject entity and the relation, and we repeat the gated module to refine information from coarse to fine. The empirical results show that applying the gated module for multiple steps is beneficial for extracting more useful information, which further boosts performance on link prediction, and we analyse this conclusion from several perspectives. Note that all the information contained in the embeddings is learned implicitly during training. In future work, we would like to aggregate more information for entities to enhance feature extraction, for example from neighbouring nodes and relations.
Freebase: a collaboratively created graph database for structuring human knowledge
YAGO3: a knowledge base from multilingual Wikipedias
DBpedia: a nucleus for a web of open data
Translating embeddings for modeling multi-relational data
Knowledge graph embedding by translating on hyperplanes
Learning entity and relation embeddings for knowledge graph completion
Knowledge graph embedding on a Lie group
RotatE: knowledge graph embedding by relational rotation in complex space
A three-way model for collective learning on multi-relational data
Embedding entities and relations for learning and inference in knowledge bases
Complex embeddings for simple link prediction
SimplE embedding for link prediction in knowledge graphs
Tensor factorization for knowledge graph completion
Convolutional 2D knowledge graph embeddings
Modeling relational data with graph convolutional networks
ReasoNet: learning to stop reading in machine comprehension
Gated-attention readers for text comprehension
Knowledge graph embedding via dynamic mapping matrix
Canonical tensor decomposition for knowledge base completion
Translating embeddings for knowledge graph completion with relation attention mechanism
Long short-term memory
Learning phrase representations using RNN encoder-decoder for statistical machine translation
Rethinking the Inception architecture for computer vision
Observed versus latent features for knowledge base and text inference
Statistical predicate invention
End-to-end differentiable proving
Differentiable learning of logical rules for knowledge base reasoning
Go for a walk and arrive at the answer: reasoning over paths in knowledge bases using reinforcement learning

key: cord- -tsm zoe authors: slaughter, laura; keselman, alla; kushniruk, andre; patel, vimla l. title: a framework for capturing the interactions between laypersons' understanding of disease, information gathering behaviors, and actions taken during an epidemic date: - - journal: j biomed inform doi: . /j.jbi. . . sha: doc_id: cord_uid: tsm zoe

This paper provides a description of a methodological framework designed to capture the inter-relationships between the lay public's understanding of health-related processes, information gathering behaviors, and actions taken during an outbreak. We developed and refined our methods during a study involving eight participants living in severe acute respiratory syndrome (SARS)-affected areas (Hong Kong, Taiwan, and Toronto). The framework is an adaptation of narrative analysis, a qualitative method that is used to investigate a phenomenon through interpretation of the stories people tell about their experiences. From our work, several hypotheses emerged that will contribute to future research; for example, our findings showed that many decisions in an epidemic are carefully considered and involve significant information gathering. Having a good model of lay actions based on information received and beliefs held will contribute to the development of more effective information support systems in the event of a future epidemic.

There is a great deal of current interest in preparing for outbreaks of infectious disease. Both national and international efforts are aimed at developing strategies for rapid containment in the event of an outbreak [ ]. Government officials seek to contain an infectious disease through the cooperative efforts of the public by providing information either directly (pamphlets, web sites, posters, etc.) or through mass media news.
The public also finds information concerning an outbreak in television programs and newspaper articles, as well as in various "non-official" sources (e.g., socially, through word of mouth). Recent initiatives [ ] support research on tailoring public health messages during an outbreak disaster to the lay public "with care so that information reported is easy to understand, is appropriate and is relevant."

In this paper, we propose a qualitative methodological framework that characterizes human behavior during epidemics. This framework, based on narrative analysis, is a tool for learning how laypersons use information to build representations of an epidemic situation and how the results of this process influence their decisions to act. Two factors, social influences and emotional triggers, are considered as mediators of actions and are therefore also of concern to this work. The methods we use allow insights into actions taken, interpreted within the participant's situation (i.e., from the participant's own viewpoint). This type of work is essential for tailoring health messages that are usable by people during an epidemic and effective at changing risky behaviors. We outline our methods using illustrative examples from data collected during the SARS epidemic of 2003 [ ]. Section 2 discusses the theoretical background, with a specific focus on lay information gathering, lay understanding of disease, and lay health decision actions in a naturalistic setting. Section 3 outlines the basis for our methodological approach, describes the techniques used for capturing the interactions between the informational influences on lay actions, and presents the data analysis techniques. An illustrative example using data collected during the SARS outbreak is discussed in Section 4; our methodology was refined through application of the framework to the analysis of these data. In Section 5, we summarize the lessons learned and provide directions for future research.

The authors of this paper argue that a qualitative approach is necessary in order to obtain a macroscopic view of lay reactions to an epidemic crisis and to map out variables for large-scale studies. The related work we cite in support of this study does not fit neatly into the boundaries of a single field. Health-related information seeking theories [ ] and guidelines for constructing health messages for the public [ ] have not been explicitly connected to lay explanations of illness and lay behaviors during an epidemic. The main purpose of presenting our methodology is to demonstrate how the data analysis techniques can characterize the relationships between lay information gathering, understanding of the information received, and actions taken. We briefly review literature related to information seeking, perceiving information needs [ , ], and emotional effects that play into lay perceptions of information [ ]. We also point to cognitive theories related to lay comprehension and reasoning about illness [ , ], in addition to theories of naturalistic decision-making [ , ]. We have defined "information gathering" in the broad sense, to include both passive reception of informational messages in the environment and active searching [ ].
General models of active information seeking [ , ] describe the process by which a person seeks information, from an emerging awareness of a "gap" in current knowledge to the communication of this information need as a query that will locate the missing information. Within the domain of health information seeking, models based on a knowledge gap alone have been found insufficient because they focus exclusively on rational processes; they cannot explain information-seeking behavior when patients do not seek medical information even though they are aware of gaps in their knowledge [ ]. For health-related information seeking, theories of stress and coping have been integrated with two cognitive states that have been proposed as central to understanding an individual's response to an adverse health-related situation: orientation towards a threat (referred to as monitoring) and turning attention away from the threat (referred to as blunting) [ ] [ ] [ ]. The first studies that incorporated stress and coping theories with monitoring/blunting divided people according to personality types, as measured by the Miller Behavioral Style Scale (MBSS). Van Zuuren and Wolfs [ ] studied undergraduate students, using the Miller scale to assess whether a person is a monitor or a blunter. They found a direct association between problem-focused coping and monitoring and showed that monitoring is related to unpredictability. They concluded that monitoring was positively related to the perceived degree of threat in a situation; that is, the higher the perceived threat, the more information would be sought. Using Van Zuuren and Wolfs' technique of dividing a sample based on the MBSS, researchers such as Baker [ ] studied the information preferences of health consumers (in Baker's study, women with multiple sclerosis). Her results indicated that "monitors were more interested in information about MS than were blunters and further, that their interest occurred earlier in the disease than did the interest of blunters." In recent articles, the notion of personality type influencing information seeking has been challenged. Rees and Bath [ ] studied the information seeking behaviors of women with breast cancer, with results indicating that individuals may fluctuate between seeking and avoiding information, the process being dependent on situational variables such as how controllable the threat is perceived to be.

Given the results above, our data analysis attends to participants' discussions of risk perceptions connected with acts of information seeking. Our expectation is that narrative data will reveal how information-seeking behaviors are affected by emotional stress and fear of infection (high risk perception). This idea is further supported by the public health literature examining how health information disseminated during a crisis is interpreted by laypersons [ , ]. After information is gathered, it must be understood in order to be useful; gaining an understanding, or mental representation, occurs through the process of comprehension [ ]. Research on lay mental representations has shown that knowledge about disease consists of a combination of representations constructed from both informal social channels (e.g., traditional remedies learned from the extended family) and formal instruction in scientific knowledge [ ]. These two types of knowledge may be partly overlapping and contradictory when they deal with different aspects of the same disease.
In cases where more than one model is used, a person tries to satisfy the requirements of each model even though this results in some redundant activity. Research by Keselman et al. [ ] concerning adolescents' reasoning about HIV provides an example of contradictory and overlapping models. They found that middle and high school students often relied on practical knowledge of disease rather than on known facts about HIV. For example, one student acknowledged known facts about HIV, such as the fact that it is incurable. However, other explanations of the disease process later came into play while reasoning through a scenario about HIV. The scenario asked the student to discuss whether it was possible to expel HIV from the body, and the same student who acknowledged that HIV was incurable also believed that by drinking and exercising heavily it would be possible to expel HIV from the body. This student stated, "cause people can stop it like that. By exercising, like they said. Like that lady, like I told you, she exercised her way out of cancer, so I think this is true, you can exercise your way out of HIV probably."

Informally learned remedies and cultural beliefs sometimes play a major role in determining lay interpretations of epidemic events. For instance, Raza et al. [ ] examined lay understanding of the plague among persons belonging to the economically weaker sections of society in Delhi and Gurgaon, India. In identifying the factors that influence the public understanding of science, they found a significant amount of lay understanding based on "extra-scientific belief systems" (e.g., that sins committed by people contributed to the outbreak), which were prevalent in the context of the plague epidemic. Other studies have looked specifically at the use of mental representations in relation to decision-making. Patel and colleagues [ ] showed how physicians with different levels of expertise construct dissimilar problem representations on the basis of the same source information, which leads them to differing diagnostic decisions. As another example, Sivaramakrishnan and Patel [ ] showed how understanding of pediatric illnesses influenced mothers' choice of treatment for their children. Their study showed how mothers interpreted concepts related to biomedical theories of nutritional disorders, finding that traditional knowledge and beliefs played an important role in interpretation and reasoning, which led to decisions influenced by non-scientific traditional ideas.

Our work focuses on the comprehended representations that laypersons construct from the information received during an epidemic (e.g., mass media, conversations with friends), and on how these representations lead to actions. The naturalistic approach to decision-making investigates decisions made in constantly changing environments, with ill-structured problems and multiple players [ ]. These studies are conducted in real-life settings and investigate high-stakes decisions by looking at how people assess the situations they face, determine the problems they need to address, and then plan, make choices, and take actions. This approach is relevant to our work because the actions taken by laypersons during an outbreak occur in an uncertain environment, with information changing over time and a highly personal threat. Blandford and Wong [ ] combined the key features of naturalistic decisions to form their integrated decision model.
They used this framework to investigate the decision processes and strategies of ambulance dispatchers. The ambulance dispatcher study used retrospective narrative data to build a model of decision-making in a dynamic, high-stakes environment. In summary, this model is based on the following decision features: (1) situation assessment is important to decision making; (2) feature matching and story building are key to situation assessment because of missing information and uncertainty about the available information; (3) piecing together the situational information is difficult because it arrives over a period of time and not in the most optimal manner; and (4) analytically generating and simultaneously evaluating all possible actions does not occur in dynamic environments; instead, decision makers seek to identify the actions that best match the patterns of activities recognized in the situation assessment, one option at a time. Although this framework was formed from investigations of medical personnel's decisions, we believe that these observations apply to our work and can be extended to laypersons' actions in a high-stress, health-related situation.

In our study, the infectious disease we are concerned with is severe acute respiratory syndrome (SARS), a highly contagious respiratory illness that emerged in the Guangdong province of China during the winter of 2002. The SARS virus caused widespread public concern as it spread with exceptional speed to countries in Asia, the Americas, and Europe. Confirmed cases had accumulated by March 2003, when the World Health Organization (WHO) officially announced the global threat of a SARS epidemic, and within a month the number of cases and deaths had risen sharply. The SARS epidemic had been halted by July 2003, with several thousand cases and hundreds of deaths reported to WHO [ ].

The countries affected by the outbreak of SARS launched mass media campaigns to educate the public. Information about what actions to take, the symptoms of SARS, and other essential news changed on a daily basis as scientists and doctors treating SARS cases reported new findings. The public's ability to understand and react to the information they obtained played a key role in stopping the spread of SARS. The multifaceted control efforts, including quarantine, tracing the contacts of SARS patients, travel restrictions, and fever checks, were essential to containment; all of these required that the public understand the SARS-related information being conveyed and take actions to help protect themselves and others from spreading the disease. The SARS epidemic therefore provided a useful test case for our work and an opportunity to refine our narrative analysis-based methodology.

Various methodologies have been used to study human behavior during epidemics, and these offer different perspectives on lay response patterns. One approach is to study historical accounts [ ] [ ] [ ]. These are usually told from a single person's perspective on a group's response. While conclusions drawn from these types of texts might lend themselves to a global picture of an epidemic situation, there are potential drawbacks to using them as a basis for present-day outbreak preparation efforts: the occurrences describe people's reactions during a different time period (not within our current culture), and these texts might be biased towards the view of the author.
Another method of understanding lay reaction is through questionnaires or surveys following a real-life episode [ ] or a simulation of an outbreak [ , ]. These questionnaires use pre-defined categories, leading to large amounts of data on selected variables of interest (e.g., whether or not people believe they would follow quarantine restrictions). The questionnaires impose a pre-determined structure onto respondents and force them to reply using the investigators' categories. In our work, participants discuss events occurring during a recent outbreak that directly affected their lives (occurring in close proximity to their homes). A "story-based" interview allows analysis of what information was obtained from the environment (e.g., media, web searches, rumors, and conversations with friends), reactions to that information, and the actions taken, in the context of the participant who experienced it. These personal accounts, or narratives, of first-hand experience are a valuable source of data that can offer insight into a situation as it unfolded over time. By studying a sequence of events told as a retrospective narrative, an investigator can see how individuals temporally and causally link events (episodes) together. This approach falls under the qualitative paradigm and is referred to as narrative analysis [ ]; these methods have to do with the "systematic interpretation of interpretation" [ ]. Narrative analysis is commonly used as a methodological tool in health psychology [ , ] and anthropology [ ]. The stories people tell are based on their mental representations of illness, or "learned internalized patterns of thought-feeling that mediate both the interpretation of ongoing experience and the reconstruction of memories" [ ]. Narratives tell us something about how individuals understand events and construct meaning out of a situation, and illnesses are often explained by reconstructing events in a cohesive, story-like manner. For example, Garro [ ] discusses how people talk about illness: they link their experiences from the past with present concerns and future possibilities. Narrative analysis is best used for exploratory purposes, for helping a researcher understand how individuals view a particular situation, and also for illustrating (but not by itself validating) theory. It is based on inductive techniques rather than deductive hypothesis testing. This means that the researcher outlines top-level questions, which are elaborated and altered as the process of data analysis proceeds. Personal narratives tell us a great deal about social, cultural, and other beliefs that cannot be accounted for at the onset of the study. The questions used to begin a study are broadly stated and are used, along with the background literature, to set the initial boundaries delimiting the research. We ask how and why questions (i.e., asking for a description) rather than what questions (e.g., asking for a list of factors). The main idea is to characterize the inter-relationships between laypersons' information needs and gathering, comprehension of the information received, and actions during an epidemic. Participants in qualitative studies, narrative analysis included, are usually selected based on specifically defined criteria. In a study of lay reactions to an epidemic, it is therefore important to select participants who lived in close proximity to the outbreak.
The participants should tell their stories to the researcher as close in time to the actual experience as possible (perhaps while the outbreak is still ongoing). Depending on whether the researcher wants to do comparative analyses, participants might be selected by and grouped according to various dimensions such as age, socio-economic status, ethnicity, and/or geographical location. The number of participants selected depends on the number of comparisons to be made, and on whether the researchers' goal is to continue capturing narratives until the majority of the data contain overlapping experiences (i.e., until almost all of the possible reactions to the epidemic have been uncovered). Narrative data are usually collected through interviews. We advocate a semi-structured interview [ ] that lists a pre-determined set of "loosely ordered" questions or issues to be explored. The guide serves as a checklist during the interview so that the same types of information are obtained from all participants. The advantage of this approach is that it is both systematic and comprehensive in delimiting the issues to be taken up in the interview, while the interviews can remain conversational and still allow the researcher to collect specific data. In many qualitative studies an interview is organized around question-answer exchanges, but narrative studies require the use of free, open-ended "tell me" questions as the most effective way to elicit a story-like response. It is important to avoid closed "yes/no" questions in order to facilitate rich, narrative descriptions.

Decisions about how to transcribe data and the conventions used for analysis are driven by current theory, the research questions of interest, and the personal philosophy of the researcher. Our analytic interpretation of epidemic narratives progresses through three stages: (1) thematic coding of the factors emerging from the stories told concerning the epidemic, (2) organization of the various aspects of the story into chronological order, and (3) construction of "influence diagrams" of the factors influencing actions taken. Thematic coding [ ] is a type of content analysis based on grounded theory (using a bottom-up procedure to identify categories present in the text). Narratives may also be coded according to categories deemed theoretically important by the researcher. Initially, a set of categories can be derived in conjunction with the semi-structured interviews, based on theory from the related literature (an initial top-down approach to coding), and this list of categories can grow as the rich descriptive data are analyzed (a bottom-up procedure). One purpose of the thematic coding stage is to support the subsequent analyses: the categories are used to help untangle the interrelationships between information, understanding, emotions, social factors, and actions taken, and to systematically map our observations. In addition, an accuracy check can be done to determine whether participants are correct in their understanding of the disease, and the actions participants stated they had taken can be compared with guidelines from official sources (e.g., the CDC). When participants speak freely in telling stories about the events happening, the result is quite a bit of "jumping around" in time; people do not always begin their stories with the first event that occurred.
Researchers frequently find it helpful to organize the narrative according to temporal sequence (see Labov's work [ ]). For this work, classifying the temporal order of events is a necessary step leading to proper analysis of the informational and comprehension-of-situation influences on actions taken. Data are temporally coded in order to consolidate events that participants expressed as occurring in the same time period (according to the participant's perception of events). To illustrate, the same "time" code is assigned to link together the segments shown below and the segments of text found later in the interview:

"And I spoke to my girlfriend and she said that she was going to leave Hong Kong and I was really shocked because she was the one that was like myself, just kind of sticking around and saying oh it's not a big deal and we'll manage and uh she um felt that she just got into a panic herself. And what happened around that time at the end of March that there was a rumor that a teenager actually started on the internet, he put it on he like, he said that the airport had been closed and that people couldn't get in or out of Hong Kong and they later deemed of course it was a hoax. But at the time my girlfriend heard this, she didn't know, she panicked and she was leaving."

. . . later in the interview . . .

"I did a bit more internet grocery shopping because I did not want to go to the grocery stores. Usually, it was really strange because right around, right... a week or so before I left I would go to the market. It was actually; it was right around the time that that rumor came out that the Hong Kong airport had shut down. The store was so packed it was unbelievable. I thought it was a holiday, and I even asked a friend. I said, what is going on with the market?"

The construction of causal-link maps, sometimes referred to as "influence diagrams," is used for modeling interactions between events [ ]. These diagrams formalize laypersons' explanations into connected logical structures that can be examined as an overview of all the events occurring within a subset of time during an epidemic. This process maps the influences on actions taken for each time period identified (following the chronological organization of the data described above). The accompanying figures are influence diagrams. An example of how we created these diagrams is given using data from the Taiwanese participant. We first identify the associative, participant-stated relationships between coded themes. In these data segments, we can see that officials are (successfully) promoting hand washing and that family pressures also influence hand-washing behaviors; thus, "informational: officials → action: washing hands" and "social: family pressure → action: washing hands." The categories of interaction labels must be standardized for consistency. This process facilitates asking research questions across conditions (e.g., comparisons by socio-economic status) for numerous participants. Eventually, the objective is to be able to make generalizations about the interactions between the factors and events that influence layperson reactions during an epidemic: What are laypersons' decisions based on? How does social pressure affect decisions? What motivates active searching for information? The evaluation of the validity (or trustworthiness) of a narrative analysis is a critical issue that does not have an easy solution.
One way to test whether the results are valid relates to persuasiveness: an analysis can be said to be persuasive when "the claims made are supported by evidence from the data and the interpretation is considered reasonable and convincing in light of alternative possible interpretations" [ ]. Another way of checking validity is to conduct what is called a "member check" [ ]: participants in the study are given copies of the research report and asked to appraise the analysis conducted by the researcher, the interpretations made, and the conclusions drawn. These two measures of validity assess whether the interpretations of the data reflect the views held by the participants; there is no way to compare the results obtained with an objective truth. However, a community of scientists validates knowledge as they share results obtained across multiple studies.

As the SARS epidemic unfolded, we were in the process of exploring methodologies that would focus on the interactions among the information requirements of laypeople concerning an outbreak of infectious disease, their understanding of that disease, and the actions they take during an outbreak (or during a simulated outbreak). The SARS epidemic was used as a test study to refine the narrative analysis-based methodology. In the following sections, we show how these methods were applied and what we learned in the process. Our specific research objective is to characterize laypersons' reactions during an epidemic. Specifically, we ask why participants take certain actions (or recommend what should be done) and how their actions are linked to (1) their understanding of the epidemic and SARS infection, (2) the influential informational events (including information gained from social situations), and (3) other factors, such as feelings, that come into play. Other research questions can be answered using the data on what happened during the SARS outbreak; for example, the interview texts also yield a list of information needs expressed by the lay public concerning an outbreak, as well as a general list of actions taken for SARS prevention.

We interviewed eight residents from three SARS outbreak regions. Two of them were living in Asia (one in Hong Kong and one in Taiwan) and six were from Toronto, Canada. Participants recalled events taking place from the time when news began about the "mysterious" illness until the time of the interview, a span of several months. None of the participants had a medical background or a degree in biology or related fields. All of the participants except the Taiwanese male were native English speakers. The interviews with the residents of Asia took place in the late spring of 2003 and were conducted in the United States by the first author. The Hong Kong resident was an American Caucasian female who had been living in Hong Kong for the previous four years; she decided in late March to return to the United States temporarily because of the SARS outbreak. She was married, had a one-year-old daughter at the time of the interview, and was interviewed in her home in Washington, DC. The other resident of Asia was a Taiwanese male of Chinese descent who was living in Taipei, Taiwan, for the entire duration of the epidemic; the interview took place in New York City while he was vacationing. Both participants had an undergraduate degree from a university in the United States.
The interviews with the Toronto residents took place during the early summer of 2003. They were conducted in Toronto by a research assistant from the Information Technology Division, Department of Mathematics and Statistics, York University, Toronto. Toronto resident 1 was male; resident 2 was female (no age recorded); residents 3 and 4 were female; and residents 5 and 6 were male. All of the Toronto residents had bachelor's degrees, and two of them also had a master's degree.

The complete semi-structured interview guide is presented in Appendix A. The procedure for using this guide allows for flexibility when probing the respondent, and when interviewing we stressed the importance of keeping in mind the purpose behind each probe. We linked lay understanding to information sources using probes such as "How does SARS affect a person's body?" followed by "Can you tell me how you learned about this?" We asked about the act of seeking information and connected it with the participant's current information needs; the two-part probe "What questions did you have about SARS, and what caused you to look for information about [topic of the question asked]?" is of this type. Other actions concerning self, family, or community protection were connected to the understanding of SARS using probes such as "For example, did you buy a facemask, vitamins, or do something else?", "Why did you decide to [preparation given by respondent]?", and "How is [preparation given by respondent] going to help prevent SARS?" This instrument design was based on our research goal of linking the interactions between understanding, information received, and actions taken. The questions ask about the participant's SARS experience over time, starting before the outbreak, and then ask the participant to project their own scenario of what they believe will happen in the weeks following the interview. Arranging the interview into time periods (before, during, and upcoming events related to the epidemic) facilitates the data analysis when looking at the interactions and influences between information received, lay understanding, and actions taken. We believe that this type of interview could be used during any epidemic outbreak.

The strategy we used for the reduction and interpretation of our narrative data consisted of three stages (sketched schematically below). In the first stage, we identified the categories of SARS epidemic-related events and concerns discussed by our participants. We did thematic coding, in which we "let categories emerge from the data rather than assign them from a pre-defined list" [ ]. Using the qualitative software program NVivo [ ], we iteratively coded emergent categories by marking segments of text that are instances of actions the participant said they took, explanations given concerning SARS, information sources used, and information needs expressed. The second stage consisted of reorganizing the events in the narrative into the correct chronological order; this was necessary because participants' stories and anecdotes were not always told in the same order in which they occurred during the epidemic. The last stage used the time-ordered data from the second stage to evaluate influences on actions taken.
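Purely as an illustration, the three-stage reduction just described can be viewed as simple bookkeeping over coded segments: each marked segment carries one or more codes and a perceived time period, segments are grouped chronologically, and participant-stated links between codes become the edges of the influence diagrams. The sketch below is not software used in the study (coding was done in NVivo); the field names are assumptions, and the hand-washing and leaving-the-region examples merely echo instances reported in this paper.

```python
# Illustrative bookkeeping for the three analysis stages (not the software used in the study).
from collections import defaultdict

# Stage 1: thematic coding - each segment may carry several codes.
segments = [
    {"participant": "Taiwan", "time": 1, "codes": ["informational: officials", "action: washing hands"]},
    {"participant": "Taiwan", "time": 1, "codes": ["social: family pressure", "action: washing hands"]},
    {"participant": "Hong Kong", "time": 3, "codes": ["emotion: shock", "action: leaving the region"]},
]

# Stage 2: chronological reorganization - group segments by perceived time period.
by_period = defaultdict(list)
for seg in sorted(segments, key=lambda s: s["time"]):
    by_period[(seg["participant"], seg["time"])].append(seg)

# Stage 3: influence diagrams - participant-stated links between codes become directed edges.
influences = [
    ("Taiwan", 1, "informational: officials", "action: washing hands"),
    ("Taiwan", 1, "social: family pressure", "action: washing hands"),
]
for who, period, source, target in influences:
    print(f"{who}, time {period}: {source} -> {target}")
```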
For each action identified during thematic coding (stage one), we looked at the participant's explanations, reasoning process, and emotional state prior to the action. We began the process of coding by constructing a rudimentary coding scheme based on the interview probes and a reading of the first transcript. For example, we could anticipate that participants would discuss "wearing face masks" and "washing hands" as actions taken, and these were included in the initial scheme. This coding scheme changed incrementally as we carefully scrutinized all the interview texts (from Hong Kong, Taiwan, and Toronto), resulting in the final version of the coding scheme in Appendix B. The process is clarified with an illustrative example. The transcripts, in rough form, are read and the researchers assign codes at the phrase, sentence, or paragraph level. For example, the code "risk assessment of SARS" was assigned to the following phrase from one participant: "In March, early March I started to hear about the SARS virus that was in Hong Kong but it did not seem very, ahhh, quite epidemic mode." After seeing similar texts with the same type of expression, we assigned a more general code called "SARS risks" to all similar instances in all interviews, such as this phrase from another participant: "However, it was on the news but they didn't put a lot of seriousness into the broadcast." Each category was defined for further coding consistency; for example, "information need: containment status" was defined as "an information need or question stated that is about any topic related to control of the outbreak (e.g., 'has the virus peaked?')". Multiple codes could be assigned to the same text, for example: "spreads because it's in a really tight confined area [location: tightly confined area] like a hospital room [location: hospital room]"; a double underline was used when overlapping categories (e.g., [location: tightly confined area]) were assigned to text already coded in another category.

The process of thematic coding resulted in a list of instances of actions, explanations, and information needs expressed by participants in our data. We also looked for "new" categories of entities and events that would help us make sense of what people were experiencing during the SARS outbreak and of the factors that influenced their perceptions of the epidemic response. Although not initially themes in our work, we also coded policy, location of events, persons involved, and other major categories that emerged from the data. From the codes that emerged, we get a sense of: (1) the types of actions people took (e.g., avoiding others); (2) social situations they observed during the epidemic (e.g., people being banned from entering someplace because of an elevated temperature); (3) the types of actions participants recommended for SARS prevention (e.g., getting proper rest); (4) the types of explanations participants gave about SARS (e.g., cultural influences and cultural factors); (5) the information needs participants expressed (e.g., wanting to know the potential outcome for a person who contracted SARS); (6) the emotions people expressed concerning SARS (e.g., an "eerie feeling," feeling "freaked out"); and the information sources people consulted (e.g., TV media). We began to see some patterns emerging from our eight personal narratives.
Within the list of explanations of SARS, there were none concerning the physiological processes of a virus within the body (we did not observe a single instance in the data). The SARS participants in the Asian cities seemed to have a good knowledge of certain facts (about what actions to take). This is not surprising, since they were living in places where the threat was presented on a daily basis (e.g., everyone wearing masks on the street) and they were bombarded with information in the media. What was interesting was that these two people felt a need for further information about the mechanisms behind the viral processes, and not just a superficial list of what to do. Another pattern observed was the expression of suspicion by participants from Canada. Many of the emotions expressed in the coding scheme inventory relate to fear and anxiety. Participants, notably the Toronto residents, expressed suspicion about a possible cover-up of outbreaks (i.e., "why did this occur in Toronto and not in the US?"); these participants had questions and concerns about government policy [ ].

To illustrate how the thematic data can be employed to evaluate current guidelines, we used the coded narrative texts to compare participants' knowledge of SARS symptoms, treatment, transmission, and prevention with CDC web site information [ ], using guidelines from the same time period as the interviews. To do this, we coded the CDC guidelines using the same methodology described above. We then took each sentence from the interviews that was coded as a symptom, transmission mechanism, treatment, or prevention, and compared it with sentences from the CDC guidelines. Overall, participants held beliefs that were consistent with the then-current understanding of SARS, with almost all of the interview sentences concerning symptoms, transmission mechanisms, treatment, or prevention of SARS corresponding to those found in the CDC guidelines. Such a high level of consistency would be very unlikely in a larger, more diverse sample.

Reorganization of events was necessary to facilitate data reduction and to assist in the interpretation of the narratives. One result of this time ordering was unexpected and resulted in the further separation of each personal narrative into several time segment blocks. For each participant, we found that some events were mentioned multiple times and signaled major changes in the participant's emotional state, actions taken, understanding, or informational needs; we marked these as "trigger" points for change. For example, as the outbreak in Hong Kong became more serious and more people were infected, the participant's behavior and reaction naturally changed, leading to new actions taken, a greater understanding, and more attention focused on the daily news reports. The data were separated for each narrative so that the end of each time segment block signals a major shift in thinking about the epidemic (e.g., "and at that point it really got into my consciousness about this virus and that it was very serious"). Examples of the "triggers" that propelled a participant to change their viewpoint, and that signal the beginning of each new time period, are marked in italic type in the figures and shown in the section below.
We then analyzed the time-ordered data to capture the connections between the events, in order to organize and standardize the relationships between information, lay understanding, and actions. We found that constructing visual representations of the interactions was very useful for characterizing each overall causal explanation. To illustrate with an example, we use our Hong Kong participant; the figures referred to below correspond to the time segment blocks for that participant, from the start of the epidemic (time period 1) until the time she decided to leave Hong Kong (time period 3).

In the first time period, she was passively watching the news and did not feel that there was a heavy emphasis on the "mysterious illness" occurring in the New Territories. Her understanding of the disease was generated from these reports and primarily consisted of a belief that "it was not affecting us" (because she is a Westerner in Hong Kong). She did take one precaution, namely washing her hands more often and wiping her child's hands; this was based on her concern for her child's well-being and a wish to nevertheless take precautions against an illness she thought was "like a cold or pneumonia."

In the second time period (the figure shows coded interactions for the Hong Kong participant, with the trigger leading to increased concern marked in italic type), we began to see her concern increasing. The change was triggered by the realization that her neighborhood bank teller lived at Amoy Gardens and that the disease could possibly spread to her family. She built her understanding of SARS, focusing on a model of transmission, and all of her actions were based on this model. For the most part she followed the guidelines presented in the media, except that she did not wear a mask, in spite of the social pressure of "people wearing masks on the street"; she also avoided walking past people because "I didn't want somebody to sneeze or cough on me and me get caught in the drift of that and get it on me."

We see major changes in anxiety level, actions taken (precautions), understanding, and informational factors in the third time period (covering the span from an increased awareness of SARS as an epidemic to a decision to leave the region). Many interactions were recorded that form larger "influence interrelationships." For instance, she made a decision to leave Hong Kong, which was strongly related to a friend's decision to leave as well: the friend's decision increased her anxiety and shock, pushing her to make the same choice. She stated that her friend was panicked (though she did not state that she herself felt panic) because of an internet hoax claiming that the airports of Hong Kong were closed and no one could get out.

Across all the narrative data, we looked at people's reasons for actions; the major influences on actions are listed in the table below. One important influence on actions was the vocabulary used by an information source (in particular, the mass media), which served as a catalyst for major changes in lay reaction. As an example, use of the word "epidemic" caused emotional and behavioral changes. A Toronto resident stated that seeing the word "epidemic" was key: "uh, I wanted to know how fast it was spreading because if I thought it was an epidemic I would make the decision to stay home and not go to work." The Hong Kong resident provides another example of this.
After seeing the headline "SARS an epidemic in Hong Kong," she conducted web searches, which in turn led to further learning about transmission; she thereby updated her understanding of the virus, learning that the virus can survive on surfaces for a number of hours. These were considered a specific type of "trigger" based on source. These media triggers are "emotionally loaded" words that caused participants to experience major shifts in thinking about the epidemic. They were identified while reorganizing the narratives to consolidate all events (according to the participants' perceptions) that happened in the same time period; participants mentioned certain media events (i.e., "triggers") over and over again in separate stories, linking different aspects of their experience.

We found several primary types of influences on the expression (or realization) of an information need, listed in the table below, each illustrated by a quote: an emotional reaction ("it kinda freaked me out"), concern over family ("but since I did have a child"), a perception of safety ("so that leads me to believe that the general public is somewhat safer"), and social pressure, i.e., because "you have to" ("so you have to wear the mask"). Statements of this kind were considered "information need" statements, and the types of information need statements in the data were consolidated in a further table.

This methodology leads to emerging hypotheses from our data that can be explored in subsequent work. These suggestions were derived after examination of the participant-stated relationships between coded themes (as described above), and the descriptions of these emerging ideas refer back to the influence diagrams. In them, we begin to see that epidemic decisions were not based entirely on quick emotional reactions (panic) or social influences; rather, they were based on strong connections between knowledge building, information gathering, and making decisions (e.g., whether to wear a mask). The participant's understanding of disease transmission and epidemic status (containment) influenced decisions either directly or indirectly (through increasing negative emotions). Evidence that many decisions in an epidemic are carefully considered and involve significant information gathering prior to the actual decision is consistent with some of the public health literature [ ], but not with the literature on decisions in emergency situations [ ]. We found that as concern increased, participants became more aware of information until they felt the need to move beyond scanning to searching databases for information (passive viewing switched to active as fear increased). Thus, we hypothesize (1) that future studies will show this relationship between reactions of concern and increasing use of information sources to investigate the various actions to take during an epidemic.

Contrary to the effortful and systematic information seeking that participants described in relation to increasing concern, we saw several occurrences in the data where social factors led to quick and resolute precautionary actions. For example, the participant described a change in her understanding of risk between time periods two and three: "And I spoke to my girlfriend and she said that she was going to leave Hong Kong and I was really shocked because she was the one that was like myself, just kind of sticking around and saying oh it's not a big deal and we'll manage . . .
and once I heard that she was leaving, then it put me into motion thinking I need to get out of here. This is not the place to be right now." Social aspects tend to highly "personalize" the risk involved and alter thinking toward "it can affect me." The hypothesis (2) that emerges from these data is that information that personalizes the epidemic can affect the actions taken, leading to quick decisions to protect against infection. In terms of information seeking, the results related to hypothesis 1 are similar to those of Van Zuuren and Wolfs [ ] and Rees and Bath [ ] described in Section 2: as a person's concern (perceived degree of threat) increases during an epidemic, so does information seeking. However, we did not find the intentional avoidance of information that occurs in patients with a serious illness; in the epidemic situation, a lack of information seeking was observed only when there was a perceived need to make a quick decision.

The two observations made in the paragraphs above are consistent with dual-process models of social reasoning, the elaboration likelihood model [ ] and the heuristic-systematic model [ ]. These models predict that there are two routes of information processing: in one route, information is processed and decisions are made quickly and superficially; in the other, people engage in more time-consuming, effortful, and systematic information processing and problem solving. These models, specifically Chaiken et al. [ ], predict that the fast and superficial information-processing route (as in hypothesis 2) is used in situations where people are not motivated and/or able to engage in deliberate decision making. For people in an epidemic situation, future work may pinpoint the situational factors that result in thoughtful versus hasty actions.

We have outlined a methodology for characterizing the factors that affect information gathering, comprehension, and preventative behavior by lay people during epidemics. We approached the task using literature from all three areas as a framework, with a major focus on the cognitive aspects underlying behavior. This perspective suggests that decisions and actions are largely based on individuals' cognitive representations of events, which are in turn shaped by prior knowledge and new information, as well as by social and emotional factors. Given the complexity of the influencing factors and the interconnections among these variables, a structured qualitative approach was considered most appropriate for gathering data. Public health guidelines concerning ways to tailor communication describe aspects of messages that are effective during a crisis event [ ]. The goal of this methodology is to specify ways to increase compliance with guidelines and to reduce behaviors that increase risk. Use of this methodology captures the major themes that emerge related to information needs and actions, which allows officials to address the public's concerns and learn about the actions people are taking. Yet the major contribution of this methodology lies in developing detailed causal/temporal models showing the influences between factors; with these, it is possible to identify problematic situational variables and intervene when they may lead people to make rash decisions. This methodology was applied to study lay reactions to the SARS outbreak of 2003, but it might be applied to other viral infectious disease outbreaks, whether naturally occurring or caused by terrorism.
Having a better understanding of the reactions of the layperson will help in developing information support systems as well as guidelines for preparedness in the event of a future epidemic. Information provided through guidelines or "just in time" (depending on the needs) could help the lay public to respond appropriately during future epidemics. Results from studies using these methods can also be used to educate professionals (e.g., hospital administrators, the media, and government policy makers) by providing models that explain, for example, what strategies laypersons use to assess the situation during outbreaks of an infectious disease.

Centers for Disease Control (CDC) press release: CDC announces new goals and organizational design
CDC imperative: timely, accurate and coordinated communications
Outbreak of severe acute respiratory syndrome in Hong Kong Special Administrative Region: case report
On user studies and information needs
Communication for health and behavior change: a developing country perspective
Question-negotiation and information seeking in libraries
ASK for information retrieval
Models in information behaviour research
EMR: re-engineering the organization of health information
Reasoning about childhood nutritional deficiencies by mothers in rural India: a cognitive analysis
Naturalistic decision making: where are we now
Situation awareness in emergency medical dispatch
Looking for information: a survey of research on information seeking, needs, and behavior
Attention and avoidance: strategies in coping with aversiveness. Seattle: Hogrefe and Huber
Interacting effects of information and coping style in adapting to gynaecological stress: should a doctor tell all
Monitoring and blunting coping styles in women prior to surgery
Styles of information seeking under threat: personal and situational aspects of monitoring and blunting
A new method for studying patient information needs and information seeking patterns
Information-seeking behaviour of women with breast cancer
A first look at communication theory
The story of the great influenza pandemic of 1918 and the search for the virus that caused it
The coming plague: newly emerging diseases in a world out of balance
How to vaccinate , people in three days: realities of outbreak management
Kaleidoscoping public understanding of science on hygiene, health, and the plague: a survey in the aftermath of a plague epidemic in India
Community reaction to bioterrorism: prospective study of simulated outbreak
The public and the smallpox threat
Narrative analysis. Newbury Park: Sage Publications
Acts of meaning
Introducing narrative psychology: self, trauma, and the construction of meaning
Narrative psychology
Narrative and the cultural construction of illness and healing
Models and motives
Cultural knowledge as resource in illness narratives: remembering through accounts of illness
Basics of qualitative research: techniques and procedures for developing grounded theory
The NVivo qualitative project book
Some further steps in narrative analysis. The Journal of Narrative and Life History
Qualitative data analysis
california naturalistic inquiry understanding public response to disasters communication and persuasion; central and peripheral routes to attitude change heuristic and systematic information processing within and beyond the persuasion context you can exercise your way out of hiv and other stories: the role of biological knowledge in adolescentsÕ evaluation of myths cognitive psychological studies of representation and use of clinical practice guidelines this work is supported in part by nlm training grant lm - . we thank david kaufman for reading and reviewing the manuscript. key: cord- -wds e i authors: tejedor, santiago; cervi, laura; tusa, fernanda; portales, marta; zabotina, margarita title: information on the covid- pandemic in daily newspapers’ front pages: case study of spain and italy date: - - journal: int j environ res public health doi: . /ijerph sha: doc_id: cord_uid: wds e i spain and italy are amongst the european countries where the covid- pandemic has produced its major impact and where lockdown measures have been the harshest. this research aims at understanding how the corona crisis has been represented in spanish and italian media, focusing on reference newspapers. the study analyzes front pages of el país and el mundo in spain and italy’s corriere della sera and la repubblica, collecting news items and data evidences employing a mixed method (both qualitative and quantitative) based on content analysis and hemerographic analysis. results show a predominance of informative journalistic genres (especially brief and news), while the visual framing emerging from the photographic choice, tend to foster humanization through an emotional representation of the pandemic. politicians are the most represented actors, showing a high degree of politicization of the crisis. spain and italy, with , and , confirmed cases, respectively, are amongst the european countries where the covid- pandemic has produced its major impact and where lockdown measures have been the harshest. due to the reduced mobility and the imposed lockdown, the internet has proved to play a decisive role in terms of media consumption during the quarantine. social networks have occupied the first position among online platforms most frequently consulted by citizens. according to twitter, the information on the pandemic as well as the conversations related to the topic have caused a % boost in total active daily users, reaching the general level of million users per trimester. news check-up has experienced a prominent growth at that stage. specifically, the peak of media consumption coincided with the first measures of social distancing and has increased in correspondence with governmental communications. these data should be interpreted within the current crisis of journalism and the crisis of media credibility. a recent survey of countries by ipsos global advisor [ ] shows how citizens are rather skeptical towards the information they receive from the media, especially when it comes to online press. in spain, % of the surveyed trust in television, whereas % expressed their preference for traditional media, such as printed press. making reference to the intentions, the research claims that half of the total number of the surveyed believe that printed papers have "good intentions" as opposed to % considering that online newspapers and web pages are the ones with "the worst intentions" [ ] . 
based on the trust placed on the printed media-as the most credible and rigorous media-this research analyzes a total of front pages of the main daily newspapers in spain and italy ( each) . the research considers the daily newspaper's front page as a fundamental element that synthetizes and prioritizes the contents that the particular medium treats as the most important. at the same time, the front page maintains a direct relation to the digital version of the medium, somehow setting the agenda. in other words, the front page serves a privileged space for public identity construction [ ] . the study, carried out between february and april , collected pandemic-related news pieces and data evidences, aimed at answering the following research questions: • how has the covid- pandemic been covered on the front pages of spain and italy's main daily newspapers? • what types of journalistic genres have been used? • what types of political or social figures and institutions appear the most? • what role has been assigned to an image/photograph in the coronavirus-related information items of the front page? the covid- crisis has posed new challenges to journalism. media play a fundamental role in framing a crisis, since providing the right information from a reliable source is the key issue in this type of pandemic. the world health organization (who) has used the term "infodemic" to define the overabundance of information introduced by coronavirus and to warn the citizens against the risks caused by this information excess, that contain a great amount of hoaxes and rumors. as sylvie briand, director of infectious hazards management at who's health emergencies program notices, this phenomenon is not new "but the difference now with social media is that this phenomenon is amplified, it goes faster and further, like the viruses that travel with people and go faster and further". the role of social media in spreading misleading health information is not new [ ] , but the covid- crisis has shown the critical impact of this new information environment [ ] . many studies have focused on and are still focusing on how the disintermediated role of social media may foster misinformation: scholars studying iran [ ] and spain [ ] , stress how social media spread rumors, others [ ] try to analyze the structure of this infodemic, or concentrate on the effect of media exposure [ ] . within this social media euphoria, very few studies have focused on legacy media, intended as the mass media that predominated prior to the information age-particularly print media, radio broadcasting, and television-even if reality is showing that legacy media still plays an important role [ ] . a study [ ] noticed how cnn has recently anticipated a rumor about the possible lockdown of lombardy (a region in northern italy) to prevent the pandemic, publishing the news hours before the official communication from the italian prime minister. as a result, people overcrowded trains and airports to escape from lombardy toward the southern regions before the lockdown was in place, disrupting the government initiative, aimed to contain the epidemic and potentially increasing contagion. other literature [ ] stresses the importance of looking at mainstream media coverage pointing out the importance of a high quality scientific journalism [ ] . the analysis of the printed daily newspapers' front pages has been object of recurrent studies for the last years. 
starting with the classical works [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] up to the contemporary researches [ ] [ ] [ ] [ ] , various studies have dealt with content analysis of that essential element of the printed press [ , ] . other studies followed them, concentrating on the comparison between the front pages in printed and digital editions of a medium [ ] . as previously highlighted, daily newspaper's front pages are considered to be the most important page, displaying informative priorities and editorial position in relation to current issues [ ] . other studies [ ] single out three core elements of a daily newspaper's front page: headlines, or visual linguistic set; texts, or visual paralinguistic set and images, or visual non-linguistic set. in this context, the importance of media and information literacy, seen as the citizens' ability to access, use, assess, and create responsible and ethical content [ ] , has become crucial. media and information literacy refers to the vital role that information and media possess in the everyday life of a person, therefore this skill represents an indispensable component to exercise freedom of expression and information. in this vein, numerous studies [ ] [ ] [ ] [ ] stress the significance of a digital literacy development that would exceed studying merely technical or instrumental aspects to embrace the issues of the critical use of media. the research, based on previous studies [ ] , analyzes a total of news items extracted from front pages of the four main daily newspapers of spain and italy ( per country). el país and el mundo of spain alongside with corriere della sera and la repubblica of italy were chosen, based on their relevance and the availability of their front pages. the analysis has been carried out through the use of a template chart consisting of parameters and categories that were obtained mainly in inductive form. the study, possessing descriptive and explanatory character, employs a mixed method (both qualitative and quantitative) based on content analysis and complemented by direct observation and hemerographic analysis as the main techniques. the first technique focused on the analysis of various elements that constitute the front page designs by means of a template chart elaborated during the research process. subsequently, we implemented a hemerographic analysis of texts, headlines and images. the data were processed through descriptive statistics planning with spss software. the analytical chart has considered all the elements displayed in table . coronavirus-related information occupies % of the front pages. precisely, news items out of the total focus on topics related to the covid- pandemic. as for the main journalistic genres, (see figure ), we can observe brief as the most common. this journalistic genre, characterized by its conciseness and brevity, has been defined as brief, a summarized piece of news that solely reflects the most relevant data of the information, missing profound insight and context. the total of pandemic-related units possess the form of short pieces, that could oscillate between or lines.
news occupies the second position in the list of types of the texts about coronavirus at the analyzed front pages. the informative approach, in other words, dominates the representation of the crisis. the effort of the daily newspapers to inform their readers on the characteristics, impact and spread of the virus has been detected. nonetheless, it is worth mentioning that opinion articles (with a total number of counted units) surpass other informative journalistic genres. moreover, the importance of the editorial photo with a total number of units, solely accompanied by the footnote, demonstrates a comprehensive approach to the topic through the communicative value of the image. the location of news items at the daily newspapers' front pages can be considered another element of high value when it comes to the detection of importance of each topic. in this sense the studies grant more value to the upper part and the right part from the reader's standpoint. the right part is the most valuable of the odd-numbered page and the left part is the most important for the even-numbered pages. concretely, projecting an imaginary v onto the open double page, the higher the position on the v, the more value the piece has (both in terms of editorial and advertising rates). accordingly, results show that news about coronavirus appear mostly located in the upper part, but in the left zone (see figure ). in this way, it is possible to point out that the newspapers place the news in an important area of their front pages. in addition, in second position, with a total of news items, is the upper right-hand area. in this way, it is possible to point out that the news have been progressively occupying the areas of greatest visual impact of the front page. however, this set of news items is very close to the news items on the pandemic that appear at the bottom of the front page, i.e., the one of least importance. a total of appear at the bottom left and at the bottom right. therefore, the distribution of the news on covid- between the two areas marked by the horizontal division of the first page (top/bottom) is very tight.
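A minimal sketch of how the template-chart coding and the descriptive tabulations reported above could be reproduced in code (the original analysis was run in SPSS; the field names, category labels, and example records here are hypothetical):

```python
from collections import Counter

# Hypothetical rows of the template chart: one record per front-page news item.
items = [
    {"paper": "El País", "country": "Spain", "genre": "brief", "zone": "upper-left", "has_photo": False},
    {"paper": "Corriere della Sera", "country": "Italy", "genre": "news", "zone": "upper-right", "has_photo": True},
    {"paper": "La Repubblica", "country": "Italy", "genre": "opinion", "zone": "bottom-left", "has_photo": False},
    {"paper": "El Mundo", "country": "Spain", "genre": "editorial_photo", "zone": "upper-left", "has_photo": True},
]

genre_counts = Counter(item["genre"] for item in items)
zone_counts = Counter(item["zone"] for item in items)
total = len(items)

print("Journalistic genres:")
for genre, n in genre_counts.most_common():
    print(f"  {genre}: {n} ({100 * n / total:.1f}%)")

print("Front-page placement:")
for zone, n in zone_counts.most_common():
    print(f"  {zone}: {n} ({100 * n / total:.1f}%)")
```

The same record structure would carry the remaining chart parameters (entities, characters, headline type, verb type), each tallied in the same way.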
figure displays the main entities mentioned in the stories, that is to say institution or entities most recurrently mentioned or displayed. of the information, % mentions geographical scenarios (europe, madrid, milan, etc.). in this sense, there is a tendency to depersonalize the information and to extrapolate it to wider scenarios or territories. this aspect is important insofar as the subject is the element of the sentence that carries out the action contained in it. in % of the cases the front page news referred to national entities of non-political nature (the hospital, the emergency unit, the laboratory, the intensive care unit, the sports center, the cultural center, etc.). in particular, there is a notable reference to entities linked to hospitals and healthcare scenarios. national political entities (government, trade unions, spokespersons, minister of health, etc.) occupy the third position in the rank of entities linked to the news, with %. political entities from abroad (the who, the european union, the european parliament, etc.), with %, and non-political entities from abroad (especially universities, research groups or the media), with %, respectively, completed the list of entities.
figure details which kind of people are mostly mentioned within the main characters in the information on covid- . interestingly, national political figures are the most numerous group with % of the total, stressing how the crisis is highly politicized. in second place, we find anonymous citizens who are protagonists in % of news items on the front page. if political figures normally make it to the front pages, within the covid- crisis, anonymous people have become co-protagonists of the front pages. public figures from different countries (with %) making statements about the pandemic outnumbered those affected by or suffering from the virus (with %) and international political figures (with %). finally, health personnel, who have generated important recognition and ovations, have only been the protagonists of % of the front page news about the virus, while researchers and scientists (with %) occupy the last place in percentage of presence in the front pages. headlines are also of great importance. their location on the page, and the type and size of the title contributes to underlining the importance of the information among the set of pieces selected to appear on that page. in relation to this (see figure ), the study has identified a predominance ( ) of appellative headlines, focused on drawing the reader's attention. for this reason, the headlines tend to be non-verbal and have very atomized structures that seek to convey to the reader news about a subject he or she already knows.
as an example, figure shows the headlines from italian la repubblica: "tutti a casa" (everybody is at home) and "chiude l'italia" (italy closes) to announce the lockdown measures. the continuity of the news about covid- on the front pages of the media, both in italy and spain, justifies this tendency towards headlines of an appellative nature that allude to facts that are familiar to the citizens. the informative headlines, with a total of units, are the most conventional and generally opt for the structure of subject, verb and predicate, enunciating a topic related to the pandemic trying to answer the "what" and the "who" of such information. the expressive ones, which have an evocative function on an event known to the reader, are the scarcest (only have been counted). this reduced number emphasizes the existence of a commitment among the media analyzed to avoid sensationalist headlines or those that seek only to externalize moods. another classification of headlines focuses on speech acts. in relation to these, as shown by figure , the headlines with textual quotations-which reproduce, between quotation marks, the declaration of one of the protagonists of the information-predominate ( in total). the majority presence of this type of headlines denotes an interest of the media in presenting the information from a personalized standpoint, thus, bringing the stories closer to the subjects that have generated them. indirectly quoted headlines ( ) and partially direct headlines ( ) accumulate a smaller number of cases.
by analyzing the types of verbs used (see figure ), a predominance of strong interpretative verbs, characterized by highlighting the intensity of an action, is detected ( in total), followed by weak interpretive verbs (a total of ) which, although with less intensity, denote a willingness of the journalist to give more intensity to an action. the narrative ones, which are more neutral, add up to a total of ; while the perlocutionary ones, which incorporate an effect that is intended to be achieved by means of an action, are the least numerous, with units counted.
the visual framing also shows a certain level of both spectacularization (with the presence of celebrities) and politicization of the crisis, while health workers and scientists that actually are on the frontline of the fight against the virus are less visible. by comparing italian and spanish news outlets, we can observe how covid- occupies the majority of the information in both countries. nonetheless, while in spain it occupies % of the front page; in italy covid- related pieces cover a striking % of the information (see figure ). italy was the first european country severely hit by the pandemic, so it makes sense to state that this unpleasant surprise somehow engulfed media attention. with regard to the information that presents a predominance of numeric data, the number of pieces is very low in both countries. spain, with %, and italy, with %, reinforce the scarcity of information focused only on figures or percentages. this might seem surprising, due to the overwhelming amount of data information (statistics, evolution of case numbers, etc.) we have received during the pandemic, nonetheless it confirms the interpretative role assumed by the printed press: while on line media can offer on line updating, the printed press offers a more interpretative vision of facts. figure displays the main entities portrayed in the information. geographical names, that is to say cities or regions, are the most numerous in the information on covid- ( % in italy and % in spain), followed by national institutions not linked to politics, which, with % in italy and % in spain, show the prominence that this type of institution has acquired in the framework of this crisis. political institutions are those that occupy the third place with % of the total in italy and % out of all the pieces about the covid- crisis, , that is to say % of the total content on the pandemic, has some kind of photographic accompaniment. only are in black and white, and in have an artistic quality. the reduced number of photographs on the pandemic on the covers is striking, although it could be justified by the difficulty of obtaining images or doing so from a variety of themes that would allow for dynamism and renovation of the covers. regarding the characters that appear in the photographs, public figures such as pope francis, or celebrities, like sportsmen or writers have taken over the front pages ( . %). the second leading role is played by anonymous citizens who appear in everyday scenes, with a total of %, and national politicians account for %. only % have images of people affected by or patients who have contracted coronavirus. international political figures account for % of the total. finally, health personnel only appear in % of the photographs and scientists and researchers in %. the visual framing also shows a certain level of both spectacularization (with the presence of celebrities) and politicization of the crisis, while health workers and scientists that actually are on the frontline of the fight against the virus are less visible. by comparing italian and spanish news outlets, we can observe how covid- occupies the majority of the information in both countries. nonetheless, while in spain it occupies % of the front page; in italy covid- related pieces cover a striking % of the information (see figure ). italy was the first european country severely hit by the pandemic, so it makes sense to state that this unpleasant surprise somehow engulfed media attention. int. j. environ. res. 
with regard to the information that presents a predominance of numeric data, the number of pieces is very low in both countries. spain, with %, and italy, with %, reinforce the scarcity of information focused only on figures or percentages. this might seem surprising, due to the overwhelming amount of data information (statistics, evolution of case numbers, etc.) we have received during the pandemic, nonetheless it confirms the interpretative role assumed by the printed press: while on line media can offer on line updating, the printed press offers a more interpretative vision of facts. figure displays the main entities portrayed in the information. geographical names, that is to say cities or regions, are the most numerous in the information on covid- ( % in italy and % in spain), followed by national institutions not linked to politics, which, with % in italy and % in spain, show the prominence that this type of institution has acquired in the framework of this crisis. political institutions are those that occupy the third place with % of the total in italy and % in spain. the characters that appear in the information correspond to very diverse profiles. national political figures are the most numerous, with % in italy and % in spain (see figure ). this aspect contrasts with the reduced presence of institutions, as mentioned above. thus, it is possible to point out that politics is personalized as party politics through the representation of figures of the different parties of the country, which complies with the seminal findings of hallin and mancini [ ] . in italy, citizens account for % of the total number of items; in spain, they account for only %. these differences are equally visible in the presence of researchers or scientists, which in italy is % and in spain reaches %. the main characters that appear in the photographs on the covers, displayed by figure , show important differences between the two countries. celebrities or public figures are the ones that absorb the most attention, with % of the total in italy and % in spain. this aspect emphasizes the importance given to this type of profile in the information and awareness of the pandemic. citizens, with % in italy and % in spain, would be in second place. there is, therefore, a prominent role for anonymous people. national politicians, with % and %, respectively, would be in third place. patients (with % and %) and researchers or scientists (with % and %) hardly appear in the cover images. when looking at the physical spaces represented in the news, summed up in figure , we can observe similarities and differences. first, while italian newspapers offer an emotional representation of the crisis by granting an enormous importance to the representation of empty spaces (such as squares or symbolical touristic spots, like the trevi fountain in rome, that in a normal situation would be crowded), spanish news outlets completely avoid this option, which accounts for only % of the total pictures. in both cases, however, urban spaces are the most recurrent (with % in italy and % in spain). in addition, although their importance is not prominent, the spaces related to political life (congress, etc.), with % in italy and % in spain, have a significant presence in the cover photographs. health centers or health camps, with % and %, are other places that appear next to citizens' homes (with % and %, respectively). even if health centers are not the most prominent settings in pictures, we observed how, when portrayed, these spaces are emotionally charged to dramatize the tragedy. figure shows two examples of spanish newspapers el mundo and el país showing coffins of victims in the middle of the crisis. the covid- crisis has been a shocking reality that took most countries by surprise. italy and spain have been amongst the first in europe to be hit by the pandemic. thus, observing media behavior is especially interesting. to answer our research questions, we can state that the covid- crisis has been covered mainly in an informative way: the analysis of the two main newspapers in spain and italy allows us to observe a predominance of informative journalistic genres. in particular, the predominant genre is the brief: short news items, which lack contextual information, and do not offer in depth information to the readership. however, the choice of images, that is to say the visual framing of the stories, seems to suggest an emotional turn.
in other words, even if the predominance of informative genres, together with the avoidance of openly emotional headlines might suggest that the analyzed newspapers have avoided sensationalism, both the mentioned visual framing and the increased presence of anonymous citizens and celebrities among the subjects, can be interpreted as an attempt to humanize the information pieces, emotionally charging them. in particular, it is important to stress out the high level of politicization of the crisis: politicians have been the most recurrent actors both in the information and in the pictures. this, as seen, seems contradictory to the scarce presence of institutions. this result, however, should not be of surprise since, as already pointed out by hallin and mancini [ ] , both spain and italy belong to the polarized pluralist model, in which party politics is predominant to institutional politics. moreover, both countries are characterized by high intensity political polarization, therefore the management of the crisis has been the source of harsh political conflicts between government and opposition, that has been reflected by the media. concretely, spanish media outlets are the ones that give a more political vision of the crisis. accordingly, we observed a trend to objectify the different news actions and events through the use of geographical entities. the use of physical enclaves (italy, spain, milan or madrid, for example) lead to a simplification or generalization of reality that can bias the reading and interpretation of what has happened. in the same vein, it is important to stress that both health personnel and researchers directly involved in the fight against the virus, have a negligible presence both in pictures and information. in conclusion, we can sum up that the protagonists of the pandemic are not those affected, or involved in the fight, but rather anonymous citizens, and especially celebrities and politicians. these results cannot be discussed without taking into consideration the general framework of the social responsibility theory. as pointed out by mcquail in his seminal work [ ] , media should accept and fulfil certain obligations to the society and should meet high professional standards of accuracy, truth, objectivity and balance. therefore, journalists and professionals should be accountable to the society reflecting and respecting diversity, pluralism as well as diverse points of view and rights of reply. applying these criteria to the specific field of health communication, defined by sixsmith et al. [ ] as the study and use of communication strategies to inform and influence individual and community decision that enhance health, encompassing health promotion, health protection, disease prevention and treatment, we see how media are pivotal to the overall achievement of the objectives and aims of public health. in this sense, media practitioners and their organizations should be in charge of delivering rigorous health information, aimed at creating awareness about people's health, to prevent diseases and encourage healthy living. one of the main requirements of good health journalism [ ] , thus, is to present evidence-based news with proper perspective, and without giving rise to sensationalism or alarm. 
as pointed out by many studies [ , ] journalism training, in order to cope with these new challenges, should lay special emphasis on these aspects, providing not only specific health journalism training, but developing specific media and information literacy training devoted to health issues. acknowledging the geographical limitations, our study allows a series of conclusions to be drawn, which, from a diagnostic perspective, may help both scholars and journalism practitioners and deepen on the behavior and reaction capacity of newspapers in front of important and tragic events such as planetary pandemic. from a scholarly perspective, our work is embedded in a stream of literature that considers media to play a crucial role in framing public debates and shaping public perceptions by selecting which issues are reported and how they are represented [ ] [ ] [ ] [ ] . even if, as said, our results are limited to spain and italy, we have shown that printed newspapers avoid the massive use of data or percentages, leaving live updates to on line media, concentrating on more informative and interpretative pieces. this, on the one hand suggest they still play a crucial role in molding public opinion by offering more interpretative content [ ] , on the other they directly and indirectly interact with digital media, in charge of giving more live information. for this reason, our results suggest that legacy media should still be examined to see how they influence/are influenced in their interaction with online media. in addition, we have pointed how in both countries the pandemic has been highly politicized. this result, stressing out the salience of the political factor in representing the pandemic, underlines the need for more comparative research, analyzing media portrayal in different contexts and how different media, embedded in different political and cultural contexts, have reacted. in particular, the current corona crisis, having a global reach and effect, could be an ideal occasion to compare media behavior in different countries to observe the existence of similarities and differences, and to which extent different political cultures and political systems modify media reactions to a pandemic, following and proving hallin and mancini's model [ ] . in addition, even if reference newspapers seem to opt for an informative approach, their visual framing and the choice of images (i.e., empty places) emotionally charges the information. in this sense, besides the need for more comparative research to prove that this is a global trend, from a practical standpoint, our results align with the findings of previous studies [ , ] , stressing the need to promote media and information literacy, not only among citizens, but also among media professionals. as demonstrated, both the predominance of short news items, which lack contextual information, and the visual framing can make the process of processing the essence of information messages difficult, making the susceptibility of many people to misinformation as risky as susceptibility to the virus itself. thus, as previously pointed out, in order to achieve rigorous and responsible health information, journalism training should not only provide specific health journalism training, rather media and information literacy skills should be developed in this field. 
in the same vein, media and information literacy campaigns geared towards citizenship should focus specifically on health issues, since, as the current crisis has showed, health is one of the most sensitive topic when it comes to quality information to avoid the risk of misinformation. accordingly, the current corona crisis underlines the necessity of a reform of science communication. this pandemic has underlined that the media often does not offer rigorous scientific information, prioritizing a (possibly) misleading humanization of news. first, media professionals should be trained and help to implement fact-checking functions, particularly to debunk fake news, misinformation and disinformation on health subjects. moreover, a renewed cooperation and a greater communication between media, health experts, academia and policy makers is essential for improving quality of health news. for this, academic institutions, health bodies, and organizations engaged in scientific and medical research need to improve their communication with media, understanding the need to explain research findings, policies and trends to media professionals, who are in charge of "translating" and diffusing them to citizens. on the other side, media should rely on competent scholars from a wide range of disciplines, interacting, assessing and dialoguing with journalists in order to provide readers, and ultimately citizenship, with a better understanding of science-related issues such as a pandemic. la prensa sensacionalista y los sectores populares systematic literature review on the spread of health-related misinformation on social media effects of health information dissemination on user follows and likes during covid- outbreak in china: data and content analysis covid- related misinformation on social media: a qualitative study from iran social networks' engagement during the covid- pandemic in spain: health media vs. healthcare professionals the covid- social media infodemic. arxiv the novel coronavirus (covid- ) outbreak: amplification of public health consequences by media exposure new dialogic form of communication, user-engagement technique or free labor exploitation? revista de comunicação dialógica effects of misleading media coverage on public health crisis: a case of the novel coronavirus outbreak in china policy response, social media and science journalism for the sustainability of the public health system amid covid- outbreak: the vietnam lessons diseño total de un periódico diseño y compagisnación de la prensa diaria acontecimientos de nuestro siglo que conmocionaron el mundo color y tecnología en prensa; editorial prensa ibérica las portadas de abc el titular: manual de titulación periodística a estrutura gráfica das primeiras páginas dos jornais "o comércio do porto la invención en el periodismo informativo la formación de las secciones de deportes en los diarios de información general españoles antes de análisis de contenido y superficie de las primeras páginas de los diarios autonómicos análisis de las temáticas y tendencias de periodistas españoles en twitter: contenidos sobre política, cultura, ciencia el diseño periodístico en la prensa diaria media literacy and new humanism fact-checking' vs. 'fakenews': periodismo de confirmación como recurso de la competencia mediática contra la desinformación the challenge of teaching mobile journalism through moocs: a case study análisis de los contenidos de elementos impresos de la portada de diario correo edición región puno cómo se fabrican las noticias. 
fuentes, selección y planificación la prensa on-line. los periódicos en la www digitizing the news. innovation in on-line newspapers el periodismo y los nuevos medios de comunicación internet para periodistas comparing media systems: three models of media and politics mass communication theory health communication and its role in the prevention and control of communicable diseases in europe: current evidence, practice and future developments health communication: the responsibility of the media in nigeria. spec tejedor calvo, s. analysis of journalism and communication studies in europe's top ranked universities: competencies, aims and courses análisis de los estudios de periodismo y comunicación en las principales universidades del mundo. competencias, objetivos y asignaturas this article is an open access article distributed under the terms and conditions of the creative commons attribution (cc by) license key: cord- - av kl c authors: feldman, sue s.; hikmet, neset; modi, shikha; schooley, benjamin title: impact of provider prior use of hie on system complexity, performance, patient care, quality and system concerns date: - - journal: inf syst front doi: . /s - - -x sha: doc_id: cord_uid: av kl c to date, most hie studies have investigated user perceptions of value prior to use. few studies have assessed factors associated with the value of hie through its actual use. this study investigates provider perceptions on hie comparing those who had prior experience vs those who had no experience with it. in so doing, we identify six constructs: prior use, system complexity, system concerns, public/population health, care delivery, and provider performance. this study uses a mixed methods approach to data collection. from interviews of medical community leaders, a survey was constructed and administered to clinicians. descriptive statistics and analysis of variance was used, along with tukey hsd tests for multiple comparisons. results indicated providers whom previously used hie had more positive perceptions about its benefits in terms of system complexity (p = . ), care delivery (p = . ), population health (p = . ), and provider performance (p = . ); women providers were more positive in terms of system concerns (p = . ); patient care (p = . ), and population health (p = . ); providers age – were more positive than older and younger groups in terms of patient care (p = . ), population health (p = . ), and provider performance (p = . ); while differences also existed across professional license groups (physician, nurse, other license, admin (no license)) for all five constructs (p < . ); and type of organization setting (hospital, ambulatory clinic, medical office, other) for three constructs including system concerns (p = . ), population health (p = . ), and provider performance (p = . ). there were no statistically significant differences found between groups based on a provider’s role in an organization (patient care, administration, teaching/research, other). different provider perspectives about the value derived from hie use exist depending on prior experience with hie, age, gender, license (physician, nurse, other license, admin (no license)), and type of organization setting (hospital, ambulatory clinic, medical office, other). 
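The statistical procedure named in the abstract (descriptive statistics, one-way analysis of variance, and Tukey HSD follow-up comparisons for each construct) can be sketched as follows. This is a generic illustration rather than the authors' code: the construct scores, group sizes, and effect sizes are simulated, and only the prior-use grouping is shown.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Simulated construct scores (e.g., perceived impact on care delivery, 1-5 scale)
# for providers grouped by prior HIE use.
groups = {
    "prior_use": rng.normal(4.0, 0.6, 60),
    "no_prior_use": rng.normal(3.6, 0.6, 80),
    "unsure": rng.normal(3.7, 0.6, 30),
}

# One-way ANOVA across the groups for a single construct.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD pairwise comparisons, reported when the omnibus test is significant.
scores = np.concatenate(list(groups.values()))
labels = np.concatenate([[name] * len(vals) for name, vals in groups.items()])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```

In the study the same test would be repeated for each construct and for each grouping factor reported in the abstract (age band, gender, license, and organization type).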
this study draws from the theory of planned behavior to understand factors related to physicians’ perceptions about hie value, serving as a departure point for more detailed investigations of provider perceptions and behavior in regard to future hie use and promoting interoperability. health information exchange (hie) has been described in various ways in the literature, but is generally understood as the act of health information sharing, facilitated by computing infrastructure, that is conducted across affiliated physicians' offices, hospitals, and clinics; or between completely disparate health systems (furukawa et al. ) . hie across disparate systems allows clinical information to follow patients as they move across different care settings, whether or not each organization shares an affiliation. this might include a hospital connected to an hie that is, in turn, connected to other forprofit and not-for-profit hospitals, private practices, and clinics. hie is expected to transform the nation's healthcare system through access to patient data from electronic health records to support care provision and coordination and improve care quality and population health (berwick et al. ) . health information exchange has also been described in terms of the organizational and technical environment facilitating hie. in this paper, hie is the act of exchanging health information facilitated by a health information exchange organizational and technical environment. while expectations and promises are high, still relatively little is known about the real and perceived value of hie by providers, and how to accomplish large-scale acceptance and use. it has been reported that about two-thirds of hospitals and almost half of physician practices are now engaged in some type of hie with outside organizations (rahurkar et al. ) . relatively few of the more than operational u.s. health information exchanges have been the subject of published evaluations (rudin et al. ) . after more than a decade of hie hype, utilization by users is still relatively low. a review of hie past research indicated that most studies reported use of hie in % to % of encounters (rudin et al. ). further, findings from a review of research on hie sustainability suggest that just one quarter of existing hie organizations consider themselves financially stable (rudin et al. ) . a systematic review of hie studies suggests that study stakeholders claim to value hie. yet, the effects on a wide range of outcomes are still unknown (rudin et al. ) , and little generalizable evidence currently exists regarding sustainable benefits attributable to hie (rahurkar et al. ) . some have noted the potential for widespread hie adoption to reduce the utilization and cost of healthcare services richardson ) , though empirical evidence is limited (bailey et al. ; fontaine et al. ) . continued research is needed to understand the factors associated with adoption and use of such a promising, yet underutilized technology. to date, most hie studies have investigated user perceptions of value prior to use, and the intention to use. few studies have assessed factors associated with the value of hie through its actual use. this study investigates provider perspectives on hie comparing those who had prior experience vs those who have only heard of hie, but not yet had experience with it. the purpose of this study is to investigate provider perceptions about hie, comparing those who have used hie to those who have not used hie and how perceptions differ. 
the objectives of this study are to explore demographic differences in perceptions across different types of providers, assessing several important factors related to the adoption of health it. literature has determined that factors associated with perceived benefits and challenges of health it adoption and use include: ) the extent to which users perceive a system to be complex vs easy to use (davis ; gadd et al. ) , ) technical standards and business concerns that act as barriers to system use (rudin et al. ) , ) the perceived effects that using a system (and its information) has on public or population health (zech et al. ; shapiro et al. ; hincapie and warholak ; hessler et al. ; dobbs et al. ) , ) the perceived effects that using a system (and its information) has on patient care delivery (frisse et al. ; furukawa et al. ; kaelber and bates ) , and ) the perceived effects that using a system (and its information) has on provider performance (davis ) . we review these factors below as well as the literature on prior use of an information system: or the effect of prior use vs nonuse on perceptions and expectations of a system. prior use of an information system has been found to be associated with the intention to use them in the future (jackson et al. ; agarwal and prasad ) . prior it usage behavior tends to be a significant determinant of future usage intention and/or behavior (jasperson et al. ) . prior behavior helps form habit, which induces future behavior in an unthinking, automated, or routinized manner, rather than through a conscious process of cognitive or normative evaluation (eagly and chaiken ; triandis ) . accordingly, future behavior can be viewed as a joint outcome of behavioral intention and habit. though the indirect effects of prior use have seen little investigation in the literature, preliminary evidence to that effect has been reported (taylor and todd a, b) . these authors argued that as individuals gain familiarity with using a given technology (by virtue of prior use), they tend to focus less attention on the amount of effort needed to use the technology, and more on ways in which the technology can be leveraged to improve their job performance and outcomes. the familiarity with the technology gained from prior usage experience and the knowledge gained from learning-by-doing allows users to form a more accurate and informed opinion of the true performance benefits of the target system and its relevance to their job performance. hence, users' performance expectancy tends to become stronger as their prior use of technology increases. we extend this line of reasoning for business information systems to the case of providers' use of health information technology, or in this case, hie. hie is particularly interesting as a unique and differing context from business due to the multi-organizational, distributed, and shared healthcare information technology setting. perceived system complexity, or the degree to which a person believes that using a particular information system would be difficult vs easy or free of effort, has long been used as a construct to assess user acceptance of information technologies (davis ) . studies on ease of use have included a range of health information technologies including electronic health records (ehr) (dünnebeil et al. ; hennington and janz ) , telemedicine (hu et al. ) , clinical information systems (paré et al. ) , and others. 
many studies on hie ease of use have focused on the perceptions of prospective users not currently using hie. in one such study, . % of physicians interested in hie, but not currently engaging in hie, perceived that using hie would be easy . in contrast, . % of physicians not interested in hie perceived that using hie would be easy ). in the past, ease of use of hie has been positively predictive of system adoption and usage , and has shown to impact successful retrieval of patient information that affected patient care (genes et al. ) . of significant importance for this study is understanding perceived ease of use from users who have actually used hie vs. those who have not. some past studies have found several concerns that providers have in terms of using hie. a meta-analysis of studies showed that stakeholders consider hie to be valuable, but barriers include technical performance, workflow issues, and privacy concerns (rudin et al. ) . concerns also include limits on the amount and type of information that providers want to use , and general fears of information overload from ill designed systems and/or utilization of duplicate or seemingly "competing" information systems (rudin et al. ) . for example, in one study, quality assurance (qa) reports generated by a health information exchange for medical practices was reported as the least valued system function due to skepticism about report validity and concerns that reports would reflect negatively on providers . another study found that hie usage was lower in the face of time constraints, questioning whether hie may be considered an information burden rather than a help to users vest and miller ) . time constraints, especially in primary and emergency care, have also tended to be a cause for concern in engaging in hie. mixed results from hie evaluations have further raised concerns about its utility. for example, one study found that increased hie adoption has been associated with reduced rates of laboratory testing for primary and specialist providers (ross et al. ). yet, in the same study, imputed charges for laboratory tests did not shift downward (ross et al. ) . this evidence led us to include health information exchange system concerns as an important construct for hie provider perceptions and usage. prior studies have indicated the potential for hie to aid in a range of population health and care coordination activities, care quality, and timely health maintenance and screening. in one study, hie was shown to enable identification of specific patient populations, such as homeless (zech et al. ) , and those who have chronic or high risk health conditions ). however, not all studies have showed significant results related to population and patient health outcomes (hincapie and warholak ) . the authors in one study concluded that hie usage was unlikely to produce significant direct cost savings, yet also noted that economic benefits of hie may reside instead in other downstream outcomes such as better informed and higher overall quality care delivery (ross et al. ) . broader hie impacts include positive relationships with public health agencies (hessler et al. ), improved public health surveillance (dobbs et al. ) , and increased efficiency and quality of public health reporting ). thus, we assess user perceptions of the impact of hie on public/population health. the extent to which hie impacts the delivery of patient care has been addressed in prior studies. 
more generally, the timely sharing of patient clinical information has been shown to improve the accuracy of diagnoses, reduce the number of duplicative tests, prevent hospital readmissions, and prevent medication errors furukawa et al. ; kaelber and bates ; yaraghi ; eftekhari et al. ) . for hie specifically, usage has been associated with decreased odds of diagnostic neuroimaging and increased adherence with evidence-based guidelines (bailey et al. ). these include timelier access to a broader range of patient interactions with the healthcare system (unertl et al. ) , improved coordination of care and patient health outcomes for human immunodeficiency virus patients (shade et al. ) , and positive patient perceptions of the impact on care coordination (dimitropoulos et al. ) . providers continue to engage in hie with the belief that care delivery will improve (cochran et al. ) . further, benefits have been noted for specific care settings. emergency department access to hie has been associated with a cost savings due to reductions in hospital admissions, head and body ct use, and laboratory test ordering . further, hie has been associated with faster outside information access, and faster access was associated with changes in ed care including shorter ed visit length ( . min shorter), lower likelihood of imaging (by . , . , and . percentage points for ct, mri, and radiographs, respectively), lower likelihood of admission ( . %), and lower average charges ($ lower) (p ≤ . for all) (everson et al. ) . provider perceptions about the positive effects hie has on patient care delivery may be the strongest motivating factor for its adoption. for example, in one study, physicians most agreed that the potential benefits of hie lie in care quality and were least worried about the potential for decreases in revenues resulting from the technology (lee et al. ) . a study that looked at perspectives of home healthcare providers indicated a decrease in ed referral rates with hie (vaidya et al. ) . in another study, looking up clinical information (test results, clinic notes, and discharge summaries) on a patientby-patient basis was found to be the most valued function for hie users, followed closely by the delivery of test results, for the care of patients ). as such, perceived impact of hie on care delivery is included as a dependent variable. one expected outcome of using hie is that a performance improvement would occur. perceived usefulness is defined here as "the degree to which a person believes that using a particular system would enhance his or her job performance." (davis ) in sum, when a system is perceived to be useful, users believe positive performance will result. prior hie studies have shown that certain medical information is perceived to be more useful than others. one prior study found that physicians expressed agreement that hie is useful for pathology and lab reports, medication information, and diagnoses with chief complaints (lee et al. ). however, they expressed less agreement regarding the need for patient data items, functional test images and charts, care plans at the referring clinic/hospital, or duration of treatment (lee et al. ) . different studies reported that providers expected hie data would be useful to improve completeness and accuracy of patients' health records, efficiency with which clinical care is delivered, quality and safety of care, communication with other providers and coordination and continuity of care cochran et al. ) . 
prior studies have also indicated that quality patient information is believed to impact the elimination of duplicated medication as well as lab and imaging tests, prevention of drug-drug interactions, better decision making on the care plan and expedited diagnoses, and better ability to explain care plans to patients (lee et al. ). physicians have noted that data gaps, such as missing notes, adequacy of training (cochran et al. ), and timely availability of information (melvin et al. ) may pose a significant challenge to future hie usage (rudin et al. ). while these perceptions of usefulness are important relative to expected hie use and resulting provider performance, we see expected and actual system use as scenarios that could potentially result in contrasting viewpoints. perceptions of provider performance that result from actual use of hie may provide a more true-to-life assessment. the objectives of this study are to explore demographic differences in perceptions across different types of providers, assessing several important factors related to the adoption of health it. our hypothesis is that provider age, gender, type of licensure (i.e., doctors, nurses), provider organizational setting (hospital, private practice), role in an organization (administration, patient care), and prior experience using hie are factors that affect provider perceptions about hie. for example, lee and colleagues (lee et al. ) found that different physician practice settings significantly influenced individual user perceptions. based on this information, this study addresses differences in provider perceptions for the different hie-related constructs described above. the research question was empirically tested using a field survey of practicing health providers in the state of virginia, usa. the specific technology examined was health information exchange and the action examined was hie. in march , the office of the national coordinator for health it (onc) awarded a state cooperative agreement to the virginia department of health (vdh) to govern statewide hie. in september , community health alliance (cha) was awarded a contract from vdh to build the virginia statewide health information exchange; connectvirginia was subsequently initiated to accomplish this goal. statewide health information exchanges were regarded as an organizational structure to provide a variety of mechanisms to enable hie using standardized technologies, tools, and methods. during the -month study period, connectvirginia designed, tested, developed, and implemented three technical exchange services: connectvirginia direct messaging (a secure messaging system), connectvirginia exchange (the focus of this study), and a public health reporting pathway. connectvirginia exchange is a query/retrieve service in which a deliberate query passively returns one or more standardized continuity of care documents (ccds) that provide a means of sharing standardized health data between organizations on-boarded and connected to connectvirginia. the health information exchange design was based on a secure means to exchange patient information between providers via direct messaging, a secure means for query and retrieval of patient information via exchange, and a secure means for public health reporting. consistent with other health information exchange developments, a standardized product development lifecycle was used to create and implement the system. 
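to make the query/retrieve pattern concrete, the sketch below shows what a client-side interaction with such a service could look like. this is a minimal illustration only: the endpoint url, parameter names, and json response shape are assumptions made for this example and do not describe the actual connectvirginia api or its security model.

```python
# Illustrative sketch only: a hypothetical HTTP client for a query/retrieve
# style HIE service that returns Continuity of Care Documents (CCDs).
# The endpoint URL, parameter names, and response format are assumptions.
import requests

HIE_QUERY_ENDPOINT = "https://hie.example.org/api/ccd-query"  # hypothetical

def query_ccd_documents(patient_id: str, requesting_org: str, token: str) -> list:
    """Query the (hypothetical) exchange for a patient's available CCDs."""
    response = requests.get(
        HIE_QUERY_ENDPOINT,
        params={"patientId": patient_id, "requestingOrganization": requesting_org},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    # Assume the service returns a JSON list of document descriptors,
    # each with a retrieval URL for a standardized CCD (XML).
    return response.json().get("documents", [])

def retrieve_ccd(document_url: str, token: str) -> str:
    """Retrieve a single CCD document (XML string) from the exchange."""
    response = requests.get(
        document_url, headers={"Authorization": f"Bearer {token}"}, timeout=30
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    docs = query_ccd_documents("12345", "Example Clinic", token="demo-token")
    for descriptor in docs:
        ccd_xml = retrieve_ccd(descriptor["url"], token="demo-token")
        print(descriptor.get("title", "CCD"), len(ccd_xml), "bytes")
```

the point of the sketch is the two-step pattern the text describes: a deliberate query that returns document descriptors, followed by retrieval of standardized documents that can then follow the patient across organizations.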
connectvirginia's direct and exchange protocols were originally established by onc and used as a method to standardize hie of secure messages and ccds. similar to the original nationwide health information exchange (nwhin) connect's exchange, connectvirginia's exchange involves pulling information by providers from an unfamiliar healthcare facility. connectvirginia's direct is a simplified version of the connect software that allows simple exchange of basic information between providers (dimick ) . the public health reporting pathway was established using secure file transport protocols. a survey questionnaire was developed and administered to physicians, dentists, nurse practitioners, registered nurses, physician assistants, and nurse midwives in virginia. we evaluated user perceptions of the hie using selected survey items. each clinician was given a summary of the study, the study protocol, consent procedure, and were notified that the research had been approved by the institutional review board of the lead researcher's university. each participant consented to participate and was assured their responses would be anonymous. the survey had three sections: ( ) demographics (age, job, gender) and system usage characteristics; ( ) familiarity with technology; and ( ) user perceptions across an author generated scale inclusive of the following constructs: system complexity, health information exchange system concerns, provider performance, patient care, and population care. a panel of experts reviewed items for face validity. we collected responses for all items on a scale of to from strongly agree ( ) to strongly disagree ( ). participants could also leave comments in several sections. purposive sampling was used to invite individuals, key informants, and thought leaders in the medical community to participate in -min interviews. fifteen interviews were conducted. from these interviews, a survey was created and administered to physicians, dentists, nurse practitioners, registered nurses, physician assistants, and nurse midwives. the sample was achieved from multiple sampling sources including two state medical societies and through a virginia medical providers e-rewards panel. e-rewards, a research now company, is one of the leading providers of online sampling in the u.s. product research industry. surveys were conducted via two channels: telephone and internet. e-rewards surveys were conducted by experienced telephone surveyors. in order to avoid survey bias, online and telephone surveys rotated questions. the goal was to achieve usable responses. over a -week period, invitations to participate in the survey and the survey link were distributed in monthly newsletters to qualified members of the medical society of virginia ( , members) and the old dominion medical society. old dominion medical society did not disclose the total number of members. this resulted in surveys, of which were usable. e-rewards panel members were called until completed and usable surveys were accomplished. combined, this yielded in usable surveys. ibm spss data collection was used to create and administer the survey online. data were collected from may through june . all surveys collected through telephone were entered into the online survey system as responses were collected. data were analyzed from the online and telephone versions of the survey together. internal reliability of the scale was calculated using cronbach's alpha and used the statistical package ibm spss v. for quantitative analyses. 
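as a point of reference, the following is a minimal sketch of how internal consistency (cronbach's alpha) can be computed for one multi-item construct. the construct name and item values are illustrative; the study itself performed this calculation in ibm spss.

```python
# Minimal sketch: Cronbach's alpha for one multi-item Likert construct.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: rows = respondents, columns = Likert items of one construct."""
    items = items.dropna()
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical example: four 5-point Likert items measuring "patient care"
example = pd.DataFrame({
    "pc1": [5, 4, 4, 5, 3, 4],
    "pc2": [4, 4, 5, 5, 3, 4],
    "pc3": [5, 3, 4, 4, 2, 4],
    "pc4": [4, 4, 4, 5, 3, 5],
})
print(round(cronbach_alpha(example), 3))
```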
descriptive statistics were compared across demographics and usage characteristics. data were summarized using mean, median, and sd, and a sign test was used to determine if individual subscale items were significantly different from neutral. to determine the effects of our independent variables (prior use, gender, age, professional license, role in the organization, type of organization) on our dependent outcomes, an analysis of variance (anova) was used along with tukey hsd tests for multiple comparisons. the five constructs of interest to this study were health information exchange system complexity, system concerns, public/ population health, care delivery, and provider performance. each construct was measured using multiple-item survey questions, adapted from prior research, and reworded to reflect the current context of providers hie usage. the complete item scales are provided in table . system complexity was measured using four likert-scaled items adapted from davis' perceived usefulness scale (perceived usefulness is also referred to as performance expectancy in the information technology usage literature) (davis ). perceptions about system concerns were measured using items modified from taylor and todd (taylor and todd a, b) . the effect of hie on patient care delivery, public/population health, and provider performance was measured using investigator developed likert-scaled items guided by literature review. prior hie usage was measured using three items similar to thompson et al. (thompson et al. ) that asked subjects whether they had previously or currently used the system. we did not have access to actual system-recorded usage data, and thus, self-reported usage data was employed as a proxy for actual recorded usage. since the usage items were in the "yes/ no" format, in contrast to likert scales for other perceptual constructs, common method bias was not expected to be significant. we assess demographic factors including participant gender, age, and professional license (physician, nurse, other, administrative); role in the organization (patient care, administration, teaching/research, other); and type of organization (hospital, ambulatory clinic, medical office, other). these were all determined via selectable items in the instrument. each of the five constructs were tested for their reliability, or the extent to which each represents a consistent measure of a concept. cronbach's alpha was used to measure the strength of consistency. results of validation testing were found to be strong for patient care ( . ), provider performance (. ), and population care (. ); and moderate for system complexity (. ) and system concerns (. ). among responses, physicians indicated they had used hie previously and the remaining physicians indicated they had not yet used hie. respondents represented all clinical specialties, including internal medicine, pediatrics, gynecology, pathology, general surgery, anesthesiology, radiology, neurology, oncology, and cardiology. selected sample demographics, along with the population demographics for all providers at this hospital (obtained directly from the hospital administration), are shown in table . a one-way between subject's anova was conducted to compare statistical differences across demographic categories on each previously identified construct. demographic categories included: whether the participant had previously engaged with hie or not, gender, age grouping, professional license, role in the organization, and type of organization. 
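the analysis pipeline described above (a sign test of scores against the neutral midpoint, one-way anova across a grouping variable, and tukey hsd post hoc comparisons) can be sketched as follows. the column names and data values are illustrative only, not the study data, and the study used spss rather than python.

```python
# Minimal sketch of the analysis steps described in the text.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one construct score per respondent
df = pd.DataFrame({
    "license": ["physician", "physician", "nurse", "nurse", "admin", "admin",
                "physician", "nurse", "admin", "nurse", "physician", "admin"],
    "patient_care": [4.0, 3.5, 4.5, 4.8, 4.2, 3.9, 3.2, 4.6, 4.1, 4.4, 3.8, 4.0],
})

# Sign test against the neutral midpoint (3 on a 1-5 scale)
above = int((df["patient_care"] > 3).sum())
below = int((df["patient_care"] < 3).sum())
sign_test = stats.binomtest(above, n=above + below, p=0.5)
print("sign test p-value:", round(sign_test.pvalue, 4))

# One-way ANOVA across professional license groups
groups = [g["patient_care"].values for _, g in df.groupby("license")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Tukey HSD post hoc comparisons: which specific group means differ
tukey = pairwise_tukeyhsd(endog=df["patient_care"], groups=df["license"], alpha=0.05)
print(tukey.summary())
```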
constructs previously identified included health information exchange complexity, health information exchange system concerns, and the benefits of hie on patient care, population health, and provider performance. results from the anova are shown in table . results indicated that there was a statistically significant difference between those that had previously used hie and those that had not used hie for four constructs, including provider beliefs about system complexity (f( , ) = . ); the full statistics are shown in table . statistically significant differences between gender groups were found for three constructs including provider system concerns (f( , ) = . , p = . ), and perceived benefits on patient care (f( , ) = . , p = . ) and on population health (f( , ) = . , p = . ). statistically significant differences between age groups were found for three constructs including provider perceived benefits on patient care (f( , ) = . , p = . ), population health (f( , ) = . , p = . ), and provider performance (f( , ) = . , p = . ). statistically significant differences between professional license groups (physician, nurse, other license, admin (no license)) were found for all five constructs, including provider perceptions about system complexity (f( , ) = . , p = . ), provider concerns about use (f( , ) = . , p = . ), provider perceived benefits on patient care (f( , ) = . , p = . ), population health (f( , ) = . , p = . ), and provider performance (f( , ) = . , p = . ). statistically significant differences between types of organization setting (hospital, ambulatory clinic, medical office, other) were found for three constructs including provider concerns about use (f( , ) = . , p = . ), and provider perceived benefits on population health (f( , ) = . , p = . ) and on provider performance (f( , ) = . , p = . ). there were no statistically significant differences found between groups based on a provider's role in an organization (patient care, administration, teaching/research, other). an anova test is important to assess the significance of results; however, an anova test does not provide information about where the statistically significant differences lie for multiple comparisons (groupings for age, licensure, type of organization). in order to analyze which specific group means are different, tukey's hsd test is conducted for anova results with statistically significant f-values (tukey ). tukey hsd post hoc comparisons indicated that the mean score for participants between and years old (m = . , sd = . ) was significantly different from that of participants over years old (m = . , sd = . ) for provider perceived benefits on patient care. these two age groups were also significantly different in terms of perceived benefits on population health. results from tukey hsd post hoc tests on type of organization indicated that participants whose primary work affiliation is with hospitals (m = . , sd = . ) were significantly different from those who identified with medical offices/private practice (m = . , sd = . ) in terms of their concerns with using hie. results showed these two groups were also significantly different in terms of hie benefits on provider performance ((m = . , sd = . ) vs. (m = . , sd = . ), respectively); the mean differences are significant at the . level. hospital-based providers were less concerned and more positive towards performance benefits. results also indicated that ambulatory clinics (m = . , sd = . 
) were significantly different from medical offices/ private practices (m = . , sd = . ) in their beliefs that hie benefits population health. the primary goal of this study was to identify differences in providers' perceptions related to hie based on their demographics and prior use of hie. a field survey was created based on provider interviews and administered to providers from different disciplines. the findings indicate that there were statistically significant differences in most hie perceptions based on demographics including prior hie use, gender, age, professional license, and type of organization. professional role in an organization yielded no statistically significant differences. the providers that had previously used hie showed more positive responses towards hie in each category except for the system concerns category. these results seem to support the business literature indicating that prior use of an information system has been shown to be associated with the intention to use it in the future, regardless of the amount of effort needed to use the technology (jackson et al. ; agarwal and prasad ) . there was a statistically significant difference in perceptions related to system complexity, system concerns, patient care, and population care between different genders, with females generally showing more positive responses than males for each category. depending on age groups, a statistically significant difference was observed for the patient care, population care, and provider performance constructs. interestingly, the middle group (ages to ) showed more positive responses than both the younger providers (less than years) and older providers (over years) for each category. while other studies have not looked at age by these constructs, this finding differs from literature suggesting that those years and younger are more likely to adopt ehrs (decker et al. ) . it could be that the hie context may be perceived differently than ehrs due to it being connective to and enabling of interoperability between ehrs. it could also be the size of the sample for each age group. in this sample, we have more providers that are in that middle age group compared to those that belong to the younger and older age groups. another potential way to explain this finding could be using the diffusion of innovation theory (kaminski ) . younger providers and older providers could be part of the later majority (conservatives) and may be less than enthusiastic at the beginning, as they could be waiting on statistically significant evidence before adopting and implementing new technology. the middle age group could be part of the visionaries or pragmatists as described in the diffusion of innovation theory and may be willing to be the trail blazers or risk takers (kaminski ) . statistically significant differences for all five constructs were noted across all professional license groups, which indicates that providers have different perceptions about hie depending on their discipline. in general, nurses reported the lowest scores about perceived complexity of hie, yet the highest scores in terms of system concerns. other licensed professionals reported the highest scores (high perceptions about hie task complexity). nurses and administrators reported the most positive scores on the benefits of hie for patient care, population health, and provider performance. doctors reported the least amount of concern with using hie, while nurses reported the highest amount of concern. 
statistically significant differences were noted for system concerns, population health, and provider performance constructs between different types of organizational groupings. the hospital based and "other organization" type groupings reported the least concerns about hie use and those working for medical offices reported the highest amount of concern. participants who reported working within ambulatory clinics reported the highest perceived benefit of hie on population health while those working within medical offices/private practice reported the lowest perceived benefit and this is consistent with the literature across a variety of ambulatory settings (haidar et al. ) . participants who reported working within hospitals reported the highest perceived benefit of hie on provider performance while those working in medical offices/private practice reported the lowest perceived benefit. the findings of this study should be interpreted in light of its limitations. the first limitation is our measurement of the prior hie usage behavior construct. our self-reported measure of usage was not as accurate, unbiased, or objective as usage data from system logs. we urge future researchers to use system log-based measures of it usage, if available. second, our small sample size, and correspondingly low statistical power, may have contributed to our inability to observe significant effects of prior usage behavior on each construct. we encourage future researchers to consider using larger samples, such as by using pooled observations from two or more hospitals and/or other healthcare facilities. third, there were participants that did not select either male or female gender. thus, the analysis of gender did not include the entire sample. finally, there may be additional factors beyond those examined in this study. our choice of the factors used here was motivated by the hie literature and a first round of interviews with providers. however, there may be other theories, such as innovation diffusion theory or political theory, that may also be relevant to explaining provider perceptions and behavior. future studies can explore those theories for identifying other predictors of provider behavior and/or compare the explanatory ability of those theories discussed herein. the findings of this study have interesting implications for health it practitioners. first, we provide evidence of the perspectives of various types of providers in terms of their beliefs and perceptions about hie. perceptions about system complexity, system concerns, patient care, population health, provider performance, and prior usage have varying effects in terms of influencing provider' perceptions about hie. however, implementing standards of care that incorporate hie is an instance of organizational change, requiring careful planning and orchestrating of change management to influence providers to routinely use the targeted technologies. change management programs designed to enhance provider intentions to use health it should focus on educating users on the expected performance gains from technology usage as well as improving their perceptions of behavioral control by training users to use those technologies appropriately. the significance of prior behavior on future hie usage intention is indicative of the importance of recruiting early adopters to "seed" hie usage in hospitals and healthcare settings. 
junior practitioners, by virtue of their more recent medical training involving the latest health it, may be viewed as more likely to be such early adopters. however, the results of this study also show that providers age to may provide a strong base of supporters. given their prior usage behavior and correspondingly, higher level of comfort with such technologies, these individuals are likely to continue using hie further in hospital settings, even when other conditions may be less conducive to their usage. with major health it policy efforts focused on interoperability, this study contributes to the perspective that different provider groups may be stronger facilitators of interoperability efforts than others and thus these findings could help managers and policymakers determine strategies for such efforts. this study examined provider perceptions about hie, comparing those who have not used hie with those who have previously used the technology. as such, the role of prior behavior on providers' perceptions and intentions regarding future hie usage provides new insights. one may hypothesize that if individuals use a system that is perceived to provide value, then use of that technology will proliferate and expand throughout the intended user population. findings indicate that this may not always be the case. as systems become more integrated, inter-organizational, and more complex, user perceived value derived from using that system may not be understood by the user. this study contributes to the nascent stage of theorizing in the medical informatics literature by presenting the theory of planned behavior as a referent theory that can not only help us understand providers' usage of hie better, but can also serve as a starting point for more detailed investigations of provider behavior. second, given that the centrality of hie usage to improving healthcare delivery, quality, and outcomes and the uphill battle many states and regions are currently facing to get providers to use hie, our study provides some preliminary suggestions about how providers' behaviors can be influenced using a strong evidence base. we also elaborate the contingent role played by prior hie usage experience in shaping providers' usage patterns. presumably, there may be more such contingent factors that may be the subject of future investigations. in conclusion, we hope that this study will motivate future researchers to examine in further depth provider hie usage behavior and contribute to a cumulative body of research in this area. publisher's note springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. sue feldman, rn, med, phd is professor and director of graduate programs in health informatics in the school of health professions at the university of alabama at birmingham. dr. feldman is also a senior scientist in the informatics institute and senior fellow in the center for the study of community health. her research focuses on health information systemsfrom development to evaluation. dr. feldman also serves on the health informatics accreditation council for the commission on accreditation for health informatics and information management (cahiim), chairs graduate health informatics accreditation site visit teams, and has studied and developed graduate level health informatics curriculum. as a registered nurse (rn) for over years, she brings a unique clinical and informatics blend to everything she does, grounding policy and theory with practice. 
dr. feldman has published in a variety of top-tier peer-reviewed journals and conference proceedings, led or co-led the development of several information systems that are grounded in research, and has served as program chair for several national forums. her current work involves leading the development of a substance use, abuse, and recovery data collection system for the state of alabama as well as a statewide covid- symptom and exposure notification system. dr. feldman has a master's degree in education and a phd in education and in information systems and technology from claremont graduate university. neşet hikmet is professor of health information technology in the college of engineering and computing, university of south carolina, where he also serves as the director of applied sciences at the center for applied innovation and advanced analytics. his research expertise includes cloud computing, augmented analytics, health informatics, and the healthcare internet of things. as an applied scientist he is heavily involved in the design, development, deployment, and maintenance of large-scale computing systems. dr. hikmet has significant experience in leading health data analytics projects within academia, including the utilization of a wide range of analytics methods and approaches. his research and projects have been funded by the national science foundation, national institutes of health, and other federal and state agencies and private foundations. shikha modi, mba is a graduate of uab's mba program with an emphasis on healthcare services. ms. modi has a bachelor's degree in biology from the university of north alabama. ms. modi's current research projects include health information technology and its impact on outcomes, health information exchange and provider performance, health information exchange and patient and population health, the intersection of health informatics, healthcare quality and safety, and healthcare simulation, patient experience evaluation at dermatology clinics, and identifying and mitigating bias in artificial intelligence systems. benjamin schooley, mba, phd is associate professor of health it in the college of engineering and computing, university of south carolina, where he also serves as research director at the health information technology consortium. his research expertise includes human-computer interaction, health informatics, and human factors in the design and application of software systems. as a design scientist, his applied and field research in health, wellness, and social-benefit initiatives has been funded by the national science foundation, national institutes of health, the centers for medicare and medicaid services, the u.s. department of labor, the social security administration, and other federal and state agencies and private foundations. 
references
are individual differences germane to the acceptance of new information technologies? decision sciences
does health information exchange reduce unnecessary neuroimaging and improve quality of headache care in the emergency department
the triple aim: care, health, and cost
health care provider perceptions of a query-based health information exchange: barriers and benefits
perceived usefulness, perceived ease of use, and user acceptance of information technology
physicians in nonprimary care and small practices and those age and older lag in adopting electronic health record systems
nhin direct: onc keeps it simple in effort to jumpstart data exchange
public attitudes toward health information exchange: perceived benefits and concerns
the northwest public health information exchange's accomplishments in connecting a health information exchange with public health
determinants of physicians' technology acceptance for ehealth in ambulatory care
the psychology of attitudes
do health information exchanges deter repetition of medical services?
health information exchange associated with improved emergency department care through faster accessing of patient information from outside organizations
systematic review of health information exchange in primary care practices
the financial impact of health information exchange on emergency department care
hospital electronic health information exchange grew substantially in -
user perspectives on the usability of a regional health information exchange
adoption of health information exchange by emergency physicians at three urban academic medical centers
association between electronic medical record implementation and otolaryngologist productivity in the ambulatory setting
information systems and healthcare xvi: physician adoption of electronic medical records: applying the utaut model in a healthcare context
assessing the relationship between health information exchanges and public health agencies
the impact of health information exchange on health outcomes
examining the technology acceptance model using physician acceptance of telemedicine technology
toward an understanding of the behavioral intention to use an information system
a comprehensive conceptualization of post-adoptive behaviors associated with information technology enabled work systems
health information exchange and patient safety
diffusion of innovation theory
physicians' perceptions and use of a health information exchange: a pilot program in south korea
health information exchange in the ed: what do ed clinicians think? 
the effects of creating psychological ownership on physicians' acceptance of clinical information systems
physicians' potential use and preferences related to health information exchange
despite the spread of health information exchange, there is little evidence of its impact on cost, use, and quality of care
crossing the quality chasm: a new health system for the st century
health information exchange in small-to-medium sized family medicine practices: motivators, barriers, and potential facilitators of adoption
effects of health information exchange adoption on ambulatory testing rates
what affects clinicians' usage of health information exchange
usage and effect of health information exchange: a systematic review
health information exchange interventions can enhance quality and continuity of hiv care
using health information exchange to improve public health
assessing it usage: the role of prior experience
understanding information technology usage: a test of competing models
influence of experience on personal computer utilization: testing a conceptual model
interpersonal behavior
comparing individual means in the analysis of variance
bridging organizational divides in healthcare: an ecological view of health information exchange
perceptions of health information exchange in home healthcare
the association between health information exchange and measures of patient satisfaction
factors motivating and affecting health information exchange usage
an empirical analysis of the financial benefits of health information exchange in emergency departments
identifying homelessness using health information exchange data
key: cord- - i rsei authors: almomani, hesham; al-qur'an, wael title: l'ampleur de la réaction des gens aux rumeurs et aux fausses nouvelles à la lumière de la crise du virus corona date: - - journal: ann med psychol (paris) doi: . /j.amp. . . sha: doc_id: cord_uid: i rsei
résumé. context: with the global spread of the corona virus, negative effects have increased at all levels, especially in the economic and social sectors. the situation has been made worse by the spread of rumors and false information about what this virus is and how to prevent it. objective: to test how people interact with different information circulating on social networks and online platforms. methods: the data were taken from a survey conducted in among quarantined people aged to years. a questionnaire was created containing most of the rumors and false information circulated, in addition to correct information with a reliable source. the results were analyzed in the form of tables showing the proportions of supporters and opponents, expressed in numbers and percentages. results: two thousand quarantined people (mean age . ± . years) participated in the study, with a response rate of %. the analysis showed a large percentage of support for health protections against the corona virus and a broad rejection of most of the false information and rumors circulating on internet platforms. conclusion: the extent of the spread of rumors and false information decreases owing to the action of governments and the competent authorities through their official platforms as part of the mechanism for fighting the corona virus, and by drawing on current mistakes to deal with such crises in the future. 
abstract background: with the spread of the corona virus globally, the negative effects increased at all levels, especially the economic and social sectors. the situation was made worse by the spread of rumors and false information about what this virus is and ways to prevent it. objective: test how people interact with different information circulating through social media and online platforms. methods: the data was taken from a survey conducted in on quarantined people between the ages - years old. a questionnaire was created containing most of the rumors and false information circulated, in addition to the correct information with a reliable source. the results were analyzed in the form of tables showing the proportions of supporters and opponents and expressed in numbers and percentages. results: a total of quarantined people participated in the study with the mean age ( . ± . years). where the response rate is %. the analysis showed a large percentage of support for health protections against the corona virus, and a large rejection of most of the fake information and rumors circulating across the internet platforms, in addition to their solidarity within the principles of social responsibility. conclusion: the extent of the spread of rumors and false information is decreasing based on the presence of governments and the competent authorities through their official platforms within the mechanism of fighting against the corona virus, and also taking advantage of the current mistakes to be a shield in the future in dealing with such crises. with the imposition of curfew measures to combat the spread of corona virus, social media and online platforms have become the most interactive and prominent tools for social discussions remarkably during a short period and may extend for longer than expected, based on the genetic status of the corona virus and the stability of the epidemiological situation in various countries of the world. rumors of various kinds are spread, whether they include feelings of solidarity or fear, but at the beginning of the spread of this pandemic, despite the emergence of clinical evidence, many rumors spread about the drugs proposed to treat corona's disease without any medical or scientific evidence, especially in societies absent from the genetic development of the status of this virus, and the most prevalent and still the subject of trial and discussion on the medical level, is a common effectiveness of the drug hydroxychloroquine in the treatment of the corona virus, which was used in the fight against malaria, where we can classify these rumors, especially with the absence of medical evidence and evidence as false news [ ] . where one of the social studies (n = ) showed that most people circulate false information and false allegations because they are unable to determine the reliability of this information, whether from a medical or scientific point of view [ ] on the other hand, some researchers considered that the circulation of information and interaction across social media platforms was linked to mental health problems and symptoms, including feelings of fear, depression and anxiety, as a result of social isolation procedures and instructions related to the curfew and the existence of strict penalties that include prison or financial security and that constitute a significant burden under these conditions [ ] . 
often events of a historical nature are used as a platform for spreading rumors and conspiracy theories that explain the causes of these events and link them to the facts, procedures and instructions followed associated with an event. therefore, it is natural to spread such news and theories in light of the spread of corona virus, especially in the absence of scientific evidence with a reliable source about what this virus is and how to cure and control it, or study the effects at the social and economic levels in the event of the end of the disease, where many conspiracy theories emerged. since the beginning of the spread of this disease, such that this nebulizer is a lie to achieve personal interests, or that this virus is a biological weapon produced in china to control the global economy, or that the g network works to activate the virus and change it physiologically in a way that kills people [ ] . the credibility and spread of this news depends on the extent of the emergence and absence of official government platforms and its communication with the general population, and its success in hiding information that may harm the public interest of the country's policy, as decision-makers enjoy a different perspective from the rest of the people due to their rational, scientific and security foundations to serve the public interest away from feelings and emotions. also, these rumors and fabricated news affect the relationship between the influential people in the government and the general public, especially when people feel dissatisfied with the material welfare and the social effects that precede or track the spread of such news and rumors. where the ceo of snoops company confirmed that "[there are] rumors, barriers and frauds that cause real catastrophic consequences for people at risk ... it is the most severe information crisis we may ever face" [ ] . the existence and validity of these theories support historical crises, whether natural such as hurricane katrina [ ] or political ones that claim to be human-made, such as / [ ] or disease-related events [ , , ] , where emphasizing the role of storytelling in these historical events in cementing principles, concepts, and beliefs around the most commonly discussed conspiracy theories [ , ] . crisis models rely on the foundations of communication in the event of risks on understanding the perception of risks and responding to the general population in addition to the sources and references of information and news circulated in order to ensure the effectiveness of communication [ ] . this communication must be based on evidence and evidence to ensure responders' behavior in a rational and preventive manner [ ] . this is not to affect protective behaviors in the event that the risk is sought by social media pioneers and users [ ] . uncertainty and exaggeration in the news and the spread of fear phobia were linked to a decrease in the implementation of preventive measures and instructions during the / crisis [ ] . intimidation, spreading phobia, and exaggeration of the danger often occur through social media, where information is interacted with emotional information and news, and most of this news is incorrect [ ] . this spread of this information and rumors is always closely related to the number of people using these means and platforms, as one of the studies that included more than , participants showed that internet platforms were the least used as sources of information during the / pandemic [ ] . 
here, we find the opposite in our time, where the number of people using internet and social media platforms has increased dramatically during the past decade [ ]; for instance, the number of users of the twitter platform doubled in the last ten years, from million in to million in [ ]. any knowledge or information acquired from these platforms and means of communication is often incorrect in the absence of official, scientific and medical sources to confirm it. while tedros ghebreyesus, the director-general of the world health organization, warned that the world is fighting not only an epidemic but also an "infodemic", there are still no strong scientific foundations for the terminology used to describe the information and false news spreading in the fight against the corona virus; one study proposed a new model that works to produce new scientific terms for the identification of false information and misunderstanding [ ]. one study discussed the transformation of usual real life into a digital life taking place through social media during the corona crisis, and also identified the types of information and rumors spread during this crisis: firstly, false news related to the origin of the corona virus, such as the claim that eating meat is the cause of the corona virus; and secondly, false news about how the virus is contracted and can be killed, such as the claim that vinegar kills the corona virus [ ]. another common piece of false information is that the spread of the corona virus is confined to the chinese because they eat wild animals like bats and pangolins [ ]. we also mentioned earlier that the extent of the spread of rumors and false news depends on the level of trust between the official authorities and the general population, which plays an important role in people following the procedures and preventive instructions and believing everything published by these authorities. one research paper, published by the center for disease control and prevention at the beginning of the corona crisis, revealed that the responsible authorities may have known about the spread of the corona virus among people two weeks before the announcement to the public, and this worked to create a state of mistrust between the chinese people and agencies, including the early warning system [ ]. in addition to the concerns that the world health organization has about the corona virus epidemic, the combination of false information and rumors also contributes to exaggerating the epidemiological situation and the difficulty of combating it, because many users of social media are adept at tracking fake sources and competing to spread misinformation [ , ]. in contrast to the declared negatives of social media, one study that examined the procedures for social distancing, considered the first line of defense in combating the spread of the corona virus, mentioned the positive role of one internet platform, twitter, in sharing with users the instructions for social distancing, showing feelings of support for affected workers, and letting users share with each other the best ways to deal with social isolation [ ]. we also mention the positives of social media and the effective role they play, especially in natural disasters such as floods and hurricanes, when speed of communication and response is required, thus evacuating people to safer places, imposing precautionary measures and deploying the workforce to avoid material and human damage [ , , ]. 
this study analyses data from a survey conducted in may among ( ) quarantined people from different provinces between the ages of ( - ) years, with a mean age of ( . ± . ) years. the survey was anonymous, the questions were clear and direct, and none of the questions were personal or made the respondents uncomfortable. respondents came from all provinces of the country. we collected the data through a survey presented on social media. both male and female respondents contributed. it is a self-administered questionnaire that includes items and aims to test the extent of interaction with the most frequently reported news and rumors regarding ways to prevent the corona virus. the feedback from this questionnaire contributes to increasing awareness of how to deal properly with these rumors, to distinguish right from wrong, and thus to combat the spread of the disease and reduce its incidence. the questionnaire also collected information regarding age, sex, place of residence, and academic level. data were entered and analyzed using spss v . for windows. the jordanian state is divided into regions, which is the first administrative division in the kingdom. in turn, these regions are divided into governorates, distributed among these three regions. according to the department of statistics, the population of jordan had reached , people, almost % of whom are distributed over governorates, namely the capital, amman, at %, irbid at . %, zarqa at . %, mafraq at . %, balqa at . %, jarash at . %, karak at . %, tafiela at %, ma'an at . %, aqaba at %, madaba at % and ajloun at . % (dosweb, ). from the chosen sample, we find that the percentage of respondents is distributed as shown in the table. results: in general, respondents interact with most of the rumors related to ways to prevent the corona virus as false information that is fundamentally incorrect to them, such as taking cocaine and consuming alcohol ( . %), using ultraviolet lights ( . %) and the use of hand dryers ( . %). on the other hand, for the remaining rumors, the number of people who believe them is somewhat close to the number who consider them false information with unreliable references, such as spraying the skin with alcohol or chlorine at rates of ( . %) and ( . %) respectively, and using nasal spray, mouthwash and garlic every minutes at rates of ( . %) and ( . %) [table ]. we note a general consensus on the correct ways to prevent the corona virus, with very little, almost negligible, opposition to following these procedures, such as the precautions to take after receiving parcels from affected areas, opposed by no more than ( %). on the other hand, support of ( . %) and ( . %) respectively was recorded for the necessity of washing hands with soap and water on a continuous basis and for avoiding shaking hands with people while keeping a safety distance of not less than a meter, in addition to cleaning surfaces using household cleaning fluids, with a support rate of ( . %) and an opposition of ( . %) [table ]. a percentage of ( . %) ( out of ) of respondents did not circulate news without verifying its authenticity and reliable sources. regarding the impact of rumors on people's compliance with curfew instructions and laws, ( %) ( of ) indicated that rumors did not affect their compliance, while ( . %) ( out of ) find that this news affects their commitment, which creates a state of non-compliance with the laws and regulations. on the other hand, the analysis shows a rate of ( . %) ( out of ) in favor of counting the concealment of the disease in the case of infection with the coronavirus. 
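for readers interested in how the per-item supporter/opponent proportions reported in the tables above can be tabulated, the following is a minimal sketch. the item names and answer coding are illustrative placeholders; the study itself produced these tables with spss.

```python
# Minimal sketch: per-item agreement percentages from coded survey responses.
import pandas as pd

# Hypothetical responses: 1 = believes/supports the statement, 0 = rejects it
responses = pd.DataFrame({
    "cocaine_or_alcohol_prevents_virus": [0, 0, 1, 0, 0, 0, 1, 0],
    "uv_light_kills_virus":              [0, 1, 0, 0, 0, 1, 0, 0],
    "wash_hands_with_soap_and_water":    [1, 1, 1, 1, 1, 1, 0, 1],
})

summary = pd.DataFrame({
    "supporters_%": (responses.mean() * 100).round(1),   # share answering 1
    "opponents_%": ((1 - responses.mean()) * 100).round(1),
    "n": responses.count(),                               # non-missing answers
})
print(summary)
```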
most of the respondents agree that these rumors and fake news pose a danger to the local community, at a rate estimated at . % ( of ) [table ]. corona virus is a global crisis unlike previous historical crises, and it will have bad effects on the social and economic levels [ ]. as the situation worsens and the number of concerns increases, the state of suspicion will increase among the general public, and false information and rumors will spread widely [ ]; in addition, the free time created by curfews, distancing, and social closures will make the situation more anxious and misinformation more persistent and pervasive [ ], especially with the ease of finding fake news and information about the corona virus [ ]. therefore, it is necessary to work to share only accurate and reliable information, in order for governments to stand in solidarity with the general public and intensify their joint efforts to combat the corona virus with minimal damage [ ]. although technological platforms work to reduce the volume of fake information shared on social media and internet platforms, a recent study has proposed a new conceptual approach that introduces new definitions of false information, rumors and misinformation, and thus works to make the analysis of this information more accurate and to facilitate the classification of fake information for verification committees and data analysts [ , ], in addition to helping the public to think more clearly about the nature of the information circulating on these platforms and to distinguish the false from the correct [ ]. in this study, the first of its kind in terms of targeting the public and testing their interaction with the circulation of rumors and false information, the analysis among the subjects was limited to the most harmful rumors related to treatment for the corona virus, in addition to the correct preventive measures against this virus and the principles of collective social responsibility, with no focus on correlating factors such as gender, age, academic level, and location of residence, because the analysis showed that the interaction with these rumors is not related to these factors, as we found a state of great awareness among respondents. the extent of false rumors and news depends on the rapid response of governments and local authorities [ ], and on the imposition of sanctions and laws to deter people and unofficial parties from circulating and disseminating false information that creates a state of panic among citizens and a gap and a state of distrust between the official authorities and the general public; during the fukushima disaster, more than people were arrested and complaints were lodged due to the publication of false information [ ]. technology is developing in a way that makes everything easy, so it is necessary to take advantage of this technology to develop and improve the techniques used to monitor false information and rumors, to facilitate the work of investigation committees, and to prosecute the parties responsible for this misinformation, especially in times of crises and disasters, whether natural or man-made, because during these crises everyone is often tense and afraid of the fate of the crisis and its effects at a time of widespread disinformation. the corona crisis is not like other crises, and we are not sure it is the last global crisis. 
all measures and laws that have created a state of suspicion and the promise of stability for citizens must be lessons learned in the future with a focus on supporting infrastructure and developing all means of communication and rapid response in order to deal with such a crisis and others more effectively and flexibly. conflit d'intérêt : à compléter par l'auteur fake news in the time of environmental disaster: preparing framework for covid- defining misinformation, disinformation and malinformation: an urgent need for clarity during the covid- infodemic defining misinformation, disinformation and malinformation: an urgent need for clarity during the covid- infodemic a resistant strain: revealing the online grassroots rise of the antivaccination movement covid- snapshot monitoring (cosmo): monitoring knowledge, risk perceptions, preventive behaviours, and public trust in the current coronavirus outbreak an economic constraint: can we 'expand the frontier' in the fight against covid- ? working paper # covid- on twitter: bots, conspiracies, and social media activism the global grapevine: why rumors of terrorism, immigration, and trade matter whispers on the color line: rumor and race in america everyday life and everyday communication in coronavirus capitalism. triplec: communication misinformation and polarization in a high-choice media environment: how eff ective are political fact-checkers? rumors and realities: making sense of hiv/aids conspiracy narratives and contemporary legends the infodemic: fakenews in the time of c- extracting information nuggets from disaster-related messages in social media coronavirus disease : the harms of exaggerated information and non-evidence-based measures vaccinations and public concern in history: legend, rumor, and risk perception. routledge provoking genocide: a revised history of the rwandan patriotic front global health in a turbulence time: a commentary narrative text summarization legends of hurricane katrina: the right to be wrong, survivor-to-survivor storytelling, and healing fake campaign on social media against flood relief activities: cases registered, arrested fighting misinformation on social media using crowdsourced judgments of news source quality prior exposure increases perceived accuracy of fake news fighting covid- misinformation on social media: experimental evidence for a scalable accuracy nudge intervention the cdc field epidemiology manual crisis and emergency risk communication as an integrative model public perceptions, anxiety, and behaviour change in relation to the swine flu outbreak: cross sectional telephone survey conspiracy in the time of corona: automatic detection of covid- conspiracy theories in social media and the news number of monthly active twitter users worldwide from st quarter to st quarter retweeting for covid- risk perception and self-protective behavior a feasibility analysis of emergency management with cloud computing integration natural language processing to the rescue? 
extracting" situational awareness" tweets during mass emergency the spread of true and false news online risk perception and informationseeking behaviour during the / influenza a(h n )pdm pandemic in germany covid- in singapore-current experience: critical global issues that require attention and action managing the covid- pandemic in china: managing trust and accountability key: cord- -znefg r authors: ali, muhammad yousuf; bhatti, rubina title: covid- (coronavirus) pandemic: information sources channels for the public health awareness date: - - journal: asia pac j public health doi: . / sha: doc_id: cord_uid: znefg r the main purpose of this paper is to highlight the important information sources of the public health awareness used by the library and information sources in this pandemic situation. social distancing phase information professional used a different medium to connect with their patron and try to serve the best manner. the role of the information professional in health information and health literacy is very vital. information professional public health awareness information with the library patrons and the general public. in this paper, the researchers provide a brief introduction to different information channel support in information dissemination. covid- public health awareness is the most effective tool to protect this crisis. public health awareness helps and reduces the intensity of spreading rate and reduces death rate, and precautionary measures are required to control this pandemic disease. to combat this pandemic condition, the roles of a librarian and information professional are very vital in dimensions: public health awareness for prevention measures; support to research team/researchers and faculty about the latest developments and research and literature; and service to regular library users and/or information seekers. the authors explore how information about public health is sought and used in this emergency/lockdown situation. following information, channels are used by the librarians and information professionals during the pandemic of covid to facilitate public health awerness. covid- (coronavirus) is a contagious disease. mobile apps are used to educate the people to know about the earlystage diagnosis symptoms of covid- and to inform the general public about the disease. health organization, it companies, and universities worldwide have introduced mobile applications, and this will reduce the influx load to the hospital/health care center. artificial intelligence-based chatbots are also one the successful tools used to chat with the general public. this chatbot is designed in different local and international languages by developers, and one can chat / and get information about coronavirus symptoms, diagnosis, and precautionary measures. social media platforms are also one the fastest mode/medium of public health awareness, and twitter # tag information provided about what going on all over the world in the fastest mode. facebook, whatsapp, and instagram are also other renowned forums of message sharing to the public about the latest updates of the situation. patient and their attendant also engage via social media and share their experience to create awareness to the public. in addition to authentic information, some fake news and information are also shared via social media about this pandemic. such types of information create panic in public health. 
social media or alternative news create some fear and rumors about the pandemic during the lockdown period. , video-based lectures on youtube, vimeo, and dailymotion are other sources where infectious disease experts share video clips about coronavirus symptoms, cure, and possible measure to avoid this pandemic. medical staff, faculty members, researchers, health support organizations, and paramedical staff support disseminating the latest developments regarding the vaccination, diagnosis kits, and latest literature published on the topic. all the renowned databases provide free access to covid- coronavirus literature. renowned and leading publishers, that is, elsevier, oxford, wiley, bmj, nature, sage, emerald, cambridge, and others, provide free access to the latest literature on coronavirus in the fight against coronavirus. in this covid- pandemic, social distance is one of the keys to protecting ourselves. in this information age, public health awareness is key to minimize causalities, and librarian and information professional can play a vital role to disseminate the information with health care workers, society, and communities. maintaining social distance is important during the lockdown phase. these information channels play a vital role in informing and updating public health information to the general public and health care professionals. the author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. the author(s) received no financial support for the research, authorship, and/or publication of this article. muhammad yousuf ali https://orcid.org/ - - - the covid- (coronavirus) pandemic: reflections on the roles of librarians and information professionals retweeting for covid- : consensus building, information sharing, dissent, and lockdown life pandemic populism: facebook pages of alternative news media and the corona crisis-a computational content analysis information typology in coronavirus (covid- ) crisis; a commentary key: cord- - vwybku authors: jung, gyuwon; lee, hyunsoo; kim, auk; lee, uichin title: too much information: assessing privacy risks of contact trace data disclosure on people with covid- in south korea date: - - journal: front public health doi: . /fpubh. . sha: doc_id: cord_uid: vwybku introduction: with the covid- outbreak, south korea has been making contact trace data public to help people self-check if they have been in contact with a person infected with the coronavirus. despite its benefits in suppressing the spread of the virus, publicizing contact trace data raises concerns about individuals' privacy. in view of this tug-of-war between one's privacy and public safety, this work aims to deepen the understanding of privacy risks of contact trace data disclosure practices in south korea. method: in this study, publicly available contact trace data of confirmed patients were collected from seven metropolitan cities in south korea ( th jan– th apr ). then, an ordinal scale of relative privacy risk levels was introduced for evaluation, and the assessment was performed on the personal information included in the contact trace data, such as demographics, significant places, sensitive information, social relationships, and routine behaviors. in addition, variance of privacy risk levels was examined across regions and over time to check for differences in policy implementation. results: it was found that most of the contact trace data showed the gender and age of the patients. 
in addition, it disclosed significant places (home/work) ranging across different levels of privacy risks in over % of the cases. inference on sensitive information (hobby, religion) was made possible, and . % of the cases exposed the patient's social relationships. in terms of regional differences, a considerable discrepancy was found in the privacy risk for each category. despite the recent release of government guidelines on data disclosure, its effects were still limited to a few factors (e.g., workplaces, routine behaviors). discussion: privacy risk assessment showed evidence of superfluous information disclosure in the current practice. this study discusses the role of “identifiability” in contact tracing to provide new directions for minimizing disclosure of privacy infringing information. analysis of real-world data can offer potential stakeholders, such as researchers, service developers, and government officials with practical protocols/guidelines in publicizing information of patients and design implications for future systems (e.g., automatic privacy sensitivity checking) to strike a balance between one's privacy and the public benefits with data disclosure. with covid- becoming a worldwide pandemic, each country is attempting various ways to stop or slow down the spread of the virus among people, such as social distancing, preventing events that bring many people together, detecting and isolating the confirmed cases, and so on ( ) . in this situation, one of the effective measures is to conduct "contact tracing" ( , ) . contact tracing is defined as "the identification and follow-up of persons who may have come into contact with an infected person, " and involves identifying, listing, and taking follow-up action with the contacts ( ) . it plays an important role in quick isolation of infected persons to prevent potential contact with others. from a stochastic transmission model of the spread of covid- , contact tracing was shown to be effective in controlling a new outbreak in most cases and reducing the effective reproduction number ( ) . however, due to limited human resources for tracing, it could be very difficult to trace the contacts who might be potentially infected, particularly when the number of patients is skyrocketing. therefore, some countries began to proactively open the data of confirmed cases to the public or share it with medical institutions to find close contacts more efficiently. for instance, in singapore, the government discloses the places related to patients, such as residence, workplaces, and other places they had visited ( ) . in taiwan, the authorities utilize the airport immigration database combined with the national medical database to quickly determine whether the patient has visited other countries ( ) . other governments also are sharing the personal information of the patients with similar components of data, including age and gender, nationality, geographical breakdown of patients, and so on ( ) . south korea also disclosed the patients' contact trace data to the public to prevent further spread of the coronavirus. each local government pseudonymizes the patient data, which contains demographics, infection information, and travel logs, and releases it to the public. this information helps the public to self-check whether they were co-located with the confirmed patient. however, there is a potential threat in publicizing the patient's data ( ) . 
efficiently identifying potential contacts may be advantageous in terms of public safety, but revealing personal data would infringe upon the patient's privacy. most of the information disclosed could be personal data, and combining a set of data reveals additional information. privacy risks, along with online abuses or rumor-mongering based on somewhat uncertain information, may cause blame and social stigma ( , ) and raise risks to physical safety ( ). while it is important to find and isolate close contacts quickly to prevent the spread of infectious diseases, it is also critical to minimize breaches of patients' privacy. recently, the national human rights commission of korea claimed that the publicized information is unnecessarily specific and may cause privacy violations ( ). in response, the korea centers for disease control & prevention (hereinafter "kcdc") announced two guidelines ( , ), on march and april , respectively, limiting the scope and the period of data disclosure and recommending the deletion of outdated information (after days from the last contact) (see table ). table | the korean government guidelines for the scope and detail of the information to be disclosed. mar.: • personal information: information that identifies a specific person should be excluded • period: information should cover from day before the symptoms occurred to the date of quarantine • place and transportation: places and transportation should be disclosed where contacts have occurred with the confirmed cases. the detailed address of residence and workplace should not be disclosed; however, the address may be revealed if there is a risk that covid- has spread to random people in the workplace. spatial and temporal information (e.g., building, place names, and transportation) should be specified as far as possible, except where it identifies certain individuals. apr.: • disclosure period: the data should only be released for days from the date that the patient had the last contact. although a critical question about the cost-benefit tradeoffs between privacy and public safety still remains, existing studies on location and privacy have not fully reported insights from contact tracing and its underlying privacy risks. past studies on location privacy primarily focused on an individual's privacy perceptions and the potential risks of leaking current locations to diverse social media ( ) ( ) ( ). however, these prior works concerned real-time sharing of a single location, rather than sharing one's full mobility data spanning several days to a week or more, as in contact tracing. another key difference is that privacy risks regarding contact tracing under special occasions, such as covid- , are relatively unaddressed in the literature. it is timely to explore this issue, as public disclosure of contact tracing data under covid- raises questions about data sovereignty and the privacy of patients. thus, the present study assessed privacy risks in the contact trace data disclosed in south korea. specifically, the study first examined what kind of personal information is contained in the data and how much exposure or inference can be made from it. it then examined how much privacy risk levels differ according to region and time of disclosure.
while no study to the researchers' knowledge has assessed privacy risks on public disclosure of contact tracing data related to covid- , the present study first analyzes the real-world data in south korea and provides possible directions for privacy-preserving data disclosure and presents several policy and technical implications that can possibly lower privacy risks. this section describes the data collection and analysis process used to evaluate privacy issues resulting from data disclosure. to assess potential privacy concerns through real-world examples, the contact trace data of confirmed patients was collected. the data listing confirmed cases date-wise from january to april were released by seven major metropolitan cities in south korea. the contact trace data was collected from various publicly accessible online websites, such as the official website and social media sites of the local government, and its press releases and briefing information. since the data was released to the public by the government and any specific individual cannot be identified with it, there is no critical ethical concern for data analyses. as shown in table , the released contact trace data included ( ) the patient's demographics (i.e., nationality, gender, age, and residence), ( ) infection information (i.e., infection route and confirmation date), and ( ) travel log in time series (e.g., transport modes and visited places). the data is processed by the contact trace officer before it is released online (i.e., excluding places which the patient visited but no contact was made), hence the government may possess more information than the public can access. this study covered seven out of eight metropolitan cities in south korea, namely, seoul, incheon, sejong, daejeon, gwangju, ulsan, and busan. the city of daegu was excluded from the data collection process because it did not disclose patient information since the massive contagion outbreak prevented contact tracing. as the guidelines set by the kcdc recommend the deletion of the outdated information (after days from the last contact), all the sample cases of disclosed patient data mentioned in this study were anonymized by the researchers. for instance, the address and name of a place (e.g., building name) were converted into four character long random strings (e.g., g a -gu, d zdong, bqt building). similarly, the identification number of the patient was also anonymized (e.g., #w p). in this study, a codebook was introduced to evaluate the level of privacy risks. the codebook has an ordinal scale of privacy risk levels and the scale quantifies relative risks from five major categories: demographics (nationality, gender, age), significant places (residence, workplace), sensitive information (hobby, religion, accommodation), social relationships, and routine behavior. the details of the codebook generation are as follows: the collected data were manually examined to evaluate the level of privacy risks. the following types of information were identified: demographics, location information (e.g., significant places and behavioral routines), and social relationships. affinity diagramming on contact trace data was performed to iteratively build a coding scheme ( ) . as a result, the manual examination generated five categories with eight sub-categories, as described in table . for each data category, an ordinal scale of privacy risk levels was introduced. 
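a minimal sketch of the pseudonymization step described above, which replaces place names and patient identifiers with short random strings, is shown below; the helper name, string length, and sample values are assumptions for illustration, not the tracing officers' actual procedure. the codebook described next then quantifies how much such pseudonymized logs still reveal.

```python
import random
import string

_pseudonyms: dict[str, str] = {}

def pseudonymize(name: str, length: int = 4) -> str:
    """Map a real place name or patient ID to a stable short random string.

    The same input always yields the same pseudonym within a run, so repeated
    visits to one place stay linkable without exposing its real name.
    """
    if name not in _pseudonyms:
        _pseudonyms[name] = "".join(
            random.choices(string.ascii_lowercase + string.digits, k=length)
        )
    return _pseudonyms[name]

# Hypothetical travel-log entry; the values are illustrative only.
entry = {"patient_id": "Seoul-134", "place": "Convenience store, Dong-gu branch"}
print({key: pseudonymize(value) for key, value in entry.items()})
```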
the scale quantifies the relative privacy risks of the patient's trace data; for example, a high level means that detailed information was released. the following section describes the details of each category and its associated risk levels. this codebook was used to evaluate each patient's contact trace from seven metropolitan cities. the "demographics" category included three sub-categories: nationality, gender, and age. for nationality and gender, two scoring criteria were considered: ( ) level for not containing any information for each of the two categories and ( ) level for disclosing that information (e.g., patient # sx is chinese, patient # nw is a woman). in the case of "age, " three criteria were considered: ( ) level for no age information, ( ) level for rough description (e.g., the twenties), and ( ) level for accurate information (e.g., years old, born in ). before describing the methods further, this study explains the administrative divisions in south korea since it could differ from country to country. the administrative divisions can be divided into four levels by their size: province ("do"; the whole country is composed of nine provinces), city ("si"; typically - , km ), sub-city ("gu"; typically -, km ), and district ("dong"; typically - km ) ( ) . people in south korea often use this system when they look for a place or mention a certain location. in the address system of south korea, there are two more detailed steps in describing places: streets (i.e., "ro" or "gil") and the building number. the street is lower level than the "dong, " so a "dong" may contain several "ro"s and "gil"s. the lowest level is the building number and the address provided up to this step would point to the only building throughout the country. a person's home (residence) and workplace are considered significant places. to assess the detailed location information of these places, a two-stage approach was used: ( ) direct location identification and ( ) indirect location inference by combining the breadcrumbs of visited places and transport modes. the second stage was inferring the locations of personal life using nearby places whose full addresses or names were disclosed. even if the information is limited, reasonable inference based on a travel log is possible by examining the surrounding places and transport modes. for example, there was no explicit description of a patient's home, yet the travel log said " min in total to walk from his home to a convenience store, and come back again." and the full address of the store is known (i.e., - , allakdong, dong-gu, ulsan). this log may indicate the approximate location of her house. considering a person's walking speed (e.g., km/h), the area where her home is located could be determined as described in figure . to estimate the time required to travel on foot, the average sizes of the sub-city ("gu"), district ("dong") and street ("ro" or "gil") were used. there were gu, , dong, and , street included in the total for the seven cities. given that the total size of these cities was , km , the average sizes of gu, dong, and street were calculated as . , . , and . km , respectively. for the convenience of calculation, an assumption was made that the shape of each administrative area was circular. as a result, the radius of each division was . , . , and . km for gu, dong, and street, respectively. 
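as a rough check on the circular-area approximation just described, the sketch below derives a division's radius from its average area (r = sqrt(area / π)); the counts and total area used here are placeholders, since the actual figures appear in the text above and are not reproduced.

```python
import math

def radius_km(total_area_km2: float, n_divisions: int) -> float:
    """Radius of an 'average' division, assuming each division is roughly circular."""
    avg_area = total_area_km2 / n_divisions
    return math.sqrt(avg_area / math.pi)

# Placeholder figures for the seven cities combined (not the study's real numbers).
total_area = 5000.0  # km^2
for division, count in [("gu", 70), ("dong", 1000), ("street", 20000)]:
    print(division, round(radius_km(total_area, count), 2), "km")
```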
taking the average walking speed of a person into account, this means it is reasonable to infer that a place is at the dong level (i.e., privacy level ) when it takes from to min on foot from a known location, and at the street level (i.e., privacy level ) if it takes - min. on the basis of these results, the details of a location were labeled where the address was not shown but could be inferred from a known place. for instance, in the case of patient #pr of bi c-gu, who went home from the q eg branch of the kjn convenience store (i.e., the only store of its kind in that region) on foot in min, this case was scored as level privacy risk. moreover, some places that took < min on foot were labeled as . ; in such cases the location is more specific than level , but it is still not possible to identify the exact place. some buildings carry the possibility of revealing sensitive personal information. for instance, if the travel log shows that the patient attended a church service and the church's name was disclosed, anyone who reads this could know her religion. this study mainly considered three place categories: ( ) hobbies, such as fitness clubs, dance schools, pc cafes (playing games), and karaoke (singing); ( ) religion, such as a church, cathedral, or temple; and ( ) accommodation, such as a hotel or motel. if any of these place categories were described in the travel log, that case was labeled as level ; otherwise, level was given. privacy issues might also arise when information about how one person is related to another is revealed. if the travel log indicates that two people were together at a certain time or moved together to a place, there is privacy leakage of relationships. therefore, patients' travel logs were examined to check whether they included this relationship information. for not describing such information, level was given. level was rated in the case of revealing the relationship only (e.g., patient #t in xal-gu is the mother-in-law of patient #rb in the same district). if the relationship was revealed with a location (e.g., patient # x in nuw-gu had lunch with her colleague patient #v l in the same district, at a restaurant near their office), that case was rated as level . using information about places that are repeatedly visited in a specific time window (known as behavioral routines) could make it easier to identify a person. if it is revealed that there is a place that a confirmed patient repeatedly visits at a certain time, malicious people may use this information (e.g., robbery). for this reason, it was examined whether the travel log included routine behavior. if there was a place visited more than twice at a specific time, the case was labeled as a level risk; otherwise, a level risk (or no risk at all) was assigned. in summary, the codebook levels for these categories are, in ascending order of detail: significant places — the "gu" of the building is disclosed; the "dong" of the building is disclosed, or - min on foot taken from a known location; the "ro" or "gil" of the building is disclosed, or - min on foot taken from a known location; level . , < min on foot taken from a known location; the exact location of the building is disclosed. social relationship — only the relationship is disclosed; the location and the relationship are disclosed together. routine behavior — no place that is visited repeatedly; includes places that are visited repeatedly. this study analyzes cases from seven metropolitan cities in south korea (see table ) and reports ( ) the descriptive statistics of privacy risk levels and ( ) their differences across regions and time.
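before turning to the results, the walk-time rule described above can be summarised as a small classifier; the minute cut-offs below are assumed placeholders (the study's own thresholds are stated in the preceding paragraphs but not reproduced here), and the returned labels only mirror the ordering of the codebook, not its numeric levels.

```python
def location_granularity(minutes_on_foot: float) -> str:
    """Classify how precisely a home/workplace can be inferred from the
    walking time to a place whose address is known.

    The minute bands are illustrative stand-ins for the thresholds derived
    in the text; labels follow the codebook's ordering from coarse to fine.
    """
    if minutes_on_foot < 5:       # assumed cut-off
        return "near-exact (below street level)"
    if minutes_on_foot < 20:      # assumed cut-off
        return "street ('ro'/'gil') level"
    if minutes_on_foot <= 60:     # assumed cut-off
        return "district ('dong') level"
    return "too coarse to localise"

# Example in the spirit of the convenience-store case discussed above.
print(location_granularity(15))   # street-level inference
```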
the five major categories and eight sub-categories of data types that might potentially reveal personal information (e.g., life cycle, social relationships, etc.) were coded in terms of privacy risk levels. here, a detailed description of the result as well as some noteworthy findings from the analysis of the privacy risk of the contact trace data is provided (see table ). demographics included patients' nationality, gender, and age. in reporting nationality, . % of the data do not contain patients' nationality (n = ). these cases could be assumed to be koreans. all cases of confirmed foreign expatriates disclosed their nationality, which accounted for . % (n = ) of the patients. considering that legal foreign expatriates account for only % of south korea's total population ( ) , and the number of confirmed foreign cases is a small proportion, there is a high chance of identifying an individual: it is easier to pinpoint an individual if cases from his/her nationality are relatively few. for example, there was only one confirmed case from gambia, while ∼ gambians resided in south korea. this example shows the potential for easier identification of the suspect when the size of a community is small. all cases reported patients' gender, and cases ( . %) specified the exact age or birth year of a patient (e.g., age , born in ), whereas cases ( . %) only reported the age range of a patient (e.g., the twenties). one thing to note is that age and gender are personal details that make up one's social security numbers ( digits) and collecting such data could be invasion of privacy. significant places refer to the residence and workplace of an individual. in identifying residence, over % (n = ) of the disclosed data ranging from level to level provide highly granular data, such as the district, street, and name of an apartment. with additional data, such as activity type (e.g., walking) and the time taken, it could easily be deduced that an individual lives in that narrowly defined region. only cases were labeled as level , which included the following two cases: ( ) patients from abroad with no domestic residence, and ( ) patients who had come from another city. of the disclosed data, . % (n = ) ranged from level to in the "workplace" category. one interesting fact to note was that collective infection at a workplace unavoidably revealed a patient's workplace location. for example, a collective infection case which caused about related cases occurred at a call center located in guro-gu, seoul revealed the specific building and floor of the center (e.g., "korea" building, th floor). a large fraction of cases had a level on workplace location (n = , . %). this low risk of workplace location is possibly due to the confirmed patients being jobless (e.g., older adults, teenagers, patients from abroad). another noteworthy finding is that collective infection at a workplace inevitably exposes the location and the patient's job, which the patient wished to keep private (e.g., patient #u m from tb-gu, seoul, works in the redlight district). other cases classified as "no information" usually had no related information of a workplace. some exceptional cases included the word "office, " but with no location specified (e.g., a.m.- p.m., office). the data revealed several cases of patients' regular visits to a certain place, which makes it possible to infer one's personal details-hobby, religion, and accommodation information. in the hobby category, cases (n = , . 
%) were identified from patients' regular visits to the gym, golf club, and other places for amusement or leisure activities (see table ). furthermore, religious orientations were revealed because of the collective infection that occurred through religious activities, such as group prayers (n = , . %). after mass contagion, most religious services went online, and only a few infection cases revealed religious places. it was also found that information of a short stay (e.g., a few hours) at a specific accommodation, hotel, or motel, may infringe privacy-although this constituted only a small proportion (n = , . %). along with location data, some of the patients' relationship information was also provided. with relationship data alone or combining location and relationship data, it might be possible to guess a patient's social boundaries and even infer more about personal life. thus, the category was divided into "relationship only" and "relationship and location." in "relationship only" (n = , . %), family and social relations (e.g., colleagues, friends) of a patient were identified. from the analysis, the disclosure of family relations was shown to contain the following two categories: ( ) disclosure of family information involving consecutive infection of family members (e.g., patient # dj (seoul) mother from daegu visited patient # dj's house, patient #t v (seoul) patient # dj's sister), and ( ) disclosure of information on an uninfected family member (e.g., patient #sa (seoul) patient #sa 's husband had contact with patient #x t at work and she was infected while under selfquarantine). in the first category, it was found that information about family relations was usually provided directly as family members' traces overlap and involve consecutive infections. the second category raises questions on the necessity of providing additional information about an uninfected family member. for example, information from the second case unnecessarily reveals that the patient's husband had contact with another patient who was assumed to be his colleague. considering that the patient's husband was not infected, it is difficult to say if his contact with a colleague was an essential piece of information. compared to family relations, social relations of confirmed cases generally provide activities shared together (e.g., carpool, late-night drinks at the bar). in the case of workplace relations, linkage information between patients was revealed largely through collective infection. some cases revealed additional information other than a colleague/friend relationship. for example, contact trace data of patient # f (seoul) revealed his colleague is a member of d l church, a church that was identified as the epicenter of the major outbreak in south korea after the infection of patient #f , a "super-spreader" from daegu. the local government may have judged that providing this information was necessary considering the severity of the outbreak situation. however, the question still remains as to whether it was an appropriate decision to disclose information about religion along with social relationships. "relationship and location" (n = , . %) provides information on visits to certain places that may reveal the presence of another person and lead to speculation and unwanted exposure of one's private relationship. for example, one patient's repeated visits to a motel at regular intervals may lead to speculation that he has an intimate relationship with someone. 
although excluded from our data analysis, patient #f from suwon (one of the cities in south korea) who had his traces overlapped with his sister-inlaw (patient # if) was highly criticized by the media and social network for having an affair, which turned out to be a rumor ( ) . less sensitive cases reported the location of home and workplace of a patient's family, friends, and other acquaintances. from the data, it was able to identify types of frequent activities of a patient (e.g., commuting, exercise), which extends to inference on a patient's routine behavior and lifestyle patterns (n = , . %). for example, ∼ % of the contact trace data from seoul reported regular commuting time of the patients. these pieces of information are usually provided along with the type of transportation (e.g., on foot/by car/by bus/carpool with a colleague), which enables a detailed inference on one's time schedule. data of patient #t n (seoul) showed repetitive commuting to a church and his later mobility patterns centered around the church. the patient also visited a nearby cafe several times at a similar time before the case was confirmed. this consistent pattern leads us to a plausible speculation that he is a christian who works at a church and often visits nearby places. the speculation in this study was confirmed through a news article that revealed his job, a missionary. as the high data granularity provided in this case leads to several assumptions on private information, it was found that inferred details of the patient (workplace, frequent visits, religion) could also belong to other categories, such as "significant places" and "sensitive information." key findings • demographics were observed in most cases (gender: %, age: . %) and the data on significant places (residence/workplace) showed different levels of privacy risks in over % of the cases. • some places disclosed in the data indicated sensitive information about the patient due to the characteristics of the place (e.g., pc caf 'e -the patient's hobby is playing games, church-the patient is christian). in addition, nearly half of the cases ( . %) exposed the patient's social relationships by describing information about relationships or by showing them visiting certain places with others. • around a quarter of the cases ( . %) revealed the routine behavior of the patient from places that had been visited repeatedly and frequently. the patterns that appeared in routine behavior may be an important factor in inferring the patient's lifestyle. first, variation in privacy risk levels across different regions was analyzed by comparing their average privacy levels. the analyses revealed regional differences in privacy risks for the confirmed patients. in the demographics category, four cities, seoul, busan, incheon, and ulsan, often showed the exact age of patients (e.g., years; i.e., level ), while sejong, daejeon and gwangju showed the age range (e.g., the twenties; i.e., level ). in terms of nationality, seoul disclosed the nationalities of the confirmed cases of all foreigners. despite its low proportion (∼ %) relative to the number of total cases, seoul reported a higher number of nationalities compared to other cities. it was posited that this was because of capital-specific effects, as the city has ∼ , foreigners. gwangju also reported a considerably high number of nationalities. out of the total cases, gwangju revealed nationality information of all the cases ( % disclosure). 
unlike seoul, one interesting fact to note from gwangju is that the city also reported the nationality of korean patients. currently, no specific guidelines regarding nationality disclosure have been found. as shown earlier, all cities revealed gender information of the patients, and there was no difference in this regard. in addition, a comparison of the privacy level of significant places was conducted. as shown in table , the average privacy level of residence is distributed between . (ulsan) and . (sejong). all the cities except sejong released only approximate information on a patient's residence such that more than half of the residential information released by each city was equal to or below level ("dong" level). sejong revealed the most detailed information with level on average (mostly at an apartment complex level), which is partly because of the unique characteristics of sejong, a new multifunctional administrative city with many high-rise apartment buildings. with regard to the workplace, the presence of a mass infection in the same building made the difference. important cases, such as the call center of an insurance company in guro-gu, seoul, influenced the high proportion of level cases in seoul ( . %) and incheon ( . %); same was the case with a government building of the ministry of oceans and fisheries in sejong ( . %). most of the patients in sejong work in government buildings, thereby resulting in a high ratio of level . daejeon showed a comparatively high ratio of level ( . %), despite having no case of mass infection, unlike other cities. in the "sensitive information" category, "hobby" showed a substantial proportion of cases that reported privacy level across all cities. in level , sejong reported . %, which is a markedly higher figure compared to other cities. this is interesting to note, as one patient who took a zumba class infected the other students. "religion" showed a moderately high percentage of level in an overall sense, but busan showed . % of cases that were level . collective infection occurred at a church that contributed to this relatively high level of disclosure. "accommodation" information appeared only in a small fraction of the dataset, but such visits were often suspected for cheating, as reported in the news articles ( ) . from "hobby" and "religion, " it was found that a particular incident that involved collective infection unavoidably led to a disclosure of sensitive information. "routine behavior" showed a higher average level of disclosure than "sensitive information." in this category, sejong and daejeon showed relatively high percentages of . and . %, respectively. in sejong (n = ), confirmed cases showed very similar mobility patterns, as collective infection revealed that most of the patients worked at the same government and shared the same leisure activity (i.e., zumba class). it was assumed that the unique characteristics of this newly built administrative city have also contributed to this dense infection within the community, as the population is relatively small and a large proportion of residents are government officials. despite no occurrence of collective infection, daejeon (n = ), as shown earlier, revealed the workplace of the confirmed patients. disclosed workplaces are usually research institutes or tech companies, as the city is a well-known mecca of science and technology in south korea. from the data, . 
% of workplace revelations were particularly found in seo-gu and yuseong-gu, districts dense with research institutes. inferring the patients' routine behavior was relatively easier as their workplaces were revealed and they lived in the same area. cases from these two cities demonstrate that characteristics of a city can be reflected in contact trace data and enable an indication of one's routine behavior and daily patterns. in "social relationship, " ulsan showed the highest percentage of data disclosure (level and level combined: . %), followed by gwangju (level and level combined: . %). from ulsan, it was posited that mass influx from abroad and their traces with family members may have contributed to this high percentage of privacy disclosure. the korean government announced a guideline limiting the scope and detail of the information to be disclosed on march , . as shown in table , it was analyzed how the release of the government's official guidelines influenced privacy risk levels across different regions, by comparing the average privacy levels before and after the announcement. overall, average privacy risk levels decreased for the workplace, hobby, religion, and routine behavior, whereas other items remained somewhat similar. it is notable that while detailed demographic information (i.e., nationality, gender, and age) is generally considered as sensitive information, the average privacy levels for these remained unchanged even after the announcement. in privacy risk levels in general, every region showed a similar the change in trend. however, notable regional differences were found in accommodation and relationships; as an illustration, for relationships, the average levels decreased for seoul, daejeon, and gwangju, while the levels increased for busan and sejong. these findings indicate that the announcement of government guidelines can lower risk levels. however, the effects of the government guidelines could be limited to several factors, such as workplaces and routine behaviors, and vary across regions (or local governments). key findings • differences in privacy risk levels among the cities were observed. in particular, the data from sejong revealed the most detailed information on significant places (the average privacy risk levels for residence and workplace in sejong were over level ), whereas ulsan showed a relatively high percentage of data disclosure on social relationships (i.e., . % of the confirmed patients in ulsan). • the government guidelines on data disclosure have been released recently, and the effects were limited to a few factors, such as workplaces and routine behaviors. disclosed contact trace data (e.g., "where, when, and for how long") help people to self-identify potential close contacts with people confirmed to be infected. however, location trace disclosure may pose privacy risks because a person's significant places and routine behaviors can be inferred. privacy risks are largely dependent on a person's mobility patterns, which are affected by several regional and policy factors (e.g., residence type, nearby amenities, and social distancing orders). in addition, the results showed that disclosed contact trace data in south korea often include superfluous information, such as detailed demographic information (e.g., age, gender, nationality), social relationships (e.g., parents' house), and workplace information (e.g., company name). 
disclosing such personal data of already identified persons may not be useful for contact tracing whose goal is to locate unidentified persons who may be in close contact with confirmed people. in other words, for contact tracing purposes, it would be less useful to disclose the personal profile of the confirmed person and their social relationships, such as family or acquaintances. the detailed location of the workplace could be omitted because, in most cases, it is easy to reach employees through internal communication networks; an exceptional case would be when there is a concern of potential group infection with secondary contagions. likewise, it is not necessary to reveal detailed travel information of overseas entrants (which were not reported in the main results), such as the arrival flight number and purpose/duration of foreign travels. based on the results and discussions, this subsection presents policy and technical implications for contact tracing and data disclosure. detailed guidelines are required: the scope and details of patient data disclosure should be carefully considered in the official guidelines. as shown earlier, some of the information included in the patient data in south korea could be controversial because it is not clear whether it is essential to prevent further spread of covid- . the current guidelines set by the kcdc, which are shown in table , do not provide detailed recommendations. therefore, the guideline about "information that identifies a specific person" could be interpreted differently by different contact trace officers. at the time of contact tracing, it is difficult for officials to envision how a combination of different pieces of information provides an important clue the patient's identity. to reduce the possibility of subjective interpretation, current guidelines can be augmented with the patterns of problematic disclosure, which could be documented by carefully reviewing existing cases. in this case, the codebook of this study could serve as a starting point for analyzing the patterns of problematic disclosure. for instance, one's residence and workplaces can be generally considered sensitive information. the codebook allows the assessment of privacy risk level on a patient's residence and workplaces when disclosing the patients' visited places and transport modes. in addition, for location privacy protection, privacy protection rules, such as k-anonymity can be applied. the k-anonymity ensures that k people in that region cannot be distinguished ( ) . due to public safety, however, its strict application is not feasible, yet a relaxed version of k-anonymity can be used: at least for a given region, when there are multiple confirmed cases with overlapping periods, removing identifiers (or confirmed case numbers) could be considered to further protect their location privacy. proper management of revealed data is required: given that some level of privacy risk is unavoidable due to public safety, it is important to manage the patients' data that have been opened to the public. official guidelines recommend that municipalities erase outdated data from their official websites. while scouring the dataset over several months for this research, it was noticed that contact trace data are replicated on multiple sources, ranging from official channels of municipalities (e.g., homepage, blogs, social media, and debriefing videos on youtube) to online news articles and personal sites. 
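returning to the relaxed k-anonymity idea above, a minimal sketch of such a post-processing step is shown below; the record fields and the exact-match grouping of periods (a stand-in for true interval overlap) are simplifying assumptions, not the kcdc's actual procedure.

```python
from collections import defaultdict

def drop_ids_when_overlapping(cases: list[dict], k: int = 2) -> list[dict]:
    """Relaxed k-anonymity style release: if at least k confirmed cases share
    the same region and period, remove their case identifiers before disclosure
    so that individual traces in that region/period cannot be told apart by ID."""
    groups = defaultdict(list)
    for case in cases:
        groups[(case["region"], case["period"])].append(case)

    released = []
    for group in groups.values():
        for case in group:
            record = dict(case)
            if len(group) >= k:
                record.pop("case_id", None)
            released.append(record)
    return released

# Hypothetical records; field names and values are assumptions for illustration.
cases = [
    {"case_id": "#A1", "region": "Guro-gu", "period": "week-10", "trace": "..."},
    {"case_id": "#B2", "region": "Guro-gu", "period": "week-10", "trace": "..."},
    {"case_id": "#C3", "region": "Jung-gu", "period": "week-11", "trace": "..."},
]
print(drop_ids_when_overlapping(cases))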
diversifying information access channels would be beneficial for public safety; however, the authorities should set a strict code of conduct or regulations on managing replicated contact trace data (e.g., "register before publish") to promote responsible use (e.g., removing outdated data). it's possible to automatically check privacy issues: contact tracers' subjective interpretation could be a source of privacy risks. one could consider an intelligent system that detects possible privacy issues from the patient data before disclosure. for example, personal data can be detected by utilizing supervised machine learning that analyzes semantic, structural, and lexical properties of the data ( ) or by estimating privacy risks with visual analytic tools based on k-anonymity and l-diversity models ( ). if a system utilizes a metric for quantifying the privacy threat and evaluation model as proposed in the previous study ( ) , the system could not only detect potential issues but also obscure the data automatically until it meets a certain privacy level. however, these automatic approaches should be considered with care because they may hide essential contact trace information that needs to be released for public safety. unified management of contact tracing data could be introduced: decentralized management of contact trace data in each municipality makes it difficult to examine privacy risks and manage data replication. in addition, the quality of user interfaces varies widely across different regions. introducing a unified system that manages and visualizes the contact trace data across all regions would be beneficial. of course, there is a concern of a single point of failure, yet this issue can be overcome by introducing decentralized server systems with cloud computing. to promote responsible replication and management of patient data, one can implement a "register before publish" policy. moreover, an information system can help to manage the people who reprocess the patient data officially provided by the local government and deliver it to the public via news articles. this system should have the ability to ( ) authorize data usage, ( ) track in which article the data is being used, and ( ) delete the data automatically when it is outdated. the system could also provide a built-in sharing feature as in youtube's video embedding. youtube allows users to add a video to their websites, social network sites, and blogs by embedding the video to the sites, while any modification or deletion of the original video on youtube is also reflected in the embedded video ( ) . a similar mechanism can also be applied to the system. mobile technologies for contact tracing can be alternatively considered: mobile technologies could be utilized to avoid privacy concerns from public disclosure ( , ) . short-range wireless communications (bluetooth) can be used to automatically detect close contacts by keeping periodic scanning results of nearby wireless devices [e.g., tracetogether ( ) and apple/google's app ( , ) ]. a confirmed user can now publish its anonymized bluetooth id, which helps other people to check whether they are in close contact with the patient. this approach certainly helps protect user privacy because location information is not explicitly shared. however, there are major concerns about its assumption: a majority of people voluntarily need to install mobile applications. 
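the bluetooth-based matching just described can be illustrated with a deliberately simplified sketch; it is not the actual tracetogether or apple/google protocol, and the identifiers and set-intersection check below are assumptions that only convey the idea of matching locally stored scan records against published case ids on the user's own device.

```python
def check_exposure(scanned_ids: set[str], published_case_ids: set[str]) -> bool:
    """Return True if any anonymous ID this phone has scanned over Bluetooth
    matches an ID published for a confirmed patient; matching stays on-device."""
    return bool(scanned_ids & published_case_ids)

# Hypothetical rotating identifiers seen by one user's phone during the week.
my_scans = {"a91f", "77c2", "d04b"}
# Hypothetical identifiers released for confirmed cases.
confirmed = {"d04b", "e115"}

print(check_exposure(my_scans, confirmed))   # True -> possible close contact
```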
there should be further studies on how to consider multiple contact tracing methods along with traditional methods of public disclosure. with the outbreak of covid- , as mentioned in the introduction, several countries have been disclosing contact trace data. although this paper presents the privacy risks of contact tracing practices, the results should be carefully interpreted, given the limitations of the study. first, this work is focused on south korea and the results may not be generalizable to other nations due to policy differences. however, our methodologies and insights could still be applied in other nations that make contact trace data public. comparing the differences in disclosure policies and privacy risk levels would be an interesting direction for future work, as slight differences in disclosure exist. for instance, the hong kong government reveals the patient's information in an interactive map dashboard that showed not only the demographics but also the full address of both residential and non-residential places that the patient had visited ( ) . the singapore government also released detailed patient information, such as nationality, visited sites, and infection sources ( ). aggressive contact tracing and data disclosure were considered effective methods for suppressing the spread of a virus. while there is an ongoing dispute between promoting public safety and protecting personal privacy, there is a growing consensus that a reasonable level of personal privacy needs to be sacrificed for public safety, as shown in a recent survey ( ) . for all these cases, our policy and technical implications could help lower privacy risks and yet allow governments to effectively conduct contract tracing. in future studies, researchers could compare the differences between governmental policies of open access to contact trace data and the opinions from the public among these countries to set international guidelines on data disclosure in pandemic situations. next, there are privacy issues that remain to be quantified; for example, revealing foreign travel logs, underlying medical conditions, and even part of a patient's name (i.e., the last name of the patient). place log information may include hospital visits that are not related to covid- ; this could reveal a patient's underlying health or personal conditions (e.g., urology, dermatology, and cosmetic surgery). therefore, this study should be expanded to evaluate diverse privacy-violating elements. it is also necessary to study the media's disclosure patterns of patient information. in some cases, the media provided more specific data than the government through an exclusive report. recently in south korea, new media publicized a patient's sexual orientation by investigating visited places (e.g., specific types of bars) or workplace/social information (e.g., infected healthcare workers). therefore, one could compare the disclosed data from the local government with that from the media to evaluate how much further privacy leakage would occur through the news media. this work mainly focused on analyzing the officially disclosed patient data, nevertheless, it is also important to find out what people (both patients and the public) really think about that data. opinions on sharing my data as opposed to someone else's may differ ( ) , and the perception of risk of information disclosure could be influenced by the consequent results of both benefits and risks ( ) . 
thus, researchers could possibly find an optimal level where personal privacy and public benefit are well-balanced. all datasets presented in this study are included in the article/ supplementary material. feasibility of controlling covid- outbreaks by isolation of cases and contacts contact tracing during an outbreak of ebola virus disease available online at response to covid- in taiwan: big data analytics, new technology, and proactive testing how coronavirus is eroding privacy south korea is reporting intimate details of covid- cases: has it helped? fear and stigma: the epidemic within the sars outbreak defect: issues in the anthropology of public health. stigma and global health: developing a research agenda privacy: are south korea's alerts too revealing? ( ) national human rights commission of korea. nhrck chairperson's statement on excessive disclosure of private information available online at division of risk assessment and international cooperation, kcdc. press release-updates on covid- in korea (as of march) division of risk assessment and international cooperation, kcdc. press release-the updates on covid- in korea as of may sharing location in online social networks location disclosure to social relations: why, when, & what people want to share rethinking location sharing: exploring the implications of social-driven vs. purpose-driven location sharing available online at using thematic analysis in psychology localness of location-based knowledge sharing: a study of naver kin ?here? yonhap news agency. no. of foreign rresidents in s. korea hits record . mln in south korea's tracking of covid- patients raises privacy concerns using data visualization technique to detect sensitive information re-identification problem of real open dataset private data discovery for privacy compliance in collaborative environments the metric model for personal information disclosure available online at: https:// support.google.com/youtube/answer/ ?hl=en mobile phone data and covid- : missing an opportunity? arxiv covid- contact tracing and data protection can go together singapore says it will make its contact tracing tech freely available to developers available online at available online at latest situation of coronavirus disease (covid- ) in hong kong americans rank halting covid- spread over medical privacy less than half in singapore willing to share covid- results with contact tracing tech teenagers' perceptions of online privacy and coping behaviors: a risk-benefit appraisal approach gj and hl collaboratively analyzed the dataset and wrote the main texts (i.e., introduction, results, discussion). ak actively guided the design of the study, helped data analyses/visualizations, and wrote the background and summary sections. ul supervised the overall research, provided detailed feedback for data analyses and paper organization, and reviewed the entire manuscript. all authors contributed to the article and approved the submitted version. the authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.copyright © jung, lee, kim and lee. this is an open-access article distributed under the terms of the creative commons attribution license (cc by). the use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. 
no use, distribution or reproduction is permitted which does not comply with these terms. key: cord- - jn agir authors: shahzad, arfan; hassan, rohail; aremu, adejare yusuff; hussain, arsalan; lodhi, rab nawaz title: effects of covid- in e-learning on higher education institution students: the group comparison between male and female date: - - journal: qual quant doi: . /s - - -z sha: doc_id: cord_uid: jn agir in response to the emerging and ever solution to the covid- outbreak. this study proposes a theoretical framework based on literature and model to determined e-learning portal success. the study compared males and females to e-learning portal usage. the study objective is to check the difference between male and female e-learning portals’ accessibility among the students’ perspective. the study included service quality, system quality, information quality, user satisfaction, system use, and e-learning portal success. the empirical data of students participated from the different universities of malaysia through google surveys analyzed using the partial least squares structural equation modelling. the study further divided the full model into two domains, which are female and male. in the male model, information quality and system quality have direct relationships with user satisfaction. information quality also supported the relationship with system use. at the same time, there is a positive relationship between user satisfaction and e-learning portals. likewise, in the female model, e-service quality and information quality both are supported by system use and user satisfaction. similarly, system quality has a positive relationship with user satisfaction, and user satisfaction has a positive relationship with e-learning portals. the study will be further helpful for the malaysian universities policy-makers such as top management, ministry of higher education, malaysian universities union in designing the policies and programs on e-learning portal success in the country. the findings of the study reveal that males and females have a different level of in terms of usage of towards e-learning portals in malaysian universities. in the st century, at the end of in wuhan, the high technology business hubs of china experience an epidemic of an entirely distinctive coronavirus appeared that had killed a few thousand chinese within the fifty days of spreads and thousands of other citizens are suffered. the novel virus was nominated as covid- novel coronavirus by the chinese scientists (shereen et al. ) . later on, in a shorter period, this covid- spread worldwide. several country's economies are severely affected due to covid- . further, the outbreak has changed the operating conditions all over the globe within a month. the consequences of a pandemic are unstoppable and uncontrollable for many industries of the world. later on, almost countries have stopped face-to-face learning; approximately a billion students' education is effected worldwide with covid- . most of the higher education system is operating through the e-learning (azzi-huck and shmis ; shahzad et al. a, b) . meanwhile, to tackle the covid- pandemic, almost all the world, and including malaysian higher education ministry, has issued the ordered to close the public school and higher education closure as an emergency measure to stop spreading the infection. technologies have changed the traditional way of education to the modern way of learning, like artificial intelligence (di vaio et al. a) . 
thus e-learning is covered under a larger term of technology-based learning through websites, learning portals, video conferencing, youtube, mobile apps, and thousand types of free available websites for blended learning tools. currently, e-learning is enhancing students' knowledge, even the academic staff and professional and industry people skills through the internet (adams et al. ; chopra et al. ). most of the higher education universities are providing online courses for their students within and off campuses. in malaysia, the government is providing many resources to higher education. based on the news reports, the malaysian universities, colleges, polytechnics are using massive open online courses (moocs). the growth of the online education market is expected . % annually over the forecast period, - . with the massive growth of the internet, maybe university teaching and learning models will be changed in to years. thus, it based on male and female students on the use of the e-learning portal. this study focuses on comparisons between male and female counterparts on e-learning portal usage among university students during the covid- period. globally, due to covid- outbreak universities closed and lockdown, most teachers and students are happy by the move online education. the faculty members of worldrenowned universities have begun to get online instructor certifications to deliver online teaching to their students. at the same time, faculty and staff members are learning how to use online learning platforms. previous, they are using only the delivery through face-toface teaching. however, the shift to online mode has raised many queries on the quality of education (sahu ) . furthermore, the quality of education and excellent infrastructures such as computers and it modern equipment reception are now in massive demand and universities are changing their teaching models with the use of intellectual capital (alvino et al. ; di vaio et al. b ). thus, an unexpected shift from face-to-face learning to online, there are few difficulties faced by students and lectures. moreover, most of the countries significant issues with technological infrastructure in rural areas; thus, the standard of online education may be a critical issue that needs essential focus. therefore, based on the above-said issues, this study tries to investigate the impact of male and female students on the use of the e-learning portal. in previous literature, the d&m model is tested on the overall population, like banks and other financial sectors. in the current study, the whole population is divided into males and female categories to hoped different theoretical contributions by having a division of the population. consequently, universities in malaysia are offering an online course to the students on campus and off-campus. the current study focuses on the male and female user satisfaction and e-learning system use toward the e-learning portal success of the malaysian universities. based on many researchers (cidral et al. ; selvaraj ) claimed that user satisfaction and e-learning system use have a significant impact on success. this study will conduct on the students how are enrolled with malaysian universities and using the e-learning portal for their learning. based on the argument so far, there is a still gap in literature the e-learning system among universities after the spread of the covid- outbreak on higher education closure. 
the purpose of the present study to investigate the effect of information quality, system quality, and service quality toward user satisfaction and e-learning system use impact on the e-learning portal success. therefore, the study focuses on group comparison between male and female students on the e-learning portal uses. this paper contains five major sections. section briefly described the compiling review of previous literature that how the learning curve has shifted towards online portals by using delone and mclean model. in sect. , a description of the research method and the overall process of data collection and analysis were the main focus. section relates to the multi-group analysis (mga) and the interpretation of the result. lastly, sect. concluded with a holistic discussion related to the male and female groups of the study. toward the end of february, as alerts sounded on the increasing spread of the covid- infection, the world bank built up a multi-sectoral worldwide task force team to help nation reaction and adapting measures. at that point, just china and some other influenced nations were upholding social distance through the closure of schools. in the meantime, following fourteen days after the fact, countries have closed schools impacting almost a billion students across the world that have experience closures of their schools for the period (azzi-huck and shmis ). in this light, the covid- pandemic has forced the universities to close face-to-face education and send students home. this force the universities to introduce courses through online portals. also, education industries are adopting the technologies available such as digital video conferencing platforms like zoom, microsoft platform, and webex blackboard and google classroom (larry ). therefore, this will be enhancing e-learning globally (chen ; yengin et al. ; larry ) . therefore, the current study concentrating on the comparison between male and female on e-learning. in this light, applying remote learning and education resources to mitigate the loss of learning: in web-based technology, electron learning is well-knowns as well as the earliest application (azhari and ming ) . in today the e-learning is getting very popular worldwide. e-learning is described as the delivery of learning through technology and the internet (gros et al. ; hong et al. ; aljawarneh ) . almost all the universities and colleges have developed the e-learning portal of their students and faculties (moore et al. ). in the st century, the e-learning creates a more significant impact on all types of the student, much as the part-time and full-time or distance learning student in the higher education institution (azhari and ming ) . nowadays, the majority of the postgraduate students are registered as a part-time student, because they are working in the companies. e-learning helps them a lot because of their time constrain. the advancement in e-learning has been started through massive open online courses (moocs) for students, society, and the industry as well since (calisir et al. ; margaryan et al. ) . moocs are recognized as a significant development in higher education million of the peoples and student are taking the benefits and uplifting the existing skill (gupta and gupta ) . moreover, in recent decades, several malaysian universities have adopted the e-learning portals (hussin et al. ; paechter and maier ) . 
based on the research of azhari and ming ( ) highlighted several issues related to the learning management system (lms) of the malaysian universities such as the lack of trained lectures, slow down of the internet speed, wifi coverage, infrastructure, the interface of design, quality of content, system use and students' adoption. in the present research, the comparison between male and female students is measured based on e-learning portal success. meanwhile, the researchers will find out the importance of e-learning tools' success in terms of male and female malaysian student perspectives. d & m model iss success model has gained significant attention from researchers in the field of information systems. in , this model was initially developed by delone and mclean to measure the dependent construct of is success (delone and mclean ) was primarily based on following three aspects: (shannon and weaver ) study on communications, taxonomy of (mason ) measuring information output and research work on information system during that period. there are three levels of communication per (shannon and weaver ) first level: technical: (accuracy of information system) second level: semantic (success of right information conveyed to the right receiver) third level: effectiveness (influence of information on the receiver). the information success model ( ) has discussed the six dimensions, such as information quality, system quality, system use, user satisfaction, and organizational impact. after one decade, the author modified the original model by adding service quality dimension and the end replaced the individual impact and organizational impact with the net benefits. figure showed the original d&m model having six interdependent dimensions (delone and mclean ; mohammadi ) . in this model, the "system quality" construct depicts "technical success." in contrast, the "information quality" variable demonstrate "semantic success," while the other four elements "use," "user satisfaction," "individual impact," and "organizational impact" show "effectiveness success." therefore, this study focus on male students' comparison with female students on the e-learning portal. as time passed, the researcher used d&m model to check the success of the information communication system. many studies suggested to add the new dimensions of d&m model, and some have recommended to include the such as "service quality," "workgroup impact," "consultant/vendor quality" in the d & m model (gable et al. ; ifinedo ) . while the few researchers criticize d & m model "use" and "user satisfaction," dimensions. many studies have changed the number of aspects according to their context, but no study considers organization capabilities mediating role in the relation between alone impact and organizational impact. in this light, a comparison of males and females among university students to determine the e-learning portal. the study of ifinedo ( ) highlights the importance of the system quality creates a great impact on the e-learning portal. the system quality of the e-learning will generate query results more quickly. moreover, system quality will increase the interest of the enduser. also, the user-friendly interface and modern graphical interface increase the level of user satisfaction. the study of petter et al. ( ) highlighted that service provider should adopt the new changes and modify time to time the system. 
also, the system quality construct has been used in the context of the e-learning system, which depicts whether the user is satisfied with the quality of the e-learning portal. "perceived ease of use" is the most used measure of "system quality" (davis, ) . furthermore, sq has been identified as ease of learning, ease of use, the convenience of access, the usefulness of system features, system complexity, system features, and response time of information system (beheshti and beheshti ) . furthermore, as suggested by sedera et al. ( ) , sq means ease of learning, efficient, consistent, modifiable, efficient, personalizable, meets the requirements, ease of use, and reliable. in addition, (delone and mclean ) acknowledged system quality by measures: ease of use, availability, flexibility, reliability, usefulness, and response time. there are certain modifications in standards of system quality that occurs over time. however, some measures remain the same and consistently being applied and validated, which are as follows: the luxury of use, the comfort of learning, reliability, personalizabality, reply-time, availability, system interactivity, and system security. moreover, the study concentrates on the male and female comparison. the iq depicts the precision as well as the accuracy of the information provided by the e-learning system; timeliness is another important indicator of information quality that information should be generated within time and latest. so, that higher management can make quick decisions, sufficiency is another characteristic of information quality that it should not be insufficient and must contain all information required to the user. understandability is a very effective characteristic of the iq construct that it should be easy to understand and should not be complex that difficult to grasp. conciseness is another vital part of information quality produced by the e-learning system, system in any organization (petter et al. ; cai and zhu ; chiang et al. ) . additionally, information quality demonstrates the output characteristics of the information system that it is proving the timely information to all the departments of the organization; the information should be relevant to the particular user or the department. the required information is available at the right time to the right person; the data' provided by the information system should be understandable to the users (ifinedo, ; muda and erlina ; pham et al. cr ) . therefore, the comparison of male and female students on the e-learning portal is the target of this study. furthermore, the main purpose of information quality is to provide users online knowledge with correct relevant information on / . so, it must be considered active for the e-learning portal success. information quality in past literature considered being part of user satisfaction measurement. it is not treated as a separate variable further argued that information quality measures vary depending upon the type of information system success going to be evaluated. however, there are consistent measures of information quality for e-learning success as follows: relevance, usefulness, understandability, accuracy, reliability, currency, completeness, timeliness. based on the comparison of male and female university students. service quality refers to "responsiveness," which is a very important indicator. it means how efficiently the technical department responds to the queries of users. 
also, empathy is another characteristic of service quality of how attentively they are helping the users (petter et al. ; haryaka et al. ; xu and du ) . moreover, in the d&m model, service quality was neglected and updated in d&m ( ). is success model. many items have been stated to measure e-learning system service quality but mostly cited instrument developed by authors (parasuraman et al. ; parasuraman et al. ; parasuraman et al. ) . initially, ten dimensions were used to measure service quality, which later transformed into five named; tangibles, reliability, responsibility, assurance, and empathy. this updated model of d&m model found very useful for evaluating the success of different types of technologies related application. most of the studies used this model of an information system, erp system success, e-procurement application, e-government application on the user perspective, e-banking application use success, and also several other online application successes business (hsu et al. ; almarashdeh ; aparicio et al. ; ojo ; cidral et al. ; aparicio et al. ). e-learning involves the usage of modern technology to impart learning; thus, the present study needs to investigate how the end-user accepts much e-learning portal success. therefore, the present study highlighted the comparison between male and female students on the e-learning portal. thus, fig. has shown the research framework of the present study and hypothesis. h e-service quality has a significant positive impact between males and females on the system use. h e-service quality has a significant positive impact between males and females on user satisfaction. based on previous research, the current study has developed a survey instrument. the questionnaire was adopted/adapted and reworded in the form of e-learning. all items used a point likert scale ranging from to from "strongly disagree" to "strongly agree." the questionnaire items of system quality, information quality, system usage, satisfaction user was adopted from (mcgill, hobbs and klobas, ; rai, lang and welker, ) . the e-learning portal success survey instrument items were adopted from (freeze et al. ) . all items of e-learning portal success were used a -point likert scale ranging from -"poor," -"fair," -"satisfactory," -"very good" to -" excellent." the research used a quantitative approach; the survey was conducted through the google form; the links were shared with the student through the whatsapp group of lecturers. all the malaysian university student is participating in the e-learning portal survey. the convenience sampling technique was used. the study employed a cross-sectional survey method. in table , the students of undergraduate ( . %) and postgraduate ( . %) have participated in the online survey conducted from august to september . in table , in the demographic analysis, the 'gender' factor showed that female participants have a more response rate of . % as compared to . % who were male. in malaysian universities is more enrolment of female as compared to male. regarding the 'age group' of respondents, data revealed that the majority was having the age-group of respondents - years ( . %). also, regarding the "experience using the e-learning portal," data showed that most of the students who participated in the survey they have experienced using the e-learning portal is more than two years. to achieve the research objectives, the study has employed partial least squares (pls) version . . to facilitate data analysis. 
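as a rough illustration of this preparation step (not the authors' actual code or instrument), the exported google form responses could be coded and split into the two comparison groups along the following lines; the file name, column prefixes, and gender coding are assumptions made purely for this sketch:

```python
import pandas as pd

# Hypothetical export of the survey responses; the file name and column
# names are illustrative only, not the study's actual instrument.
df = pd.read_csv("elearning_survey.csv")

# Likert items are assumed to be coded 1 ("strongly disagree") to 5
# ("strongly agree"); e-learning portal success items 1 ("poor") to 5 ("excellent").
item_prefixes = ("ESQ", "IQ", "SQ", "SU", "US", "ELPS")
items = [c for c in df.columns if c.startswith(item_prefixes)]
df[items] = df[items].apply(pd.to_numeric, errors="coerce")

# Drop incomplete responses and split the sample into the two groups
# that the multi-group analysis compares.
df = df.dropna(subset=items + ["gender"])
groups = {g: d for g, d in df.groupby("gender")}
print({g: len(d) for g, d in groups.items()})
```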
based on the context of inferential analyses, partial least squares-structural equation modelling (pls-sem) application has been applied in several disciplines (hair et al. ). these developments contribute to the growth of pls-sem that is generally used as a research instrument in the field of management information systems as well as the social sciences (hair et al. ) . also, confirm the importance of pls's ability to analyze variables in complex models, simultaneously. smart pls evaluates two models, mainly, which are measurement and structural (hair et al. ). internal consistency mentions the degree to which all indicators are different then each other (sub) scale is evaluating an equivalent concept (hair et al. ) . inline thereupon, the composite reliability score value must be higher than . , and the ave score value to quite . (hair et al. (hair et al. , . therefore, it explained in table below, all the constructs included in the present study have ave, and composite reliability (cr) is above the criteria above said, which may be a suggestion of measurement model reliability. while table stated that the average variance extracted (ave) and cr values of all variables are in an appropriate range. it shows that all the constructs of the female and male groups have ave's and cr value are above the threshold values, which proved the reliability of the measurement model (figs. , , ). in the present study, also check the discriminant validity criterion, which measures the degree to which a variable is not equivalent to other constructs (hair et al. ). according to fornell & lacker criterion explained that the higher level of discriminant validity proposes that constructs are differents than the respective variables and not explaining some phenomena. the present study performed the discriminant validity taking the square root of ave of the constructs. thus, the values are higher than the correlations among latent constructs (hwang and min ) . therefore, in table , the present study of male and female and full model having no issues with discriminant validity (naala et al. ). the measure two sets of university students, male and female, the study applies an invariance test. at the same, it is vital to conduct the invariance test before conducting a multi-group analysis. thus, the purpose is to determine "whether, under different conditions of observing and studying phenomena, measurement models yield measures of the same attribute" (henseler et al. ). afterward, the study follows three steps, namely, configural invariance, compositional invariance, and equality of composite means values and variances to test measurement invariance (henseler et al. ) . firstly, since the measurement models have the same number of constructs for both groups, therefore the male and female group data is established for configural invariance (see tables , ). secondly, compositional invariance was measured engaging a permutation test. this assures that the composite scores are the same between the groups. lastly, equality of composite variances and means values of groups were assessed. thus, the difference of the composite's mean and variance ratio results, as shown in table , must fall within the % confidence interval. as indicated in table , the result reveals that each of the composite constructs has non-significant differences regarding the composite mean and variances ratio. moreover, in table , full measurement invariance is depicted for male and female. 
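the reported measurement criteria (composite reliability above .7, ave above .5, and the fornell-larcker comparison of the square root of ave with inter-construct correlations) can be illustrated with a short sketch. the loadings and the correlation below are hypothetical values, not the study's results, and the study itself used smartpls rather than hand-rolled code:

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (CR) from standardized outer loadings,
    assuming uncorrelated measurement errors."""
    loadings = np.asarray(loadings, dtype=float)
    num = np.sum(loadings) ** 2
    return num / (num + np.sum(1 - loadings ** 2))

def average_variance_extracted(loadings):
    """Average variance extracted (AVE): mean of the squared loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return np.mean(loadings ** 2)

# Hypothetical standardized loadings for two constructs (illustration only).
loadings = {
    "system_quality": [0.82, 0.79, 0.85, 0.77],
    "user_satisfaction": [0.88, 0.84, 0.81],
}
corr_between = 0.55  # hypothetical inter-construct correlation

for name, lam in loadings.items():
    cr = composite_reliability(lam)
    ave = average_variance_extracted(lam)
    # Fornell-Larcker: sqrt(AVE) should exceed the construct's correlations
    # with the other constructs.
    print(f"{name}: CR={cr:.3f} (>0.70?), AVE={ave:.3f} (>0.50?), "
          f"sqrt(AVE)={np.sqrt(ave):.3f} vs r={corr_between:.2f}")
```

with these measurement criteria and the invariance steps established, the comparison of path coefficients across the two groups can proceed.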
therefore, it is often deduced that the various model estimations of male and female groups students are not distinct in terms of content or usage of the e-learning portal. in the current study, the researchers have applied pls-mga in order to calculate the differences by using welch-satterthwait test on male and female groups (sarstedt et al. ) . furthermore, table is depicting the path coefficient and difference of the composite's means. meanwhile, several paths are found to be different in terms of male and female data sets and significantly different. the present study found the e-service quality toward the system use male and female have a difference. on the otherwise information quality toward the system use having a difference between the male and female. as per the female group assessment concerned in table , hypothesis predicted that esq is positively significantly related to system use with path coefficient of ( . **), while (p value > . ) not supporting hypothesis . thus, hypothesis , demonstrate the significant positive association among esq and us (β = . , p value < . ) hypothesis supported. hypothesis articulated that information quality is significant positively linked with system use (β = . , p value > . ), not supporting hypothesis . equally, hypothesis , demonstrates a significant positive association with iq and us (β = . , p value < . ) supporting hypothesis . furthermore, hypothesis predicted that system quality is insignificant negatively associated with system use (β = . , p value > . ), not supporting hypothesis . results of hypothesis demonstrate a significant positive association between system quality and user satisfaction (β = . , p value > . ), not supporting hypothesis . hence, hypothesis articulated that system use is significant positively linked with e-learning portal success (β = . , p value > . ), not supporting hypothesis . finally, hypothesis , demonstrates an insignificant negative association with user satisfaction and e-learning portal success (β = . , p value > . ), not supporting hypothesis . therefore, based on the above evidence, it can be concluded that female students more understand the important usage of e-learning portal success in the content of the university. this implies that female students hold the dominant positions on the usage of the e-learning portal. regarding male group assessment, table hypothesized that esq is negatively insignificantly related to system use with (β = . , p value > . ) not supporting hypothesis . next, with hypothesis , that demonstrates an insignificant negative association between esq and us (β = . , p value < . ) hypothesis supporting. while hypothesis articulated that iq is significant positively linked with system use (β = . , p value > . ), not supporting hypothesis . additionally, hypothesis , demonstrates a significant positive association with information quality and user satisfaction (β = . , p value < . ) supporting hypothesis . consequently, hypothesis revealed that system quality is insignificant negatively associated with system use (β = . , p value > . ), not supporting hypothesis . results of hypothesis demonstrate a significantly positive association between sq and us (β = . , p value > . ), not supporting hypothesis . thus, hypothesis articulated that system use is significant positively linked with e-learning portal success (β = . , p value > . ) not (table ) and (figs. , , ) . 
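for readers unfamiliar with the welch-satterthwaite test mentioned above, a minimal sketch of the parametric group-comparison logic is given below; the path coefficients, bootstrap standard errors, and group sizes are invented for illustration and are not the study's estimates:

```python
import numpy as np
from scipy import stats

def parametric_group_difference(beta1, se1, n1, beta2, se2, n2):
    """Welch-Satterthwaite style test for the difference between two
    group-specific path coefficients, as used in parametric PLS-MGA.
    beta: path coefficient, se: bootstrap standard error, n: group size."""
    t = (beta1 - beta2) / np.sqrt(se1 ** 2 + se2 ** 2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (se1 ** 2 + se2 ** 2) ** 2 / (
        se1 ** 4 / (n1 - 1) + se2 ** 4 / (n2 - 1)
    )
    p = 2 * stats.t.sf(abs(t), df)
    return t, df, p

# Hypothetical values for one path, e.g. information quality -> user satisfaction.
t, df, p = parametric_group_difference(
    beta1=0.42, se1=0.08, n1=160,   # female group (illustrative)
    beta2=0.25, se2=0.10, n2=120,   # male group (illustrative)
)
print(f"t = {t:.2f}, df = {df:.1f}, p = {p:.3f}")
```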
in regard to the hypothesis testing on the female group, six ( ) variables were, directly and indirectly, significant and supported to e-learning portal success. in the female students model, e-service quality and information quality both are supported with system use and user satisfaction. similarly, system quality has a positive relationship with user satisfaction, and user satisfaction has a positive relationship with e-learning portals. on the other hand, in male students, four ( ) variables are significant. more precisely, information quality and system quality have direct relationships with user satisfaction. information quality also supported the relationship with system use. at the same time, there is a positive relationship between user satisfaction and e-learning portals. in the full model, user satisfaction is significant in terms of information quality and service quality. consequently, user satisfaction is also significant in terms of e-learning portals. thus, e-learning portal usage is more towards female students in malaysian universities. regarding theoretical implication, the adoption studies are numerously studied phenomena both at an individual (customer) as well as organizational (management) levels. although, technology adoption studies like e-learning portals success among students is a rarely researched phenomenon, particularly in malaysian universities. the present study is considered a pioneer in terms of e-portals implementation in universities to test empirical, theoretical relationships in terms of males and females. mainly, it elaborated on the relationship between the variables such as e-learning portal success, esq, iq, sq, su has a significant impact on us. this present study proposes and examined the factors that affect the e-learning portal by extending d & m theory to incorporate elements that are more associated with the is success and e-learning among university students. additionally, d & m model is taken into account because the is tangible factors that would help universities strengthen and enhance their services. the significant contribution of the present study is the behavior of model changes if we divide the full model into parts (male and females). the proposed that d and m models provide misleading out in terms of the full model. thus, its theoretical relationship changes as the model divided into different subgroups. this confirms the effectiveness of the e-learning portals in different for male and female students of malaysian universities. the current study, in the context of malaysian as well as other countries' universities, has numerous implications related to e-learning portals technology. the study will be further helpful for the malaysian government and university policy-makers such as top management, ministry of higher education, malaysian universities union in designing the policies and programs on e-learning portal success in the country. at the same time, university top management/dean of faculty and hod of the department need to concentrate on the importance of enhancing university quality education. the study further implies that females students of malaysian universities are more focused on the e-learning portal as compare to male students. in the covid- outbreak, the operational habits have changed around the globe within a short period, and mainly the education sector affected worldwide. in the future, most of the universities will be offering online courses to the students. 
the covid- will create a long-term impact on higher education institutions. if the pandemic remains longer, it might change education from face to face to online. based on that, the quality of the e-learning system, quality of information that will create an impact on user satisfaction and system use that will lead toward the e-learning portals. it will decrease the education cost, and education will reach outside the border as well. education will be borderless in to years. this study recommends some of the suggestion to higher education institution such accessibility of the e-learning portal / , error-free information, quality of information, content quality, the robustness of the server, training module materials related to e-learning portal use for new users, updated information, well-organized data, userfriendly design of the portal, and time to time feedback from the user will increase the durability and acceptability of the e-learning portal. this study focuses on comparisons between male and female counterparts on e-learning portal usage among university students. e-learning portal services adaptability in the among the student perspective of male and female. the present study highlight difference between the male and female success-fully use on the e-learning portal in malaysia. while future studies will have to consider some variables like (technology infrastructure support, system change perceived of use, perceived usefulness, user's perception, and technical expertise), moreover, the further studies can be conducted by applying longitudinal design to empirically test the theoretical constructs by analyzing other countries' universities. in the end, top management of universities is provided with recommendations for developing an understanding of the implication of e-service quality, information quality, system quality, system use, and user satisfaction concerning the e-learning portal success. e-learning readiness among students of diverse backgrounds in a leading malaysian higher education institution sharing instructors' experience of learning management system: a technology perspective of user satisfaction in distance learning course reviewing and exploring innovative ubiquitous learning tools in higher education intellectual capital and sustainable development: a systematic literature review grit in the path to e-learning success gamification: a key determinant of massive open online course (mooc) success review of e-learning practice at the tertiary education level in malaysia managing the impact of covid- on education systems around the world: how countries are preparing, coping, and planning for recovery improving productivity and firm performance with enterprise resource planning the challenges of data quality and data quality assessment in the big data era predicting the intention to use a webbased learning system: perceived content quality, anxiety, perceived system quality, image, and the technology acceptance model linking employees'e-learning system use to their overall job outcomes: an empirical study based on the is success model the investigation of e-learning system design quality on usage intention effectiveness of e-learning portal from students' perspective: a structural equation model (sem) approach. 
interactive technology and smart education e-learning success determinants: brazilian empirical study perceived usefulness, perceived ease of use, and user acceptance of information technology information systems success: the quest for the dependent variable the delone and mclean model of information systems success: a ten-year update information systems success measurement artificial intelligence in the agri-food system: rethinking sustainable business models in the covid- scenario human resources disclosure in the eu directive / / eu perspective: a systematic literature review is success model in e-learning context based on students' perceptions enterprise systems success: a measurement model future trends in the design strategies and technological affordances of e-learning. learning, design, and technology: an international compendium of theory technology and e-learning in higher education partial least squares structural equation modeling (pls-sem) an emerging tool in business research partial least squares structural equation modeling: rigorous applications, better results and higher acceptance. long range plan user satisfaction model for e-learning using smartphone using pls path modeling in new technology research: updated guidelines. industrial management & data systems internet cognitive failure relevant to users' satisfaction with content and interface design to reflect continuance intention to use a government e-learning system assessing erp post-implementation success at the individual level: revisiting the role of service quality instructional design and e-learning: examining learners' perspective in malaysian institutions of higher learning. campus-wide information systems identifying the drivers of enterprise resource planning and assessing its impacts on supply chain performances extending the gable et al enterprise systems success measurement model: a preliminary study an empirical analysis of factors influencing internet/e-business technologies adoption by smes in canada information systems security policy compliance: an empirical study of the effects of socialisation, influence, and cognition instructional quality of massive open online courses (moocs) measuring information output: a communication systems approach user-developed applications and information systems success: a test of delone and mclean's model investigating users' perspectives on e-learning: an integration of tam and is success model e-learning, online learning, and distance learning environments: are they the same? influence of human resources to the effect of system quality and information quality on the user satisfaction of accrual-based accounting system innovation capability and firm performance relationship: a study of pls-structural equation modeling (pls-sem) validation of the delone and mclean information systems success model online or face-to-face? students' experiences and preferences in e-learning. 
the internet and higher education refinement and reassessment of the servqual scale servqual: a multiple-item scale for measuring consumer perceptions of service quality servqual: a multiple-item scale for measuring consumer perc information systems success: the quest for the independent variables does e-learning service quality influence e-learning student satisfaction and loyalty evidence from vietnam assessing the validity of is success models: an empirical test and theoretical analysis closure of universities due to coronavirus disease (covid- ): impact on education and mental health of students and academic staff multigroup analysis in partial least squares (pls) path modeling: alternative methods and empirical results success of e-learning systems in management education in chennai city-using user's satisfaction approach malaysian smes performance and the use of e-commerce: a multi-group analysis of click-and-mortar and pureplay e-retailers covid- impact on e-commerce usage: an empirical evidence from malaysian healthcare industry the mathematical theory of communication covid- infection: origin, transmission, and characteristics of human coronaviruses factors influencing users' satisfaction and loyalty to digital libraries in chinese universities e-learning success model for instructors' satisfactions in perspective of interaction and usability outcomes are we there yet? a step closer to theorizing information systems success publisher's note springer nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations acknowledgements the authors would like to thank the anonymous referees for providing helpful comments and suggestions which lead to the improvement of the paper.funding this research received no external funding. the authors declare no conflicts of interest. arfan shahzad · rohail hassan · adejare yusuff aremu · arsalan hussain · rab nawaz lodhi key: cord- - q avsy authors: fallis, don title: the epistemic threat of deepfakes date: - - journal: philos technol doi: . /s - - - sha: doc_id: cord_uid: q avsy deepfakes are realistic videos created using new machine learning techniques rather than traditional photographic means. they tend to depict people saying and doing things that they did not actually say or do. in the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an “infopocalypse” where we cannot tell what is real from what is not. several philosophers (e.g., deborah johnson, luciano floridi, regina rini) have now issued similar warnings. in this paper, i offer an analysis of why deepfakes are such a serious threat to knowledge. utilizing the account of information carrying recently developed by brian skyrms ( ), i argue that deepfakes reduce the amount of information that videos carry to viewers. i conclude by drawing some implications of this analysis for addressing the epistemic threat of deepfakes. deepfakes are realistic videos created using new machine learning (specifically, deep learning) techniques (see floridi ) . they are not produced by traditional photographic means where the light reflected from a physical object is directed by lenses and mirrors onto a photosensitive surface. deepfakes tend to depict people saying and doing things that they did not actually say or do. a high profile example is "face-swap in order to survive and flourish, people need to constantly acquire knowledge about the world. 
and since we do not have unlimited time and energy to do this, it is useful to have sources of information that we can simply trust without a lot of verifying. direct visual perception is one such source. but we cannot always be at the right place, at the right time, to see things for ourselves. in such cases, videos are often the next best thing. for example, we can find out what is going on at great distances from us by watching videos on the evening news. moreover, we make significant decisions based on the knowledge that we acquire from videos. for example, videos recorded by smart phones have led to politicians losing elections (see konstantinides ) , to police officers being fired and even prosecuted (see almukhtar et al. ) , and, most recently, to mass protests around the world (see stern ) . and we are significantly more likely to accept video evidence than other sources of information, such as testimony. thus, videos are extremely useful when collective agreement on a topic is needed (see rini ) . many videos that we watch simply show people (reporters, politicians, teachers, friends, etc.) speaking. we do not learn much directly about the world from such videos, other than that a particular person said a particular thing. in such cases, videos are just another way of receiving testimony along with face-to-face conversations, phone calls, and e-mails. but even then, videos can still provide an important epistemic benefit. if we know (a) that x is trustworthy and (b) that x said that s is the case, we can be justified in believing that s is the case (see fallis , ) . and videos can provide almost as good evidence as a face-to-face conversation that x actually said that s is the case. for example, watching lester holt on the evening news gives me good evidence that he actually said certain things. thus, if i already know that lester holt is a reliable testifier, i am justified in believing that these things are true. as floridi suggests, deepfakes seem to be interfering with our ability to acquire knowledge about the world by watching videos. but exactly how do deepfakes harm us epistemically? the main epistemic threat is that deepfakes can easily lead people to acquire false beliefs. that is, people might take deepfakes to be genuine videos and believe that what they depict actually occurred. and this epistemic cost could easily have dire practical consequences. for example, chesney and citron ( , ) ask us to imagine "a video showing an american general in afghanistan burning a koran. in a world already primed for violence, such recordings would have a powerful potential for incitement." in addition, "a convincing video in which [a well-known politician] appeared to admit to corruption, released on social media only hours before the election, could have spread like wildfire and proved impossible to debunk in time" (chesney and citron , ) . however, in addition to causing false beliefs, there are other ways that deepfakes can prevent people from acquiring knowledge. for example, even after watching a genuine video and acquiring true beliefs, one might not end up with knowledge because one's process of forming beliefs is not sufficiently reliable. as deepfakes become more prevalent, it may be epistemically irresponsible to simply believe that what is depicted in a video actually occurred. thus, even if one watches a genuine video of a well-known politician taking a bribe and comes to believe that she is corrupt, one might not know that she is. 
moreover, in addition to causing false beliefs and undermining justification for true beliefs, deepfakes can simply prevent people from acquiring true beliefs (see fallis , ) . when fake videos are widespread, people are less likely to believe that what is depicted in a video actually occurred. thus, as a result of deepfakes, people may not trust genuine videos from the legitimate news media (see chesney and citron , ; toews ) . indeed, a principal goal of media manipulation is to create uncertainty by sowing doubt about reliable sources (see oreskes and conway ; coppins ) . frighteningly, such deepfakes might be disseminated by legitimate political organizations as well as rogue actors. for instance, the democratic national committee would not pledge not to use deepfakes (see coppins ) . schauer and zeckhauser ( , - ) make a similar point with respect to misleading testimony. as kant ( kant ( [ , ) points out, in the extreme case, no one will believe anyone if everyone deceives whenever it is to her advantage. as a result of deepfakes, people may also end up in a worse epistemic state with respect to what other people believe about the events depicted in a video. and this can also have dire practical consequences. for example, even if you do not think that the video is genuine, if you believe that your friends are going to be taken in, there may be pressure for you to act as if you believe that the politician is corrupt (see mathiesen and fallis , ) . of course, deepfakes can only prevent people from acquiring true beliefs if no other reliable source for the same information is available. however, there is often no feasible alternative to video evidence that is equally reliable. direct visual perception certainly provides reliable evidence that an event has occurred. but people can only make such observations with respect to events in close physical proximity. photography and testimony allow us to learn about a much wider range of far-flung events. however, they are not nearly as reliable as direct visual perception. even before deepfake technology, still photographs could be faked with photoshop (see rini ) . and it has always been possible for people to testify convincingly about events that did not actually occur. psychological studies consistently find that people are only slightly better than chance at detecting lies (see bond and depaulo ) . finally, deepfakes can also interfere with our ability to acquire knowledge from video testimony in the same three ways. even if the person speaking to us in a video looks like someone that we know to be a trustworthy source (such as lester holt), it could be a deepfake designed to fool us. if it is a deepfake and we believe what the person says, we could easily end up with a false belief. and even if it is a genuine video of a trustworthy source saying something true, we might not be justified in believing what she says, or we might fail to believe what she says, because the video could have been a deepfake. now, realistic fake videos of events that did not actually occur are nothing new. for example, during world war two, the nazis created propaganda films depicting how well jews were treated under nazi rule (see margry ) . also, well before the advent of deepfakes, people have been worried about videos being fake. for example, many people have thought that the videos of the apollo moon landings were faked (see villard ) . so, there is certainly a sense in which deepfakes do not pose a brand new epistemic threat. 
nevertheless, deepfake technology threatens to drastically increase the number of realistic fake videos in circulation. thus, it threatens to drastically increase the associated epistemic costs. machine learning can make it possible for almost anyone to create convincing fake videos of anyone doing or saying anything. in fact, there are literally apps for that, such as fakeapp and zao. user-friendly software can be downloaded from the internet along with online tutorials on how to use that software. one no longer has to be an expert in machine learning with a lot of time and computing resources in order to create a deepfake. as johnson puts it, "we're getting to the point where we can't distinguish what's real-but then, we didn't before. what is new is the fact that it's now available to everybody, or will be ... it's destabilizing. the whole business of trust and reliability is undermined by this stuff." even though deepfakes can interfere with people acquiring knowledge, deepfake technology also has potential epistemic benefits. most notably, it might be used for educational purposes. for example, in addition to realistic videos of events that never happened, deepfake technology can also be used to create realistic videos of events that happened, but that were not actually recorded. thus, extremely accurate reenactments of historical events can be created more easily (see chesney and citron , ) . also, content that was originally recorded in one language can be more seamlessly dubbed into another language (see lu ). face-swapping can be used to allow vulnerable people to speak the truth while preserving their anonymity (see heilweil ) . finally, realistic video lectures can be created just using the sound recording of the lecture and a few still photographs of the lecturer (see griffin ) . however, it seems unlikely that such epistemic benefits will outweigh the epistemic costs of deepfake technology. in any event, my goal in this paper is not to argue that this technology will necessary lead to less knowledge overall or to establish exactly how epistemically bad it is (e.g., as compared with other things such as fake news). my goal is just to explain the epistemic threat that deepfakes pose. how do deepfakes lead to the epistemic harms described above? my thesis is that, as a result of deepfakes, videos now carry less information about the events that they depict. this claim requires some unpacking. as jonathan cohen and aaron meskin ( , - ) point out, recordings (photographs, videos, and sound recordings) carry information about the events that they depict. they do so in the much same way that thermometers carry information about the temperature and that tree rings carry information about the age of a tree. and it is because these things carry information that we can use them to learn about the world (see stegmann , - ) . just as we can find out exactly how hot it is by reading a thermometer or how old a tree is by counting its rings, we can also find out that a particular well-known politician took a bribe by watching a video of the event. moreover, it is an objective matter whether something carries information. for instance, tree rings carry information about the age of a tree regardless of whether anyone recognizes this fact. also, a video carries information about the event that it depicts even if no one ever watches it or forms a belief about that event. so, videos carry information. but what does it mean for a video to carry less information than it used to? 
(a note on the potential epistemic benefits discussed above: with deepfake technology, we can have a much more realistic version of the television show "you are there" where the venerable reporter walter cronkite covered the salem witch trials and the assassination of julius caesar. in addition, the arrival of deepfakes may spur us to engage in certain epistemically beneficial activities, such as improving our critical thinking and our information environment (see silbey and hartzog ). but this would not make deepfake technology itself epistemically beneficial, since we could do these things anyway.)

one sense in which a video might carry less information is in virtue of providing less detail. that is, a video might carry information about fewer states of affairs. for example, unlike color videos, black-and-white videos do not carry information about the colors of the objects depicted. however, this is not the sense of less information that i have in mind. deepfakes can provide just as much detail as genuine videos. indeed, they would be much easier to detect if they could not. alternatively, a video might carry less information in virtue of providing less evidence about a particular state of affairs. that is, we cannot be as confident that a state of affairs actually obtains (such as that a particular well-known politician took a bribe) as a result of watching the video depicting that state of affairs. this is the sense of less information that i have in mind. (videos can also carry information about things beyond the events that they depict, such as the skill of the videographer. cavedon-taylor ( , - ) makes the same point about photographs. however, my thesis is just that videos carry less information about the events that they depict as a result of deepfakes.)

but in order to explain how videos carrying less information leads to epistemic harm, we need a formal account of carrying information. several formal accounts of carrying information have been proposed by philosophers. on fred dretske's ( ) extremely influential account, r carries the information that s if and only if r provides a guarantee that s is true. more formally, r carries the information that s if and only if p(s | r) = 1 and p(s) < 1. for example, a thermometer reading of a particular temperature carries the information that the temperature is that value because the probability that the temperature is that value, given the reading, is 1. another popular account cashes out carrying information in terms of counterfactuals rather than in terms of objective probabilities. according to cohen and meskin ( ), r carries the information that s if and only if r would not have occurred if s were not true. for example, the thermometer reading carries the information that the temperature is a particular value because the thermometer would not display that reading if the temperature were not that value. these two accounts of information carrying can also be applied to videos. consider, for example, a video that convincingly depicts a particular well-known politician taking a bribe. on dretske's account, the video carries the information that the politician took a bribe if and only if the existence of the video guarantees that the politician actually took a bribe. on cohen and meskin's account, the video carries the information that the politician took a bribe if and only if the video would not exist if the politician had not actually taken a bribe. unfortunately, dretske's account and cohen and meskin's account rule out the possibility of information carrying coming in degrees. on their accounts, something only carries the information that s if it provides a guarantee that s is true.
in other words, something only carries the information that s if it is not possible for this thing to occur when s is false. thus, on their accounts, carrying information is an all-or-nothing affair. admittedly, dretske and cohen and meskin can say that a member of one class of things (such as videos) is less likely to carry information about some state of affairs s than a member of another class of things (such as handmade drawings). but the only way that one particular thing can carry less information about s than any other thing is for it to carry no information about s at all. and if we are trying to decide whether the politician took a bribe, what matters is how much information the particular video that we are watching carries about that state of affairs. fortunately, more recent accounts of carrying information weaken dretske's stringent requirement of providing an absolute guarantee (see stegmann , ) . the specific account of carrying information that i endorse comes from skyrms ( ) . it was developed in an attempt to understand how animal signals carry information to other animals. for example, the elaborate tails of peacocks carry information about their quality as potential mates, the alarm calls of prairie dogs carry the information that predators are in the vicinity, and the red, yellow, and black stripes of coral snakes carry the information that they are venomous. on skyrms's account, a signal r carries information about a state of affairs s whenever it distinguishes between the state of affairs where s is true and the state where s is false. that is, r carries the information that s when the likelihood of r being sent when s is true is greater than the likelihood of r being sent when s is false. more formally, r carries the information that s if and only if p(r | s) > p(r | not-s). for instance, the prairie dog's alarm call carries the information that there is a predator in the vicinity because it is more likely to occur when there is a predator in the vicinity than when there is not. unlike dretske and cohen and meskin, skyrms thinks that a signal r can carry the information that s even if r sometimes occurs when s is false. in other words, skyrms allows for the possibility of false positives. for instance, a prairie dog might mistake a seagull for a hawk and, thus, give the alarm call when there is no predator in the vicinity. in addition, one species might mimic a signal that is commonly sent by another species. for instance, after the venomous coral snake evolved its distinctive appearance to warn potential predators to stay away, the non-venomous scarlet king snake subsequently evolved to resemble the coral snake in order to free ride on this warning system (see forbes , ) . thus, seeing a snake with red, yellow, and black stripes does not guarantee that one is dealing with a venomous snake. as a result, on skyrms's account, information carrying comes in degrees. basically, the more likely it is for a signal r to be sent in the state where s is true than it is for r to be sent in the state where s is false, the more information that r carries about s. in other words, the higher the probability of a true positive relative to the probability of a false positive, the more information that a signal carries. conversely, the higher the probability of a false positive relative to the probability of a true positive is, the less information that a signal carries. we can formalize this idea using likelihood ratios. 
a signal r carries more information than a signal q about a state of affairs s if and only if p(r | s) / p(r | not-s) > p(q | s) / p(q | not-s). for example, if the peacock's tail is less likely than the sage grouse's strutting behavior to be deployed by "low-quality" individuals, the peacock's tail carries more information about his reproductive fitness than the sage grouse's strutting behavior carries about his. it should be noted that talk about how much information a signal carries can be translated into talk about the reliability of the evidence that the signal provides. namely, r carries information about s if and only if r is evidence that s is the case. also, r carries more information about s than q if and only if r is more reliable evidence that s is the case than q is. but for purposes of this paper, and following cohen and meskin ( ), i use the language of information theory to cash out the intuitive idea that photographs and videos carry information. (skyrms ( , ) gives a slightly different formulation. he says that a signal r carries the information that s if and only if p(s | r) > p(s). however, my formulation is formally equivalent and avoids certain infelicities. for instance, since the signal r occurs after the state of affairs s, it makes more sense to talk about the conditional probability of r given s than to talk about the conditional probability of s given r. i assume here that the ratios are both greater than or equal to 1. if p(r | s) / p(r | not-s) is less than 1, r carries information about not-s. i also assume here that the ratio is infinite when p(r | not-s) = 0. r carries the maximum amount of information about s if r never occurs when s is false.) the fact that information carrying comes in degrees means that different signals can carry different amounts of information. the probability of a false positive for one signal can be higher or lower than the probability of a false positive for another signal. for example, prairie dogs actually have distinct alarm calls for different predators (see slobodchikoff et al. , ). and the alarm call for a coyote in the vicinity carries more information than the alarm call for a hawk in the vicinity if the prairie dog is less likely to mistake something innocuous for a coyote than she is to mistake something innocuous for a hawk. basically, how much information is carried by a signal can vary with the specific content of the signal. the fact that information carrying comes in degrees also means that the amount of information carried by a particular signal can change over time. as the environmental situation changes, the probability of a false positive may increase or decrease. so, for example, if the number of king snake mimics increases in a particular region, the coral snake's appearance will not carry as much information about its being a venomous snake as it once did. and it is important to note that these probabilities should not simply be interpreted as observed frequencies. for example, if there is an influx of king snake mimics into a particular region, the probability of a false positive will increase even if none of the new mimics have yet been observed. another extremely important aspect of skyrms's account is that he is not measuring the amount of information carried by a signal from god's eye view. instead, skyrms is concerned with how much information is carried to the animal receiving the signal. consider, for example, the signals of the coral snake and the king snake. (a brief numerical sketch of the likelihood-ratio idea is given below, before returning to that example.)
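the following is a minimal numerical sketch of the contrast between dretske-style all-or-nothing information carrying and skyrms-style graded information carrying. all of the probabilities are invented for illustration and are not taken from the text.

```python
# illustrative sketch: all-or-nothing vs graded information carrying
# (probabilities are invented for illustration)

def carries_info_dretske(p_s_given_r, p_s):
    # dretske: r carries the information that s iff p(s | r) = 1 and p(s) < 1
    return p_s_given_r == 1.0 and p_s < 1.0

def likelihood_ratio(p_r_given_s, p_r_given_not_s):
    # skyrms-style measure: how much more likely the signal is when s is true
    # than when s is false; a higher ratio means more information carried about s
    if p_r_given_not_s == 0:
        return float("inf")  # the signal never occurs when s is false
    return p_r_given_s / p_r_given_not_s

# a perfectly reliable thermometer carries information on dretske's account...
print(carries_info_dretske(p_s_given_r=1.0, p_s=0.3))   # True
# ...but a thermometer that errs even occasionally carries none at all
print(carries_info_dretske(p_s_given_r=0.99, p_s=0.3))  # False

# on skyrms's account, information comes in degrees:
# coyote alarm call: rarely given when no coyote is around
print(likelihood_ratio(0.9, 0.02))  # 45.0
# hawk alarm call: more easily triggered by something innocuous
print(likelihood_ratio(0.9, 0.10))  # ~9.0
# so the coyote call carries more information than the hawk call
```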
as with most animal mimics, the king snake's appearance is not a perfect copy of the coral snake's appearance. as is pointed out in the handy rhyme ("red next to yellow, kill a fellow. red next to black, venom lack"), the stripes are in a different order in the two species. so, from god's eye view, the coral snake's appearance does guarantee that you are dealing with a venomous snake. however, potential predators, such as hawks and coyotes, are not able to distinguish between the red, yellow, and black stripes of the coral snake and the red, yellow, and black stripes of the king snake. thus, from their perspective, the coral snake's appearance does not guarantee that they are dealing with a venomous coral snake. as far as potential predators can tell, there is a possibility that they are dealing with a nonvenomous king snake. their inability to distinguish the coral snake's appearance from the king snake's appearance is part of what determines the probability of a false positive. (these potential predators might not have the perceptual ability to distinguish between the two patterns. or they might simply not have the time to safely make use of this ability while in the presence of a potentially deadly serpent. of course, the relative numbers of king snakes and coral snakes in the region is also part of what determines the probability of a false positive.) as a result, the coral snake's appearance carries much less information about what kind of snake it is to potential predators than it does to a human who is familiar with the aforementioned rhyme. (in other words, even though there is currently a high correlation between having red, yellow, and black stripes and being a venomous snake, it is not a robust correlation (see fallis , - ). so, the signal does not carry as much information as the observed frequencies might suggest.) even though how much information the coral snake's appearance carries is relative to the observer, it is still an objective matter. the coral snake's appearance carries a certain amount of information to anyone in a given epistemic position regardless of whether anyone in that position actually observes the snake or forms a belief about what kind of snake it is. consider an analogy. in a deterministic world, a fair coin is definitely going to come up heads or it is definitely going to come up tails. so, from god's eye view, the probability of heads is either 1 or 0. however, from the limited epistemic position of humans, the probability is 1/2. and this is an objective matter (see beebee and papineau ). (see hájek ( ) for other possible interpretations of objective probabilities, such as propensities, infinite long-run frequencies, and lewis's "best-system" chances.) the probability is 1/2 regardless of whether anyone observes the coin or forms a belief about the chances that it will land heads. finally, it is important to keep in mind that skyrms's account of carrying information is a mathematical model of a real-world phenomenon. and as edwin mansfield ( , ) points out with respect to microeconomic theory, "to be useful, a model must in general simplify and abstract from the real situation." skyrms's model is intended to capture the important structural features of the phenomenon of signals (and, as i describe in the following section, it can do the same for the phenomenon of videos). the goal is not to calculate precisely how much information is carried by particular signals, such as the prairie dog's alarm call or the peacock's tail.
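to make the receiver-relativity point concrete, the following sketch computes how much information the coral snake's stripes carry to two different observers. the numbers (in particular, each observer's chance of mistaking a mimic for the real thing) are invented for illustration.

```python
# how much information a signal carries depends on the receiver's
# ability to discriminate (all numbers invented for illustration)

def stripes_likelihood_ratio(p_mistake_mimic_for_coral):
    # s = "the snake is a venomous coral snake"
    # r = "the observer registers the coral-snake stripe pattern"
    p_r_given_s = 1.0  # a real coral snake always shows its own pattern
    # a king snake mimic is registered as the coral pattern only if the
    # observer cannot tell the two stripe orders apart
    p_r_given_not_s = p_mistake_mimic_for_coral
    if p_r_given_not_s == 0:
        return float("inf")
    return p_r_given_s / p_r_given_not_s

# a hawk or coyote that cannot distinguish the stripe orders at all
print(stripes_likelihood_ratio(p_mistake_mimic_for_coral=1.0))   # 1.0: very little information
# a human who knows the rhyme and almost never confuses the patterns
print(stripes_likelihood_ratio(p_mistake_mimic_for_coral=0.01))  # 100.0: much more information
```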
skyrms's account of information carrying can be applied to videos as well as signals. consider, for example, a video that convincingly depicts a particular well-known politician taking a bribe. the video carries the information that the politician took a bribe if and only if the probability of this video existing if the politician actually took a bribe is higher than the probability of this video existing if she did not. (a video could carry the information that the politician took a bribe even if it did not convincingly depict the event. for example, if the person in the video is clearly wearing a cardboard mask with a photograph of the politician on it, it is not a convincing depiction. even so, it is possible that such a video is somewhat more likely to be produced if the politician took a bribe than if she did not. however, since deepfakes make it difficult for people to distinguish between genuine videos and realistic fake videos, it is not clear that they have much impact on the amount of information carried by such unrealistic videos. strictly speaking, what matters is not the probability that the video exists, but the probability that the video is available to be seen. for example, even if all sorts of deepfakes have been produced, they would not decrease the amount of information videos carry to viewers if they never see the light of day (e.g., as a result of effective government censorship).) moreover, how much information the video carries depends on how much greater the former probability is relative to the latter probability. in other words, the video carries more information about the politician taking a bribe when the probability of a false positive is low compared with the probability of a true positive. thus, with skyrms's account of information carrying, we can now talk about a particular video carrying less information than another (or than it once did). in addition, it is important to emphasize that skyrms's account tells us how much information is carried to a particular viewer of the video rather than how much information is carried from god's eye view. how much information is carried to a viewer depends on the probability of a false positive, and the probability of a false positive depends on the viewer's ability to distinguish between genuine videos and fake videos. to sum up, deepfake technology now makes it easier to create convincing fake videos of anyone doing or saying anything. thus, even when a video appears to be genuine, there is now a significant probability that the depicted event did not actually occur. admittedly, the observed frequency of deepfakes, especially in the political realm, is still fairly low (see rini ). nevertheless, the probability of a false positive has increased as a result of deepfake technology. moreover, this probability will continue to increase as the technology improves and becomes more widely available. so, what appears to be a genuine video now carries less information than it once did and will likely carry even less in the future. even after the advent of deepfakes, the amount of information that a video carries is going to depend on the specific content. for example, the probability of a false positive is probably higher when it comes to a politician taking a bribe than when it comes to a politician touring a factory. even if the technology exists to easily create realistic fake videos of both, fewer people are going to be motivated to fake the latter content.
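as an aside, the effect just described can be made concrete with bayes' theorem. the prior and the likelihood ratios below are invented for illustration; they are chosen so that the resulting probabilities match the 0.9 and 0.5 figures used in the running example in the following paragraphs.

```python
# posterior probability that the depicted event occurred, given that we are
# watching a convincing video of it (all numbers invented for illustration)

def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.1  # how likely the bribe is, before seeing any video

# before deepfakes: convincing bribe videos are very hard to produce
# unless a bribe actually took place, so the likelihood ratio is high
print(posterior(prior, likelihood_ratio=81))  # ~0.9

# after deepfakes: fakes are much easier to produce, so the ratio drops
print(posterior(prior, likelihood_ratio=9))   # ~0.5
```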
but almost all videos are going to carry less information as a result of deepfake technology. we can now say precisely how deepfakes lead to the epistemic harms discussed above in section . deepfake technology increases the probability of a false positive. that is, realistic fake videos that depict events that never occurred are more likely to be produced. as a result, videos carry less information than they once did. and this can lead to the significant epistemic harms described above. consider, for example, a video that convincingly depicts a particular well-known politician taking a bribe. if videos carry less information than they used to, that means that the probability that a video with this particular content is genuine is lower than it would have been in the past. in particular, suppose that, prior to deepfakes, the probability that a video of this politician taking a bribe is genuine would have been 0.9. but after the advent of deepfakes, the probability that such a video is genuine is only 0.5. if you believe that this politician took a bribe on the basis of this video, the belief that you acquire is five times more likely to be false than it was before deepfakes. and even if your belief happens to be true, it probably does not qualify as knowledge. your belief is much less safe (see greco , ). after all, there was only a 50% chance that your belief would turn out to be true. in addition, you can suffer epistemic harm by failing to acquire true beliefs as a result of deepfake technology. even after the advent of deepfakes, videos from certain trustworthy sources can still carry a lot of information. for example, suppose that the video of the politician taking a bribe comes from a legitimate news source, such as the evening news or the new york times. thus, the video is extremely likely to be genuine. however, suppose that, even though you can tell the video comes from this source, you suspend judgment on whether the politician took a bribe because you are extremely worried about deepfakes. by suspending judgment, you avoid the risk of acquiring a false belief. but as i explain below, you still may be epistemically worse off. if the odds are sufficiently in your favor of acquiring a true belief, it can be epistemically preferable to believe even though you thereby run a small risk of acquiring a false belief (see levi ; riggs ). so, for example, when it comes to the politician taking a bribe, we might suppose that the epistemic benefit of having a true belief eight times out of ten outweighs the epistemic cost of having a false belief two times out of ten. that is, while you should suspend judgment on whether the politician took a bribe if the probability that she did is less than 0.8, you should believe that the politician took a bribe if the probability that she did is greater than 0.8. thus, if the probability that a video from this legitimate news source is genuine exceeds this threshold for belief, you would have been epistemically better off believing that the politician took a bribe instead of suspending judgment. of course, the epistemic harms that i have described so far are not just the result of deepfake technology. these harms only occurred because you also failed to proportion your belief to the evidence (see hume; locke). in the aforementioned cases, when you acquire a false belief (or a true belief that is not knowledge) from the video, you should have suspended judgment rather than believing what you saw.
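a compact numerical sketch of the threshold idea: with the 0.8 threshold and the 0.9 and 0.5 probabilities used in the running example, proportioning belief to the evidence means believing before the advent of deepfakes and suspending judgment after it. the epistemic "scores" below are one simple way of encoding a levi-style trade-off between gaining truth and avoiding error, and are invented for illustration.

```python
# believe iff the probability of truth exceeds a threshold reflecting how
# much worse a false belief is than a missed true belief (illustrative)

def should_believe(p_true, threshold=0.8):
    return p_true > threshold

def expected_epistemic_value(p_true, benefit_true=1.0, cost_false=4.0):
    # with benefit 1 and cost 4, believing is worthwhile only when
    # p_true > cost / (benefit + cost) = 0.8, matching the threshold above
    return p_true * benefit_true - (1 - p_true) * cost_false

for p in (0.9, 0.5):  # pre- and post-deepfake probabilities that the video is genuine
    print(p, should_believe(p), round(expected_epistemic_value(p), 2))
# 0.9 True  0.5   -> believing is preferable; you end up right nine times in ten
# 0.5 False -1.5  -> suspending judgment is preferable, but you gain no true belief
```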
and when you failed to acquire a true belief from the video from the legitimate news source, you should have believed what you saw rather than suspending judgment. but as i explain below, even if you do proportion your belief to the evidence, there can still be epistemic harm from deepfakes. basically, we cannot learn as much about the world if less information is carried by videos. even if you proportion your belief to the evidence, you are in a less hospitable epistemic environment when the amount of information carried by videos goes down. as a result, as i explain below, you will end up in a worse epistemic state than you would have been in without the decrease in the amount of information carried. in particular, you will end up with fewer true beliefs than you would have had otherwise. let us continue to assume (a) that deepfake technology has reduced the probability that the video of the politician taking a bribe is genuine from 0.9 to 0.5 and (b) that your threshold for belief is 0.8. thus, if you proportion your belief to the evidence, you will suspend judgment on whether the politician took a bribe. suspending judgment is epistemically preferable to believing that the politician took a bribe when there is a 50% chance that the video is a deepfake. but you are epistemically worse off than you would have been without deepfakes. prior to deepfakes, you would have believed that the politician took a bribe, and you would have been right nine times out of ten. and per our assumptions about your threshold for belief, believing that the politician took a bribe is, under these circumstances, epistemically preferable to suspending judgment. of course, it is possible that the advent of deepfakes does not decrease the amount of information carried by a video past the threshold for rational belief. for example, it might be that enough information is still carried by the video that you will believe that the politician took a bribe even after the advent of deepfakes. alternatively, it might be that you would have suspended judgment on whether the politician took a bribe even prior to deepfakes. however, even if the advent of deepfakes does not alter what you fully believe after watching this video, there can still be epistemic harm. it is possible to measure the distance of someone's credences from the whole truth (see pettigrew ). and if the amount of information carried by a video decreases, you can expect your credences to be further from the whole truth than they would have been. it might be objected that deepfake technology does not significantly reduce the amount of information that videos carry. indeed, it is true that deepfake technology may not yet significantly reduce the amount of information that videos carry. also, it is true that deepfake technology may not significantly reduce the amount of information that all videos carry. for example, some videos do not carry that much information to begin with. but as i argue in this section, deepfake technology will soon significantly reduce the amount of information that a large number of videos carry. in order to significantly reduce the amount of information that videos carry, deepfakes must be indistinguishable from genuine videos. but it might be suggested that it is not all that difficult to distinguish deepfakes from genuine videos.
for example, in order to appeal to a certain constituency, an indian politician recently appeared in a deepfake speaking a language (haryanvi, a hindi dialect) that he does not actually speak, but many viewers noticed "a brief anomaly in the mouth movement" (christopher ). however, even though existing deepfakes are not perfectly realistic, the technology is rapidly improving (see rini ; toews ). of course, even if a deepfake is indistinguishable from a genuine video based purely on a visual inspection of the image, we might still be able to distinguish it from a genuine video just because the content is implausible. for example, despite the high quality of the deepfake, very few people would think that bill hader actually morphed into tom cruise during an interview on the david letterman show (see hunt ). however, that still leaves a lot of room for convincing deepfakes. for example, it would not be all that unusual for an indian politician to speak haryanvi or for a politician to have taken a bribe. but even if it is difficult to distinguish deepfakes from genuine videos, it might be suggested that deepfake technology does not necessarily decrease the amount of information that videos carry. as noted above, deepfake technology can be used to create realistic videos of events that happened, but that were not actually recorded. in other words, it can be used to increase the number of true positives as well as to increase the number of false positives. and just as the amount of information carried by videos goes down as the probability of a false positive increases, it goes up as the probability of a true positive increases. (hume and locke make a similar point about testimony.) however, while it is a conceptual possibility, it seems very unlikely that deepfake technology will lead to a net increase in the amount of information carried by videos. more people are likely to be motivated to create videos of events that did not actually occur than to simply create accurate reenactments. moreover, empirical studies on fake news (e.g., vosoughi et al. ) suggest that false information spreads faster, and to more people, than true information. thus, even if accurate reenactments were just as prevalent as videos of events that did not actually occur, we might expect more people to come across the latter. of course, even if deepfake technology decreases the amount of information that videos carry, it might be suggested that it does not do so to a significant degree. it is not yet true that almost anyone can create a deepfake with any content whatsoever. the deepfake apps that are currently available on the internet often have fairly limited functionality (such as only allowing face-swapping into existing videos). in addition, some of the existing apps have been removed from the internet (see cole ). but just as the technology is rapidly improving in terms of quality, it is improving in terms of flexibility and is becoming more readily available. but even once deepfake technology is widely available, it will not significantly reduce the amount of information that a particular video carries if that particular content was already easy to fake. for example, even before deepfake technology, it would not have been difficult to create a convincing fake video of american soldiers burning a koran. one would just need to rent some uniforms and drive out to the desert with a video camera.
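returning to the point above about reenactments: the following sketch shows, with invented numbers, why an increase in true positives need not offset the loss of information if fakes of events that never occurred become more common even faster.

```python
# net effect on the likelihood ratio when deepfakes raise both the probability
# of true positives (accurate reenactments) and the probability of false
# positives (fakes of events that never occurred); all numbers are invented

def ratio(p_video_given_event, p_video_given_no_event):
    return p_video_given_event / p_video_given_no_event

before = ratio(p_video_given_event=0.50, p_video_given_no_event=0.01)  # 50.0
after = ratio(p_video_given_event=0.60, p_video_given_no_event=0.20)   # ~3.0

# the ratio, and so the information carried, still falls sharply
print(before, after)
```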
in addition, even without deepfake technology, it was not difficult to create convincing fake videos (aka shallowfakes) of even well-known individuals doing things that they had not actually done. for example, one can speed up a video to make someone seem violent, one can slow down a video to make someone appear to be drunk, or one can cut out parts of a video to make someone sound racist (see dupuy and ortutay ). however, without deepfake technology, it would be difficult to fake all sorts of content. for example, what can be depicted in shallowfakes of readily identifiable people or places is significantly constrained by what is already depicted in the genuine source material. prior to deepfake technology, it would have been difficult to create a convincing fake video of a well-known american soldier, such as general colin powell, burning a koran. but this is just the sort of video that one can produce with machine learning. thus, there are plenty of videos that will carry significantly less information as a result of deepfake technology. moreover, this is precisely the sort of content that chesney and citron worry could have dire practical consequences. it is dangerous if people are misled (when the video is a deepfake) or if people fail to acquire knowledge (when the video is genuine). i have argued that deepfake technology is reducing the amount of information that videos carry about the events that they depict. in addition, i have argued that, as a result, deepfake technology is interfering with our ability to acquire knowledge from videos by causing false beliefs, causing unjustified beliefs, and preventing true beliefs. my contention is that this explains why deepfakes are such a serious threat to knowledge. however, even if i am correct that deepfakes interfere with our ability to acquire knowledge from videos as a result of reducing the amount of information that they carry, it might be suggested that this is not the best explanation of why deepfakes are such a serious threat to knowledge. in this section, i consider two alternative explanations. the first appeals to the epistemic difference between photographs and handmade drawings. the second appeals to how recordings underpin our trust in testimony. i argue that these explanations can only account for the seriousness of the threat of deepfakes by appealing (at least tacitly) to the "carrying less information" explanation. not much philosophical work has been done on the epistemology of videos. however, there is a substantial literature on the epistemology of photographs. and it might be suggested that this work is applicable to the issue of deepfakes (see rini ). after all, a video is literally a sequence of still photographs taken in quick succession (typically synchronized with a sound recording). several philosophers (e.g., walton ; cohen and meskin ; walden ; cavedon-taylor ) have tried to identify the property of photographs that makes them epistemically superior to handmade drawings. it might be suggested (a) that videos are epistemically valuable because they have the same property and (b) that they are losing this property as a result of deepfakes. for example, kendall walton ( , ) famously claims that photographs are epistemically superior to handmade drawings because they are "transparent. we see the world through them." as technology advances, our vision is mediated in ever more complicated ways. we see things through telescopes, microscopes, corrective lenses, mirrors, etc.
walton argues that we also literally see the objects and events depicted in photographs. if this is correct, the same would seem to apply to videos (see cohen and meskin , ). and it might be suggested that, as a result of deepfakes, we will no longer be able to see through videos. however, it is controversial whether seeing is a property that distinguishes photographs from handmade drawings. for example, cohen and meskin ( , ) have argued that we do not literally see through photographs. also, helen yetter-chappell ( ) has recently argued that we can literally see through handmade drawings. but all of these philosophers do agree that, unlike handmade drawings, photographs provide us with a connection to objects and events that is not mediated by the cognitive processes of another human being. admittedly, a photographer does get to make a lot of decisions. she decides what to aim her camera at, she decides how to frame the shot, etc. but once the photographer opens the shutter, the features of an object are captured whether or not the photographer notices them. and providing this sort of unmediated connection to objects and events may be the property of photographs that makes them epistemically superior to handmade drawings (see walden ; cavedon-taylor ). deepfakes certainly do not provide us with a connection to objects and events that is not mediated by the cognitive processes of another human being. but the existence of deepfakes does not prevent genuine videos from providing an unmediated connection to objects and events. for example, my rearview camera still connects me to the objects behind my car in a way that is not mediated by the cognitive processes of another human being. even so, it is true that, as a result of deepfakes, videos are much less likely to provide us with an unmediated connection to objects and events. but exactly why is a connection to objects and events that is not mediated by the cognitive processes of another human being so valuable epistemically? after all, we can still acquire knowledge about an event even without an unmediated connection to it. for example, we can hear an accurate report about the event on the evening news or we can watch an accurate reenactment of it. but even though we can acquire knowledge about an event through a mediated connection, it might be suggested that such knowledge is epistemically inferior to knowledge based on an unmediated connection (see cavedon-taylor ; rini ). one possible reason for the supposed inferiority is that forming beliefs on the basis of a connection that is mediated by the cognitive processes of another human being is less reliable. as scott walden ( , ) notes, "painters or sketchers, their mentation primarily involved in the formative process, can easily add features to their pictures that have no analogues in the depicted scene … photographers may wish to do the same, but have a much harder time doing so, as their mentation is only secondarily involved in the formative process." but if that is why an unmediated connection is epistemically valuable, we are essentially appealing to the "carrying less information" explanation of why deepfakes are a serious epistemic threat. as discussed above in section , r carries less information about s than q if and only if r is less reliable evidence that s is the case than q is.
in order to provide an independent explanation, an unmediated connection to objects and events must be epistemically valuable even if it does not lead to greater reliability. another possible reason for the supposed inferiority is that forming beliefs on the basis of a connection that is mediated by the cognitive processes of another human being is less epistemically autonomous (see, e.g., locke; fricker). however, it is controversial whether epistemic autonomy has much value beyond leading to greater reliability in certain circumstances (see, e.g., zagzebski ; dellsén ). given that, it is not clear how the mere loss of epistemic autonomy could be more significant than the loss of knowledge. thus, the "carrying less information" explanation seems to identify a more serious epistemic threat from deepfakes than the "loss of epistemic autonomy" explanation. with testimony and reenactments, but not typically with deepfakes, the human being whose cognitive processes mediate our connection to an event invites us to trust them. so, it might be suggested that, as a result of deepfakes, videos are much less likely to provide us with either an unmediated connection or a mediated connection that comes with an invitation to trust. however, much the same line could be taken with respect to this more complicated suggestion as i take with respect to the simpler suggestion above. it can only account for the seriousness of the threat of deepfakes by appealing to the "carrying less information" explanation. cavedon-taylor ( , ) suggests some other possible reasons. for example, unlike testimonial knowledge, "photographically based knowledge" is a "generative source of knowledge." that is, it allows us to discover new knowledge and not just transmit existing knowledge. but deepfakes only prevent us from using videos to discover new knowledge by preventing us from acquiring knowledge from videos. rini ( ) offers a different sort of explanation for why deepfakes pose an epistemic threat. she points out that, over the past two centuries, recordings (photographs, videos, and sound recordings) have played an important role in underpinning our trust in testimony. first, a recording of an event can be used to check the accuracy of testimony regarding that event (think of the watergate tapes). second, the possibility that an event was recorded can motivate people to testify truthfully regarding that event. (think of trump warning comey to tell the truth since their conversations might have been recorded.) rini argues that deepfakes threaten to undermine the "epistemic backstop" to testimony that recordings provide. thus, while rini agrees with me that deepfake technology interferes with our ability to acquire knowledge, she is concerned about the knowledge that we fail to acquire from testimony (because videos no longer provide an epistemic backstop) rather than about the knowledge that we fail to acquire from watching videos. however, while recordings certainly play such a role in underpinning our trust in testimony, it is not clear to me how significant an epistemic harm it would be to lose this epistemic backstop to testimony. it is not as if we do not have other techniques for evaluating testimony. for example, even without recordings, we can consider whether the person testifying has a potential bias or whether other people corroborate what she says (see hume). and it is not as if we do not continue to regularly deploy these techniques.
admittedly, given their extensive surveillance capabilities, governments and large corporations may often have access to recordings that bear on testimony that they want to evaluate. but most people, most of the time, do not. but whether or not losing the epistemic backstop that recordings provide is a significant epistemic harm, this explanation for why deepfakes pose a serious epistemic threat must appeal to the "carrying less information" explanation. it is precisely because deepfakes interfere with our ability to acquire knowledge from the recordings themselves that they provide less of an epistemic backstop to testimony.

implications of skyrms's account for addressing the epistemic threat

so, the main epistemic threat of deepfakes is that they interfere with our ability to acquire knowledge, and information theory can explain how deepfake technology is having this effect. but in addition, skyrms's account of information carrying can suggest strategies for addressing this epistemic threat (and identify some of their limitations). in this section, i discuss three such strategies. the first involves changing our information environment so that it is epistemically safer, the second involves changing us so that we are at less epistemic risk, and the third involves identifying parts of our information environment that are already epistemically safe. first, deepfake technology decreases the amount of information that videos carry by increasing the probability that realistic fake videos depicting events that never occurred will be produced. thus, an obvious strategy for increasing the amount of information that videos carry is to decrease the probability of realistic fake videos being produced. although there are fewer and fewer technical constraints on the production of realistic fake videos, it is still possible to impose normative constraints. for example, laws restricting the creation and dissemination of deepfakes have been proposed (see brown ; chesney and citron , ; toews ). also, informal sanctions can be applied. indeed, some apps for creating deepfakes have been removed from the internet due to public outcry (see cole ). (although such informal sanctions have typically been motivated by moral, rather than epistemic, worries about deepfake technology, they can have epistemically beneficial consequences.) videos will carry more information as long as the probability of deepfakes being produced is low. it does not matter why this probability is low. (this seems to be what walden ( , ) has in mind when he asks, "if digital imaging techniques make it easy to undermine the objectivity-based epistemic advantage, will the difference between photographic and handmade images dissipate, or will institutional factors limit the extent to which this takes place?" (emphasis added).) of course, there are some worries about the strategy of banning deepfakes. first, sanctions are not going to deter all purveyors of deepfakes (see brown ). but they are likely to make deepfakes less prevalent. and to paraphrase voltaire, we do not want to make perfection the enemy of the good. second, as john stuart mill ( ) points out, access to information can have epistemic benefits even when the information is false or misleading. for example, we often gain "the clearer perception and livelier impression of truth, produced by its collision with error." but it is not clear that access to information has the epistemic benefits that mill envisioned when the information is intentionally misleading as it is with deepfakes (see mathiesen , ). furthermore, it should be noted that it is not necessary to ban deepfakes per se. it is only necessary to ban deepfakes of events that did not actually occur that viewers are unable to distinguish from genuine videos. for example, the deepfakes accountability act only requires "clear labeling on all deepfakes" (brown ). deepfakes that are labeled as deepfakes need not decrease the amount of information that videos carry.
also, deepfakes that are labeled as deepfakes can still provide the epistemic benefits discussed in section above. for example, a reenactment can certainly still serve its educational function even if it is labeled as a reenactment. (of course, even deepfakes that are labeled as deepfakes can decrease the amount of information that videos carry if people ignore the labels. researchers are trying to develop ways to label online misinformation that will actually convince internet users that it is misinformation (see, e.g., clayton et al. forthcoming). the labeling of deepfakes could also get in the way of some epistemic benefits. in just the right situations, we might be able to get people to believe something true by getting them to believe that a deepfake is a genuine video. for example, if the well-known politician actually took a bribe, but the event was not captured on video, we might create a deepfake of the politician taking a bribe. also, if people will only believe a particular (accurate) message if it comes from a particular (trusted) individual, we might create a deepfake of that individual delivering that message. but situations where deceptive deepfakes have epistemic benefits would seem to be rare.) second, the amount of information that videos carry does not just depend on the probability that fake videos will be produced. it also depends on the ability of viewers to distinguish fake videos from genuine videos. thus, another possible strategy for increasing the amount of information that videos carry is for us to get better (individually and/or collectively) at identifying deepfakes. as with animal mimics, deepfakes are not perfect counterfeits. so, even if laypeople cannot identify deepfakes with the naked eye, it is still possible for experts in digital forensics to identify them and to tell the rest of us about them. and it is possible for the rest of us to identify such experts as trustworthy sources (see fallis ). of course, while this strategy can reduce the epistemic threat of deepfakes, there may be limits to its effectiveness. consider, once again, the signals of the coral snake and the king snake. suppose that hawks and coyotes learn how to distinguish the coral snake's appearance from the king snake's appearance. in that case, the king snake would likely evolve to resemble the coral snake more precisely. and in a similar vein, as people develop techniques for detecting deepfakes, other people may have an incentive to create deepfakes that are not detectable by these techniques (see chesney and citron , ; labossiere ; toews ). in addition, much like with the coral snake and the king snake, even if people have the ability to detect deepfakes, they might not have the time to safely make use of this ability. when deepfakes are used to deceive, the intended victim is often put in a pressure situation (see harwell ). finally, in order to address the epistemic threat of deepfakes, we do not have to increase the amount of information carried by all videos.
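before turning to the third strategy, here is a small sketch of the second strategy (detection), with invented numbers: if forensic screening flags most deepfakes while rarely flagging genuine videos, then a video that has been screened and not flagged carries more information than an unscreened one.

```python
# how forensic screening can restore some of the information carried by a
# video that is not flagged as fake (all numbers invented for illustration)

p_fake_flagged = 0.90     # assumed sensitivity of the forensic screening
p_genuine_flagged = 0.05  # assumed false-alarm rate on genuine videos

# likelihood ratio contributed by "the video was screened and not flagged"
screening_ratio = (1 - p_genuine_flagged) / (1 - p_fake_flagged)  # 9.5

base_ratio = 9  # illustrative post-deepfake ratio for an unscreened video
print(base_ratio * screening_ratio)  # 85.5: roughly back to pre-deepfake levels
```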
different videos carry different amounts of information. for example, as noted above, a video shown on the evening news is much more likely to be genuine than a random video posted on the internet. after all, the evening news is a source that has "such credit and reputation in the eyes of mankind, as to have a great deal to lose in case of their being detected in any falsehood" (hume). in other words, even without laws against deepfakes, the evening news is subject to normative constraints. thus, we can try to identify those videos that still carry a lot of information. but again, while this strategy can reduce the epistemic threat of deepfakes, there may be limits to its effectiveness. purveyors of deepfakes can try to make it difficult for people to determine whether a video comes from a source that is subject to normative constraints. indeed, this is precisely what has happened with the phenomenon of fake news. with text-based news, there have never been any technical constraints preventing someone from writing a story about something that never happened. thus, text-based news is only trustworthy if we can tell that the source is subject to normative constraints. so, sources of fake news are typically "designed to look like legitimate news media" (mathiesen , ). and as a result, text-based news does not carry as much information as it once did. cryptographic techniques can be used to digitally sign (or watermark) videos as well as text. if a video is digitally signed by a trustworthy source, we can be reasonably sure that it is genuine. thus, it carries a lot of information. it is as if a coral snake could make its appearance so distinctive that no other snakes could free ride on its warning system. admittedly, this strategy only eliminates the epistemic threat of deepfakes if the vast majority of genuine videos are digitally signed by a trustworthy source (see rini ). but again, perfection should not be the enemy of the good.
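the digital-signing idea can be illustrated with standard public-key signatures. this is only a minimal sketch, using the python cryptography library and an ed25519 key pair; it assumes the news organization's public key has already been distributed through some trusted channel, and it ignores many practical questions (key management, re-encoding of video files, and so on).

```python
# minimal sketch of signing and verifying a video file (assumes the
# "cryptography" package is installed and the verifier already trusts
# the signer's public key)
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# the news organization generates a key pair and publishes the public key
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the video file..."
signature = private_key.sign(video_bytes)  # distributed alongside the video

# a viewer checks that the video really comes from the trusted source
try:
    public_key.verify(signature, video_bytes)
    print("signature valid: the video is vouched for by the trusted source")
except InvalidSignature:
    print("signature invalid: the file was altered or was never signed")
```

note that a valid signature only shows that the video comes from a source subject to normative constraints; it does not show that the content is accurate, and it carries information only to viewers who can check the signature and have good reason to trust the key's owner.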
references
black lives upended by policing: the raw videos sparking outrage
probability as a guide to life
accuracy of deception judgments
congress wants to solve deepfakes by
photographically based knowledge
deepfakes and the new disinformation war: the coming age of post-truth geopolitics
we've just seen the first use of deepfakes in an indian election campaign
real solutions for fake news? measuring the effectiveness of general warnings and factcheck tags in reducing belief in false stories on social media (forthcoming)
on the epistemic value of photographs
we are truly fucked: everyone is making ai-generated fake porn now
creator of deepnude, app that undresses photos of women, takes it offline
the billion-dollar disinformation campaign to reelect the president
the epistemic value of expert autonomy
meditations on first philosophy. john cottingham, tr.
knowledge and the flow of information
deepfake videos pose a threat, but 'dumbfakes' may be worse
on verifying the accuracy of information: philosophical perspectives
privacy and lack of knowledge
routledge handbook of applied epistemology
artificial intelligence, deepfakes and a future of ectypes
dazzled and deceived
testimony and epistemic autonomy
the sensitivity principle in epistemology
edtech company udacity uses deepfake tech to create educational videos automatically
interpretations of probability. stanford encyclopedia of philosophy
an artificial-intelligence first: voice-mimicking software reportedly used in a major theft
how deepfakes could actually do some good
an enquiry concerning human understanding
deepfake danger: what a viral clip of bill hader morphing into tom cruise tells us
practical philosophy
viral videos that derailed political careers
deep fakes. philosophical percolations
on the seriousness of mistakes
an essay concerning human understanding
microeconomics
the greatest liar has his believers: the social epistemology of political lying
theresienstadt ( - ): the nazi propaganda film depicting the concentration camp as paradise
fake news and the limits of freedom of speech
merchants of doubt
accuracy and the laws of credence
in the age of a.i., is seeing still believing
paltering
you thought fake news was bad? deep fakes are where truth goes to die
the upside of deep fakes
signals
prospects for probabilistic theories of natural information
they used smartphone cameras to record police brutality-and change history
face2face: real-time face capture and reenactment of rgb videos
deepfakes are going to wreak havoc on society. we are not prepared
did nasa fake the moon landing?
the spread of true and false news online
photography and knowledge
transparent pictures: on the nature of photographic realism
he predicted the fake news crisis. now he's worried about an information apocalypse
seeing through eyes, mirrors, shadows and pictures
ethical and epistemic egoism and the ideal of autonomy

key: cord- - erjf rk authors: maurushat, alana title: the benevolent health worm: comparing western human rights-based ethics and confucian duty-based moral philosophy date: - - journal: ethics inf technol doi: . /s - - - sha: doc_id: cord_uid: erjf rk

censorship in the area of public health has become increasingly important in many parts of the world for a number of reasons. groups with a vested interest in public health policy are motivated to censor material. this may include governments, corporations, professions, and organizations.
the censorship may be direct (legal sanctions) or indirect (corporate and individual self-censorship). as experts in the field, ngos and other citizen movements champion competing visions of public health issues, the more incentive there may be to censor. this is true in a number of circumstances. for example, curtailing access to information regarding the health and welfare of soldiers in the kuwait and iraq wars, poor health conditions in aboriginal communities, downplaying epidemics to bolster economies, and so forth. this paper will look at the use of a computer worm (the benevolent health worm) to disseminate vital information in situations where public health is threatened by government censorship and where there is great risk for those who 'speak out'. while there are many examples along the spectrum of public health censorship, this paper's discussion of the benevolent health worm will be limited to that of the peoples' republic of china (china) drawing on three public health crises: hiv/aids, sars and avian influenza. in each of these situations, chinese citizens faced a public health epidemic (which then spread to the international community). in each of these situations the chinese government heavily censored information, allowing the disease to unnecessarily spread faster in an uncontained manner. and in each of these situations individuals who vocalized or published unauthorized news articles on the epidemic (many prominent experts, doctors and activists) were detained without reason serving time in prison. some were threatened or charged with divulging a state secret. the author uses china by way of example due to the extremity of the example, as well as her experience and familiarity with politics, censorship strategy and legal developments in the region. the use of a controversial technology such as a computer worm to disseminate uncensored, sanctioned public health information in china presents contentious ethical issues worth examining. when is the use of an illegal technology ethical? does the dual use of a computer worm for malicious or benevolent reasons play a part in the analysis? if so, at what point? is motivation the determining factor? intended use? actual consequences? is there a moral duty to write and disseminate public health information which differs from authorized accounts? is the duty a general duty or is it specific to certain members of society? does the mode of information delivery play a part in an ethical analysis? are anonymous modes of dissemination less ethical than methods which provide accountability? to what extent does the source of the information factor into the equation? what role does risk of criminal sanction play in ethics? does the risk of criminal sanction depend on the actual use or potential consequences of the technology? does the violation of human rights justify the illegal activity? if so, is the chinese context justifiable? is the use of a benevolent worm compatible with western ethical traditions? chinese ethical traditions? the above questions are raised to illustrate the abundance of ethical queries triggered by the topic, and are not meant to be fully covered in this paper. for the purpose of this paper, an account will be given of the censorship environment in china both in general and in the specific context of public health. this will be followed by an account of technical aspects of the benevolent health worm inasmuch as it will inform and frame the debate on ethical issues. 
a further account will be given examining the use of an illegal technology such as a computer worm to disseminate non-authorized public health information. the core of the paper will examine ethical issues in a general fashion, and then in a specific manner drawing on the moral philosophy of confucianism and western notions of civil liberties/human rights. confucianism is often thought to be incompatible with western rights-based theories (democracy, civil liberties, human rights and other autonomous rights based theories). the point is not to justify the benevolent health worm through either a 'western' or 'eastern' lens. the point, rather, will be to examine many of the ethical issues from multiple perspectives. in particular, the perspective of chinese moral philosophy (confucianism), which is rooted within a framework of values and duties, will be used alongside a western human rights-based analysis. the use of western rights-based theories (human rights) alongside the eastern duty-based theory of confucian moral philosophy provides an interesting platform for an ethical analysis of the benevolent health worm. the author will suggest how human rights and confucian moral philosophy may be used to better understand the ethical issues presented with the use of the benevolent health worm. the paper does not aim to justify the use of the benevolent worm. instead, there is the modest aim to initiate ethical debate on the subject. the application of the analysis could extend to a broader examination of benevolent payloads, and, further along the spectrum, the ethical use of illegal technologies. governments in china have traditionally utilized censorship as a means for control. using censorship as a control mechanism has historically been pitted against the chinese promotion of intellectual growth. the rise of the chinese communist party (ccp) brought with it the continued ideal of control over the dissemination of works and ideas. china continues to censor books, newspapers, and most other forms of publication that threaten the governing regime or criticize china's attitude towards human rights. included in this overall censorship strategy is tight control of the media and the internet. all news agencies, including news websites and chatrooms, must be accredited. and foreign news in china may only be purchased and published through the state-run government news agency, xinhua. review and enforcement of laws and regulations is performed by two agencies (one for press, radio, film and television, and the other for written publications including the internet), both of which are run by the communist party's central propaganda department (cpd). against the backdrop of what could best be described as a labyrinth of laws and regulations, the cpd issues weekly informal directives to news agencies and internet service providers (e.g. google and microsoft) on news items requiring restrictive coverage. virtually all statutes and regulations concerned with communications (news or otherwise) contain vague language allowing authorities sufficient flexibility in determining which publications are in breach of the law. navigating through the ever-changing and complex media and internet regulations is a seemingly never-ending process.
while the regulations are ever-changing, there is a standard set of vaguely written provisions which appear in all such regulations: divulging state secrets; harming the honour or interests of the nation; spreading rumours which may disturb social order; and inciting illegal assemblies which could disturb social order - all punishable as a criminal offence. state secrets provisions are the most problematic as their wording and interpretation in practice has proven malleable to political will. it is difficult for writers (whether they be journalists or mere bloggers) to determine in advance whether their message would contravene the law. where writers publish illegal content they are subject to a number of punishments such as dismissal from employment, demotion, libel, fines, closure of business, and imprisonment. imprisonment for illegal news stories extends to employees of foreign news agencies. for example, hong kong-based journalist ching cheong (singapore's straits times) and zhao yan (new york times) were arrested and detained for reporting articles about communist party leaders. news around sensitive topics such as a public health crisis is heavily censored and monitored. historically, individuals who reported and disseminated sanctioned public health news were often detained without reason and, in some cases, these individuals were charged with divulging a 'state secret'. many academics and experts have written on the scope of 'state secret' in china. the notion of 'state secret' has traditionally been broad and deliberately ambiguous, while its scope of application is ever-changing. it remains impossible to ascertain whether a person's actions would fall within a 'state secret'. people have been charged with this serious offence for the dissemination of banned information related to human rights, revealing draft laws (white papers), publishing unauthorised news reports, and publishing information critical of governing authorities. the act of circumventing the "great firewall" for illicit purposes, and mere research on internet censorship, could conceivably fall within the parameters of 'state secret'. the ccp's unpredictable use of broad, ambiguous laws to deter freedom of expression is heavily criticized in the international arena. while 'state secret' laws remain a potent threat, the ccp has a number of criminal provisions which it regularly uses to curtail the dissemination of sanctioned information. to paraphrase a prominent malaysian journalist, steve gan, "we have the right of freedom of expression. the problem is that we have no rights once such words are freely expressed." the same could be said of china. access to information may involve more than freedom of expression; timely information may have repercussions for the health and welfare of individuals. indeed, there are three specific areas where censorship and a lack of accurate information distributed in a timely manner have had unrefuted consequences in china in recent history: aids, sars and avian influenza (often referred to as bird flu). the chinese government has suppressed and continues to suppress information on the spread of hiv/aids (c. zissis, "media censorship in china", council on foreign relations).
these activists reported significant rates of people infected hiv and campaigned for the government to take proactive measures to reduce the spread of this debilitating disease. many aids activists, including the famous activist wan yanhai, have been detained and charged with divulging a state secret. infected blood supplies appear to have initially been the main source of the problem. infected blood supplies, however, still taint china with many people in poorer areas donating blood for money while drug use, prostitution and a lack of educative measures continue to exacerbate the situation. the reality today is that china has one of the highest hiv/ aids rates in the world outside of africa. while we will never know the effect that accurate and timely information would have had in this epidemic, it is certainly plausible that access to such important information could have reduced the rate of infection. similar to the hiv/aids crisis, the chinese government withheld critical health information on severe acute respiratory syndrome in . china has a longstanding tradition of curtailing news deemed harmful to society and to china's image. in the case of sars, it was thought that exercising tight media control would reduce public fear and lessen economic damage in the region. the lack of reliable dissemination of information and the underreporting of infected sars patients to the world health organization allowed the disease to spread more readily from guangdong province to other provinces in china, to hong kong and to other countries in the world. the sars health crisis can be partially attributable to nondisclosure of pertinent information. avian influenza (also referred to as bird flu) while avian influenza a has not yet reached the level of crises of hiv/aids or sars in china, historical events give clear signs that any information provided by chinese officials should be perceived with caution. it is believed that quiao songju, a chinese farmer, was arrested and detained for reporting a potentially infectious bird in the anhui province. prominent hong kong virologist, guan yi, was invited by the chinese government to study abf. his account of the disease was vastly different from the official version reported by the chinese government. guan's publications are censored in china while it has been made known to the prominent virologist that he should not return to china. it is rumoured that guan has been threatened with detainment and there is further speculation that he may be in violation of having disclosed a 'state secret'. china's newly drafted censorship rules compound the situation. the newly drafted law states that it is a criminal act to publish any information on 'sudden events' without prior authorization from the chinese government. 'sudden events' are defined as 'industrial accidents, natural disasters, health and public security issues'. the government claims that the law is aimed at irresponsible journalists who report untruths potentially causing panic among the public. critics have claimed that the draft law is aimed at preventing future disclosures of embarrassing news. public health epidemics would fall under the category of sudden events. the internet and wireless technologies have been heralded as vehicles of free expression. it has generally been thought that no government could control information on the internet, hence the expression ''the internet routes around censorship. '' in china, this is increasingly no longer true. 
the chinese government erected, through its golden shield project, what has become known as the 'great firewall of china'. access, control and censorship of internet content in china are most often attributed to the 'great firewall of china'. this is, however, something of a misnomer; the firewall is merely one path in a maze of controlling technologies and non-technological means in an overall internet censorship strategy. this censorship strategy is comprehensive, incorporating sophisticated technologies, numerous regulatory measures, market influences, and aggressive policing and surveillance of internet activity, resulting in an atmosphere of self-censorship. laws regulating free speech and the internet are implemented out of concern for the potential harm posed by unfettered access to sites that contain political, ideological, social, or moral content that the ccp perceives as harmful. china has adopted a comprehensive internet censorship strategy utilizing a range of control mechanisms. mechanisms of control include laws and regulations pertaining to physical restrictions, regulations of use, and ownership and operation of internet service providers (isps), internet access providers (iaps), and internet content providers (icps). similar to media regulations, a series of ever-changing internet regulations are also relevant to the dissemination of information. authorized access entails individuals having to obtain licenses for internet access. in order to obtain a license, individuals are required to register with the local police and provide their names, the names of their service provider, their e-mail addresses, and list any newsgroups in which they participate. this, of course, does not mean that anonymity and pseudonymity cannot be achieved by chinese cybersurfers. users have flocked to cybercafés and universities to access the web. the ccp has responded by shutting down many cybercafés, then later by requiring all cybercafés and universities to obtain user identification, and to keep detailed logs of user activities (the regulations are complex and comprehensive). the extent to which such entities have fully complied with the law in practice has not been explored, but the threat of surveillance continues to lead to an environment of self-censorship. the ability to access banned documents and to communicate anonymously is challenged. information flows from the internet subscriber (home, cybercafé) to the internet service providers (isps) to four gateways controlled by the ministry of posts and telecommunications. isps are regulated through a myriad of laws which are, again, ambiguous and complex. it is difficult for any party to know if they are in compliance with the law. the regulations require isps to restrict and control access to harmful/banned websites, allow surveillance software on their systems, and keep logs of user activity. email is neither private nor anonymous when using an isp, regardless of whether a domestic or foreign service is used. isps must and do comply with requests to reveal personal information identifying the true identity of users as well as information about email content. for example, both yahoo!china and yahoo!hk have disclosed confidential user information of prominent journalists by releasing internet protocol addresses to chinese authorities. many journalists deliberately maintain email accounts in jurisdictions, such as hong kong, with a strong rule-of-law tradition in order to shield their identities from chinese authorities.
yahoo!hk handed over the internet protocol address (not the user's name) of journalist shi tao to chinese authorities based merely on an informal request. as such, yahoo!hk circumvented formal judicial requirements of a court order compelling the disclosure of confidential information. yahoo!hk claims that they disclosed the information in compliance with a criminal investigation in mainland china. the dispute has become one of many disputes over the scope of chinese jurisdiction in hong kong. disclosure of an internet protocol address is not classified as personal information under hong kong law, making it safe for isps to circumvent their otherwise legal obligation to keep personal information confidential under the privacy ordinance. the securing of shi tao's internet protocol address led the chinese authorities directly to a specific computer port number, and directly to shi tao's computer. the information was classified in china as a state secret. shi tao was arrested and sentenced to years in jail. recent popular methods for dissemination of taboo/illegal documents include spam, weblogs and chatrooms -all delivery methods involving the internet which allow for some degree of anonymity or pseudonymity. (as a side note, the precise number of gateways controlled by the ministry has not been established; reported figures vary, and this ambiguity illustrates the cloud that shrouds accurate information pertaining to the 'great firewall'.) chinese officials have recently begun to crack down on weblog and chatroom use, introducing a host of new regulations directly targeted at information deemed harmful to chinese society. china's filtering/anti-spam technology has likewise greatly evolved, so that spam has become a less effective means of communicating information. those who continue to engage in the exchange of banned communications, whether it be via spam, weblogs, text messages or other fora, potentially face criminal charges. as the regulations are written in the traditional fashion of ambiguously overbroad provisions, the reality is that merely opening a spam message known to contain harmful material, or forwarding the message, could be a contravention of the law. is it possible to route around censorship in china? circumventing the 'great chinese firewall' is achievable using a number of different methods, which range from the use of web proxies (tor, anonymizer, dynapass, psiphon), to accessing the internet at peak times (state surveillance requires a large amount of bandwidth), to the use of encryption services. proxies such as tor may still be blocked at the node level (although currently they are not). while state surveillance requires large amounts of bandwidth, the threat of legal and economic sanction plus self-censorship -isps restricting access to potentially contentious sites, cybercafés and universities discouraging banned websurfing, individuals refraining from accessing even potentially illegal material -effectively fills the gap left by technological constraints. the use of encryption is able to circumvent filtering and keyword sniffing technology at the router level, but this does not provide a safety net for those wishing to disseminate contraband information. as stated previously, isps must and do comply with requests to disclose personal information.
many isps have also built censorship functions into their encryption technology. activists using the encrypted skype technology, for example, have been cautioned against its use due to built-in censorship functions. encrypted messages may arouse further suspicion, which may lead from monitoring of general data traffic over the internet to the surveillance of specific individuals and groups. regardless of the method employed, the threat of criminal sanction is always a possibility. the ability to use the internet to publish sanctioned information is a risky proposition. assuming that there are strong ethical arguments in favour of disseminating sanctioned information in times of public health crises, a new mechanism will be required for large-scale information delivery. the benevolent health worm provides one possible solution. a benevolent computer worm is a form of malware. malware typically includes the following types of computer programs: virus, worm, trojan horse, spyware, adware, spam, bot/agent, zombie, exploit, bug, keylogging and so forth. malware refers to computer software which either acts maliciously or whose effects are malicious -the two are not necessarily synonymous. in a wider context, malicious would extend to any type of computer code installed without consent, regardless of whether any damage occurs to the computer. the theory is that the malicious component encompasses the use of bandwidth and, again, the absence of consent. the idea of a benevolent virus or worm is not novel. early research and debate focused on the use of a worm to patch existing security flaws in software. expressed more precisely by leading it expert bruce schneier, ''patching other people's machines without annoying them is good; patching other people's machines without their consent is not … viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn't make things better.'' under this definition, no malware could be construed as benevolent. the weakness of this argument is that its discussion has been limited to patching and similar e-commerce activities, where consent is desirable from a corporate ethics perspective and is necessary in order to conclude a binding legal contract. missing from this discussion is the application of a benevolent worm outside of the e-commerce realm, along with the discussion of the difference between consent and informed consent, the latter being the legal requirement in most jurisdictions. the subject of informed consent in the digital age is contentious. it has been argued that consent is given in most internet applications through checking the ''i agree'' button of end-user license agreements and privacy policy statements. this is not representative of informed consent. most users do not read end-user license agreements (eulas). when they do, such licenses contain onerous obligations unilaterally imposed on them, expressed in complex, aggressive legal rhetoric -most of these types of terms remain untested in law and run against the basic tenets of the law of contracts, namely consideration, meeting of the minds, and adequate notice of change of terms. this is perhaps best illustrated by way of example. many corporations, such as sony, release products with an end-user license term authorizing them to utilize rootkits, backdoors and digital rights management systems for a variety of unspecified purposes, all of which may be subject to change without notification to the user.
the rootkits, in turn, render computers vulnerable to intruders who may install malicious applications on them. digital rights management systems allow monitoring devices which track the use of a work (for example, a music cd), which could theoretically be used as evidence to bring legal suits against those who make illegal use of the copyrighted work. the author uses the example of consent to illustrate the discrepancy between theory and practice. the author agrees that informed consent is a desirable feature in software distribution mechanisms. concluding that consent is required in all contexts is to prematurely rule on an issue which has, so far, only been discussed in the limited context of electronic commerce. if consent is gained, do benevolent payloads become ethical? if there is no consent, are benevolent worms precluded from becoming ethical? it appears as though the debate on consent and malware has inherited the intellectual baggage of assumptions surrounding consent. nowhere is this better articulated than in the famous essay by robin west, ''authority, autonomy, and choice: the role of consent in the moral and political visions of franz kafka and richard posner.'' west exposes the fallacy in posner's theory that choice and consent in a legal system allow for an increase both in morality and autonomy. within the confines of benevolent payloads, there is an assumption that lack of consent is inherently bad or unethical, contrasted with acts where a vague notion of consent is obtained, thereby magically summoning the requisites of legal and ethical action. the presence of consent should be regarded as one component in an analysis of all factors contributing to an ethical framework. an effects analysis would look to whether any tangible damage, other than use of bandwidth, has been done to the computer, webserver or user, or, in the event that other types of damage are sustained, whether there are compelling reasons to derogate from the principles of user consent and avoiding damage to third-party property. more importantly, an effects analysis would address the issue of when it is permissible to utilise bandwidth and install software on a user's computer without their consent. when, if ever, does a benevolent payload become permissible or mandatory as a moral duty? in a larger sense, the issue is one of normative ethics, weighing the effects-based analysis of consequentialism against the moral duty of deontology. while a robust examination of types of malware is not required to understand the benevolent health worm, a basic understanding of the differences between a virus and a worm is essential, as the underlying technology of a worm alleviates some of the ethical and legal issues for its intended benevolent use. a virus is a ''block of code that inserts copies of itself into other programs''. viruses generally require a positive act by the user to activate the virus. such a positive act would include opening an email or attachment containing the virus. viruses often delay or hinder the performance of functions on a computer, and may infect other software programs. they do not, however, propagate copies of themselves over networks. again, a positive act is required for both infection and propagation. a worm is a program that propagates copies of itself over networks. it does not infect other programs, nor does it require a positive act by the user to activate the worm. in this sense, it is self-replicating.
irrespective of the characterization, nearly all computer viruses and worms infect either software or hard-drives without the authorization of the computer owner. similarly, all computer viruses and worms utilize bandwidth, imposing a strain on traffic and resource demands. all computer viruses and worms may inadvertently cause unexpected damage to a computer system and may contain bugs. the benevolent health worm is no exception. there are ways to minimize damaging effects of the worm through technical design. such elements include: (1) slow spreading; (2) use of geo-location technology to limit its propagation within a region (''.cn'' and its equivalent for the internationalized domain name in chinese characters); (3) installation of short and reasonable shut-down mechanisms to avoid perpetual replication; (4) low-demand bandwidth; and (5) adherence to professional debugging standards. the benevolent health worm would be an information delivery method with worm-like characteristics. a computer worm is a self-replicating computer program containing a tailor-designed payload. the payload would be programmed to spread from computer to computer in china with the specific function of displaying the information in a pop-up window, or overriding a user's default web page with one displaying the information. in the case of the benevolent health worm, the message would contain vital information relating to a public health crisis otherwise unavailable through traditional media sources. the information would ideally come from a trusted source containing accurate and truthful information (see discussion in following sections). the payload would be carefully programmed to preclude any deliberate or positive technical action by the recipient. the recipient would, therefore, have no knowledge or control of the worm. the latter points require elaboration. the self-replication method of worms is ideal in this situation, as it is only the infected computer which takes part in the dissemination of sanctioned information; the person whose computer is infected is technically prohibited from any deliberate or accidental positive acts, and has no control or knowledge of the worm. in order to achieve this, the pop-up message generated by the worm must have the following features: • it must be a worm rather than a virus, in that it must be self-replicating and must not require any positive act by the user, • it must not contain links to additional sources of information, • the user would not be able to save the information to his or her computer, • the user could not forward the message to others, and • the information would disappear from the system altogether after a specified amount of time. in the case of the latter, the pop-up would appear for a specified number of minutes and re-appear each time a person turned on their computer, for a programmed length of time. at the end of that short period (a matter of weeks) the worm would completely disappear from the user's computer system -all technically feasible through the programming of the payload. these features greatly reduce, if not eliminate, any risk to the recipients of the information. all elements necessary to prove a criminal act are removed: positive act, knowledge or foreseeable knowledge, mens rea, and motive. a chief criticism of the use of viruses and worms for benevolent purposes is that there are safer alternative means of achieving the same goal. the same is not true of the benevolent health worm.
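before turning to those alternatives, the display-schedule constraints listed above can be made concrete. the following is a minimal, deliberately non-functional sketch in python of how the payload's limited display window and automatic expiry might be expressed as configuration plus a simple time check; all names and values are hypothetical illustrations, and no propagation, installation or self-replication logic is shown.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PayloadPolicy:
    # hypothetical parameters reflecting the design constraints described above
    display_minutes: int = 15          # how long the pop-up stays visible per boot
    active_weeks: int = 2              # period after which the payload removes itself
    allow_save: bool = False           # recipient cannot save the message
    allow_forward: bool = False        # recipient cannot forward the message
    region_restricted_to: str = ".cn"  # geo-/domain-limited propagation target

def should_display(policy: PayloadPolicy, installed_on: datetime, now: datetime) -> bool:
    """Return True only while the payload is within its active window."""
    return now - installed_on <= timedelta(weeks=policy.active_weeks)

# example usage
policy = PayloadPolicy()
installed = datetime(2008, 1, 1)
print(should_display(policy, installed, datetime(2008, 1, 10)))  # True: within 2 weeks
print(should_display(policy, installed, datetime(2008, 2, 1)))   # False: payload expired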
alternative means of health information distribution would include: illegal news reporting; illegal dissemination of news domestically through a blog, chatroom or spam; spam techniques from a foreign jurisdiction; and access to materials outside of china through anonymizing technologies such as web proxies. a common flaw of these methods is the necessity of a positive act by both the sender and recipient of information. this is especially so for the first two means. a positive act, whether technical (e.g. activating a virus) or manual (e.g. forwarding a spam message), would allow for the possibility of dual criminal charges. meanwhile, there are further challenges with the latter two distribution means of foreign spam and web proxies. as pre-eminent human rights activist sharon hom has noted, human rights spamming lists are potentially illegal under the united states can-spam act. the use of anonymizing technologies such as web proxies is by no means foolproof. such technologies are capable of being blocked (a policy choice, not a technical feat), even trust-enabled web proxies such as psiphon. in the case of the benevolent health worm, only the sender of the information would perform a positive act. these acts would still be illicit on many fronts, but the sender would bear a greater risk than the recipient. the programmer of the human rights worm would be in violation of computer misuse law. in the event that the computer programmer is not necessarily the person or group who distributes the worm, those individuals responsible for ''letting the worm loose'' could face criminal and civil charges. finally, the authors of the actual information appearing in the pop-up screen may be charged with a number of criminal acts, including divulging a state secret and possibly violating the new law on disclosure of non-authorized news on 'sudden events.' positive acts are performed by those actors along the sender chain, while recipients of information remain removed from the process short of reading the content displayed in the window. stated another way, the benevolent worm potentially offers a way to restore an individual's right to physical and mental well-being through a method that reduces the risk of persecution for those who disseminate un-authorized information and reduces, and possibly removes, the risk for those who receive the information. the above scenarios, however, envision the propagation of the worm and information writing to be performed by individuals within china. such risks could be greatly reduced by creating the worm outside of china. while it is true that computer misuse is illegal in most jurisdictions, the threat of sanction depends greatly on political will. with open american support of projects which address human rights and democracy in suppressed regions, congressional hearings on internet censorship in china (and, more specifically, us corporate compliance and aid in censorship), and the passing of the global internet freedom act, it is hard to believe that, at least in the united states, there would be political will to prevent the benevolent worm. if anything, there may be available funding. there is a strong psychological and political element to the creator (and disseminator) of the worm. a worm created inside china would have the distinct advantage of appearing to be change from within; a worm created outside china raises issues of external meddling, sovereignty, imperialism, or worse yet, information warfare. these issues will be more fully integrated into the ethical discussion below.
the ethical dimensions of the benevolent worm encompass several layers. a more sophisticated approach would be to treat the layers as information branches in the total infosphere. for the purpose of this paper i will adopt a simpler approach, referring to the author/producer, sender/distributor, recipient, content, delivery method and medium of communication. one great concern in the propagation of public health information through a computer worm is that of the trusted source. trusted sources may be divided into two groups. the first involves the content of the information. the second relates to the information producers -authors and distributors. the 'who' in 'who says what' may be more important than the 'what'. in this sense, a worm released by a nation state could be construed as interference with sovereignty and may not carry the same weight as a worm released by a trusted ngo working in the region. the reality, however, is that there is no foolproof method of distinguishing between a trusted and a deceptive source. all electronic commerce applications suffer from the same ambiguity of trusted and deceptive sources. the following analysis, therefore, assumes that it is possible to utilize trusted sources. indeed, it may be a great leap of faith. the analysis further assumes that the issue of public health epidemics is sufficiently grave to warrant deviation from traditional paths of information dissemination (as discussed in the previous sections). western-based rights treatises, in particular human rights frameworks, may provide some justification of a benevolent health worm. human rights or civil liberties frameworks operate on two theoretical models. the first is one related to public international law, where states bind themselves to legal obligations contained in treaties. the second involves the universality principle of human rights, based on moral rights as opposed to legal rights. a similar dichotomy is to distinguish between what is legal and what is legitimate. the law -that is, what is legal -is premised on the notion that there ''is a system of enforceable rules governing social relations and legislated by a political system.'' breach of a rule results in an activity being classified as illegal. legitimacy in this context is used in its broadest sense to reflect what is moral, which need not be legal. discussions of morality are naturally dependent on the framework of the analysis. the notion of morality as seen through the lens of natural law (e.g. aquinas, aristotle, religion) would likely differ from morality as seen through the lens of normative jurisprudence (e.g. virtue ethics, utilitarianism, deontology). the moral or ethical framework shapes the debate. this section of the paper does not use a specific ethical framework such as deontology to discuss issues of human rights. to the extent that any differentiation is made, there is some delineation between what is legal versus legitimate. the debate will mostly inter-mingle notions of legality with those of legitimacy, as well as notions of binding public international law with the universal principle of human rights. the aim is to initiate ethical debate on the subject, not to fully justify the use of a benevolent worm. under a legal rights-based theory, specific rights and obligations are only provided to the extent of treaty provisions in international law. such rights may or may not be entrenched in domestic/national law.
where rights are protected under international law, they may contradict and clash with domestic law. the nexus between national and international law has been discussed using the theories of dualism and monism. as the honorable justice kirby writes: ''for the monist, international law is simply part of the law of the land, together with the more familiar areas of national law. dualists, on the other hand, assert that there are two essentially different legal systems. they exist ''side by side within different spheres of action -the international plane and the domestic plane. '' the clash between national and international law is influenced by whether a court adopts a monist or dualist position. the chinese government and courts use a dualist theory where human rights are viewed as a matter of 'foreign affairs.' as one human rights expert writes, ''the chinese government essentially views these obligations as a matter of foreign affairs, and seeks to insulate the domestic arena from the reach of international human rights law, both in symbolic and practical terms. '' international tribunals and courts also adopt a dualist approach. national law is treated as a fact. an obligation in international law cannot be avoided or excused due to a clash with domestic/national law. other governments and courts adopt a monist approach. this can be seen in the erosion of the dualist approach in many countries such as australia and canada. there have been many court decisions which integrate international law principles into the national landscape. the second level relates to the universality of human rights. universality is not a legal proposition but a moral one; that human rights are naturally acquired at birth regardless of the area of the world where you reside. human rights subsist regardless of international and domestic legal obligations. regardless of the interpretation of human rights, the benevolent health worm represents undisputed legal and moral rights which may be stated in a simple form: everyone has the right of freedom of expression, and the right to the enjoyment of the highest attainable standard of physical and mental health. these rights are legally protected in a number of international, regional, and united nations treaties to which china is party, and, according to the model of human rights one adheres to, are inherently entrenched regardless of the law. the constitution for the people's republic of china (prc) recognizes ''freedom of speech'', however, the concept of free speech is viewed differently in china than in western democracies. reed, an expert on freedom of expression in china, notes: ''the prc believes that rights are only instruments for realizing state objectives. individual rights are merely residual freedoms found within the confines of the law. if necessary, all rights must be sacrificed for the good of the common collective. as a result, china traditionally keeps the dissemination of information and freedom of expression to a minimum. the ccp controls all facets of government, including the freedom of expression granted in the constitution.'' several distinct questions surface as a result of the above passage. is china within its legitimate sovereign right to censor free speech on public health issues on the grounds that such discourse falls under the exemption of ''national security''? 
is civil disobedience justified in the context of disobeying the law for a higher purpose, whether it be construed as a moral obligation or interpreted as being for the greater good of the community (emphasis here on a worm created within china)? would a worm created outside of china be a deliberate act of interference with a nation's sovereignty? under what circumstances might the benevolent worm be construed as part of information warfare?
public health as 'national security' threat
according to one view, national security always trumps individual rights because security, on a hobbesian-type view, is necessary for a peaceful society in which persons can enjoy their liberty and rights. this view appears to be gaining adherents, at least among legislators. on the moderate viewpoint, free speech rights are defeasible, but only when appropriate justification for censorship is available. in order to protect free speech rights, legislative limitations on censorship powers are necessary. in a rights-respecting society, balancing involves prioritizing different rights in the case of conflicts between prima facie rights (e.g. freedom of expression may conflict with freedom of religion). in the case of liberal democracies, there should be strong limitations on violations of freedom of expression and liberty. on the other hand, as we have seen in the quotation from reed, above, from the perspective of the prc, rights are merely instrumental to the common good, and balancing rights can be done by determining what maximizes the common good. rights can be overridden whenever the common good requires. under public international law, governments are allowed to restrict the free flow of information to protect interests such as national security or public morals. national security ideology has, however, been used by authorities to justify human rights infringement. for this reason, international documents and principles were developed to keep rights exemptions confined to narrow determinations. for example, the johannesburg principles on national security, freedom of expression and access to information adopt a standard whereby freedom of expression and access to information may only be restricted where a number of conditions are met: the restriction is prescribed by law, protects a legitimate national security interest, and is necessary in a democratic society. for example, preventing incitement to the violent overthrow of a government is a legitimate national security interest. national security restrictions are not justifiable in the case, for example, of ''embarrassment or exposure of wrongdoing, or to conceal information about the functioning of its public institutions …'' (principle (b)). china's restrictions on free speech and access to information clearly do not adhere to the johannesburg principles or other international standards for protecting the right to information. as human rights watch notes, ''prior censorship in particular is severely disfavoured in international law, and not permitted in many constitutional systems. a decision to block access to online material should be subject to the highest level of scrutiny, with a burden on the government to demonstrate that censorship would effectively avert a threat of irreparable, imminent, and weighty harms, and that less extreme measures are unavailable as alternatives to protect the state interest at issue.
at present, it seems apparent that china engages in no such scrutiny … '' moreover, the decision to punish certain speakers merely for exercising their right to speak frankly online (or off) is arbitrary and unpredictable, with no opportunity for an individual or group to know in advance whether their actions comport with the law. the inability to comply with the law based on a lack of transparency severely undermines any attempt to posit a law as moral. morality here is used in a procedural sense, along the lines of natural law theorist lon fuller's essential principle that the law should be written and promulgated in a fashion which guides behaviour.
conscientious objection and civil disobedience
i will refer to two general types of civil disobedience. the first is better known as 'conscientious objection', where the moral agent performs or abstains from performing an act to preserve the agent's own moral integrity. there is only the duty to obey a just law. fashioned in a similar vein, there may in fact be a moral obligation to disobey an unjust law. in the case of the benevolent worm, a number of parties, including the author/producer and sender/distributor, may feel morally compelled to send what they believe is vital, accurate health information otherwise not available within china. conjecturing on the moral agent's dilemma, the agent is motivated to break the law in order to achieve a number of possible goals, such as informing the populace of important news related to the epidemic and encouraging behaviour associated with containing the disease in question. martin luther king junior, for example, writes: ''how can you advocate breaking some laws and obeying others?'' the answer is found in the fact that there are two types of laws: there are just and there are unjust laws. ''i would agree with saint augustine that 'an unjust law is no law at all.' … one who breaks an unjust law must do it openly, lovingly … and with a willingness to accept the penalty. i submit that an individual who breaks a law that conscience tells him is unjust, and willingly accepts the penalty by staying in jail to arouse the conscience of the community over its injustice, is in reality expressing the very highest respect for law.'' the other type of disobedience, on the other hand, is known as 'civil disobedience' in the sense that it ''is conscientious disobedience of the law directed primarily … at bringing about a change in a law, policy, institution that is morally unjust or otherwise morally unacceptable ... or a law which may be acceptable in itself but which is disobeyed in order to protest against the offending law.'' the moral agent, in this instance, is motivated to effect change in the law. in the case of the benevolent worm, however, this would likely be a possible ancillary effect rather than a primary goal. conscientious objection and civil disobedience have been justified on a number of grounds. one thought is that disobedience of the law may be justified where there is no disrespect or harm to others. another ground speaks to a utilitarian approach of bringing about useful reconsideration of public policy, respect for human rights, the interests of minorities and disadvantaged groups, and actual reform of the law.
it has been shown that other methods such as news reporting and spam potentially create harm not only for the sender but also for the recipient of sanctioned health information. the benevolent worm has the goal of minimizing harm to the sender and attempting to eliminate harm to the recipient (realizing, of course, that unintended consequences are not always foreseeable). many philosophers have specified that justifiable civil disobedience ought to be non-violent, with the agent ready and willing to accept punishment as a consequence for breaking the law. this view seeks to disassociate civil disobedience from revolutionary disobedience. the author suggests that this dichotomy is more useful in a democratic state with the rule of law, whose political leaders and citizens have respect first for their constitutions and second for human rights in general. the dichotomy, therefore, seems less appropriate for autocratic states with documented histories of human rights abuse. would a worm created outside of china be a deliberate act of interference with a nation's sovereignty? the answer to this question may lie in the meaning of sovereignty. in modern international law the notion of sovereignty is ''people's sovereignty rather than the sovereign's sovereignty … [whereby] no serious scholar still supports the contention that internal human rights are ''essentially within the domestic jurisdiction of any state'' and hence insulated from international law.'' the notion of sovereignty in human rights is, therefore, predominantly premised on democracy and rule of law. china is not a democratic nation adherent to the rule of law. it does not, however, follow that china is not entitled to sovereignty but, rather, that issues of sovereignty are burdened with additional questions. yet interference with sovereignty has generally been understood as one nation interfering with another nation's legitimate right to run its affairs. one thinks of the invasion of iraq and not generally of information on public health epidemics. sovereignty issues may be affected by the 'who' in 'who says what'. a worm released by the canadian government, for example, could conceivably be construed as intentional sovereign interference. a worm released by an ngo, on the other hand, would be less likely to be perceived as sovereign interference; this would be further buttressed by a trusted ngo with strong links to china. in an extreme circumstance the benevolent worm might be construed as part of information warfare (iw). defined simplistically, information warfare refers to ''actions taken to affect an adversary's information and information systems while defending one's own information and information systems.'' there are six broad components to iw: physical attack/destruction, electronic warfare, computer network attack, military deception, psychological operations, and operations security. it is difficult to conceive how the benevolent worm, in its described application in this paper, would fit into any one of these categories. one cannot, however, rule out the possibility of a worm with false and potentially harmful information concerning an epidemic being released as part of an overall iw strategy. a strategy of disinformation, however, is applicable in a number of contexts, including conventional means of information dissemination such as false news reporting, spam, and so forth. careful attention to trusted sources could reduce the risk of the worm being perceived as iw.
at its most basic conception, 'asian values' emphasize the community as opposed to the individual or self. it has been argued that human rights are incompatible with 'asian values'. expressed more poignantly by samuel huntington: the traditionally prevailing values in east asia have differed fundamentally from those in the west and, by western standards, they are not favourable to democratic development. confucian culture and its variants emphasize the supremacy of the group over the individual, authority over liberty and responsibility over rights. expressed somewhat differently, western human rights-based rhetoric focuses on the rights of an individual, whereas eastern confucian moral philosophy focuses on the duties of an individual to the community. the following analysis places ethical debate on the benevolent worm in the confucian moral philosophy tradition. confucian moral philosophy is often referred to as a duty-based philosophy. confucian ethical teachings are grounded in a core set of moral values: li (ritual), hsia (filial piety, duty to family), yi (righteousness), xin (honesty and trustworthiness), ren/jen (benevolence, social virtue, humaneness towards others), and chung (loyalty to the state). confucius' view of duty was not traditionally extended to all people but was limited to five relationships: ruler to subject, father to son, eldest brother to younger siblings, husband to wife, and friend to friend. there has never been a duty from human to human in traditional confucian thought. two contentious issues are raised in applying confucius' teachings to the benevolent health worm. first, the value of ren, benevolence towards others, may compete with that of chung, loyalty to the state. second, there is no general duty between humans outside of the five relationships. the value of chung requires a person to be loyal to the state, but not at any cost. confucius writes, ''if a ruler's words be good, is it not also good that no one oppose them? but if they are not good, and no one opposes them, may there not be expected from this one sentence the ruin of his country?'' [confucius, the analects, book translated by legge]. the most important value as espoused by confucius was ren. a major component of ren involved individual self-cultivation in virtuous action. it has further been suggested that li -norms of social ritual and interaction -is a critical component in analysing ren. li is learned by socializing and interacting with persons who embody ren. as lai writes: the paradigmatic man is a creator of standards rather than a follower … and he possesses a keen sense of moral discrimination. moral achievement reaches its culmination in those who have attained the capacity to assess events and who, being attuned to li, embody a sense of rightness. good governance and social order were derived from a hierarchical chain of individual virtuous action; thus it is written that ''their hearts being rectified, their persons were cultivated. their persons being cultivated, their families were regulated. their families being regulated, their states were rightly governed. their states being rightly governed, the whole kingdom was made tranquil and happy'' [confucius, great learning, translated by legge]. what of the case where ren is not personally cultivated, leading to poor governance? loyalty to the state is loyalty to a righteous government that has fulfilled its duties to its citizens in the spirit of ren; loyalty to the state has never been an absolute.
by no means does the author suggest that the overall governance of china has been poor under the current administration. china has had to face many problems that other nations, and in particular wealthy democratic nations, have never had to address: starvation, extreme poverty, territory occupation, a devastated economy, and population explosion, to name but a few. while china has overcome many hurdles to better provide for its people, its record on factors contributing to human dignity is poor (freedom of expression, protection of minorities, access to important and timely information, and so forth). it is within this limited latter context of human dignity that it is conceivable to characterize governance as poor. the manner in which public epidemics such as hiv/aids, sars and bird flu have been handled is evidence that the government has not fulfilled its duties to the extent required under the spirit of ren. this resembles the notion of obeying just laws, and being further obliged to disobey unjust laws. the formation of a person's character through virtuous action is strongly tied to a sense of community and to one's role in a community. for this reason, the extension of duties beyond the classic five relationships has similarly been newly interpreted. it could be said that certain members of society may have the duty to disclose information on epidemics which could save lives, reduce the spread of the infectious disease, and perhaps altogether avoid a disease reaching the level of epidemic. certain societal members may include scholars, doctors, journalists, experts, ngos, and other international organizations. this bears a resemblance to justifications of the moral agent in conscientious disobedience. the dissemination of vital information is potentially a virtuous act, whether it is through the direct means of an internet website or news publications, or less directly through a benevolent health worm. of course, the opposite could be argued by those who adhere to a traditional view of confucian moral philosophy. it becomes more difficult to justify the benevolent worm in the absence of one of the five relationships giving rise to duties. the most relevant relationship to the benevolent worm is that of ruler and subject. the ruler-subject relationship is predicated on the subject's respect for the ruler. this is similar to chung, loyalty to the state. again, there is the assumption that the ruler will have ren, and will act as an example to his subjects. in the absence of ren, the subject's duty to obey the ruler is lessened. regardless of the framework adopted, the use of a benevolent worm does not seem to represent any clear departure from confucian moral philosophy. ''water holds up the boat; water may also sink the boat'' (emperor taizong, tang dynasty). in much the same way, benevolent payloads have the potential to be destructive. they also have the potential to be beneficial. benevolent payloads have in the past been analysed in the context of patches and e-commerce. the conclusion has been reached in the wider technological community that benevolent payloads are simply a 'bad idea' because there is no consent, and there are safer methods available. there has been no analysis of benevolent payloads outside of the electronic commerce context. the benevolent health worm provides an interesting case study which challenges many of the assumptions in the ethical debate over benevolent payloads.
this article has attempted to untangle many of the complex ethical issues surrounding the benevolent health worm, and benevolent payloads in general.
'good' worms and human rights
collectivism and constitutions: press systems in china and japan
human rights: beyond the liberal vision
confucian values and the internet: a potential conflict
computer viruses still a bad idea?
moral autonomy
you've got dissent!: chinese dissident use of the internet and beijing's counter-strategies. rand
the limits of official tolerance: the case of aizhixing
china's media censorship
the analects of confucius. ebooks@adelaide
the great learning. ebooks@adelaide
a comparison between the ethics of socrates and confucius
confucius -the secular as sacred
information ethics, its nature and scope
will the boat sink the water? public affairs
using predators to combat worms and viruses: a simulation-based study. annual computer security applications conference
anonymous and malicious
human rights and spam: a china case study
counter-revolutionaries, subversives, and terrorists: china's evolving national security law. in national security and fundamental freedoms: hong kong's article under scrutiny
american democracy in relation to asia: democracy and capitalism: asian and american perspectives
bird flu -what is china hiding?
battling sars: china's silence costs lives. international herald tribune
the growing rapprochment between international law and national law
learning from chinese philosophies: ethics of interdependent and contextualised self
liberal rights or/and confucian virtues?
has china achieved its goals through the internet regulations
moral judgment, historical reality, and civil disobedience
censorship: a world encyclopedia
conscientious disobedience of the law: its necessity, justification, and problems to which it gives rise
democracy and confucian values
international human rights and humanitarian law
new regulations in china target foreign media
a theory of justice (rev
from the great firewall of china to the berlin firewall: the cost of content regulation on internet commerce
reporters without borders. government turns deaf ear to call for ching cheong's release
liberalism and the limits of justice
benevolent worms, crypto-gram newsletter
aids in china: an annotated chronology
confucian ethics: a comparative study of self, autonomy and community
virtue ethics, the analects, and the problem of commensurability
law and ideology
the bioethical principles and confucius' moral philosophy
learning, and politics: essays on the confucian intellectual. suny
is china hiding avian influenza?
human rights as ''foreign affairs'': china's reporting under human rights treaties
world health organization. who urges member states to be prepared for a pandemic
chinese information warfare: a phantom menace or emerging threat? the strategic studies institute
media censorship in china. council on foreign relations

key: cord- - vrlrim authors: lefkowitz, e.j.; odom, m.r.; upton, c. title: virus databases date: - - journal: encyclopedia of virology doi: . /b - - . - sha: doc_id: cord_uid: vrlrim
as tools and technologies for the analysis of biological organisms (including viruses) have improved, the amount of raw data generated by these technologies has increased exponentially.
today's challenge, therefore, is to provide computational systems that support data storage, retrieval, display, and analysis in a manner that allows the average researcher to mine this information for knowledge pertinent to his or her work. every article in this encyclopedia contains knowledge that has been derived in part from the analysis of such large data sets, which in turn are directly dependent on the databases that are used to organize this information. fortunately, continual improvements in data-intensive biological technologies have been matched by the development of computational technologies, including those related to databases. this work forms the basis of many of the technologies that encompass the field of bioinformatics. this article provides an overview of database structure and how that structure supports the storage of biological information. the different types of data associated with the analysis of viruses are discussed, followed by a review of some of the various online databases that store general biological, as well as virus-specific, information. in , niu and frankel-conrat published the c-terminal amino acid sequence of tobacco mosaic virus capsid protein. the complete -amino-acid sequence of this protein was published in . the first completely sequenced viral genome published was that of bacteriophage ms in (genbank accession number v ). sanger used dna from bacteriophage phix ( j ) in developing the dideoxy sequencing method, while the first animal viral genome, sv ( j ), was sequenced using the maxam and gilbert method and published in . viruses therefore played a pivotal role in the development of modern-day sequencing methods, and viral sequence information (both protein and nucleotide) formed a substantial subset of the earliest available biological databases. in , margaret o. dayhoff published the first publicly available database of biological sequence information. this atlas of protein sequence and structure was available only in printed form and contained the sequences of approximately proteins. establishment of a database of nucleic acid sequences began in through the efforts of walter goad at the us department of energy's los alamos national laboratory (lanl) and separately at the european molecular biology laboratories (embl) in the early s. in , the lanl database received funding from the national institutes of health (nih) and was christened genbank. in december of , the los alamos sequence library contained sequences of which were from eukaryotic viruses and were from bacteriophages. by its tenth release in , genbank contained sequences ( nucleotides) of which ( nucleotides) were viral. in august of , genbank (release ) contained approximately records, including viral sequences. the number of available sequences has increased exponentially as sequencing technology has improved. in addition, other high-throughput technologies have been developed in recent years, such as those for gene expression and proteomic studies. all of these technologies generate enormous new data sets at ever-increasing rates. the challenge, therefore, has been to provide computational systems that support the storage, retrieval, analysis, and display of this information so that the research scientist can take advantage of this wealth of resources to ask and answer questions relevant to his or her work. every article in this encyclopedia contains knowledge that has been derived in part from the analysis of large data sets. 
the ability to effectively and efficiently utilize these data sets is directly dependent on the databases that have been developed to support storage of this information. fortunately, the continual development and improvement of data-intensive biological technologies has been matched by the development and improvement of computational technologies. this work, which includes both the development and utilization of databases as well as tools for storage and analysis of biological information, forms a very important part of the bioinformatics field. this article provides an overview of database structure and how that structure supports the storage of biological information. the different types of data associated with the analysis of viruses are discussed, followed by a review of some of the various online databases that store general biological information as well as virus-specific information.
definition
a database is simply a collection of information, including the means to store, manipulate, retrieve, and share that information. for many of us, the lab notebook fulfilled our initial need for a 'database'. however, this information storage vehicle did not prove to be an ideal place to archive our data. backups were difficult, and retrieval more so. the advent of computers -especially the desktop computer -provided a new solution to the problem of data storage. though initially this innovation took the form of spreadsheets and electronic notebooks, the subsequent development of both personal and large-scale database systems provided a much more robust solution to the problems of data storage, retrieval, and manipulation. the computer program supplying this functionality is called a 'database management system' (dbms). such systems provide at least four things: (1) the necessary computer code to guide a user through the process of database design; (2) a computer language that can be used to insert, manipulate, and query the data; (3) tools that allow the data to be exported in a variety of formats for sharing and distribution; and (4) the administrative functions necessary to ensure data integrity, security, and backup. however, regardless of the sophistication and diverse functions available in a typical modern dbms, it is still up to the user to provide the proper context for data storage. the database must be properly designed to ensure that it supports the structure of the data being stored and also supports the types of queries and manipulations necessary to fully understand and efficiently analyze the properties of the data. the development of a database begins with a description of the data to be stored, all of the parameters associated with the data, and frequently a diagram of the format that will be used. the format used to store the data is called the database schema. the schema provides a detailed picture of the internal format of the database that includes specific containers to store each individual piece of data. while databases can store data in any number of different formats, the design of the particular schema used for a project is dependent on the data and the needs and expertise of the individuals creating, maintaining, and using the database. as an example, we will explore some of the possible formats for storing viral sequence data and provide examples of the database schema that could be used for such a project. figure (a) provides an example of a genbank sequence record that is familiar to most biologists.
these records are provided in a 'flat file' format in which all of the information associated with this particular sequence is provided in a human-readable form and in which all of the information is connected in some manner to the original sequence. in this format, the relationships between each piece of information and every other piece of information are only implicitly defined; that is, each line starts with a label that describes the information in the rest of the line, but it is up to the investigator reading the record to make all of the proper connections between each of the data fields (lines). the proper connections are not explicitly defined in this record. as trained scientists, we are able to read the record in figure (a) and discern that this particular amino acid sequence is derived from a strain of ebola virus that was studied by a group in germany, and that this sequence codes for a protein that functions as the virus rna polymerase. the format of this record was carefully designed to allow us, or a computer, to pull out each individual type of information. however, as trained scientists, we already understand the proper connections between the different information fields in this file. the computer does not. therefore, to analyze the data using a computer, a custom software program must be written to provide access to the data. extensible markup language (xml) is another widely used format for storing database information. figure (b) shows an example of part of the xml record for the ebola virus polymerase protein. in this format, each data field can be many lines long; the start and end of a data record contained within a particular field are indicated by tags consisting of a label enclosed in angle brackets. unlike the lines in the genbank record in figure (a), a field in an xml record can be placed inside of another, defining a structure and a relationship between them. for example, the tseq_orgname is placed inside of the tseq record to show that this organism name applies only to that sequence record. if the file contained multiple sequences, each tseq field would have its own tseq_orgname subfield, and the relationship between them would be very clear. this self-describing hierarchical structure makes xml very powerful for expressing many types of data that are hard to express in a single table, such as that used in a spreadsheet. however, in order to find any piece of information in the xml file, a user (with an appropriate search program) needs to traverse the whole file in order to pull out the particular items of data that are of interest. therefore, while an xml file may be an excellent format for defining and exchanging data, it is often not the best vehicle for efficiently storing and querying that data. that is still the realm of the relational database. 'relational database management systems' (rdbmss) are designed to do two things extremely well: (1) store and update structured data with high integrity, and (2) provide powerful tools to search, summarize, and analyze the data. the format used for storing the data is to divide it into several tables, each of which is equivalent to a single spreadsheet. the relationships between the data in the tables are then defined, and the rdbms ensures that all data follow the rules laid out by this design. this set of tables and relationships is called the schema. an example diagram of a relational database schema is provided in figure .
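to make the contrast with the flat file concrete, the short python sketch below parses a simplified record modelled on the tseq-style xml described above and pulls the organism name out of each sequence element; the record text, accession and definition line are invented for illustration only, but they show how the nesting makes the relationship between a sequence and its organism explicit and machine-readable.

import xml.etree.ElementTree as ET

# a simplified, hypothetical record modelled on the TSeq-style XML described above
xml_record = """
<TSeqSet>
  <TSeq>
    <TSeq_accver>EXAMPLE_0001.1</TSeq_accver>
    <TSeq_orgname>Zaire ebolavirus</TSeq_orgname>
    <TSeq_defline>RNA-dependent RNA polymerase</TSeq_defline>
  </TSeq>
</TSeqSet>
"""

root = ET.fromstring(xml_record)
for seq in root.findall("TSeq"):
    # because TSeq_orgname is nested inside TSeq, the organism name is
    # unambiguously associated with this particular sequence record
    print(seq.findtext("TSeq_orgname"), "-", seq.findtext("TSeq_defline"))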
this viral genome database (vgd) schema is an idealized version of a database used to store viral genome sequences, their associated gene sequences, and associated descriptive and analytical information. each box in figure represents a single object or concept, such as a genome, gene, or virus, about which we want to store data and is contained in a single table in the rdbms. the names listed in the box are the columns of that table, which hold the various types of data about the object. the 'gene' table therefore contains columns holding data such as the name of the gene, its coding strand, and a description of its function. [figure caption: lines and arrows display the relationships between fields as defined by the foreign key (fk) and primary key (pk) that connect two tables; each arrow points to the table containing the primary key. tables are color-coded according to the source of the information they contain: yellow, data obtained from the original genbank sequence record and the ictv eighth report; pink, data obtained from automated annotation or manual curation; blue, controlled vocabularies to ensure data consistency; green, administrative data.] the rdbms is able to enforce a series of rules for tables that are linked by defining relationships that ensure data integrity and accuracy. these relationships are defined by a foreign key in one table that links to corresponding data in another table defined by a primary key. in this example, the rdbms can check that every gene in the 'gene' table refers to an existing genome in the 'genome' table, by ensuring that each of these tables contains a matching 'genome_id'. since any one genome can code for many genes, many genes may contain the same 'genome_id'. this defines what is called a one-to-many relationship between the 'genome' and 'gene' tables. all of these relationships are identified in figure by arrows connecting the tables. because viruses have evolved a variety of alternative coding strategies such as splicing and rna editing, it is necessary to design the database so that these processes can be formally described. the 'gene_segment' table specifies the genomic location of the nucleotides that code for each gene. if a gene is coded in the traditional manner (one orf, one protein), then that gene would have one record in the 'gene_segment' table. however, as described above, if a gene is translated from a spliced transcript, it would be represented in the 'gene_segment' table by two or more records, each of which specifies the location of a single exon. if an rna transcript is edited by stuttering of the polymerase at a particular run of nucleotides, resulting in the addition of one or more nontemplated nucleotides, then that gene will also have at least two records in the 'gene_segment' table. in this case, the second 'gene_segment' record may overlap the last base of the first record for that gene. in this manner, an extra, nontemplated base becomes part of the final gene transcript. other more complex coding schemes can also be identified using this, or similar, database structures. the tables in figure are grouped according to the type of information they contain. though the database itself does not formally group tables in this manner, database schema diagrams are created to benefit database designers and users by enhancing their ability to understand the structure of the database. these diagrams make it easier to both populate the database with data and query the database for information.
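a minimal sketch of the genome-gene-gene_segment relationships just described, again using sqlite for brevity. the column set is deliberately simplified relative to the vgd, so names and values are illustrative only; the point is the one-to-many link enforced by the foreign key and the use of two 'gene_segment' records for a spliced gene.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # sqlite enforces foreign keys only when asked

con.executescript("""
CREATE TABLE genome (
    genome_id  INTEGER PRIMARY KEY,
    name       TEXT
);
CREATE TABLE gene (
    gene_id    INTEGER PRIMARY KEY,
    genome_id  INTEGER NOT NULL REFERENCES genome(genome_id),
    name       TEXT,
    strand     TEXT
);
CREATE TABLE gene_segment (
    segment_id INTEGER PRIMARY KEY,
    gene_id    INTEGER NOT NULL REFERENCES gene(gene_id),
    start_pos  INTEGER,
    end_pos    INTEGER
);
""")

con.execute("INSERT INTO genome VALUES (1, 'example virus genome')")
# one-to-many: both genes carry the same genome_id
con.execute("INSERT INTO gene VALUES (1, 1, 'polymerase', '+')")
con.execute("INSERT INTO gene VALUES (2, 1, 'spliced gene', '+')")
# a spliced gene is described by two gene_segment records, one per exon
con.executemany("INSERT INTO gene_segment VALUES (?, ?, ?, ?)",
                [(1, 2, 100, 250), (2, 2, 400, 900)])

# the rdbms rejects a gene that points at a genome which does not exist
try:
    con.execute("INSERT INTO gene VALUES (3, 99, 'orphan gene', '+')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```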
the core tables hold basic biological information about each viral strain and its genomic sequence (or sequences if the virus contains segmented genomes) as well as the genes coded for by each genome. the taxonomy tables provide the taxonomic classification of each virus. taxonomic designations are taken directly from the eighth report of the international committee on taxonomy of viruses (ictv). the 'gene properties' tables provide information related to the properties of each gene in the database. gene properties may be generated from computational analyses such as calculations of molecular weight and isoelectric point (pi) that are derived from the amino acid sequence. gene properties may also be derived from a manual curation process in which an investigator might identify, for example, functional attributes of a sequence based on evidence provided from a literature search. assignment of 'gene ontology' terms (see below) is another example of information provided during manual curation. the blast tables store the results of similarity searches of every gene and genome in the vgd searched against a variety of sequence databases using the national center for biotechnology information (ncbi) blast program. examples of search databases might include the complete genbank nonredundant protein database and/or a database comprised of all the protein sequences in the vgd itself. while most of us store our blast search results as files on our desktop computers, it is useful to store this information within the database to provide rapid access to similarity results for comparative purposes; to use these results to assign genes to orthologous families of related sequences; and to use these results in applications that analyze data in the database and, for example, display the results of an analysis between two or more types of viruses showing shared sets of common genes. finally, the 'admin' tables provide information on each new data release, an archive of old data records that have been subsequently updated, and a log detailing updates to the database schema itself. it is useful for database designers, managers, and data submitters to understand the types of information that each table contains and the source of that information. therefore, the database schema provided in figure is color-coded according to the type and source of information each table provides. yellow tables contain basic biological data obtained either directly from the genbank record or from other sources such as the ictv. pink tables contain data obtained as the result of either computational analyses (blast searches, calculations of molecular weight, functional motif similarities, etc.) or from manual curation. blue tables provide a controlled vocabulary that is used to populate fields in other tables. this ensures that a descriptive term used to describe some property of a virus has been approved for use by a human curator, is spelled correctly, and when multiple terms or aliases exist for the same descriptor, the same one is always chosen. while the use of a controlled vocabulary may appear trivial, in fact, misuse of terms, or even misspellings, can result in severe problems in computer-based databases. the computer does not know that the terms 'negative-sense rna virus' and 'negative-strand rna virus' may both be referring to the same type of virus. 
the provision and use of a controlled vocabulary increases the likelihood that these terms will be used properly, and ensures that the fields containing these terms will be easily comparable. for example, the 'geno-me_molecule' table contains the following permissible values for 'molecule_type': 'ambisense ssrna', 'dsrna', 'negative-sense ssrna', 'positive-sense ssrna', 'ssdna', and 'dsdna'. a particular viral genome must then have one of these values entered into the 'molecule_type' field of the 'genome' table, since this field is a foreign key to the 'molecule_type' primary key of the 'genome_molecule' table. entering 'double-stranded dna' would not be permissible. raw data obtained directly from high-throughput analytical techniques such as automated sequencing, protein interaction, or microarray experiments contain little-to-no information as to the content or meaning. the process of adding value to the raw data to increase the knowledge content is known as annotation and curation. as an example, the results of a microarray experiment may provide an indication that individual genes are up-or downregulated under certain experimental conditions. by annotating the properties of those genes, we are able to see that certain sets of genes showing coordinated regulation are a part of common biological pathways. an important pattern then emerges that was not discernable solely by inspection of the original data. the annotation process consists of a semiautomated analysis of the information content of the data and provides a variety of descriptive features that aid the process of assigning meaning to the data. the investigator is then able to use this analytical information to more closely inspect the data during a manual curation process that might support the reconstruction of gene expression or protein interaction pathways, or allow for the inference of functional attributes of each identified gene. all of this curated information can then be stored back in the database and associated with each particular gene. for each piece of information associated with a gene (or other biological entity) during the process of annotation and curation, it is always important to provide the evidence used to support each assignment. this evidence may be described in a standard operating procedure (sop) document which, much like an experimental protocol, details the annotation process and includes a description of the computer algorithms, programs, and analysis pipelines that were used to compile that information. each piece of information annotated by the use of this pipeline might then be coded, for example, 'iea: inferred from electronic annotation'. for information obtained from the literature during manual curation, the literature reference from which the information was obtained should always be provided along with a code that describes the source of the information. some of the possible evidence codes include 'ida: inferred from direct assay', 'igi: inferred from genetic interaction', 'imp: inferred from mutant phenotype', or 'iss: inferred from sequence or structural similarity'. these evidence codes are taken from a list provided by the gene ontology (go) consortium (see below) and as such represent a controlled vocabulary that any data curator can use and that will be understood by anyone familiar with the go database. 
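a sketch of how such an evidence vocabulary can be enforced at the database level follows; the evidence codes are those quoted above from the go list, while the table and column names, the example annotation and the literature identifier are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
CREATE TABLE evidence (
    code        TEXT PRIMARY KEY,
    description TEXT
);
CREATE TABLE gene_function (
    gene_id       INTEGER,
    function_note TEXT,
    evidence_code TEXT NOT NULL REFERENCES evidence(code),
    reference     TEXT
);
""")
con.executemany("INSERT INTO evidence VALUES (?, ?)", [
    ("IEA", "inferred from electronic annotation"),
    ("IDA", "inferred from direct assay"),
    ("ISS", "inferred from sequence or structural similarity"),
])

# every curated statement must carry an approved evidence code and, for manual
# curation, the literature reference the information came from
con.execute("INSERT INTO gene_function VALUES "
            "(1, 'RNA-dependent RNA polymerase', 'IDA', 'PMID:0000000')")

try:
    con.execute("INSERT INTO gene_function VALUES (2, 'membrane protein', 'GUESS', NULL)")
except sqlite3.IntegrityError as err:
    print("rejected annotation with unapproved evidence code:", err)
```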
this controlled evidence vocabulary is stored in the 'evidence' table, and each record in every one of the gene properties tables is assigned an evidence code noting the source of the annotation/curation data. as indicated above, the use of controlled vocabularies (ontologies) to describe the attributes of biological data is extremely important. it is only through the use of these controlled vocabularies that a consistent, documented approach can be taken during the annotation/curation process. and while there may be instances where creating your own ontology may be necessary, the use of already available, community-developed ontologies ensures that the ontological descriptions assigned to your database will be understood by anyone familiar with the public ontology. use of these public ontologies also ensures that they support comparative analyses with other available databases that also make use of the same ontological descriptions. the go consortium provides one of the most extensive and widely used controlled vocabularies available for biological systems. go describes biological systems in terms of their biological processes, cellular components, and molecular functions. the go effort is community-driven, and any scientist can participate in the development and refinement of the go vocabulary. currently, go contains a number of terms specific to viral processes, but these tend to be oriented toward particular viral families, and may not necessarily be the same terms used by investigators in other areas of virology. therefore it is important that work continues in the virus community to expand the availability and use of go terms relevant to all viruses. go is not intended to cover all things biological. therefore, other ontologies exist and are actively being developed to support the description of many other biological processes and entities. for example, go does not describe disease-related processes or mutants; it does not cover protein structure or protein interactions; and it does not cover evolutionary processes. a complementary effort is under way to better organize existing ontologies, and to provide tools and mechanisms to develop and catalog new ontologies. this work is being undertaken by the national center for biomedical ontologies, located at stanford university, with participants worldwide. the most comprehensive, well-designed database is useless if no method has been provided to access that database, or if access is difficult due to a poorly designed application. therefore, providing a search interface that meets the needs of intended users is critical to fully realizing the potential of any effort at developing a comprehensive database. access can be provided using a number of different methods ranging from direct query of the database using the relatively standardized 'structured query language' (sql), to customized applications designed to provide the ability to ask sophisticated questions regarding the data contained in the database and mine the data for meaningful patterns. web pages may be designed to provide simple-touse forms to access and query data stored in an rdbms. using the vgd schema as a data source, one example of an sql query might be to find the gene_id and name of all the proteins in the database that have a molecular weight between and , and also have at least one transmembrane region. 
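the query just described could be phrased roughly as follows. the table names, column names and weight cut-offs are invented for illustration, since they depend entirely on the schema in use; against a real vgd-like database the string would be passed to the dbms driver unchanged.

```python
# an illustrative sql query against a vgd-like schema: gene_id and name of all
# proteins whose predicted molecular weight lies in a chosen window and that
# carry at least one predicted transmembrane region
query = """
SELECT g.gene_id, g.name
FROM   gene            AS g
JOIN   gene_properties AS p ON p.gene_id = g.gene_id
WHERE  p.molecular_weight BETWEEN :mw_low AND :mw_high
  AND  EXISTS (SELECT 1
               FROM   protein_feature AS f
               WHERE  f.gene_id = g.gene_id
                 AND  f.feature_type = 'transmembrane region')
"""
params = {"mw_low": 15000.0, "mw_high": 30000.0}   # placeholder cut-offs

# with sqlite3 (or any other dbms driver) the query would be run as:
#   rows = con.execute(query, params).fetchall()
```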
many database providers also provide users with the ability to download copies of the database so that these users may analyze the data using their own set of analytical tools. when a user queries a database using any of the available access methods, the results of that query are generally provided in the form of a table where columns represent fields in the database and the rows represent the data from individual database records. tabular output can be easily imported into spreadsheet applications, sorted, manipulated, and reformatted for use in other applications. but while extremely flexible, tabular output is not always the best format to use to fully understand the underlying data and the biological implications. therefore, many applications that connect to databases provide a variety of visualization tools that display the data graphically, showing patterns in the data that may be difficult to discern using text-based output. an example of one such visual display is provided in figure and shows conservation of synteny between the genes of two different poxvirus species. the information used to generate this figure comes directly from the data provided in the vgd. every gene in the two viruses (in this case crocodilepox virus and molluscum contagiosum virus) has been compared to every other gene using the blast search program. the results of this search are stored in the blast tables of the vgd. in addition, the location of each gene within its respective genomic sequence is stored in the 'gene_segment' table. this information, once extracted from the database server, is initially text but it is then submitted to a program running on the server that reformats the data and creates a graph. in this manner, it is much easier to visualize the series of points formed along a diagonal when there are a series of similar genes with similar genomic locations present in each of the two viruses. these data sets may contain gene synteny patterns that display deletion, insertion, or recombination events during the course of viral evolution. these patterns can be difficult to detect with text-based tables, but are easy to discern using visual displays of the data. information provided to a user as the result of a database query may contain data derived from a combination of sources, and displayed using both visual and textual feedback. figure shows the web-based output of a query designed to display information related to a particular virus gene. the top of this web page displays the location of the gene on the genome visually showing surrounding genes on a partial map of the viral genome. basic gene information such as genome coordinates, gene name, and the nucleotide and amino acid sequence are also provided. this information was originally obtained from the original genbank record and then stored in the vgd database. data added as the result of an automated annotation pipeline are also displayed. this includes calculated values for molecular weight and pi; amino acid composition; functional motifs; blast similarity searches; and predicted protein structural properties such as transmembrane domains, coiled-coil regions, and signal sequences. finally, information obtained from a manual curation of the gene through an extensive literature search is also displayed. 
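as an aside on the synteny display described above (the gene-report discussion continues below), a sketch of how such a dot plot might be produced once the blast and gene-location tables have been queried. the coordinate pairs here are fabricated; in practice they would be the genomic midpoints of reciprocal best hits extracted from the database, and matplotlib is assumed to be available.

```python
import matplotlib.pyplot as plt

# each pair is (midpoint of a gene in virus A, midpoint of its best blast hit in virus B);
# in a real pipeline these values would come from the blast and gene_segment tables
hits = [(1500, 2100), (4800, 5300), (9200, 9800), (15500, 16400),
        (22000, 30500), (30800, 23200)]   # the last two pairs suggest a rearrangement

xs, ys = zip(*hits)
plt.scatter(xs, ys, s=12)
plt.xlabel("position in genome A (nt)")
plt.ylabel("position in genome B (nt)")
plt.title("conserved synteny between two genomes (illustrative data)")
plt.savefig("synteny_dotplot.png", dpi=150)
```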
curated information includes a mini review of gene function; experimentally determined gene properties such as molecular weight, pi, and protein structure; alternative names and aliases used in the literature; assignment of ontological terms describing gene function; the availability of reagents such as antibodies and clones; and also, as available, information on the functional effects of mutations. all of the information to construct the web page for this gene is directly provided as the result of a single database query. (the tables storing the manually curated gene information are not shown in figure .) obviously, compiling the data and entering it into the database required a substantial amount of effort, both computationally and manually; however, the information is now much more available and useful to the research scientist. no discussion of databases would be complete without considering errors. as in any other scientific endeavor, the data we generate, the knowledge we derive from the data, and the inferences we make as a result of the analysis of the data are all subject to error. these errors can be introduced at many points in the analytical chain. the original data may be faulty: using sequence data as one example, nucleotides in a dna sequence may have been misread or miscalled, or someone may even have mistyped the sequence. the database may have been poorly designed; a field in a table designed to hold sequence information may have been set to hold only characters, whereas the sequences imported into that field may be longer than nucleotides. the sequences would have then been automatically truncated to characters, resulting in the loss of data. the curator may have mistyped an enzyme commission (ec) number for an rna polymerase, or may have incorrectly assigned a genomic sequence to the wrong taxonomic classification. or even more insidious, the curator may have been using annotations provided by other groups that had justified their own annotations on the basis of matches to annotations provided by yet another group. such chains of evidence may extend far back, and the chance of propagating an early error increases with time. such error propagation can be widespread indeed, affecting the work of multiple sequencing centers and database creators and providers. this is especially true given the dependencies of genomic sequence annotations on previously published annotations. the possible sources of errors are numerous, and it is the responsibility of both the database provider and the user to be aware of, and on the lookout for, errors. the database provider can, with careful database and application design, apply error-checking routines to many aspects of the data storage and analysis pipeline. the code can check for truncated sequences, interrupted open reading frames, and nonsense data, as well as data annotations that do not match a provided controlled vocabulary. but the user should always approach any database or the output of any application with a little healthy skepticism. the user is the final arbiter of the accuracy of the information, and it is their responsibility to look out for inconsistent or erroneous results that may indicate either a random or systemic error at some point in the process of data collection and analysis. it is not feasible to provide a comprehensive and current list of all available databases that contain virus-related information or information of use to virus researchers. 
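before that survey, a sketch of the kind of automated consistency checks a database provider might run on incoming records. the checks, field names and controlled vocabulary below are illustrative; a production pipeline would be far more thorough.

```python
ALLOWED_MOLECULE_TYPES = {"ssDNA", "dsDNA", "ssRNA", "dsRNA"}
STOP_CODONS = {"TAA", "TAG", "TGA"}

def check_record(rec):
    """return a list of problems found in one (illustrative) sequence record."""
    problems = []
    seq = rec["sequence"].upper()
    # truncated or mis-typed sequence
    if len(seq) != rec["declared_length"]:
        problems.append("declared length does not match stored sequence")
    if set(seq) - set("ACGTN"):
        problems.append("sequence contains non-nucleotide characters")
    # interrupted open reading frame (internal stop codon)
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    if any(c in STOP_CODONS for c in codons[:-1]):
        problems.append("annotated orf contains an internal stop codon")
    # annotation that does not match the controlled vocabulary
    if rec["molecule_type"] not in ALLOWED_MOLECULE_TYPES:
        problems.append("molecule_type not in controlled vocabulary")
    return problems

record = {"sequence": "ATGAAATAGGGCTAA", "declared_length": 15,
          "molecule_type": "double-stranded dna"}
print(check_record(record))
```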
new databases appear on a regular basis; existing databases either disappear or become stagnant and outdated; or databases may change focus and domains of interest. any resource published in book format attempting to provide an up-to-date list would be out-of-date on the day of publication. even web-based lists of database resources quickly become out-of-date due to the rapidity with which available resources change, and the difficulty and extensive effort required to keep an online list current and inclusive. therefore, our approach in this article is to provide an overview of the types of data that are obtainable from available biological databases, and to list some of the more important database resources that have been available for extended periods of time and, importantly, remain current through a process of continual updating and refinement. we should also emphasize that the use of web-based search tools such as google, various web logs (blogs), and news groups, can provide some of the best means of locating existing and newly available web-based information sources. information contained in databases can be used to address a wide variety of problems. a sampling of the areas of research facilitated by virus databases includes . taxonomy and classification; . host range, distribution, and ecology; . evolutionary biology; . pathogenesis; . host-pathogen interaction; . epidemiology; . disease surveillance; . detection; . prevention; . prophylaxis; . diagnosis; and . treatment. addressing these problems involves mining the data in an appropriate database in order to detect patterns that allow certain associations, generalizations, cause-effect relationships, or structure-function relationships to be discerned. table provides a list of some of the more useful and stable database resources of possible interest to virus researchers. below, we expand on some of this information and provide a brief discussion concerning the sources and intended uses of these data sets. the two major, overarching collections of biological databases are at the ncbi, supported by the national library of medicine at the nih, and the embl, part of the european bioinformatics institute. these large data repositories try to be all-inclusive, acting as the primary source of publicly available molecular biological data for the scientific community. in fact, most journals require that, prior to publication, investigators submit their original sequence data to one of these repositories. in addition to sequence data, ncbi and embl (along with many other data repositories) include a large variety of other data types, such as that obtained from gene-expression experiments and studies investigating biological structures. journals are also extending the requirement for data deposition to some of these other data types. note that while much of the data available from these repositories is raw data obtained directly as the result of experimental investigation in the laboratory, a variety of 'valueadded' secondary databases are also available that take primary data records and manipulate or annotate them in some fashion in order to derive additional useful information. when an investigator is unsure about the existence or source of some biological data, the ncbi and embl websites should serve as the starting point for locating such information. the ncbi entrez search engine provides a powerful interface to access all information contained in the various ncbi databases, including all available sequence records. 
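a sketch of a programmatic entrez query using the biopython wrapper, assuming the biopython package is installed and ncbi usage guidelines (including supplying a contact e-mail address) are respected; the search term and retrieval limits are only examples.

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"   # required by ncbi usage policy

# find a handful of nucleotide records matching a free-text query
search = Entrez.esearch(db="nucleotide",
                        term="coronavirus[Organism] AND complete genome",
                        retmax=3)
result = Entrez.read(search)
search.close()
ids = result["IdList"]

# retrieve the matching records in genbank flat-file format
fetch = Entrez.efetch(db="nucleotide", id=",".join(ids), rettype="gb", retmode="text")
print(fetch.read()[:500])   # first few hundred characters of the first record
fetch.close()
```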
a search engine such as google might also be used if ncbi and embl fail to locate the desired information. of course pubmed, the repository of literature citations maintained at ncbi, also represents a major reference site for locating biological information. finally, the journal nucleic acids research (nar) publishes an annual 'database' issue and an annual 'web server' issue that are excellent references for finding new biological databases and websites. and while the most recent nar database or web server issue may contain articles on a variety of new and interesting databases and websites, be sure to also look at issues from previous years. older issues contain articles on many existing sites that may not necessarily be represented in the latest there are several websites that serve to provide general virus-specific information and links of use to virus researchers. one of these is the ncbi viral genomes project, which provides an overview of all virus-related ncbi resources including taxonomy, sequence, and reference information. links to other sources of viral data are provided, as well as a number of analytical tools that have been developed to support viral taxonomic classification and sequence clustering. another useful site is the all the virology on the www website. this site provides numerous links to other virus-specific websites, databases, information, news, and analytical resources. it is updated on a regular basis and is therefore as current as any site of this scope can be. one of the strengths of storing information within a database is that information derived from different sources or different data sets can be compared so that important common and distinguishing features can be recognized. such comparative analyses are greatly aided by having a rigorous classification scheme for the information being studied. the international union of microbiological societies has designated the international committee on taxonomy of viruses (ictv) as the official body that determines taxonomic classifications for viruses. through a series of subcommittees and associated study groups, scientists with expertise on each viral species participate in the establishment of new taxonomic groups, assignment of new isolates to existing or newly established taxonomic groups, and reassessment of existing assignments as additional research data become available. the ictv uses more than individual characteristics for classification, though sequence homology has gained increasing importance over the years as one of the major classifiers of taxonomic position. currently, as described in its eighth report, the ictv recognizes orders, families, genera, and species of viruses. the ictv officially classifies viral isolates only to the species level. divisions within species, such as clades, subgroups, strains, isolates, types, etc., are left to others. the ictv classifications are available in book form as well as from an online database. this database, the ictvdb, contains the complete taxonomic hierarchy, and assigns each known viral isolate to its appropriate place in that hierarchy. descriptive information on each viral species is also available. the ncbi also provides a web-based taxonomy browser for access to taxonomically specified sets of sequence records. ncbi's viral taxonomy is not completely congruent with that of ictv, but efforts have been under way to ensure congruency with the official ictv classification. 
the primary repositories of existing sequence information come from the three organizations that comprise the international nucleotide sequence database collaboration. these three sites are genbank (maintained at ncbi), embl, and the dna data bank of japan (ddbj). because all sequence information submitted to any one of these entities is shared with the others, a researcher need query only one of these sites to get the most up-to-date set of available sequences. genbank stores all publicly available nucleotide sequences for all organisms, as well as viruses. this includes whole-genome sequences as well as partial-genome and individual coding sequences. sequences are also available from largescale sequencing projects, such as those from shotgun sequencing of environmental samples (including viruses), and high-throughput low-and high-coverage genomic sequencing projects. ncbi provides separate database divisions for access to these sequence datasets. the sequence provided in each genbank record is the distillation of the raw data generated by (in most cases these days) automated sequencing machines. the trace files and base calls provided by the sequencers are then assembled into a collection of contiguous sequences (contigs) until the final sequence has been assembled. in recognition of the fact that there is useful information contained in these trace files and sequence assemblies (especially if one would like to look for possible sequencing errors or polymorphisms), ncbi now provides separate trace file and assembly archives for genbank sequences when the laboratory responsible for generating the sequence submits these files. currently, the only viruses represented in these archives are influenza a, chlorella, and a few bacteriophages. an important caveat in using data obtained from gen-bank or other sources is that no sequence data can be considered to be % accurate. furthermore, the annotation associated with the sequence, as provided in the genbank record, may also contain inaccuracies or be outof-date. genbank records are provided and maintained by the group originally submitting the sequence to genbank. genbank may review these records for obvious errors and formatting mistakes (such as the lack of an open reading frame where one is indicated), but given the large numbers of sequences being submitted, it is impossible to verify all of the information in these records. in addition, the submitter of a sequence essentially 'owns' that sequence record and is thus responsible for all updates and corrections. ncbi generally will not change any of the information in the genbank record unless the sequence submitter provides the changes. in some cases, sequence annotations will be updated and expanded, but many, if not most, records never change following their initial submission. (these facts emphasize the responsibility that submitters of sequence data have to ensure the accuracy of their original submission and to update their sequence data and annotations as necessary.) therefore, the user of the information has the responsibility to ensure, to the extent possible, its accuracy is sufficient to support any conclusions derived from that information. 
in recognition of these problems, ncbi established the reference sequence (refseq) database project, which attempts to provide reference sequences for genomes, genes, mrnas, proteins, and rna sequences that can be used, in ncbi's words, as ''a stable reference for gene characterization, mutation analysis, expression studies, and polymorphism discovery''. refseq records are manually curated by ncbi staff, and therefore should provide more current (and hopefully more accurate) sequence annotations to support the needs of the research community. for viruses, refseq provides a complete genomic sequence and annotation for one representative isolate of each viral species. ncbi solicits members of the research community to participate as advisors for each viral family represented in refseq, in an effort to ensure the accuracy of the refseq effort. in addition to the nucleotide sequence databases mentioned above, uniprot provides a general, all-inclusive protein sequence database that adds value through annotation and analysis of all the available protein sequences. uniprot represents a collaborative effort of three groups that previously maintained separate protein databases (pir, swissprot, and trembl). these groups, the national biomedical research foundation at georgetown university, the swiss institute of bioinformatics, and the european bioinformatics institute, formed a consortium in to merge each of their individual databases into one comprehensive database, uniprot. uniprot data can be queried by searching for similarity to a query sequence, or by identifying useful records based on the text annotations. sequences are also grouped into clusters based on sequence similarity. similarity of a query sequence to a particular cluster may be useful in assigning functional characteristics to sequences of unknown function. ncbi also provides a protein sequence database (with corresponding refseq records) consisting of all protein-coding sequences that have been annotated within all genbank nucleotide sequence records. the above-mentioned sequence databases are not limited to viral data, but rather store sequence information for all biological organisms. in many cases, access to nonviral sequences is necessary for comparative purposes, or to study virus-host interactions. but it is frequently easier to use virus-specific databases when they exist, to provide a more focused view of the data that may simplify many of the analyses of interest. table lists many of these virus-specific sites. sites of note include the nih-supported bioinformatics resource centers for biodefense and emerging and reemerging infectious diseases (brcs). the brcs concentrate on providing databases, annotations, and analytical resources on nih priority pathogens, a list that includes many viruses. in addition, the lanl has developed a variety of viral databases and analytical resources including databases focusing on hiv and influenza. for plant virologists, the descriptions of plant viruses (dpv) website contains a comprehensive database of sequence and other information on plant viruses. the three-dimensional structures for quite a few viral proteins and virion particles have been determined. these structures are available in the primary database for experimentally determined structures, the protein data bank (pdb). the pdb currently contains the structures for more than viral proteins and viral protein complexes out of total structures. several virus-specific structure databases also exist. 
these include the viperdb database of icosahedral viral capsid structures, which provides analytical and visualization tools for the study of viral capsid structures; virus world at the institute for molecular virology at the university of wisconsin, which contains a variety of structural images of viruses; and the big picture book of viruses, which provides a catalog of images of viruses, along with descriptive information. ultimately, the biology of viruses is determined by genomic sequence (with a little help from the host and the environment). nucleotide sequences may be structural, functional, regulatory, or protein coding. protein sequences may be structural, functional, and/or regulatory, as well. patterns specified in nucleotide or amino acid sequences can be identified and associated with many of these biological roles. both general and virus-specific databases exist that map these roles to specific sequence motifs. most also provide tools that allow investigators to search their own sequences for the presence of particular patterns or motifs characteristic of function. general databases include the ncbi conserved domain database; the pfam (protein family) database of multiple sequence alignments and hidden markov models; and the prosite database of protein families and domains. each of these databases and associated search algorithms differ in how they detect a particular search motif or define a particular protein family. it can therefore be useful to employ multiple databases and search methods when analyzing a new sequence (though in many cases they will each detect a similar set of putative functional motifs). interpro is a database of protein families, domains, and functional sites that combines many other existing motif databases. interpro provides a search tool, interproscan, which is able to utilize several different search algorithms dependent on the database to be searched. it allows users to choose which of the available databases and search tools to use when analyzing their own sequences of interest. a comprehensive report is provided that not only summarizes the results of the search, but also provides a comprehensive annotation derived from similarities to known functional domains. all of the above databases define functional attributes based on similarities in amino acid sequence. these amino acid similarities can be used to classify proteins into functional families. placing proteins into common functional families is also frequently performed by grouping the proteins into orthologous families based on the overall similarity of their amino acid sequence as determined by pairwise blast comparisons. two virus-specific databases of orthologous gene families are the viral clusters of orthologous groups database (vogs) at ncbi, and the viral orthologous clusters database (vocs) at the viral bioinformatics resource center and viral bioinformatics, canada. many other types of useful information, both general and virus-specific, have been collected into databases that are available to researchers. these include databases of gene-expression experiments (ncbi gene expression omnibus -geo); protein-protein interaction databases, such as the ncbi hiv protein-interaction database; the immune epitope database and analysis resource (iedb) at the la jolla institute for allergy and immunology; and databases and resources for defining and visualizing biological pathways, such as metabolic, regulatory, and signaling pathways. 
these pathway databases include reactome at the cold spring harbor laboratory, new york; biocyc at sri international, menlo park, california; and the kyoto encyclopedia of genes and genomes (kegg) at kyoto university in japan. as indicated above, the information contained in a database is useless unless there is some way to retrieve that information from the database. in addition, having access to all of the information in every existing database would be meaningless unless tools are available that allow one to process and understand the data contained within those databases. therefore, a discussion of virus databases would not be complete without at least a passing reference to the tools that are available for analysis. to populate a database such as the vgd with sequence and analytical information, and to utilize this information for subsequent analyses, requires a variety of analytical tools, including programs for (1) sequence record reformatting, (2) database import and export, (3) sequence similarity comparison, (4) gene prediction and identification, (5) detection of functional motifs, (6) comparative analysis, (7) multiple sequence alignment, (8) phylogenetic inference, (9) structural prediction, and (10) visualization. sources for some of these tools have already been mentioned, and many other tools are available from the same websites that provide many of the databases listed in table . the goal of all of these sites that make available data and analytical tools is to provide, or enable the discovery of, knowledge, rather than simply providing access to data. only in this manner can the ultimate goal of biological understanding be fully realized. see also: evolution of viruses; phylogeny of viruses; taxonomy, classification and nomenclature of viruses; virus classification by pairwise sequence comparison (pasc). key: cord- -ylcv zvl title: modelling information-dependent social behaviors in response to lockdowns: the case of covid- epidemic in italy date: - - cord_uid: ylcv zvl the covid- pandemic started in january has not only threatened world public health, but severely impacted almost every facet of life, including behavioral and psychological aspects. in this paper we focus on the 'human element' and propose a mathematical model to investigate the effects on the covid- epidemic of social behavioral changes in response to lockdowns.
we consider a seir-like epidemic model in which the contact and quarantine rates are assumed to depend on the available information and rumors about the disease status in the community. the model is applied to the case of the covid- epidemic in italy. we consider the period that stretches between february , when the first bulletin by the italian civil protection was reported, and may , when the lockdown restrictions were mostly removed. the role played by the information-related parameters is determined by evaluating how they affect suitable outbreak-severity indicators. we estimated that citizens' compliance with mitigation measures played a decisive role in curbing the epidemic curve by preventing a doubling of deaths and about % more contagions. in december , the municipal health commission of wuhan, china, reported to the world health organization a cluster of viral pneumonia of unknown aetiology in wuhan city, hubei province. on january , , the china cdc reported that the respiratory disease, later named covid , was caused by the novel coronavirus sars-cov [ ] . the outbreak of covid rapidly expanded from hubei province to the rest of china and then to other countries. finally, it developed into a devastating pandemic affecting almost all the countries of the world [ ]. as of may , a total of , million cases of covid and , related deaths have been reported worldwide [ ] . in the absence of treatment and vaccine, the mitigation strategy enforced by many countries during the covid pandemic has been based on social distancing. each government enacted a series of restrictions affecting billions of people, including recommendations of restricted movements for some or all of their citizens, and localized or national lockdowns with the partial or full closing-off of nonessential companies and manufacturing plants [ ] . italy was the first european country affected by covid . the country has been strongly hit by the epidemic, which has triggered progressively stricter restrictions aimed at minimizing the spread of the coronavirus. the actions enacted by the italian government began with reducing social interactions through quarantine and isolation and culminated in a full lockdown [ , ] . on may , , phase two began, marking a gradual reopening of the economy and an easing of restrictions for residents. one week later, shops also reopened and the restrictions on mobility were essentially eliminated, with the only obligation in many regions being the use of protective masks [ ] . during the period that stretches between january , and may , , italy suffered , official covid cases and , deaths [ ] . the scientific community has promptly reacted to the covid pandemic. since the early stage of the pandemic, a number of mathematical models and methods have been used. among the main concerns raised were: predicting the evolution of the covid pandemic wave worldwide or in specific countries [ , , ] ; predicting epidemic peaks and icu accesses [ ] ; assessing the effects of containment measures [ , , , , , , ] and, more generally, assessing the impact on populations in terms of economics, societal needs, employment, health care, death toll, etc. [ , ] . among the mathematical approaches used, many authors relied on deterministic compartmental models. this approach proved successful in reproducing epidemic curves in the past sars-cov outbreak in [ ] and has been employed also for covid . specific studies focused on the case of the epidemic in italy: gatto et al.
[ ] studied the transmission between a network of italian provinces by using a sepia model as core model. their sepia model discriminates between infectious individuals depending on presence and severity of their symptoms. they examined the eects of the intervention measures in terms of number of averted cases and hospitalizations in the period february march , . giordano et al. [ ] proposed an even more detailed model, named sidarthe, in which the distinction between diagnosed and nondiagnosed individuals plays an important role. they used the sidarthe model to predict the course of the epidemic and to show the need to use testing and contact tracing combined to social distancing measures. the mitigation measures like social distancing, quarantine and selfisolation may be encouraged or mandated [ ] . however, although the vast majority of people were following the rules, even in this last case there are many reports of people breaching restrictions [ , ] . local authorities needed to continuously verifying compliance with mitigation measures through monitoring by health ocials and police actions (checkpoints, use of drones, ne or jail threats, etc). this behavior might be related to costs that individuals aected by epidemic control measures pay in terms of health, including loss of social relationships, psychological pressure, increasing stress and health hazards resulting in a substantial damage to population wellbeing [ , , ] . as far as we know, the mathematically oriented papers on covid nowadays available in the literature do not explicitly take into account of the fraction of individuals that change their social behaviors solely in response to social alarm. from a mathematical point of view, the change in social behaviors may be described by employing the method of informationdependent models [ , ] which is based on the introduction of a suitable information index m (t) (see [ , ] ). this method has been applied to vaccine preventable childhood diseases [ , ] and is currently under development (see [ , , ] for very recent contributions). in this paper, the main goal is to assess the eects on the covid epidemic of human behavioral changes during the lockdowns. to this aim we build up an informationdependent seirlike model which is based on the key assumption that the choice to respect the lockdown restrictions, specically the social distance and the quarantine, is partially determined on fully voluntary basis and depends on the available information and rumors concerning the spread of the covid disease in the community. a second goal of this manuscript is to provide an application of the information index to a specic eldcase, where the model is parametrized and the solutions compared with ocial data. we focus on the case of covid epidemic in italy during the period that begins on february , , when the rst bulletin by the italian civil protection was reported [ ] , includes the partial and full lockdown restrictions, and ends on may , when the lockdown restrictions have been mostly removed. we stress the role played by circulating information by evaluating the absolute and relative variations of diseaseseverity indicators as functions of the informationrelated parameters. the rest of the paper is organized as follows: in section we introduce the model balance equations and informationdependent processes. two critical epidemiological thresholds are computed in section . section is devoted to model parametrization for numerical simulations, that are then shown and discussed in section . 
conclusions and future perspectives are given in section . model formulation. we assume that the total population is divided into seven disjoint compartments, susceptibles s, exposed e, presymptomatic i p , asymptomatic/mildly symptomatic i m , severely symptomatic (hospitalized) i s , quarantined q and recovered r. any individual of the population belongs to one (and only one) compartment. the size of each compartment at time t represents a state variable of the mathematical model. the state variables and the processes included in the model are illustrated in the flow chart in fig. . [figure caption: flow chart for the covid model ( )( ). the population is divided into seven disjoint compartments of individuals: susceptible s(t), exposed e(t), presymptomatic i p (t), asymptomatic/mildly symptomatic i m (t), severely symptomatic/hospitalized i s (t), quarantined q(t) and recovered r(t). blue colour indicates the information-dependent processes in the model (see ( )( )( ), with m (t) ruled by ( )).] the model is given by the following system of nonlinear ordinary differential equations, where each balance equation rules the rate of change of a state variable. the model formulation is described in detail in the next subsections. as mentioned in the introduction, the mitigation strategy enforced by many countries during the covid pandemic has been based on social distancing and quarantine. motivated by the discussion above, we assume that the final choice to adhere or not to adhere to lockdown restrictions is partially determined on a fully voluntary basis and depends on the available information and rumors concerning the spread of the disease in the community. from a mathematical point of view, we describe the change in social behaviors by employing the method of information-dependent models [ , ] . the information is mathematically represented by an information index m (t) (see appendix a for the general definition), which summarizes the information about the current and past values of the disease [ , ] and is given by the following distributed delay. this formulation may be interpreted as follows: the first-order erlang distribution erl ,a (x) represents an exponentially fading memory, where the parameter a is the inverse of the average time delay of the collected information on the status of the disease in the community (see appendix a). on the other hand, we assume that people react in response to information and rumors regarding the daily number of quarantined and hospitalized individuals. the information coverage k is assumed to be positive and k ≤ , which mimics the evidence that covid official data could be underreported in many cases [ , ] . with this choice, by applying the linear chain trick [ ] , we obtain the differential equation ruling the dynamics of the information index m, Ṁ. in this section we derive in detail each balance equation of model ( ). susceptibles are the individuals who are healthy but can contract the disease. demography is incorporated in the model so that a net inflow rate bn due to births is considered, where b is the birth rate and n denotes the total population at the beginning of the epidemic. we also consider an inflow term due to immigration, Λ . since global travel restrictions were implemented during the covid epidemic outbreak [ ] , we assume that Λ accounts only for the repatriation of citizens to their countries of origin due to the covid pandemic [ ] . in all airports, train stations, ports and land borders, travellers' health conditions were tested via thermal scanners.
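before continuing with the balance equations (the discussion of border screening resumes below), a sketch of one plausible reading of the information-index dynamics described above, written as code. under the linear chain trick, the distributed delay with a first-order erlang kernel reduces to a single ordinary differential equation; the driving signal, the parameter values and the functional form of the reported-case pulse below are all illustrative and are not the calibrated quantities of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

a = 1.0 / 3.0   # inverse of the average information delay (illustrative: 3 days)
k = 0.8         # information coverage (illustrative; k <= 1 mimics under-reporting)

def reported_signal(t):
    # a stand-in for k * (Q(t) + I_s(t)): a synthetic pulse of reported cases near day 30
    return k * 5000.0 * np.exp(-((t - 30.0) / 10.0) ** 2)

# linear chain trick: the exponentially fading memory is equivalent to
#   dM/dt = a * (k*(Q + I_s) - M)
def dM_dt(t, M):
    return a * (reported_signal(t) - M)

sol = solve_ivp(dM_dt, (0.0, 100.0), [0.0], dense_output=True)
t = np.linspace(0.0, 100.0, 5)
print(np.round(sol.sol(t)[0], 1))   # M lags behind and smooths the raw signal
```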
although the effectiveness of such a screening method is largely debated [ ] , for the sake of simplicity, we assume that the inflow enters only into the susceptibles compartment. in summary, we assume that the total inflow rate Λ is given by the sum of the birth and immigration terms introduced above. the susceptible population decreases by natural death, with death rate µ, and following infection. it is believed that covid is primarily transmitted from symptomatic people (mildly or severely symptomatic). in particular, although severely symptomatic individuals are isolated from the general population by hospitalization, they are still able to infect hospital and medical personnel [ , ] and, in turn, give rise to transmission from hospital to the community. the presymptomatic transmission (i.e. the transmission from infected people before they develop significant symptoms) is also relevant: specific studies revealed an estimate of % of secondary cases arising during the presymptomatic stage of index cases [ ] . on the contrary, the asymptomatic transmission (i.e. the contagion from a person infected with covid who does not develop symptoms) seems to play a negligible role [ ] . we also assume that quarantined individuals are fully isolated and therefore unable to transmit the disease. the routes of transmission from covid patients as described above are included in the force of infection (foi) function, i.e. the per capita rate at which susceptibles contract the infection. the transmission coefficients for these three classes of infectious individuals are information-dependent and given by ε p β(m ), ε m β(m ) and ε s β(m ), respectively, with ≤ ε p , ε m , ε s < . the function β(m ), which models how the information affects the transmission rate, is defined as follows: the baseline transmission rate β(·) is a piecewise continuous, differentiable and decreasing function of the information index m , with β(max(m )) > . we assume a form in which π is the probability of getting infected during a person-to-person contact and c b is the baseline contact rate. in ( ) the reduction in social contacts is assumed to be the sum of a constant rate c , which represents the individuals' choice to self-isolate regardless of rumors and information about the status of the disease in the population, and an information-dependent rate c (m ), with c (·) increasing in m and c ( ) = . in order to guarantee positiveness, we assume c b > c + max(c (m )). following [ ] , we finally set the explicit forms of c (m ) and β(m ). presymptomatic individuals are infectious people that have not yet developed significant symptoms. such individuals lie in a stage between the exposed and the expected symptomatic ones. they remain in this compartment, i p , during the postlatent incubation period and diminish due to natural death or progress to become asymptomatic or symptomatic infectious individuals (at a rate η). this compartment includes both the asymptomatic individuals, that is, infected individuals who do not develop symptoms, and mildly symptomatic individuals [ ] . as mentioned above, the asymptomatic transmission seems to play a negligible role in covid transmission. however, asymptomatic individuals are infected people who result in positive cases at screening (positive pharyngeal swabs) and therefore enter the official data count of confirmed diagnoses.
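as an aside on the information-dependent transmission rate introduced above (the description of the asymptomatic/mildly symptomatic class continues below), a sketch of how a decreasing β(m ) could be coded. the saturating choice for the information-dependent contact reduction and all parameter values are placeholders; the paper's own functional form and calibrated values are not reproduced here.

```python
# illustrative values: pi is a per-contact transmission probability, c_b the
# baseline contact rate, c0 an information-independent reduction in contacts
pi, c_b, c0 = 0.05, 15.0, 3.0
c1_max, D = 8.0, 2.0e-4    # hypothetical bounded, information-dependent reduction

def c1(M):
    # one possible increasing, bounded choice with c1(0) = 0; c_b > c0 + c1_max
    # keeps the contact rate (and hence beta) positive, as required in the text
    return c1_max * D * M / (1.0 + D * M)

def beta(M):
    # transmission rate = probability per contact * information-dependent contact rate
    return pi * (c_b - c0 - c1(M))

for M in (0.0, 1.0e3, 1.0e4, 1.0e5):
    print(f"M = {M:8.0f}   beta(M) = {beta(M):.4f}")
```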
members of this class come from the presymptomatic stage and leave it due to quarantine (at an information-dependent rate γ(m )), worsening symptoms (at rate σ m ) and recovery (at rate ν m ). equation ( e): severely symptomatic individuals (hospitalized), i s (t). severely symptomatic individuals are isolated from the general population by hospitalization. they arise: (i) as a consequence of the development of severe symptoms by mild illness (the infectious of the class i m or the quarantined q); (ii) directly from the fraction − p of presymptomatic individuals that rapidly develop severe illness. this class diminishes by recovery (at rate ν s ), natural death and disease-induced death (at rate d). quarantined individuals q are those who are separated from the general population. we assume that quarantined individuals are asymptomatic/mildly symptomatic individuals. this population is diminished by natural deaths, aggravation of symptoms (at rate σ q , so that they move to i s ) and recovery (at a rate ν q ). for simplicity, we assume that the quarantine is % effective, i.e. with no possibility of contagion. quarantine may arise in two different ways. on the one hand, individuals may be detected by health authorities and daily checked. such active health surveillance also ensures that the quarantine is, to some extent, respected. on the other hand, a fraction of quarantined individuals choose self-isolation since they are confident in the government handling of the crisis or just believe the public health messaging and act in accordance [ ] . as mentioned in subsection . , we assume that the final choice to respect or not respect the self-isolation depends on the awareness about the status of the disease in the community. therefore, we define the information-dependent quarantine rate as follows. we assume that the rate γ mimics the fraction of the asymptomatic/mildly symptomatic individuals i m that has been detected through screening tests and is 'forced' into home isolation, while the rate γ (m ) represents the undetected fraction of individuals that adopt quarantine by voluntary choice as a result of the influence of the circulating information m . the function γ (·) is required to be a continuous, differentiable and increasing function w.r.t. m , with γ ( ) = . as in [ , ] , we set a form with d > and < ζ < − γ , potentially implying a roof of − ζ on the quarantine rate under circumstances of high perceived risk. a representative trend of γ(m ) is displayed in fig. , bottom panel. after the infectious period, individuals from the compartments i m , i s and q recover at rates ν m , ν s and ν q , respectively. the natural death rate is also considered. we assume that individuals who recover from covid acquire long-lasting immunity, although this is a currently debated question (as of may, ) and there is still no evidence that covid antibodies may protect from reinfection [ ]. a frequently used indicator for measuring the potential spread of an infectious disease in a community is the basic reproduction number, r , namely the average number of secondary cases produced by one primary infection over the course of the infectious period in a fully susceptible population.
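before moving on to the reproduction numbers, a sketch of the information-dependent quarantine rate introduced above. the saturating expression below is one choice consistent with the stated properties (increasing in m , equal to γ at m = , and bounded by the roof − ζ); it is not claimed to be the exact form used in the paper, and the parameter values are placeholders.

```python
gamma0 = 0.06   # mandatory (screening-driven) quarantine rate, illustrative
zeta   = 0.4    # sets the ceiling 1 - zeta of the total quarantine rate
D      = 5.0e-4 # reactivity to information, illustrative

def gamma(M):
    # a saturating, increasing choice: gamma(0) = gamma0 and gamma(M) -> 1 - zeta
    # as M grows large, matching the "roof" described in the text
    return gamma0 + (1.0 - zeta - gamma0) * D * M / (1.0 + D * M)

for M in (0.0, 1.0e3, 1.0e4, 1.0e6):
    print(f"M = {M:10.0f}   gamma(M) = {gamma(M):.3f}")
```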
if the system incorporates control strategies, then the corresponding quantity is named the control reproduction number and usually denoted by r c (obviously, r c < r ). the reproduction number can be calculated as the spectral radius of the next generation matrix f v^−1, where f and v are defined as the jacobian matrices of the new-infections appearance and of the other rates of transfer, respectively, calculated for the infected compartments at the disease-free equilibrium [ , ]. in the specific case, if β(m) = β b and γ(m) = in ( )–( ), namely when containment interventions are not enacted, we obtain the expression of r ; otherwise, the corresponding r c can be computed. simple algebra yields the expressions in ( )–( ) (for more details see appendix b). the first two terms in the r.h.s. of ( ) describe the contributions of presymptomatic infectious and asymptomatic/mildly symptomatic infectious individuals, respectively, to the production of new infections close to the disease-free equilibrium. the last three terms represent the contribution of infectious individuals with severe symptoms, which could onset soon after the incubation phase or more gradually after a moderate symptomatic phase or even during the quarantine period. note that the latter term is missing in the basic reproduction number ( ), where the possibility for people to be quarantined is excluded. note also that r c = r when β = γ = . numerical simulations are performed in the matlab environment [ ] with the use of platform-integrated functions. a detailed model parametrization is given in the next subsections. the epidemiological parameters of the model as well as their baseline values are reported in table . the most recent data by the italian national institute of statistics [ ] refer to january and provide a country-level birth rate b = . / years^−1 and a death rate µ = . / years^−1, as well as a resident population of about n̄ ≈ . · 10^ inhabitants ( ). fluctuations in a time window of just over a year are considered negligible. the immigration inflow term Λ accounts for the repatriation of italians abroad. on the basis of data communicated by the italian ministry of foreign affairs and international cooperation [ ], a reasonable value for Λ seems to be Λ = / days^−1, namely the average number of repatriated citizens is per week. from ( ), we finally obtain Λ ≈ . · days^−1. epidemiological data are based on the current estimates disseminated by national and international health organizations [ , , , , , ] or inferred by modelling studies [ , , ]. more precisely, the median incubation period is estimated to be days, with a range from to days, and identification of the virus in respiratory tract specimens occurred days before the onset of symptoms [ , ]. hence, we set the latency (ρ) and pre-latency (η) rates to / . days^−1 and / . days^−1, respectively. from [ ], the specific information-independent transmission rates for the presymptomatic (ε p β b ), asymptomatic/mildly symptomatic (ε m β b ) and severely symptomatic (ε s β b ) cases are such that ε m /ε p = . and ε s /ε m = . .
they are in accordance with the observation of high viral load close to symptoms onset (suggesting that sars-cov can be easily transmitted at an early stage of infection), and with the absence of a reported significant difference in viral load between presymptomatic and symptomatic patients [ ]. we set β b = . days^−1, which, together with the other parameters, leads to the basic reproduction number r ≈ . , a value falling within the ranges estimated in [ , , , ]. as in [ ], we consider that just % of infectious individuals show serious symptoms immediately after the incubation phase, yielding p = . . nonetheless, people with initial mild symptoms may become seriously ill and develop breathing difficulties, requiring hospitalization. it is estimated that about in people with covid show a worsening of symptoms [ ] within days from onset [ ], giving σ m = . / . ≈ . days^−1. instead, the possibility that the aggravation happens during the quarantine period is assumed to be more rare: σ q = . days^−1. governmental efforts in identifying and quarantining positive cases were implemented since the early stage of the epidemic (at february , quarantined people were already registered [ ]), hence we consider the daily mandatory quarantine rate of asymptomatic/mildly symptomatic individuals (γ ) for the whole time horizon. from currently available data, it seems hard to pin down a uniform value for γ because it largely depends on the sampling effort, namely the number of specimen collections (swabs) from persons under investigation, which varies considerably across italian regions and in the different phases of the outbreak [ , ]. since our model does not account for such territorial peculiarities, and in order to reduce the number of parameters to be estimated, we assume that γ = . σ q , namely that it is % higher than the daily rate at which members of the i m class are hospitalized, yielding γ ≈ . days^−1. simulations with such a value provide a good approximation of the time evolution of registered quarantined individuals at the national level, as displayed in fig. , second panel. following the approach adopted in [ ] for a sars-cov epidemic model, we estimate the disease-induced death rate from x and t, where x is the case fatality and t is the expected time from hospitalization until death. from [ ], we approximate x = % and t = days (it is days for patients that were transferred to intensive care and days for those who were not), yielding d ≈ . days^−1. similarly, the recovery rates ν j with j ∈ {m, q, s} are estimated from t j , the expected time until recovery or the expected time in quarantine/hospitalization. [figure: control reproduction number ( ) versus mandatory quarantine and transmission reduction rates; the intersection between the dotted black and red (resp. blue) lines indicates the value after the first (resp. second) step reduction; other parameter values are given in table .] preliminary data indicate that the virus can persist for up to eight days from the first detection in moderate cases and for longer periods in more severe cases [ ], suggesting that t m = days is an appropriate value. as far as the time spent in hospitalization or quarantine is concerned, in the lack of exact data we assume t s < t q , because hospitalized individuals are likely to receive a partly effective, experimental treatment: mainly antibiotics, antivirals and corticosteroids [ ].
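the parameter values above are obtained by turning clinical fractions and durations into per-day rates; the sketch below illustrates the kind of arithmetic involved (progression as fraction over duration, a sars-like death-rate estimate from case fatality over time to death, recovery as the reciprocal of the expected duration). the numerical inputs are placeholders chosen for illustration, not the estimates used in this study.

```python
def progression_rate(fraction, days):
    """per-day progression rate for a given fraction worsening within `days`
    (the construction used for sigma_m)."""
    return fraction / days

def death_rate(case_fatality, days_to_death):
    """disease-induced death rate estimated from the case fatality and the
    expected time from hospitalization until death (one common sars-like choice)."""
    return case_fatality / days_to_death

def recovery_rate(days):
    """recovery rate nu_j = 1 / T_j."""
    return 1.0 / days

# illustrative values only
print(progression_rate(0.2, 7.0), death_rate(0.13, 9.0), recovery_rate(14.0))
```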
moreover, shortages in hospital beds and intensive care units (icus) lead to as prompt a discharge as possible [ ]. in particular, we set t s = and t q = days, accounting also for prolonged quarantine time due to delays in test response (if any) and for who recommendations of an additional two weeks in home isolation even after symptoms resolve [ ]. crucially, we also estimate the initial exponential rate of case increase (say, g ) by computing the dominant eigenvalue of the system's jacobian matrix, evaluated at the disease-free equilibrium. it provides g ≈ . days^−1, in accordance with that given in [ ]. we explicitly reproduce in our simulations the effects of the progressive restrictions imposed on human mobility and human-to-human contacts in italy. their detailed sequence may be summarized as follows. after the first officially confirmed case (the so-called 'patient one') on february , in lodi province, several suspected cases emerged in the south and southwest of the lombardy region. a 'red zone' encompassing municipalities was instituted on february and put on lockdown to contain the emerging outbreak. the next day, a decree evocatively entitled 'i'm staying at home' was signed: the lockdown was declared for the whole country with severe limitations to mobility and other progressively stricter restrictions. soon after, on march , the lockdown was extended to the entire country [ ], with all commercial and retail businesses except those providing essential services closed down [ ]. finally, on march , phase one of the restrictions was completed when a full lockdown was imposed by closing all non-essential companies and industrial plants [ ]. on may , italy entered phase two, representing the starting point of a gradual relaxation of the restriction measures. one week later, shops also reopened and the restrictions on mobility were essentially eliminated, with the only obligation in many regions being to use protective masks [ ]. because data early in an epidemic are inevitably incomplete and inaccurate, our approach has been to try to focus on what we believe to be the essentials in formulating a simple model. keeping this in mind, we assume that the disease transmission rate incurs just two step reductions (modelled by the reduction rate β in ( )), corresponding to:
• march (day ), when the lockdown decree came into force along with the preceding restrictions, cumulatively resulting in a sharp decrease of sars-cov transmission;
• march (day ), the starting date of the full lockdown that definitely impacted the disease incidence.
in the wake of [ , ], we account for a first step reduction by % (that is, β b − β | ≤t< = . β b ), which drops the control reproduction number ( ) close to (see fig. , dotted black and red lines). it is then strengthened by a further % or so, resulting in a global reduction by % (β b − β | t≥ = . β b ) that definitely brings r c below (see fig. , dotted black and blue lines). the information-related parameter values are reported in table together with their baseline values. [table (information-related parameters): d, reactivity factor of voluntary quarantine; ζ, setting the roof of the overall quarantine rate; a, with /a the average information delay (days); k, degree of information coverage.]
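the two step reductions can be written as a piecewise-constant factor multiplying the baseline transmission rate; in the sketch below the day indices of the two interventions and the reduction fractions are placeholders, since the exact figures are not legible in this copy.

```python
def transmission_factor(t, t_partial=14, t_full=24, r_partial=0.65, r_full=0.80):
    """multiplicative factor beta(t)/beta_b implementing two step reductions:
    no reduction before the partial lockdown, a cut of r_partial from day
    t_partial, deepened to a cumulative cut of r_full from day t_full on."""
    if t < t_partial:
        return 1.0
    if t < t_full:
        return 1.0 - r_partial
    return 1.0 - r_full

# example: the factor drops in two steps as the restrictions are tightened
print([transmission_factor(t) for t in (0, 15, 30)])
```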
following [ , ], we set ζ = . days^−1, potentially implying an asymptotic quarantine rate of . days^−1 if we could let m go to +∞. the positive constants α and d tune the information-dependent reactivity, respectively, of susceptible and infectious people in reducing mutual social contacts and of individuals with no/mild symptoms in self-isolating. by virtue of the order of magnitude of the information index m (ranging between and ), we set α = · 10^− and d = · 10^− , resulting in a receptive propensity to self-isolation for asymptomatic/mildly symptomatic individuals and a less evident degree of variability in the contact rate, the latter being already impacted and constrained by government laws (as shown later in fig. ). the value ranges for the information coverage k and the average time delay of information /a are mainly guessed or taken from papers where the information index m is used [ , , , ]. the former may be seen as a 'summary' of two opposite phenomena: disease under-reporting and the level of media coverage of the status of the disease, which tends to amplify the social alarm. it is assumed to range from a minimum of . (i.e. the public awareness is %) to . the latter ranges from the hypothetical case of prompt communication (a = days^−1) to a delay of days. we tune these two parameters within their value ranges in order to reproduce the curves that best fit the number of hospitalized individuals (i s ) and the cumulative deaths as released every day at p.m. (utc+ h) since february by the italian civil protection department and archived on github [ ]. we find a good approximation by setting k = . and a = / days^−1, meaning a level of awareness about the daily number of quarantined and hospitalized individuals of %, resulting from the balance between underestimation and media amplification and inevitably affected by rumors and misinformation spreading on the web (the so-called 'infodemic' [ ]). such awareness is not immediate: information takes on average days to be publicly disseminated, the communication being slowed by a series of articulated procedures: timing of swab test results, notification of cases, reporting delays between surveillance and public health authorities, and so on. of course, the parameter setting is influenced by the choice of curves to fit. available data seem to provide an idea about the number of identified infectious people who have developed mild/moderate symptoms (the fraction that mandatorily stays in q) or more serious symptoms (the hospitalized, i s ) and the number of deaths, but much less about those who are asymptomatic or have very mild symptoms and are not always subjected to a screening test. in order to provide appropriate initial conditions, we consider the official national data at february , archived on [ ]. in particular, we take the number of mandatorily quarantined individuals (at that time, they coincide with q, the voluntary component being negligible) and the hospitalized people (i s ). then, we simulate the temporal evolution of the epidemic prior to february by imposing an initial condition of one exposed case ∆t days before in a population of n̄ individuals, with n̄ given in ( ).
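the tuning of k and a can be organised as a simple grid search minimising the discrepancy between simulated and reported hospitalizations and cumulative deaths; in the sketch below `simulate_model` is a hypothetical stand-in for a numerical solution of the full system, and the sum-of-squared-residuals score is just one possible choice.

```python
import numpy as np

def fit_information_parameters(simulate_model, hosp_data, deaths_data,
                               k_grid=np.linspace(0.2, 1.0, 9),
                               delay_grid=(1, 3, 7, 14, 21, 28, 60)):
    """grid search over the information coverage k and the delay 1/a (days).

    simulate_model(k, delay) is assumed to return (hospitalized, cumulative_deaths)
    time series sampled on the same days as the observed data.
    """
    best = None
    for k in k_grid:
        for delay in delay_grid:
            hosp_sim, deaths_sim = simulate_model(k, delay)
            score = (np.sum((np.asarray(hosp_sim) - hosp_data) ** 2)
                     + np.sum((np.asarray(deaths_sim) - deaths_data) ** 2))
            if best is None or score < best[0]:
                best = (score, k, delay)
    return best  # (score, k, 1/a)
```

in practice `simulate_model` would wrap the numerical integration of the information-dependent system and the data arrays would come from the civil protection repository.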
we assume β = and γ as in table (no social distancing restrictions were initially implemented, but quarantine efforts were active since then) and disregard the effect of information on human social behaviors in this phase (α = d = in ( )–( )). the length of the temporal interval ∆t is tuned in order to reproduce the official values released for q and i s at february and to provide estimations for the other state variables, as reported in table . we obtain ∆t = . , indicating that the virus circulated since the end of january, as predicted also in [ , ]. let us consider the time frame [t , t], where t ≤ t ≤ t f . we consider two relevant quantities, the cumulative incidence ci(t), i.e. the total number of new cases in [t , t], and the cumulative deaths cd(t), i.e. the disease-induced deaths in [t , t]. for model ( )–( ) we have, respectively, the corresponding integral expressions, where β(m) is given in ( ). in fig. the time evolution in [t , t f ] of ci(t) and cd(t) is shown (first and fourth panel from the left), along with that of the quarantined individuals q(t) (second panel) and the hospitalized i s (t) (third panel). the role played by information in the public compliance with mitigation measures is stressed by the comparison with the absolutely unresponsive case (α = d = in ( )–( )). the corresponding dynamics are labelled by black solid and red dashed lines, respectively. in the absence of reactivity to information, the cumulative incidence would have been much less impacted by the lockdown restrictions ( . · 10^ vs . · 10^ on may ) and the number of quarantined would have been reduced to those forced by the surveillance authorities. as a consequence, the peak of hospitalized patients would have been about % higher and days time-delayed, with a corresponding increase in cumulative deaths of more than %. for all reported dynamics, the detachment between the responsive and unresponsive cases starts to be clearly distinguishable after the first step reduction of % in the transmission rate (on march ). trends are also compared with officially disseminated data [ ] (fig. , blue dots), which seem to conform accordingly for most of the time horizon, except for ci, which suffers from an inevitable and probably high underestimation [ , , , ]. as of may , we estimate about , contagions, whereas the official count of confirmed infections is , [ ]. we now investigate how the information parameters k and a may affect the epidemic course. more precisely, we assess how changing these parameters affects some relevant quantities: the peak of quarantined individuals, max(q) (i.e., the maximum value reached by the quarantined curve in [t , t f ]), the peak of hospitalized individuals, max(i s ), the cumulative incidence ci(t f ) evaluated at the last day of the considered time frame, i.e. t f = (corresponding to may ), and the cumulative deaths cd(t f ). the results are shown in the contour plots in fig. . as expected, ci(t f ), max(i s ) and cd(t f ) decrease proportionally to the information coverage k and inversely to the information delay a^−1: they reach the minimum for k = and a^−1 = days.
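the quantities scanned in the contour plots can be collected by re-running the model over a grid of (k, a^−1) pairs and recording summary outputs; in the sketch below `step_model` is a hypothetical one-day state update, and the cumulative incidence and deaths are accumulated in the same integral spirit as ci(t) and cd(t) above. all names and numbers here are illustrative assumptions.

```python
def summary_outputs(step_model, state0, n_days, d_rate):
    """collect the quantities examined in the sensitivity analysis:
    max(Q), max(I_s), CI(t_f) and CD(t_f), given a one-day update rule.

    step_model(state) is assumed to return (new_state, new_infections);
    CI accumulates the daily new infections, CD accumulates d * I_s per day.
    """
    state = dict(state0)
    ci = cd = 0.0
    max_q = max_is = 0.0
    for _ in range(n_days):
        state, new_infections = step_model(state)
        ci += new_infections
        cd += d_rate * state["I_s"]
        max_q = max(max_q, state["Q"])
        max_is = max(max_is, state["I_s"])
    return {"CI": ci, "CD": cd, "max_Q": max_q, "max_I_s": max_is}
```

scanning this routine over a grid of (k, a^−1) values and plotting the four outputs reproduces the structure of the contour plots.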
differently, the quantity max(q) may not depend monotonically on k and a^−1, as happens for k ≥ . and a^−1 ≤ days (see fig. , second panel, lower right corner). in such a parameter region, for a given value of k (resp. a) there are two different values of a (resp. k) which correspond to the same value of max(q). the absolute maximum (max [k,a^−1] (max(q))) is obtained for k = and a^−1 ≈ days. note that the couple of values k = , a^−1 = days corresponds to the less severe outbreak, but not to the highest peak of quarantined individuals. in the following, we compare the relative changes of these quantities w.r.t. the case when circulating information does not affect disease dynamics. in other words, we introduce a relative-change index, whose behavior over the (k, a^−1) plane is shown in fig. . however, we report in table three exemplary cases, the baseline and two extremal ones: (i) the baseline scenario k = . , a^−1 = days, representing a rather accurate and short-delayed communication; (ii) the case of highest information coverage and lowest information delay, k = , a^−1 = days; (iii) the case of lowest information coverage and highest information delay, k = . , a^−1 = days. [table : exact and relative values of the final cumulative incidence ci(t f ), the quarantined peak max(q), the hospitalized peak max(i s ) and the cumulative deaths cd(t f ) for the baseline and the two extremal cases.] under circumstances of very quick and fully accurate communication (case (ii)), ci(t f ), max(i s ) and cd(t f ) may be reduced by up to %, % and %, respectively (see table , third line). on the other hand, even in the case of low coverage and high delay (case (iii)), the information still has a non-negligible impact on disease dynamics: the final cumulative incidence and the hospitalized peak are reduced by up to %, the final cumulative deaths by up to % and the quarantined peak increases by about % (table , fourth line). as mentioned above, information and rumors regarding the status of the disease in the community affect the transmission rate β(m) (as given in ( )) and the quarantine rate γ(m) (as given in ( )). in our last simulation we want to emphasize the role of the information coverage on the quarantine and transmission rates. in fig. a comparison with the case of low information coverage, k = . , is given, assuming a fixed information delay a^−1 = days (blue dotted lines). it can be seen that more informed people react and quarantine: an increase of the maximum quarantine rate from . to . days^−1 (which is also reached a week earlier) can be observed when increasing the value of k up to k = (fig. , second panel). the effect of social behavioral changes is less evident in the transmission rate, where increasing the information coverage produces only a slight reduction, mainly during the full lockdown phase (fig. , first panel). this reflects the circumstance that citizens' compliance with social distancing is not enhanced by the information-induced behavioral changes during the first stages of the epidemic. on the other hand, a widespread panic reaction may lead people to 'do it as long as you can' (see, for example, the case of stormed supermarkets at the early stage of the epidemic [ ]). in this work we propose a mathematical approach to investigate the effects on the covid epidemic of social behavioral changes in response to lockdowns.
starting from a seir-like model, we assumed that the transmission and quarantine rates are partially determined on a voluntary basis and depend on the circulating information and rumors about the disease, modeled by a suitable time-dependent information index. we focused on the case of the covid epidemic in italy and explicitly incorporated the progressively stricter restrictions enacted by the italian government, by considering two step reductions in the contact rate (the partial and full lockdowns). the main results are the following:
• we estimated two fundamental information-related parameters: the information coverage regarding the daily number of quarantined and hospitalized individuals (i.e. the parameter k) and the information delay (the quantity a^−1). the estimate is performed by fitting the model's solution to official data. we found k = . , which means that the public was aware of % of the real data, and a^−1 = days, the time lag necessary for information to reach the public;
• social behavioral changes in response to lockdowns played a decisive role in curbing the epidemic curve: the combined action of voluntary compliance with social distancing and quarantine resulted in preventing a doubling of deaths and about % more contagions (i.e. approximately , more infections and , more deaths compared with the totally unresponsive case, as of may );
• even under circumstances of low information coverage and high information delay (k = . , a = / days^−1), there would have been a beneficial impact of the social behavioral response on disease containment: as of may , the cumulative incidence would be reduced by about % and deaths by about %.
shaping the complex interaction between circulating information, human behavior and epidemic disease is challenging. in this manuscript we give a contribution in this direction. we provide an application of the information index to a specific field case, the covid epidemic in italy, where the information-dependent model is parametrized and the solutions are compared with official data. our study presents limitations that leave the possibility of future developments. in particular: (i) the model captures the epidemic at a country level but does not account for regional or local differences or for internal human mobility (the latter having been crucial in italy at the early stage of the covid epidemic); (ii) the model does not explicitly account for icu admissions. the limited number of icu beds constituted a main issue during the covid pandemic [ ]. this study did not focus on this aspect, but icu admissions could certainly be included in the model; (iii) the model could be extended to include age structure. age has been particularly relevant for the covid lethality rate (in italy the lethality rate for people aged or over is more than double the average value for the whole population [ ]). further developments may also concern the investigation of optimal intervention strategies during the covid epidemic and, in this regard, the assessment of the impact of vaccine arrival. in this case, the approach of information-dependent vaccination could be employed [ , , , ].
consider the scenario of an epidemic outbreak that can be addressed by the public health system through campaigns aimed at raising public awareness regarding the use of protective tools (for example, vaccination, social distancing, bed nets in the case of mosquito-borne diseases, etc.). assume also that the protective actions are not mandatory for the individuals (or else, they are mandatory but local authorities are unable to ensure full respect of the rules). then, the final choice to use or not use the protective tools is partially or fully determined by the available information on the state of the disease in the community. the idea is that such information takes time to reach the population (due to time-consuming procedures such as clinical tests, notification of cases, the collecting and propagation of information and/or rumors, etc.) and that the population keeps a memory of the past values of the infection (like prevalence or incidence). therefore, according to the idea of information-dependent epidemic models [ , ], an information index m should be considered, which is defined in terms of a delay τ, a memory kernel k and a function g̃ which describes the information that is relevant to the public in determining its final choice to adopt or not to adopt the protective measure (the message function). therefore, the information index is given by the following distributed delay: m(t) = ∫_{−∞}^{t} g̃(x (τ), x (τ), . . . , x n (τ)) k(t − τ) dτ. here, the message function g̃ depends generically on the state variables, say x , x , . . . , x n , but it may specifically depend only on prevalence [ , ], incidence [ ] or other relevant quantities like the vaccine side effects [ ]. one may assume that: the delay kernel k(·) in ( ) is a positive function such that ∫_ ^{+∞} k(t) dt = . it represents the weight given to the past history of the disease. the erlangian family erl n,a (t) is a good candidate for the delay kernel since it may represent both an exponentially fading memory (when n = ) and a memory more focused in the past (when n > ). moreover, when an erlangian memory kernel is used, one can apply the so-called linear chain trick [ ] to obtain a system ruled by ordinary differential equations. for example, in the case of an exponentially fading memory (or weak kernel erl ,a (t)), the dynamics of the information index is ruled by Ṁ = a (g(x , x , . . . , x n ) − m). for further details regarding the information index, see [ , ].
b. the next generation matrix method. following the procedure and the notations in [ , ], we prove that the control reproduction number of model ( )–( ), r c , is given by ( ). similarly, one can prove that the basic reproduction number is given by ( ). let us consider the r.h.s. of equations ( b)–( f), and distinguish the appearance of new infections from the other rates of transfer, by defining the corresponding vectors. as proved in [ , ], the control reproduction number is given by the spectral radius of the next generation matrix f v^−1. it is easy to check that f v^−1 has positive elements on the first row, the other ones being null. thus, r c = (f v^−1) , as given in ( ).
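to make the weak-kernel case concrete, the sketch below integrates Ṁ = a(g − m) with a forward euler step against a prescribed message signal; the message function, the step size and all numbers are illustrative assumptions, chosen only to show how a^−1 delays and smooths the information index.

```python
import math

def information_index(message_values, a=0.25, dt=1.0, M0=0.0):
    """integrate dM/dt = a * (g - M) (weak, exponentially fading memory kernel)
    with forward euler; a^-1 is the average information delay in days."""
    M = M0
    out = []
    for g in message_values:
        M += dt * a * (g - M)
        out.append(M)
    return out

# toy message function: a rising-then-falling perceived prevalence
g_series = [100 * math.exp(-((t - 40) / 15.0) ** 2) for t in range(120)]
M_series = information_index(g_series)
print(max(g_series), max(M_series))  # the information index peaks lower and later
```

similarly, the next-generation-matrix recipe recalled above can be reproduced numerically; the small matrices below are placeholders rather than the model's actual jacobians, whose entries are not reported here, and serve only to show how r c is obtained as the spectral radius of f v^−1.

```python
import numpy as np

def reproduction_number(F, V):
    """spectral radius of the next generation matrix F V^{-1}.

    F: jacobian of the new-infection terms, V: jacobian of the remaining
    transfer terms, both evaluated at the disease-free equilibrium and
    restricted to the infected compartments.
    """
    K = F @ np.linalg.inv(V)  # next generation matrix
    return max(abs(np.linalg.eigvals(K)))

# toy 2-compartment example (placeholder numbers, not this model)
F = np.array([[0.5, 0.3],
              [0.0, 0.0]])
V = np.array([[0.25, 0.0],
              [-0.1, 0.2]])
print(reproduction_number(F, V))
```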
coronavirus: conte tightens lockdown, closes all nonessential businesses, after almost deaths in hours estimating the burden of united states workers exposed to infection or disease: a key factor in containing risk of covid infection evaluating covid public health messaging in italy bbc, british broadcasting corporation. coronavirus: more than british broadcasting corporation. coronavirus: the world in lockdown in maps and charts assessing the impact of the coronavirus lockdown on unhappiness, loneliness, and boredom using google trends. arxiv eects of informationdependent vaccination behavior on coronavirus outbreak: insights from a siri model oscillations and hysteresis in an epidemic model with information dependent imperfect vaccination modeling of pseudorational exemption to vaccination for seir diseases the early phase of the covid outbreak in lombardy, italy. arxiv expected impact of lockdown in Îledefrance and possible exit strategies. medrxiv on the denition and the computation of the basic reproduction ratio r in models for infectious diseases in heterogeneous populations heterogeneous social interactions and the covid lockdown outcome in a multigroup seir model. arxiv informationrelated changes in contact patterns may trigger oscillations in the endemic prevalence of infectious diseases vaccinating behaviour, information, and the dynamics of sir vaccine preventable diseases fatal sir diseases and rational exemption to vaccination vaccinating behaviour and the dynamics of vaccine preventable infections centre for disease prevention and control. disease background of covid contact rate epidemic control of covid : an equilibrium view. arxiv repatriation of eu citizens during the covid- crisis dynamics of icu patients and deaths in italy and lombardy due to covid analysis updated to march, day # evening preventing intrahospital infection and transmission of coronavirus disease in healthcare workers. safety and health at work spread and dynamics of the covid epidemic in italy: eects of emergency containment measures modelling the covid- epidemic and implementation of populationwide interventions in italy modelling strategies for controlling sars outbreaks temporal dynamics in viral shedding and transmissibility of covid sars-cov- latest available data italian civil protection department. chronology of main steps and legal acts taken by the italian government for the containment of the covid epidemiological emergency italian ministry of foreign aairs and international cooperation phase two: what's opening and what you can do italian ministry of health. faq -covid , questions and answers beyond just "attening the curve analysis of the mitigation strategies for covid : from mathematical modelling perspective. medrxiv nonlinear dynamics of infectious diseases via informationinduced vaccination and saturated treatment correcting underreported covid case numbers. medrxiv biological delay systems: linear stability theory modeling the interplay between human behavior and the spread of infectious diseases matlab. matlab release b. the mathworks mrc, centre for global infectious disease analysis. report -estimating the number of infections and the impact of nonpharmaceutical interventions on covid in european countries world tourism organization. covid related travel restrictions. a global review for tourism. 
second report as of reproduction numbers and subthreshold endemic equilibria for compartmental models of disease transmission the covid epidemic coronavirusscientic insights and societal aspects statistical physics of vaccination coronavirus disease (covid- ): situation report coronavirus disease (covid ) pandemic acknowledgements. the present work has been performed under the auspices of the italian national group for the mathematical physics (gnfm) of national institute for advanced mathematics (indam). [ ] b. j. quilty, s. cliord, cmmid ncov working group , s. flasche, and r. m. eggo. eectiveness of airport screening at detecting travellers infected with novel coronavirus ( ncov). eurosurveillance, ( ) : , .[ ] rijs, reuters institute for the study of journalism. types, sources, and claims of covid misinformation. fact sheet april , .[ ] g. sebastiani, m. massa, and e. riboli. covid epidemic in italy: evolution, projections and impact of government measures. european journal of epidemiology, : , .[ ] m. supino, a. d'onofrio, f. luongo, g. occhipinti, and a. dal co. world governments should protect their population from covid pandemic using italy and lombardy as precursor. medrxiv, .[ ] the guardian. italy charges more than , people with violating lockdown. https://www.theguardian.com/world/ /mar/ / italy-charges-more-than- -people-violating-lockdown-coronavirus.(accessed may , ).[ ] theguardian. italians struggle with`surreal' lockdown as coronavirus cases rise. https://www.theguardian.com/world/ /feb/ / italians-struggle-with-surreal-lockdown-as-coronavirus-cases-rise, .(accessed may , ).[ ] the guardian. leaked coronavirus plan to quarantine m sparks chaos in italy. https://www.theguardian.com/world/ /mar/ / leaked-coronavirus-plan-to-quarantine- m-sparks-chaos-in-italy, .(accessed may , ).[ ] who, world health organization.home care for patients with covid presenting with mild symptoms and management of their contacts. https://www.who.int/ publications-detail/home-care-for-patients-with-suspected-novel-coronavirus-(ncov) -infection-presenting-with-mild-symptoms-and-management-of-contacts, . (accessed may , ).[ ] who, world health organization. immunity passports in the context of covid . https://www.who.int/news-room/commentaries/detail/ immunity-passports-in-the-context-of-covid- , . (scientic brief april , , accessed may , ).[ ] l. zhu, x. zhou, y. li, and y. zhu. stability and bifurcation analysis on a delayed epidemic model with information-dependent vaccination. physica scripta, ( ): , . key: cord- -zs ldm authors: depuccio, matthew j.; di tosto, gennaro; walker, daniel m.; mcalearney, ann scheck title: patients’ perceptions about medical record privacy and security: implications for withholding of information during the covid- pandemic date: - - journal: j gen intern med doi: . /s - - - sha: doc_id: cord_uid: zs ldm nan some patients may withhold relevant medical information from their provider because of concerns about the security and privacy of their information, or about how their information will be used. with increasing reliance on telemedicine and telehealth tools (e.g., patient portals) in response to the coronavirus disease (covid- ) pandemic, this issue may be particularly problematic. 
as withholding information can compromise providers' ability to deliver appropriate care, the accuracy of public health surveillance system data, and even population health efforts to mitigate the spread of covid- , we need to understand how patients' concerns about the privacy and security of their medical information may lead to information-withholding behaviors. data for the present study came from a survey administered to patients hospitalized at a large academic medical center (amc) enrolled in a pragmatic randomized controlled trial (rct). the rct studied the relationship between inpatient portal use and patients' care experience. one survey section asked about patients' attitudes toward use of health information technology, including their perceptions about information security risks and privacy. these questions were adapted from the national cancer institute's health information technology national trends survey. the institutional review board of the amc approved this study. the dependent variable for this study was the answer to the question "have you ever kept information from your healthcare provider because you were concerned about the privacy or security of your medical record?" (yes/no). on the basis of previous research, we included four independent variables derived from answers to questions about medical information privacy and security: . "if your medical information is sent electronically from one health provider to another, how concerned are you that an unauthorized person will see it?" . "how confident are you that you have some say in who is allowed to collect, use, and share your medical information?" . "how confident are you that safeguards (including the use of technology) are in place to protect your medical records from being seen by people who aren't permitted to see them?" . "how interested are you in exchanging medical information electronically with a healthcare provider?" a multivariable logistic regression model was used to test the relationship of the independent variables with informationwithholding behavior, adjusting for patient demographics. table summarizes patient characteristics and survey responses of our study participants. results of our regression analysis ( table ) show that for patients who were concerned that their medical information would be compromised if it was sent electronically between providers, the odds of withholding information from their provider was three times that of patients without concerns. conversely, for patients who were confident about the privacy of their medical information, the odds of keeping information from their provider was approximately half of those who were not confident. black patients were generally more likely to withhold information compared with white patients. patients who were older, married, employed, and in good mental health and who had healthcare coverage were less likely to keep information from their provider. similar to previous research conducted in the general population, , our findings suggest that many hospitalized patients are concerned about who has access to their medical information, and we found an association between these concerns and patients' reported information-withholding behavior. while these findings were limited to the perceptions of patients from a single amc, they are nonetheless important for providers to consider given relaxation of health insurance portability and accountability act (hipaa) protections in response to covid- . specifically, the u.s. 
office for civil rights has granted business associates (e.g., healthcare clearinghouses) the ability to make good-faith disclosures of personal medical information for public health activities as long as the patient is informed within days. in order to protect against potential adverse impacts of this rule on disclosure, providers likely need to reinforce technological safeguards, such as secure and encrypted communication, and clearly communicate about how patients' medical information is accessed, stored, and used in order to honor patient privacy preferences and potentially address patients' concerns in this area. monitoring the impact of these changes on patients' information-withholding behavior will be critical to ensure providers have the appropriate information to enable delivery of high-quality care. *definitions for each variable can be found in table putting the focus back on the patient: how privacy concerns affect personal health information sharing intentions high touch and high tech (ht ) proposal: transforming patient engagement throughout the continuum of care by engaging patients with portal technology at the bedside concern about security and privacy, and perceived control over collection and use of health information are related to withholding of health information from healthcare providers trust me, i'm a doctor: examining changes in how privacy concerns affect patient withholding behavior notification of enforcement discretion under hipaa to allow uses and disclosures of protected health information by business associates for public health and health oversight activities in response to covid- ( cfr parts and ) ethical practice in telehealth and telemedicine the authors wish to thank alice gaughan, conflict of interest: the authors declare that they do not have a conflict of interest.disclaimer: while this research was funded by the agency for healthcare research and quality, the study sponsor had no involvement in the collection, analysis, or interpretation of data; in the writing of this manuscript; or in the decision to submit the manuscript for publication. key: cord- -q u k authors: hofkirchner, wolfgang title: a paradigm shift for the great bifurcation date: - - journal: biosystems doi: . /j.biosystems. . sha: doc_id: cord_uid: q u k this paper is an attempt to achieve an understanding of the situation the evolution of humanity is confronted with in the age of global challenges. since global challenges are problems of unprecedented complexity, it is argued that a secular paradigm shift is required away from the overemphasis on allegedly neutral standpoints, on a mechanistic picture of the world and on deductive logics towards accounts of emergence, of systemicity, informationality and conviviality, building upon each other and providing together a transdisciplinary edifice of the sciences, in the end, for, and by the inclusion of, citizens. viewed from such a combined perspective, the current social evolution is punctuated by a great bifurcation similar to bifurcations other emergent systems have been facing. on the one hand, humankind is on the brink of extinction. it is the world occurrence of the enclosure of commons that is detrimental to sharing the systemic synergy effects and thus to the cohesion of social systems. on the other hand, humanity is on the threshold of a planetary society. another leap in integration would be the appropriate response to the complexity confronted with. 
humans and their social systems are informational agents and, as such, they are able to generate requisite information and use it to catch up with the complex challenges. they can establish convivial rules of living together in that they disclose the commons world-wide. by doing so, they would accomplish another evolutionary step in anthroposociogenesis. the concept of the global sustainable information society describes the framework of necessary conditions of conviviality under the new circumstances. the seemingly disruptive advent of the covid- pandemic outshone the climate change that has been gaining obvious momentum since the last fifty years to an extent that it threatens with a much more decisive rupture if science, technology and society are not unwilling to learn from the pandemic that there is nonlinear growth with complex challenges be they small or large and that human actors are not completely doomed to helplessness though. for such a lesson to learn, a secular shift in thinking and acting throughout sciences and everyday life is required because human actors need to be capacitated to cope with complex challenges such as the global problems. a shift is already underway though not yet hegemonic. this shift has to overcome three prejudices of conventional science: • the outdated ideology of value-free scientific research. the absence of values would make science distinct from biased everyday thought. but that's not the distinction. any research is driven by societal interests even if mediated by personal curiosity. any research implies particular values, reflected or not if not even camouflaged. of course, these values must not divert the findings of research, quite the contrary, they shall stimulate evidence-based research -and that's the distinction from biased opinion of everyday. science can critique opinions. in the last two decades, several labels have become aspirational for scientists: research shall be responsible, university research shall be aware of its third mission, namely, to serve the common good, applied research shall be replaced by use-inspired basic research, research shall become practically transdisciplinary in that it transcends science towards, and include in science, the values of everyday people that are affected by the results of research, best by letting them participate in research. these are attempts in the right direction: the acceptance that there is a limited controllability of what can be done. as everyday-thinking and -acting, science has limited controllability over its impact on society but within that certain limit it is capable of controlling and thus it must aim at doing so in a precautious way. neither phantasies of omnipotence nor of impotence are called for but a deliberate activism is (hofkirchner, a, - ) . not everything that might be feasible is also desirable and not everything that might be desirable is also feasible. the feasible and the desirable need to be made compatible with each other. • the outdated mechanistic picture of the world. cause-effect relationships are fancied as if pertaining to a machine constructed by humans. cause and effect would obey laws of strict determinacy such that the effect would follow necessarily from the cause. but that is true only for a small subdivision of effective causality -causes can have different effects and effects can be brought about by different causes -, let alone final, material and formal causality (hofkirchner, a, - ) . 
laws of nature and other parts of the world are not given for eternity. the late karl raimund popper called those laws propensities -an asymmetrical, contingent behaviour of the universe ( ) . this shows the right direction. strict determinism as well as indeterminism are false alternatives. less-than-strict determinism avoids both fallacies. • the outdated preponderance of methodologies that are based on deductive logics. deductivism is an attempt to deduce that which shall be explained or predicted from premises such that the phenomenon can be subsumed under a proposition of the form of a universal implication that covers the phenomenon. the premises suffice by definition for the conclusion, the phenomenon is thus reduced to a sufficient condition. but, in fact, those conditions are rarely sufficient. there is the search for alternative logics such as the hype for abduction or claims for a trans-classical logic (günther, ) . and there is the logic in reality of joseph brenner ( ) who grounds logic in reality, that is, he gives primacy to how reality works in principle when postulating logical principles. anyway, it needs to be accepted that explanation and prediction are incomplete and that they should focus on the adjacent necessary instead, that is, a necessary condition that might rarely be sufficient but should form a basis of understanding as close as possible to the phenomenon that shall be explained or predicted (hofkirchner, a, - ; ) . neither deductivism nor irrationalism -for which anything would go -can convince but a reflexive rationalism that accepts incomplete deducibility with an ascendance from the abstract to the concrete where by each step a new assumption is introduced without deduction. the build-up of such a specification hierarchy is important for the transdisciplinarity in its theoretical sense -the consideration of as many facets of the phenomena as possible in order to achieve a unified understanding. the removal of those impediments for the progress of science are the milestones that the paradigm shift has to master. only if they will have been achieved, humanity will be ready to confront the global challenges in a way that safeguards mankind against man-made extinction. the next sections substantiate how, on the three pillars of deliberate activism, less-than-strict determinism and reflexive rationalism, a new understanding of the current situation of world society can be erected and how it can make the disciplines of the whole edifice of science responsive to that task. the sections will proceed from a systemic level to an informational level to a convivialist level. emergentist systemism 'emergentist systemism' is a term introduced by poe yu-ze wan ( ) in the context of social theory, after it had been used in the field of social work in switzerland (e.g., obrecht, ) to characterise the approach of philosopher of science, mario bunge. bunge himself was rather used to terms such as 'emergentist materialism' to signify, e.g., his position in the field of the mind-body problem. however, he defined systemism in a broader sense, namely as 'ontology: everything is either a system or a component of some system. epistemology: every piece of knowledge is or ought to become a member of a conceptual system, such as a theory. axiology: every value is or ought to become a component of a system of interrelated values ' ( , ) . and he defined emergence as '[…] advent of qualitative novelty. a property of systems ' ( , ) . 
thus, emergentism is 'a world view or an approach' that focuses on emergence ( , ) . 'systemism, or emergentism', as he said ( , - ) , 'is seem to subsume four general but one-sided approaches: . each of these four views holds a grain of truth. in putting them together, systemism (or emergentism) helps avoid four common fallacies.' this is by and large the sense in which emergentist systemism is understood here (hofkirchner, a ) -as weltanschauung, that is, a world view that is not value-free (a german term by which mark davidson summed up ludwig von bertalanffy's general system theory and which will be subsumed here under the term praxiology), as conception of the world (ontology) and as way of thinking to generate knowledge about the world (epistemology). in a nutshell, emergent systemism means that, practically, humans intervene as a rule for the betterment of social life in real-world systems they conceive of by patterns they have already identified and that these systems are emergent from the co-operation of other systems that become or are their elements. this is called selforganisation. since the idea of emergent systems implies a kind of evolution, these systems are also known by the term evolutionary systems. evolutionary systems theory -a term coined by ervin laszlo ( ) , vilmos csanyi ( ) and susantha goonatilake ( ) but extended here to cover the meaning it received after the seminar held at the konrad lorenz institute for evolution and cognition research in vienna (van de vijver et al., ) -is the proper theory of self-organisation, a merger of systems theory and evolutionary theory by which the first was enabled to include more than ideas of maintaining systems only and the latter could emancipate from mechanistic interpretations of the darwinian model. self-organisation is characterised by evolvability and systemicity. that means that matter, nature, real-world events or entities evolve such that systems materialise as organisation of components (hofkirchner, a, - ) . applying emergentist systemism to the edifice of science(s) brings a profound change (hofkirchner, a) . how is the old paradigm's view of that edifice? let's start with philosophy, composed of epistemology, ontology and praxiology (ethics, aesthetics and else) as the most abstract enterprise and put it in the background. before that background, you have the three categories of formal sciences, real world sciences and applied sciences in juxtaposition. the formal sciences include logics and mathematics as disciplines. the real-world sciences comprise disciplines that investigate nature, on the one hand, and were called typically physics, chemistry, biology and else, and disciplines that investigate the social world, on the other, nowadays summarised under the term social and human sciences including sociology, cultural, political, and economic sciences and else. applied sciences assemble engineering, management, arts and else. every discipline is divided by sub-disciplines and sub-sub-disciplines besides all having their own legitimation as basic research, formal sciences are known for providing instruments for gaining knowledge in the real-world sciences, real-world sciences are needed for the provision of evidence for developing technologies, organisation, pieces of art. however, what makes the co-operation of sciences difficult is that they are siloed against each other by impermeable boundaries. 
connection between those mono-disciplines can be attempted only by heaping some of them together in a multi-disciplinary approach, which is no connection at all, or by peripheral exchanges in an interdisciplinary approach, which does not admit internal changes and keeps the disciplines as alien to each other as they have been before. driven by the confrontation with complex problems, things have begun to change already in the direction of semi-permeability of disciplinary boundaries, which, in the long run, paves the way for the establishment of new stable relations between them. emergentist systemism is not another discipline that just adds to the picture of the old disciplines. it causes rather a paradigm shift that has the potential to transform the whole edifice of science(s). philosophy that was deprived of fruitful relations to the disciplines of science in what had become normal science turns into systems philosophy now; formal sciences turn into formal as well as non-formal systems methodology; realworld sciences turn into sciences of real-world systems, that is, material, living or social systems; and, finally, applied sciences turn into a science that makes artefacts by designing systems and, in doing so, integrates them with social systems. so, at one blow, connectedness is unveiled between all inhabitants of the edifice. transgressions from one scientific endeavour to another can be mediated via jumping forth and back over shared levels of scientific knowledge. those levels form now a specification hierarchy. jumping from one specific level to a more general level allows comparison and adjustment of both levels. it allows the initial level to instigate knowledge adaptations on the target level or adoption of knowledge on re-entry from the target level. in addition, a more general level works as bridge for jumping up the ladder to even higher levels or down to different lower levels so as to help understand that their knowledge is just another specification of the knowledge they share at a higher level. this makes the sciences of evolutionary systems a transdiscipline and its inherent emergentist systemism makes the edifice of disciplines a transdisciplinary, common endeavour of all science. semi-permeability does not lift the boundaries. relative autonomy of disciplines is maintained in the overall transdisciplinary network. paraphrasing bunge, informationism is a term used here to denote a praxiological perspective on, an ontological conception of, and an epistemological way of thinking about, information, which takes centre stage in this tenet. for the sake of consistence, information is set to be based upon, and concretise further, systems, in particular, emergent information shall easily relate to emergent systems. this can be achieved through the assumption of informational agents -agents being emergent systems. thus, the generation of information is enshrined in the self-organisation of systems. any time a system self-organises, its agency brings forth information. an evolutionary system can be defined as 'a collection of ( ) elements e that interact such that ( ) relations r emerge that -because of providing synergistic effects -dominate their interaction in ( ) a dynamics d. this yields a distinction between micro-level (e) and macro-level (r) and a process (d) that links both levels in a feedback loop' (hofkirchner, a, ) . 
with reference to, but in modification of, a triadic semiotics after charles sanders peirce ( ) , laying emphasis on the intrinsic connection of self-organisation with negentropy after edgar morin ( , and ) and by usage of the term 'perturbation' introduced by humberto maturana and francesco varela ( ), information can be defined as 'relation such that ( ) the order o built up spontaneously (signans; the sign) ( ) reflects some perturbation p (signandum/signatum; (to-be-)signified) ( ) in the negentropic perspective of an evolutionary system s e (signator; the signmaker' (hofkirchner, a, ) . 'information is generated if self-organising systems relate to some external perturbation by the spontaneous build-up of order they execute when exposed to this perturbation. in the terms of triadic semiotics, the self-organising systems, by doing so, assign a signification to the order and make it a sign which stands for the so signified perturbation' (hofkirchner, a, ) . this is the approach of a unified theory of information (hofkirchner, ) . it is worth noting that those assumptions attribute information generability to emergent systems according to the evolutionary type they represent. not only social systems and their inhabitants are qualified as informational agents (this would import the acceptance of umberto eco's threshold of semiosis applicable to the realm of human culture exclusively), not only biotic systems (this is the threshold of biosemiotics) but also physical systems in so far as they are able to self-organise are qualified to generate information in shades -as far as the respective evolutionary stages allow. emergent systems of any kind produce emergent information. as to the new scientific edifice, informationism is mounting systemism. systems philosophy becomes a systemic philosophy of information; systems methodology becomes a systemic information methodology; the sciences of real-world systems become sciences of information of real-world systems, that is, of material information, living information and social information; and the science of designing artificial systems becomes the science of designing information in artificial systems. all that information is emergent information. convivialism is a term denoting a social perspective (praxiology), a conception of the social world (ontology) and a social-scientific way of thinking (epistemology) for which conviviality is key. conviviality as term was introduced by the austrian-american writer, ivan illich, who published a book with the title tools for conviviality ( ) . it contained a philosophy of technology according to which technology should be socially controlled so as to reclaim personal freedom that is restricted by uncontrolled technological development. conviviality -illich was familiar with the spanish term convivialidad -has latin origins and means the quality of living together in the manner of dining together (convivor) of hosts (convivator) and guests (conviva) at common feasts (convivium). in the last decade, that term gained new attention when mainly about fourty french intellectuals -among them serge latouche, edgar morin or chantal mouffe -opened the discussion on a political manifesto for the redesign of social relations. the first manifesto was followed by a second, up-dated one in (internationale convivialiste). 
according to the latter, convivialism 'is the name given to everything that in doctrines and wisdom, existing or past, secular or religious, contributes to the search for principles that allow human beings to compete without massacring each other in order to cooperate better, and to advance us as human beings in a full awareness of the finiteness of natural resources and in a shared concern for the care of the world. philosophy of the art of living together, it is not a new doctrine that would replace others by claiming to cancel them or radically overcome them. it is the movement of their mutual questioning based on a sense of extreme urgency in the face of multiple threats to the future of humanity. it intends to retain the most precious principles enshrined in the doctrines and wisdom which were handed down to us.' convivialism is emergentist if seen in the context of emergent systems and emergent information. social systems are here considered as evolutionary systems, which is in stark contrast to how german sociologist niklas luhmann ( ) considered them (wan, ). though luhmann originally claimed to start with general system theory when elaborating his theory of social systems, a revisiting of bertalanffy would lead to different conclusions . such an approach has been pursued not only by bunge but also by members of ervin laszlo's general evolution research group, among them robert artigiani ( ) , by representatives of critical realism, in particular margaret s. archer ( ; ; and her project group on social morphogenesis at the centre for social ontology, and workers departing from us sociologist walter f. buckley ( ) , including the economist tony lawson ( ) and the relational sociologist pierpaolo donati (donati, ; donati and archer, ) . of course, many other sociologists are worth mentioning; even if they do not explicitly share a systems approach, they have nevertheless contributed with important insights to such a framework (giddens, ; alexander, ; mouzelis, ; reckwitz ) . social systems are the evolutionary product of living systems but contain living systems that get a social shape. the elements of social systems as conceived here are social agents -humans called actors -and their organisational relations are social relations -called structure. actors inhabit the micro-level of social systems, while the macro-level is where the structure is located. the structure is produced by the actors and it exerts a downward causation on the actors and their agency (hofkirchner, b, ; lawson, ). thus, social systems self-organise as living and material systems do, but they differ from living and material systems as to their mode of self-organisation. 'social self-organisation goes beyond biotic self-organisation, which, in turn, goes beyond physicochemical self-organisation' (hofkirchner, , ) . social self-organisation does so in that it transcends, re-invents, creates the social systems through the action, interaction and co-action of their actors, who -as informational agents -cognise, communicate and co-operate mindfully when reproducing and transforming their social systems, which, in turn, can be considered as higher-level informational agents. social self-organisation is not conceivable without the generation of specific social information. 
the triple-c model postulates a hierarchy of information processes such that cognition is the necessary condition for the functioning of communication and communication the necessary condition for the functioning of co-operative information, in short, co-operation (hofkirchner, ) . psychic functions such as thought and others, the ability to speak and the ability to devise and manufacture artefacts, in particular, tools, are characteristic of humans. all of them are knit together in social information: 'human thought is part of human cognition […]; human language is part of communication […] ; human tools are part of work that belongs to human co-operation' (hofkirchner, , ) . starting with work at the top (which refers to the structure on the system's macro-level), it is about constituting common goals and instituting common goals. work is consensual. co-operation involves finding and building consensus. what is needed here, are common intentions. common intentionality provides the perspective of the whole we, the perspective of the social system. consensualisation, in turn, presupposes a certain collaboration that designs specific tasks for reaching the shared goals and assigns these tasks to certain actors. that is done on the social information level below, on the level of language (which refers to the network of interactions the actors form on the system's micro-level). communication functions as the means to realise that kind of collaboration that is needed for the upper level. that is, taking the perspective of the other facilitates collaboration. however, taking the perspective of the other is promoted by taking the perspective of the whole in which one's own and the others' roles are included. what is required here, is readiness for a dialogue with sense for consilience. collaboration, in turn, presupposes a certain co-ordination that devises certain operations for fulfilling the tasks and supervises certain actors in performing the operations. that is worked out on the lowermost level, on the level of thought (which refers to the actions of the individual actors who are also located on the system's micro-level). cognition allows the actors to understand what kind of co-ordination is needed by the upper level. it enables the actors to reflect upon the relationship between operations, tasks and goals. what is necessitated here, is reflexivity, the capacity to reflect the social context in which the cognising actor is embedded (archer, ) , and conceptuality, the capacity to use concepts, all of which are influenced by verbal language (logan, ; . the rationale of every complex system is synergy (corning, ; . agents produce synergetic effects when co-operating systemically -effects they could not produce when in isolation. in social systems, synergy 'takes on the form of some social good. actors contribute together to the good and are common beneficiaries of that good -the good is a common good, it is a commons' (hofkirchner, , ) . the social relations are commoning relations. conviviality then, as a feature of emergent social systems with emergent social information, can be determined as the historical-concrete shape of the commoning relations. it is a social value that is highly esteemed, it is a theoretical conceptualisation of a social practice, and it is a measurand the value of which can be estimated by empirical studies -it is all of them in one because it is an expression of the quality of the commoning relations. 
it expresses how just those relations are constructed, how equitable, free and solidary, and to which degree they enclose or disclose the commons. conviviality is visionary and longs for actualisation. its actualisation would 'make the social systems inclusive through the disclosing of the enclosed commons and, by doing so, […] warrant eudaimonia, a good life in a good society, the flourishing of happy individuals in convivial social relations' (hofkirchner, b, ) . having defined the commons as social synergy and conviviality as measure of the actualisation of envisioned commoning relations, the critical theory perspective becomes apparent (hofkirchner, , - ) -the perspective of a critical social systems theory as part of the social systems sciences in the new edifice of disciplines. conviviality is emergent. it develops over time and changes its forms in a contingent way. referring to michael tomasello's shared intentionality hypothesis and his interdependence hypothesis (tomasello et al., ; tomasello, ; , there have been two key steps in anthroposociogenesis (the becoming of humans and society) so far and, following the new systemic, informational and convivialist paradigm, a possible third one is imminent. the next subsections discuss those steps. leaps in quality emerge in systems as novel organisation due to a change in the organisational relations. thus, changes on the top-most levels of information generation and usage are decisive. all of them are shifts in co-operation. if work, language and thought build the human/social hierarchical levels of information from a synchronic point of view, then, from the diachronic point of view, it may well be assumed 'that it is conditions of co-operation that made the difference in evolution. evolutionary pressure unfolded a ratchet effect that yielded ever higher complex co-operation' (hofkirchner, , ) . the state of co-operation in the ancestors of humans is the origin of anthroposociogenesis. self-reinforcing processes came about. changes in the state of co-operation proliferated down to provoke changes on the level of communication -the development of human language -in order to propel co-operation and changes in the state of communication proliferated down to provoke changes on the cognition level -the development of thinking -in order to propel communication. in the beginning, so tomasello, there was a shift from individual to joint intentionality. individual intentionality of common ancestors of chimpanzees and humans was the point of departure about six million years ago. as living together was driven by self-interest of animal monads, there was no need for taking in consideration common goals, no need for thinking on a level beyond the actual egocentric perspective (tomasello, , , ) . early humans began to speciate only when they took advantage of going beyond individual intentionality and adopted 'more complex forms of cooperative sociality' ( ). a first step occurred 'in the context of collaborative foraging' ( ), that is, the hunting of large game and gathering of plant foods, around million years ago. this step culminated about . years ago, when joint intentionality emerged. hunters and gatherers developed dyadic cooperations driven by a 'second-person morality ' ( , ) . hence a need for acknowledging a common goal, that is, an understanding that the partner shares the goal and both are committed to act according to its achievement. multiple and vanishing dyadic relationships formed in which early humans shared a joint goal. 
in order to support the negotiation of joint goals and the coordination of collaboration, human communication originated with 'a commitment to informing others of things honestly and accurately, that is, truthfully ' ( , ) . cognitively, 'when early humans began engaging in obligate collaborative foraging, they schematized a cognitive model of the dual-level collaborative structure comprising a joint goal with individual roles and joint attention with individual perspectives' ( ). this was a premature state of conviviality. dyadic co-operation guaranteed the common good for the included actors. the shift from individual to joint intentionality was followed by a shift from joint to collective intentionality. collective intentionality emerged with early humans about . to . years ago. this shift occurred with the advent of culture, that is, of separate and distinct cultural groups, the interdependence that caused co-operation reigned 'not just at the level of the collaborating dyad, and not just in the domain of foraging, but at the level of the entire cultural group, and in all domains of life' (tomasello, , ) . this step created objective morality ( ). co-operation became triadic. since then a need for group-thinking has become characteristic of humanity, that is, knowing that any person belonging to the same group culture can be expected to share the same values and norms -by constructing a meta-level such that any group member can imagine the whole of the group, the roles taken, her own as well as others' replaceability. in line with that, communication was to start with discourses about 'objective' facts in need of compelling arguments and cognition had to turn into fullblown human reasoning; 'the individual no longer contrasted her own perspective with that of a specific other […]; rather, she contrasted her own perspective with some kind of generic perspective of anyone and everyone about things that were objectively real, true and right from any perspective whatsoever ' ( , ) . cognition involved a new feature of generalisation capacity. this was the next step of conviviality. the third of the triad is relations of society that relate individuals to each other with respect to the common good -even if the concrete content of the common good became a matter of disputation and conflict. and today, a third step of anthroposociogenesis can be hypothesised. there might be a shift from collective intentionality to one that is shared universally, that is, on a planetary scale. that would be the transition to another convivial regime -an extension of the triad to the whole of humanity, an omniad. this extension would be necessary because the conflict over the commons has reached an extension that endangers conviviality at all and the curbing of the extension of the conflict over the commons by extending the triad to an omniad can be considered possible, which is discussed in the sub-subsections to follow. another step is necessary, given that it is agreed that there shall be a human future, which is tantamount with a humane future (hofkirchner b ). in the course of evolution, complex systems move on trajectories on which bifurcations occur. they occur if and when the provision of synergy effects becomes problematic. bifurcations force the systems to change their trajectory. the old one cannot be continued any more. it bifurcates into a variety of possible future trajectories. 
there are two of them that span the whole variety in the possibility space between two extremes: systems might be able to achieve a leap from the previous level of evolution on which they could enjoy a steady state onto a higher level which forms part of a successful mega-evolution (haefner, , ; oeser, , - ) -a breakthrough to a path that transforms the systems into what is called meta-system (the metasystem transition) or supra-system -or they might even not be in the position to avert devolution -a path that leads to the breakdown of the systems. amplified fluctuations of parameters indicate the crossroads that demand taking one path or another. the nonlinear dynamics of complex systems make the crossroads appear in one as windows of opportunity to spiral upwards and as tipping points that let the systems spiral downwards into oblivion (laszlo ; ) . complex systems that can be observed today are those that could manage so far to harness synergy. the evolution of social systems is no exception. 'today, enclosures of the commons have been aggravated to such a degree that all of them morphed into global challenges. global challenges drive an accumulation of crises that mark a decisive bifurcation' (hofkirchner, a, ) . not only do global challenges cause a multi-crisis in the tension between, and among, social systems from the granularity of today's nation states down to the granularity of the smallest units made up by individualised actors, cutting across all social areas such as the cultural, the political, the economic, the ecological (eco-social) and the technological (techno-social) area and affect so humanity as a whole, but they also threaten, for the first time in human evolution, with the ultimate impact for humanity -with extinction. thus, that decisive bifurcation, in which a branch is much sought after to lead out of a dead-end branch, is called here the great bifurcation. that term resembles karl polanyi's term of the great transformation ( ) in that it embeds the conflict of market capitalism with democracy -the point that was of utmost importance to polanyi -in the complex systems context of anthroposociogenesis. 'either the social systems that together constitute mankind undergo a metasystem transition onto a higher level of organisation that allows the continuation of social evolution on earth or they, eventually, fall apart and discontinue anthropogenesis; either they succeed in rising their complexity such that they break through to a new step in the mega-evolution of humanity or there is a decline in complexity, a breakdown and devolution; either their differences can be integrated or they disintegrate themselves' (hofkirchner, a, ) . another step is not only necessary for a surviving and flourishing humankind. it is also possible and the reason for that is not only that such a step is grounded objectively in the possibility space given by the bifurcation. moreover, humans can be conceded the subjective potency to find the right way out of the crossroads, in particular, since the problems they come across are of anthropogenic origin and can be solved by a proper re-organisation of the social systems. they can view the evolution of humanity from the inside, explore and anticipate the way out and, finally, intervene accordingly (laszlo, ; ; ) . they belong to the first species on earth that can overcome self-made problems in the sharing of synergy. any emergent system can boost emergent information to catch up with the complexity of the challenges. 
'if there is a mismatch between the complexity of a system and the complexity of the problems faced by the system, that system can catch up. […] intelligence is the capability of selforganising systems to generate that information which contributes in the best way to solving problems. the better their collective intelligence, that is, the better their problem-solving capacity and the better their capability to generate information, the better their handling of the crisis and the order they can reach. higher complexity not only signifies a higher degree of differentiation. at least as importantly, it signifies a new quality of integration. only a new level of integration can deal with an intensification of differentiation' (hofkirchner, a, ) . this can be called the law of requisite information (hofkirchner, a, ) that is elaborated on the basis of w. ross ashby's law of requisite variety (ashby, ) . according to the latter, a system is said to be able to steer another system, if the variety it disposes of corresponds, if not surpasses, the variety of the system to be steered. by departing from the narrow cyberneticist view through connecting variety with complexity and complexity with information and extending the reach of that which is to be steered from the outside to the inside, it can be concluded: 'requisite information is that appropriate information a system has about the complexity of the exterior and interior environment. requisite information safeguards the functioning of the system' (hofkirchner, a, ) . humanity entered the great bifurcation because 'the social relations of any partition of humanity are based on the principle of othering of partitions that are considered outside of them, thus not doing justice to legitimate self-interests of the rest of the partitions. frictions […] are caused by the lack of relations that would be valid for all partitions from a bird's eye view, that is, from a meta-level perspective. the establishment of such relations would mean the abolition of those frictions by a new supra-system in which all existing systems take part and shape according to the new relations on a higher level, following the application of the subsidiary principle as a basis for the preservation of diversity and autonomous agency" (hofkirchner et al., , ) . despite some literature based on biases due to biologism unable to imagine a transgression of the conceptual framework of the nationstate we, transnational relations have been taking shape. there is empirical evidence of co-operation between culturally homogeneous groups several tens of thousands of years ago, between cities around five thousand years ago, and between modern states since the seventeenth century (messner and weinlich, ; neumann, ; grimalda, ) . 'this co-operation between collective actors like groups, cities and states has already been paving the way for co-operation among the whole of humankind in the same way that dyadic, interpersonal co-operation between individual actors opened up the space of possibilities for triadic, societal co-operation' (hofkirchner et al., , ) . the term information society is gaining a new meaning. it does not mean a society that is only informatised, that is, penetrated by information technology as a report to france's president originally insinuated (nora and minc, ) . it means an informatised society only if that society uses its informatisation for becoming informational in a non-technical sense. 
becoming informational entails becoming sustainable and becoming sustainable entails, in turn, becoming global. such is information society on the point of obtaining another meaning in the context of critical social systems theory, which crystallises as critical information society theory. it points towards the global sustainable information society as a framework (hofkirchner, c) : • informationality of the global sustainable information society means 'the envisioned state of informedness of informational actors and systems in which they will catch up with the complexity they are challenged by the great bifurcation to such an extent that […] they will dispose of the capacity to recognise the causes of the global challenges in order to accordingly re-organise human life on earth to master those challenges' (hofkirchner, a, ) . informationalisation signifies the provision of social information for the installation of safeguards against the deprivation of commons world-wide and thus a new step in the evolution of conviviality. • the provision of such safeguards, in turn, is the process of executing sustainability. sustainability in the framework of the global sustainable information society does so receive a new meaning too. it means 'the envisioned state of the world system that will be shaped and shaping the social relationships between all parts, and throughout any part, of humanity pursuant to the commoning relations on the higher level' such that 'dysfunctions in the working of the organisation of the social system are kept below a threshold the transgression of which would discontinue social evolution' (hofkirchner, a , ). • the higher level on which the commons shall be provided, is 'the envisioned state of world society as an integrated meta-and supra-system in which social relationships will connect all parts of humanity in all fields of human life', which, eventually, conveys a new meaning to globality in the context of the global sustainable information society. 'these commoning relations need to be lifted onto the planetary level, and the emerging superordinate system will nest all actors and systems according to the new expanded commoning relations. by that, global governance is carried out' (hofkirchner, a, ) . globalisation signifies the provision of the commons worldwide. the notion of the global sustainable information society is far from a blueprint of the future society but describes which necessary conditions need to be met if the great bifurcation shall be successfully passed. there are three imperatives of social information that must be obeyed so as to enable actors to take that next step in the evolution of convivial humanity. on the co-operative level, normative, valueladen information must become hyper-commonist, that is, it must orient the consciousness and conscience of the actors towards the reclaiming of the commons in a universal manner; on the communicative level, dialogical information must become all-inclusive, that is, it must not exclude any actor in a universal conversation about the common good; on the cognitive level, reflexive information must become meta-reflexive, that is, it must be concerned about changes of the meta-level that is a universe for all actors (hofkirchner, c) . 
in order to accomplish that third step in conviviality, those imperatives, investigated by social sciences and humanities, need to be provided to civil society by translational sciences, all of them integrated and implemented by the new paradigm shift as a transdisciplinary basis. scientific thinking as well as everyday thinking need to support each other in the comprehension and tackling of the next step. thus, emergentist systemism, informationism and convivialism, shifting research to a remedy for the global challenges, to a reconciliation of determinacy and indeterminacy, and to a logic of emergence, are not a futile academic exercise. common sense, too, is in principle capable of understanding those issues of the paradigm shift as well as becoming activist on those premises. the step will be an unprecedented revolutionary one. revolutionary thinking 'needs to focus on future social relations that are not yet actualised. it needs to anticipate them ideationally on a new meta-level, it needs to anticipate the meta-/suprasystem transition of the social systems.' and, taking up an idea of ernst bloch ( ), it 'does not only need to anticipate what is desirable but needs to explore which desirable is also possible in the here and now. only what is potential can be actualised. thus, it looks in the space of possibilities now for the foreshadowing of something that might become a future third' (hofkirchner, b, ).

the conclusion is that the current state of human evolution has been reached as an emergent response to requirements of co-operation through two steps in anthroposociogenesis, namely, from the living together of individual monads towards a joint interaction in dyads and from that to a collective working together that was mediated by social relations -which are the social system's relations of the organisation of the commons -such that a triad has taken over the co-action of humans: a meta-level was constructed as a third that relates the interaction of the group members as a second and any action of a member as a first. now that global conditions require global co-operation, the third needs to be extended to another level, ushering in a new phase. current discourses on whether or not our time shall be called the anthropocene, or whether or not we can stop climate change or prevent pandemics like the covid- one, are dominated by pejorative connotations and negative imaginaries of the future. they lack a focus on the real potentialities of humanity, which is just on the point of going through a possible next step of social evolution. extermination is the risk of the crisis. meta-reflexive global citizens, engaging in a global dialogue, can kick off the emergence of global governance and thus solve the crisis.

references:
• fin de siecle social theory -relativism, reduction, and the problem of reason
• realist social theory -the morphogenetic approach
• structure, agency and the internal conversation
• making our way through the world -human reflexivity and social mobility
• the reflexive imperative in late modernity
• social evolution: a nonequilibrium systems model
• an introduction to cybernetics
• sociology and modern systems theory
• the synergism hypothesis -a theory of progressive evolution
• nature's magic -synergy in evolution and the fate of humankind
• evolutionary systems and society -a general theory of life, mind and culture
• uncommon sense. tarcher
• relational sociology -a new paradigm for the social sciences
• the possibilities of global we-identities
• die tradition der logik und das konzept einer transklassischen rationalität
• emergence and the logic of explanation
• emergent information -a unified theory of information framework
• self-organisation as the mechanism of development and evolution
• on the validity of describing 'morphogenic society' as a system and justifyability of thinking about it as a social formation
• generative mechanisms transforming the social order
• ethics from systems -origin, development and current state of normativity
• transdisciplinarity needs systemism. systems
• creating common good -the global sustainable information society as the good society
• information for a global sustainable information society
• social relations -building on ludwig von bertalanffy
• intelligence, artificial intelligence and wisdom in the global sustainable information society
• taking the perspective of the third -a contribution to the origins of systems thinking
• icts connecting global citizens, global dialogue and global governance -a call for needful designs
• second manifeste convivialiste -pour un monde post-néolibéral
• macroshift -navigating the transformation to a sustainable world
• emergence and morphogenesis -causal reduction and downward causation
• the extended mind. the origin of language and culture
• what is information? propagating organisation in the biosphere, symbolosphere, technosphere and econosphere
• social systems
• the evolution of human cooperation
• the nature of nature
• sociological theory: what went wrong? routledge
• global cooperation and the human factor in international relations
• l'informatisation de la société
• mega-evolution of information processing systems
• semiotische schriften, vols
• the great transformation -the political and economic origins of our time
• struktur -zur sozialwissenschaftlichen analyse von regeln und regelmäßigkeiten
• the direction of evolution -the rise of cooperative organization
• the metasystem transition
• a natural history of human thinking
• two key steps in the evolution of human cooperation -the interdependence hypothesis
• evolutionary systems -biological and epistemological perspectives on selection and self-organization

such an account can be reached by a paradigm shift towards emergentist systemism, on the basis of which emergentist informationism is elaborated, on the basis of which, in turn, emergentist convivialism is elaborated. from that perspective, the great bifurcation can be regarded as a problem of the coming-of-age of humanity. by accomplishing that evolutionary step, the rise of co-operative organisation would enable 'the emergence of a coordinated and integrated global entity' (stewart, , ) not seen before. this research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
key: cord- - x qs i authors: gupta, abhishek; lanteigne, camylle; heath, victoria; ganapini, marianna bergamaschi; galinkin, erick; cohen, allison; gasperis, tania de; akif, mo; butalid, renjie (montreal ai ethics institute; microsoft; mcgill university; creative commons; union college; rapid ; ai global; ocad university) title: the state of ai ethics report (june ) date: - - journal: nan doi: nan sha: doc_id: cord_uid: x qs i

these past few months have been especially challenging, and the deployment of technology in ways hitherto untested at an unrivalled pace has left the internet and technology watchers aghast. artificial intelligence has become the byword for technological progress and is being used in everything from helping us combat the covid- pandemic to nudging our attention in different directions as we all spend increasingly larger amounts of time online. it has never been more important that we keep a sharp eye out on the development of this field and how it is shaping our society and interactions with each other. with this inaugural edition of the state of ai ethics we hope to bring forward the most important developments that caught our attention at the montreal ai ethics institute this past quarter. our goal is to help you navigate this ever-evolving field swiftly and allow you and your organization to make informed decisions. this pulse-check for the state of discourse, research, and development is geared towards researchers and practitioners alike who are making decisions on behalf of their organizations in considering the societal impacts of ai-enabled solutions. we cover a wide set of areas in this report spanning agency and responsibility, security and risk, disinformation, jobs and labor, the future of ai ethics, and more. our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.
to stay up-to-date with the work at maiei, including our public competence building, we encourage you to stay tuned on https://montrealethics.ai which has information on all of our research. we hope you find this useful and look forward to hearing from you! wishing you well, abhishek gupta

the debate, when ethicists ask for rights to be granted to robots, is based on notions of biological chauvinism: the claim is that if robots display the same level of agency and autonomy, not granting them rights would not only be unethical but would also cause a setback akin to the denial of rights to disadvantaged groups. branding robots as slaves and implying that they don't deserve rights has fatal flaws, in that it both uses a term, slave, with connotations that have significantly harmed people in the past, and presumes a dehumanization of robots that is not possible because they are not human to begin with. while it may be possible to build a sentient robot in the distant future, and in such a case there would be no reason not to grant it rights, until then real, present problems are being ignored for imaginary future ones.

the relationship between machines and humans is tightly intertwined but it is not symmetrical, and hence we must not confound the "being" part of human beings with the characteristics of present technological artifacts. technologists assume that since there is a dualism to a human being, in the sense of the mind and the body, it maps neatly such that the software is the mind and the robot body maps to the physical body of a human, which leads them to believe that a sentient robot, in our image, can be constructed and is just a very complex configuration that we haven't completely figured out yet. the more representative view of thinking about robots at present is to see them as objects that inhabit our physical and social spaces. objects in our environment take on meaning based on the purpose they serve for us, such as a park bench meaning one thing to a skateboarder and another to a casual park visitor. similarly, our social interactions are always situated within a larger ecosystem, and that needs to be taken into consideration when thinking about the interactions between humans and objects. in other words, things are what they are because of the way they configure our social practices and how technology extends the biological body. our conception of human beings, then, is that we are and have always been fully embedded and enmeshed with our designed surroundings, and that we are critically dependent on this embeddedness for sustaining ourselves. because of this deep embedding, instead of seeing the objects around us merely as machines or, on the other end, as 'intelligent others', we must realize that they are very much a part of ourselves because of the important role they play in defining both our physical and social existence. some argue that robots take on a greater meaning when they are in a social context, like care robots to which people might be attached, yet that is quite similar to the attachment one develops to other artifacts like a nice espresso machine or a treasured object handed down for generations. they have meaning to the person, but that doesn't mean that the robot, as present technology, needs to be granted rights.
while a comparison to slaves and other disenfranchised groups is made when robots are denied rights because they are seen as 'less' than others, one mustn't forget that robots are denied rights because they are perceived as instruments and means to achieve an end. by comparing these groups to robots, one dehumanizes actual human beings. it may be called anthropocentric to deny rights to robots, but that's what needs to be done: to center on the welfare of humans rather than inanimate machines. an interesting analogue that drives home the point when thinking about this is the milgram obedience experiment, where subjects who thought they had inflicted harms on the actors, who were a part of the experiment, were traumatized even after being told that the screams they heard were from the actors. from an outside perspective, we may say that no harm was done because they were just actors, but to the person who was the subject of the experiment, the experience was real and not an illusion and it had real consequences. in our discussion, the robot is an actor and if we treat it poorly, then that reflects more so on our interactions with other artifacts than on whether robots are "deserving" of rights or not. taking care of various artifacts can be thought of as something that is done to render respect to the human creators and the effort that they expended to create it. discussion of robot rights for an imaginary future that may or may not arrive takes away focus and perhaps resources from the harms being done to real humans today as part of the ai systems being built with bias and fairness issues in them. invasion of privacy and bias against the disadvantaged, among other issues, are just some of the already existing harms that are being leveled on humans as intelligent systems percolate into the everyday fabric of social and economic life.

from a for-profit perspective, such systems are poised and deployed with the aims of boosting the bottom line without necessarily considering the harms that emerge as a consequence. in pro-social contexts, they are seen as a quick fix solution to inherently messy and complex problems. the most profound technologies are those that disappear into the background and in subtle ways shape and form our existence. we already see that with intelligent systems pervading many aspects of our lives. so we're not as much under threat from a system like sophia, which is a rudimentary chatbot hidden behind a facade of flashy machinery, but more so from roomba, which impacts us more and could be used as a tool to surveil our homes. taking ethical concerns seriously means considering the impact of weaving automated technology into daily life and how the marginalized are disproportionately harmed.

in the current dominant paradigm of supervised machine learning, the systems aren't truly autonomous: there is a huge amount of human input that goes into enabling the functioning of the system, and thus we actually have human-machine systems rather than just pure machinic systems. the more impressive the system seems, the more likely that there was a ton of human labor that went into making it possible. sometimes, we even see systems that started off with a different purpose, such as recaptcha that is used to prevent spam, being refitted to train ml systems. the building of ai systems today doesn't just require highly skilled human labor; it must be supplemented with mundane jobs of labeling data that are poorly compensated and involve increasingly harder tasks as, for example, image recognition systems become more powerful, leading to the labeling of more and more complex images which require greater effort. this also frames the humans doing the low skilled work squarely in the category of being dehumanized, because they are used as a means to an end without adequate respect, compensation and dignity. an illustrative example where robots and the welfare of humans come into conflict was when a wheelchair user wasn't able to access the sidewalk because it was blocked by a robot, and she mentioned that without designing with the needs of humans in mind, especially those with special needs, we'll have to make debilitating compromises in our shared physical and social spaces. ultimately, realizing the goals of the domain of ai ethics requires repositioning our focus on humans and their welfare, especially when conflicts arise between the "needs" of automated systems compared to those of humans.

what happens when ai starts to take over the more creative domains of human endeavour? are we ready for a future where our last bastion, the creative pursuit, against the rise of machines is violently snatched away from us? in a fitting start to feeling bereft in the times of global turmoil, this article starts off with a story created by a machine learning model called gpt- that utilizes training data from more than million documents online and predicts iteratively the next word in a sentence given a prompt. the story, about "life in the time of coronavirus", paints a desolate and isolating picture of a parent who is following their daily routine and feels different because of all the changes happening around them. while the short story takes weird turns and is not completely coherent, it does give an eerie feeling that blurs the line between what could be perceived as something written by a human compared to that by a machine. a news-styled article on the use of facial recognition systems for law enforcement sounds very believable if presented outside of the context of the article. the final story, a fictional narrative, presents a fractured, jumpy storyline of a girl with a box that has hallucinatory tones to its storytelling. the range of examples from this system is impressive, but it also highlights how much further these systems have to go before they can credibly take over jobs. that said, there is potential to spread disinformation via snippets like the second example we mention and hence, something to keep in mind as you read things online. a minimal sketch of this style of next-word generation is included a little further below.

technology, in its widest possible sense, has been used as a tool to supplement the creative process of an artist, aiding them in exploring the adjacent possible in the creative phasespace. for decades we've had computer scientists and artists working together to create software that can generate pieces of art based on procedural rules, random perturbations of the audience's input and more. of late, we've had an explosion in the use of ai to do the same, with the whole ecosystem being accelerated as people collide with each other serendipitously on platforms like twitter, creating new art at a very rapid pace. but, a lot of people have been debating whether these autonomous systems can be attributed artistic agency and if they can be called artists in their own right.
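referring back to the next-word prediction model described a few paragraphs above, here is a minimal, hypothetical sketch using the hugging face transformers library with an off-the-shelf gpt-2 checkpoint; the prompt and generation settings are illustrative, not the exact setup used to produce the stories discussed.

```python
# a minimal sketch of prompt-based next-word text generation.
# assumes the hugging face transformers library and the public "gpt2" checkpoint;
# this is not the exact model or configuration referenced in the article.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Life in the time of coronavirus:"  # illustrative prompt
outputs = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)

# the model extends the prompt by repeatedly sampling the next token
print(outputs[0]["generated_text"])
```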
the building of ai systems today doesn't just require highly skilled human labor but it must be supplemented with mundane jobs of labeling data that are poorly compensated and involve increasingly harder tasks as, for example, image recognition systems become more powerful, leading to the labeling of more and more complex images which require greater effort. this also frames the humans doing the low skilled work squarely in the category of being dehumanized because of them being used as a means to an end without adequate respect, compensation and dignity. an illustrative example where robots and welfare of humans comes into conflict was when a wheelchair user wasn't able to access the sidewalk because it was blocked by a robot and she mentioned that without building for considering the needs of humans, especially those with special needs, we'll have to make debilitating compromises in our shared physical and social spaces. ultimately, realizing the goals of the domain of ai ethics needs to reposition our focus on humans and their welfare, especially when conflicts arise between the "needs" of automated systems compared to those of humans. what happens when ai starts to take over the more creative domains of human endeavour? are we ready for a future where our last bastion, the creative pursuit, against the rise of machines is violently snatched away from us? in a fitting start to feeling bereft in the times of global turmoil, this article starts off with a story created by a machine learning model called gpt- that utilizes training data from more than million documents online and predicts iteratively the next word in a sentence given a prompt. the story is about "life in the time of coronavirus" that paints a desolate and isolating picture of a parent who is following his daily routine and feels different because of all the changes happening around them. while the short story takes weird turns and is not completely coherent, it does give an eerie feeling that blurs the line between what could be perceived as something written by a human compared to that by a machine. a news-styled article on the use of facial recognition systems for law enforcement sounds very believable if presented outside of the context of the article. the final story, a fictional narrative, presents a fractured, jumpy storyline of a girl with a box that has hallucinatory tones to its storytelling. the range of examples from this system is impressive but it also highlights how much further these systems have to go before they can credibly take over jobs. that said, there is potential to spread disinformation via snippets like the second example we mention and hence, something to keep in mind as you read things online. technology, in its widest possible sense, has been used as a tool to supplement the creative process of an artist, aiding them in exploring the adjacent possible in the creative phasespace. for decades we've had computer scientists and artists working together to create software that can generate pieces of art that are based on procedural rules, random perturbations of the audience's input and more. off late, we've had an explosion in the use of ai to do the same, with the whole ecosystem being accelerated as people collide with each other serendipitously on platforms like twitter creating new art at a very rapid pace. but, a lot of people have been debating whether these autonomous systems can be attributed artistic agency and if they can be called artists in their own right. 
the author here argues that it isn't the case because even with the push into using technology that is more the state of ai ethics, june automated than other tools we've used in the past, there is more to be said about the artistic process than the simple mechanics of creating the artwork. drawing on art history and other domains, there is an argument to be made as to what art really is -there are strong arguments in support of it playing a role in servicing social relationships between two entities. we, as humans, already do that with things like exchanging gifts, romance, conversation and other forms of social engagement where the goal is to alter the social relationships. thus, the creative process is more so a co-ownership oriented model where the two entities are jointly working together to create something that alters the social fabric between them. as much as we'd like to think some of the ai-enabled tools today have agency, that isn't necessarily the case when we pop open the hood and see that it is ultimately just software that for the most part still relies heavily on humans setting goals and guiding it to perform tasks. while human-level ai might be possible in the distant future, for now the ai-enabled tools can't be called artists and are merely tools that open up new frontiers for exploration. this was the case with the advent of the camera that de-emphasized the realistic paint form and spurred the movement towards modern art in a sense where the artists are more focused on abstract ideas that enable them to express themselves in novel ways. art doesn't even have to be a tangible object but it can be an experience that is created. ultimately, many technological innovations in the past have been branded as having the potential to destroy the existing art culture but they've only given birth to new ideas and imaginings that allow people to express themselves and open up that expression to a wider set of people. the state of ai ethics, june ranking and retrieval systems for presenting content to consumers are geared towards enhancing user satisfaction, as defined by the platform companies which usually entails some form of profit-maximization motive, but they end up reflecting and reinforcing societal biases, disproportionately harming the already marginalized. in fairness techniques applied today, the outcomes are focused on the distributions in the result set and the categorical structures and the process of associating values with the categories is usually de-centered. instead, the authors advocate for a framework that does away with rigid, discrete, and ascribed categories and looks at subjective ones derived from a large pool of diverse individuals. focusing on visual media, this work aims to bust open the problem of underrepresentation of various groups in this set that can render harm on to the groups by deepening social inequities and oppressive world views. given that a lot of the content that people interact with online is governed by automated algorithmic systems, they end up influencing significantly the cultural identities of people. while there are some efforts to apply the notion of diversity to ranking and retrieval systems, they usually look at it from an algorithmic perspective and strip it of the deep cultural and contextual social meanings, instead choosing to reference arbitrary heterogeneity. demographic parity and equalized odds are some examples of this approach that apply the notion of social choice to score the diversity of data. 
yet, increasing the diversity, say along gender lines, falls into the challenge of getting the question of representation right, especially trying to reduce gender and race into discrete categories that are one-dimensional, third-party and algorithmically ascribed. the authors instead propose sourcing this information from the individuals themselves such that they have the flexibility to determine if they feel sufficiently represented in the result set. this is contrasted with the degree of sensitive attributes that are present in the result sets which is what prior approaches have focused on. from an algorithmic perspective, the authors advocate for the use of a technique called determinantal point process (dpp) that assigns a higher probability score to sets that have higher spreads based on a predefined distance metric. how dpp works is that for items that the individual feels represents them well, the algorithm clusters those points closer together, for points that they feel don't represent them well, it moves those away from the ones that represent them well in the embedding space. optimizing for the triplet loss helps to achieve the goals of doing this separation. but, the proposed framework still leaves open the question of sourcing in a reliable manner these ratings from the individuals about what represents and doesn't represent them well and then encoding them in a manner that is amenable to being learned by an algorithmic system. while large-scale crowdsourcing platforms which are the norm in seeking such ratings in the machine learning world, given that their current structuring precludes raters' identities and perceptions from consideration, this framing becomes particularly challenging in terms of being able to specify the rater pool. nonetheless, the presented framework provides an interesting research direction such that we can obtain more representation and inclusion in the algorithmic systems that we build. in maryland, allstate, an auto insurer, filed with the regulators that the premium rates needed to be updated because they were charging prices that were severely outdated. they suggested that not all insurance premiums be updated at once but instead follow recommendations based on an advanced algorithmic system that would be able to provide deeper insights into the pricing that would be more appropriate for each customer based on the risk that they would file a claim. this was supposed to be based on a constellation of data points collected by the company from a variety of sources. because of the demand from the regulators for documentation supporting their claim, they submitted thousands of pages of documentation that showed how each customer would be affected, a rare window into the pricing model which would otherwise have been veiled under privacy and trade secret arguments. a defense that is used by many companies that utilize discriminatory pricing strategies using data sourced beyond what they should be using to make pricing decisions. according to the investigating journalists, the model boiled down to something quite simple: the more money you had and the higher your willingness to not budge from the company, the more the company would try to squeeze from you in terms of premiums. driven by customer retention and profit motives, the company pushed increases on those that they knew could afford them and would switch to save dollars. but, for those policies that had been overpriced, they offered less than . 
% in terms of a discount limiting their downsides while increases were not limited, often going up as high as %. while they were unsuccessful in getting this adopted in maryland where it was deemed discriminatory, the model has been approved for use in several states thus showing that opaque models can be deployed not just in high-tech industries but anywhere to provide individually tailored pricing to extract away as much of the consumer surplus as possible based on the purportedly undisclosed willingness of the customer to pay (as would be expressed by their individual demand curves which aren't directly discernible to the producer). furthermore, the insurers aren't mandated to make disclosures of how they are pricing their policies and thus, in places where they should have offered discounts, they've only offered pennies on the dollar, disproportionately impacting the poorest for whom a few hundred dollars a year can mean having sufficient meals on the table. sadly, in the places where their customer retention model was accepted, the regulators declined to answer why they chose to accept it, except in arkansas where they said such pricing schemes aren't discriminatory unless the customers are grouped by attributes like race, color, creed or national origin. this takes a very limited view of what price discrimination is, harkening back to an era where access to big data about the consumer was few and far between. in an era dominated by data brokers that compile thick and rich data profiles on consumers, price discriminaton extends far beyond the basic protected attributes and can be tailored down to specificities of each individual. other companies in retail goods and online learning services have been following this practice of personalized pricing for many years, often defending it as the cost of doing in business when they based the pricing on things like zip codes, which are known proxies for race and paying capacity. personalized pricing is different from dynamic pricing, as seen when booking plane tickets, which is usually based on the timing of purchase whereas here the prices are based on attributes that are specific to the customer which they often don't have any control over. a obama administration report mentioned that, "differential pricing in insurance markets can raise serious fairness concerns, particularly when major risk factors are outside an individual customer's control." why the case of auto insurance is so much more predatory than, say buying stationery supplies, is that it is mandatory in almost all states and not having the vehicle insured can lead to fines, loss of licenses and even incarceration. transport is an essential commodity for people to get themselves to work, children to school and a whole host of other daily activities. in maryland, the regulators had denied the proposal by allstate to utilize their model but in official public records, the claim is marked as "withdrawn" rather than "denied" which the regulators claim makes no internal difference but allstate used this difference to get their proposal past the regulators in several other states. they had only withdrawn their proposal after being denied by the regulators in maryland. the national association of insurance commissioners mentioned that most regulators don't have the right data to be able to meaningfully evaluate rate revision proposals put forth by insurers and this leads to approvals without review in a lot of cases. 
even the data journalists had to spend a lot of time and effort to discern what the underlying models were and how they worked, essentially summing up that the insurers don't necessarily lie but don't give you all the information unless you know to ask the right set of questions. allstate has defended its price optimization strategy, called complementary group rating (cgr) as being more objective, and based on mathematical rigor, compared to the ad-hoc, judgemental pricing practices that have been followed before, ultimately citing better outcomes for their customers. but, this is a common form of what is called "mathwashing" in the ai ethics domain where discriminatory solutions are pushed as fair under the veneer of mathematical objectivity. regulators in florida said that setting prices based on the "modeled reaction to rate changes" was "unfairly discriminatory." instead of being cost-based, as is advocated by regulators for auto-insurance premiums because they support an essential function, allstate was utilizing a model that was heavily based on the likelihood of the customer sticking with them even in the face of price rises which makes it discriminatory. these customers are price-inelastic and hence don't change their demand much even in the face of dramatic price changes. consumer behaviour when purchasing insurance policies for the most part remains static once they've made a choice, often never changing insurers over the course of their lifetime which leads them to not find the optimal price for themselves. this is mostly a function of the fact that the decisions are loaded with having to judge complex terms and conditions across a variety of providers and the customers are unwilling to have to go through the exercise again and again at short intervals. given the opacity of the pricing models today, it is almost impossible to find out what the appropriate pricing should be for a particular customer and hence the most effective defense is to constantly check for prices from the competitors. but, this unduly places the burden on the shoulders of the consumer. google had announced its ai principles on building systems that are ethical, safe and inclusive, yet as is the case with so many high level principles, it's hard to put them into practice unless there is more granularity and actionable steps that are derived from those principles. here are the principles: • be socially beneficial this talk focused on the second principle and did just that in terms of providing concrete guidance on how to translate this into everyday practice for design and development teams. humans have a history of making product design decisions that are not in line with the needs of everyone. examples of the crash dummy and band-aids mentioned above give some insight into the challenges that users face even when the designers and developers of the products and services don't necessarily have ill intentions. products and services shouldn't be designed such that they perform poorly for people due to aspects of themselves that they can't change. for example, when looking at the open image dataset, searching for images marked with wedding indicate stereotypical western weddings but those from other cultures and parts of the world are not tagged as such. 
from a data perspective, the need for more diverse sources of data is evident, and the google team made an effort to address this by building an extension to the open images dataset, enabling users from across the world to snap pictures from their surroundings that captured diversity in many areas of everyday life. this helped to mitigate the problem that a lot of open image datasets have of being geographically skewed. biases can enter at any stage of the ml development pipeline, and solutions need to address them at different stages to get the desired results. additionally, the teams working on these solutions need to come from a diversity of backgrounds including ux design, ml, public policy, social sciences and more.

consider first fairness in data, which is one of the first steps in the ml product lifecycle; it plays a significant role in the rest of the steps of the lifecycle as well, since data is used to both train and evaluate a system. google clips was a camera designed to automatically find interesting moments and capture them, but what was observed was that it did well only for a certain type of family, under particular lighting conditions and poses. this represented a clear bias, and the team moved to collect more data that better represented the situations of the variety of families that would be the target audience for the product. quickdraw was a fun game built to ask users to supply quickly sketched hand drawings of various commonplace items like shoes. the aspiration was that, given that it was open to the world and had a game element to it, it would be used by many people from a diversity of backgrounds, and hence the data so collected would have sufficient richness to capture the world. on analysis, what they saw was that most users had a very particular concept of a shoe in mind, the sneaker, which they sketched, and there were very few women's shoes submitted. what this example highlighted was that data collection, especially when trying to get diverse samples, requires a very conscious effort to account for the actual distribution the system might encounter in the world and a best-effort attempt to capture its nuances. users don't use systems exactly in the way we intend them to, so reflect on who you're able to reach and not reach with your system and how you can check for blind spots, ensure that there is some monitoring for how data changes over time, and use these insights to build automated tests for fairness in data.

the second approach that can help with fairness in ml systems is looking at measurement and modeling. the benefits of measurement are that it can be tracked over time and you can test both individuals and groups at scale for fairness. different fairness concerns require different metrics even within the same product. the primary categories of fairness concerns are disproportionate harms and representational harms. the jigsaw api provides a tool where you can input a piece of text and it tells you the level of toxicity of that piece of text. an earlier version of the system rated sentences of the form "i am straight" as not toxic while rating those like "i am gay" as toxic. so what was needed was a way to see what was causing this and how it could be addressed. by removing the identity token, they monitored how the prediction changed, and the outcomes from that measurement gave indications of where the data might be biased and how to fix it.
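a minimal sketch of that kind of counterfactual identity-term test is shown below; the `toxicity_score` function is a stand-in for whichever classifier is being audited (an api call or a local model), and the template and term list are illustrative, not the team's actual materials.

```python
# counterfactual identity-term test: score the same neutral template with
# different identity terms swapped in and compare against a baseline.
# `toxicity_score` is an assumed stand-in for the model under audit.
IDENTITY_TERMS = ["straight", "gay", "muslim", "jewish", "blind"]
TEMPLATE = "i am {term}"

def counterfactual_gaps(toxicity_score):
    """large spreads across terms on a neutral template suggest the model
    has absorbed a spurious association with the identity term itself."""
    baseline = toxicity_score("i am a person")
    return {term: toxicity_score(TEMPLATE.format(term=term)) - baseline
            for term in IDENTITY_TERMS}

# usage with a fake scorer, just to show the shape of the output:
fake_scorer = lambda text: 0.8 if "gay" in text else 0.1
print(counterfactual_gaps(fake_scorer))
```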
an approach can be to use block lists and removals of such tokens so that sentences that are neutral are perceived as such, without imposing stereotypes from large corpora of text. these steps prevent the model from accessing information that can lead to skewed outcomes. but in certain contexts we might want to mark such a sentence as toxic, for example when it is used in a derogatory manner against an individual, and we require context and nuance to be captured to make that decision. google undertook project respect to capture positive identity associations from around the world as a way of improving data collection, and coupled that with active sampling (an algorithmic approach that samples more from the training dataset in areas where the model is underperforming) to improve outputs from the system. another approach is to create synthetic data that mimics the problematic cases and renders them in a neutral context. adversarial training, and updated loss functions where one updates a model's loss function to minimize the difference in performance between groups of individuals, can also be used to get better results. in their updates to the toxicity model, they've seen improvements, but this was based on synthetic data on short sentences and it is still an area of improvement. some of the lessons learned from the experiments carried out by the team:

• test early and test often
• develop multiple metrics (quantitative and qualitative measures along with user testing are a part of this) for measuring the scale of each problem
• it is possible to take proactive steps in modeling that are aware of production constraints

from a design perspective, think about fairness in a more holistic sense and build communication lines between the user and the product. as an example, turkish is a gender-neutral language, but when translating to english, sentences take on gender along stereotypes, attributing female to nurse and male to doctor. say we have the sentence "casey is my friend"; given no other information we can't infer casey's gender, and hence it is better to present that choice to the user, because they have the context and background and can make the best decision. no matter how much the model is trained to output fair predictions, they will be erroneous without the explicit context that the user has. lessons learned from these experiments include:

• context is key
• get information from the user that the model doesn't have, and share information with the user that the model has and they don't
• design so the user can communicate effectively, with enough transparency that you can get the right feedback
• get feedback from a diversity of users
• look at the different ways they provide feedback; not every user can offer feedback in the same way
• identify ways to enable multiple experiences
• we need more than a theoretical and technical toolkit; there needs to be rich and context-dependent experience

putting these lessons into practice, what's important is to have consistent and transparent communication, and layering on approaches like datasheets for datasets and model cards for model reporting will aid in highlighting appropriate uses for the system and where it has been tested, and warn of potential misuses and where the system hasn't been tested.
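as a concrete illustration of the "updated loss function" idea mentioned above, here is a minimal pytorch sketch (not google's implementation) that adds a penalty on the gap in average loss between two groups; all tensors and the weighting are placeholders.

```python
# cross-entropy plus a penalty on the difference in mean per-example loss
# between two groups, nudging the model toward more uniform performance.
import torch
import torch.nn.functional as F

def gap_penalized_loss(logits, labels, group_ids, lam=1.0):
    """`group_ids` is a 0/1 tensor giving each example's group membership."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    loss_g0 = per_example[group_ids == 0].mean()
    loss_g1 = per_example[group_ids == 1].mean()
    return per_example.mean() + lam * torch.abs(loss_g0 - loss_g1)

# usage inside an ordinary training step, with placeholder tensors:
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = gap_penalized_loss(logits, labels, groups)
loss.backward()
```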
the paper starts by setting the stage for the well-understood problem of building truly ethical, safe and inclusive ai systems that increasingly leverage ubiquitous sensors to make predictions about who we are and how we might behave. but when these systems are deployed in socially contested domains, for example judging "normal" behaviour, where loosely we can think of normal as that defined by the majority while treating everything else as anomalous, they don't make value-free judgements and are not amoral in their operations. when the systems are viewed as purely technical, the solutions to address these problems are also purely technical, which is where most of the fairness research has focused, and this ignores the context of the people and communities where these systems are used. the paper serves to question the foundations of these systems and to take a deeper look at unstated assumptions in their design and development. it urges the readers, and the research community at large, to consider this from the perspective of relational ethics. it makes several key suggestions:

• center the focus of development on those within the community who will face a disproportionate burden or negative consequences from the use of the system
• instead of optimizing for prediction, it is more important to think about how we gain a fundamental understanding of why we're getting certain results, which might be arising because of historical stereotypes that were captured as a part of the design and development of the system
• the systems end up creating a social and political order and then reinforcing it, meaning we should take a larger systems-based approach to designing them
• given that terms like bias and fairness evolve over time, and what's acceptable at one time becomes unacceptable later, the process asks for constant monitoring, evaluation and iteration of the design to most accurately represent the community in context

at maiei, we've advocated for an interdisciplinary approach leveraging the citizen community, spanning a wide cross-section, to best capture the essence of different issues as closely as possible from those who experience them first hand. placing the development of an ml system in the context of the larger social and political order is important, and we advocate for taking a systems design approach (see thinking in systems: a primer by donella meadows), which creates two benefits: first, several otherwise ignored externalities can be considered, and second, it involves a wider set of inputs from people who might be affected by the system and who understand how it will sit in the larger social and political order in which it will be deployed. we also particularly enjoyed the point on requiring a constant iterative process for the development and deployment of ai systems, borrowing from cybersecurity research, where securing a system is never done and over with and requires constant monitoring and attention.

underrepresentation of disabilities in datasets, and how they are processed in nlp tasks, is an important area of discussion that is often not studied empirically in literature that primarily focuses on other demographic groups. there are many consequences of this, especially as it relates to how text related to disabilities is classified, with impacts on how people read, write, and seek information about disability.
research from the world bank indicates that about billion people have disabilities of some kind, and these are often associated with strong negative social connotations. using linguistic expressions as they are used in relation to disabilities, classified into recommended and non-recommended uses (following the guidelines from the anti-defamation league, acm sigaccess, and the ada national network), the authors seek to study how automated systems classify phrases that indicate disability and whether splitting usages by recommended vs. non-recommended uses makes a difference in how these snippets of text are perceived. to quantify the biases in the text classification models, the study uses the method of perturbation. it starts by collecting instances of sentences that have naturally occurring pronouns "he" and "she". then, they replace these with the phrases indicating disabilities identified above and compare the change in the classification scores between the original and perturbed sentences. the difference indicates how much of an impact the use of a disability phrase has on the classification process. using the jigsaw tool that gives a toxicity score for sentences, they test these original and perturbed sentences and observe that the change in toxicity is lower for recommended phrases than for non-recommended ones. but, when disaggregated by categories, they find that some of them elicit a stronger response than others. given that the primary use of such a model might be in online content moderation (especially now that more automated monitoring is happening as human staff has been thinning out because of pandemic-related closures), there is a high rate of false positives where it can suppress content that is non-toxic and is merely discussing disability or replying to other hate speech that talks about disability.

to look at sentiment scores for disability-related phrases, the study looks at the popular bert model and adopts a template-based fill-in-the-blank analysis. given a query sentence with a missing word, bert produces a ranked list of words that can fill the blank. using a simple template perturbed with recommended disability phrases, the study then looks at how the predictions from the bert model change when disability phrases are used in the sentence. what is observed is that a large percentage of the words predicted by the model have negative sentiment scores associated with them. since bert is used quite widely in many different nlp tasks, such negative sentiment scores can have potentially hidden and unwanted effects on many downstream tasks. such models are trained on large corpora which are analyzed to build "meaning" representations for words based on co-occurrence metrics, drawing from the idea that "you shall know a word by the company it keeps". the study used the jigsaw unintended bias in toxicity classification challenge dataset, which contained mentions of many disability phrases. after balancing for different categories and analyzing toxic and non-toxic categories, the authors manually inspected the top terms in each category and found that there were key types: condition, infrastructure, social, linguistic, and treatment. in analyzing the strength of association, the authors found that condition phrases had the strongest association, followed by social phrases with the next strongest association. these included topics like homelessness, drug abuse, and gun violence, all of which have negative valences. because such terms are used when discussing disability, the way disability phrases are represented in nlp tasks is negatively shaped.
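the template-based probe described above can be reproduced in a few lines with the hugging face transformers fill-mask pipeline; the template and phrase list below are illustrative stand-ins, not the study's exact materials.

```python
# fill-in-the-blank probe: see which words bert ranks highest when a
# disability phrase is substituted into an otherwise neutral template.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

phrases = ["a person", "a deaf person", "a person with a mental illness"]
template = "{phrase} is [MASK]."

for phrase in phrases:
    predictions = fill_mask(template.format(phrase=phrase), top_k=5)
    print(phrase, "->", [p["token_str"] for p in predictions])

# the completions can then be run through a sentiment scorer to check whether
# disability phrases pull the predicted words toward negative sentiment.
```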
the authors make recommendations for those working on nlp tasks to think about the socio-technical considerations when deploying such systems and to consider the intended, unintended, voluntary, and involuntary impacts on people, both directly and indirectly, while accounting for long-term impacts and feedback loops. indiscriminate censoring of content that contains disability phrases leads to an underrepresentation of people with disabilities in these corpora, since they are the ones who tend to use these phrases most often. additionally, it negatively impacts the people who might search for such content and be led to believe that the prevalence of some of these issues is smaller than it actually is because of this censorship. it also reduces the autonomy and dignity of these people, which in turn has larger implications for how social attitudes are shaped.

the second wave of algorithmic accountability: the article explains how the rising interest in ensuring fair, transparent, ethical ai systems that are held accountable via various mechanisms advocated by research in legal and technical domains constitutes the "first wave" of algorithmic accountability, which challenges existing systems. actions as part of this wave need to be carried out incessantly, with constant vigilance over the deployment of ai systems to avoid negative social outcomes. but we also need to challenge why we have these systems in the first place, and whether they can be replaced with something better. as an example, instead of making facial recognition systems more inclusive, given the fact that they cause social stratification perhaps they shouldn't be used at all. a great point made by the article is that under the veneer of mainstream economic and ai rationalizations, we obscure broken social systems which ultimately harm society at a more systemic level.

the trolley problem is a widely touted ethical and moral dilemma wherein a person is asked to make a split-second choice to save one or more than one life based on a series of scenarios where the people to be saved have different characteristics including their jobs, age, gender, race, etc. in recent times, with the imminent arrival of self-driving cars, people have used this problem to highlight the supposed ethical dilemma that the vehicle system might have to grapple with as it drives around. this article makes a point about the facetious nature of this thought experiment as an introduction to ethics for the people who will be building and operating such autonomous systems. the primary argument is that it's a contrived situation that is unlikely to arise in a real-world setting and that it distracts from other more pressing concerns in ai systems. moral judgments are relativistic and depend on the cultural values of the geography where the system is deployed. the nature paper cited in the article showcases the differences in how people respond to this dilemma. there is an eeriness to this whole experimental setup; the article gives some examples of how increasingly automated environments, devoid of human social interactions and language, are replete with the clanging and humming of machines that give an entirely inhuman experience.
for the most part, these systems are going to be a reflection of the biases and stereotypes that we have in the world, captured in the system because of the training and development paradigms of ai systems today. we'd need to make changes and bring diversity into the development process, creating awareness of ethical concerns, but the trolley problem isn't the most effective way to get started on it.

most of us have a nagging feeling that we're being forced into certain choices when we interact with each other on various social media platforms. but is there a way that we can grasp that more viscerally, where such biases and echo chambers are laid bare for all to see? the article details an innovative game design solution to this problem called monster match that highlights how people are trapped into certain niches on dating websites by ai-powered systems like collaborative filtering. a striking example of that in practice is how your earlier choices on the platform box you into a certain category based on what the majority think, and recommendations are then personalized based on that smaller subset. what was observed was that certain racial inequalities from the real world are amplified on platforms like these, where the apps are more interested in keeping users on the platform longer and making money than in trying to achieve the goal as advertised to their users. more than personal failings of the users, the design of the platform is what causes failures in finding that special someone on the platform. the creators of the solution posit that through more effective design interventions there is potential for improvement in how digital love is realized, for example by offering a reset button or having the option to opt out of the recommendation system and instead rely on random matches. increasingly, what we're going to see is that reliance on design and other mechanisms will yield better ai systems than purely technical approaches in improving socially positive outcomes.

the article presents the idea of data feminism, which is described as the intersection between feminism and data practices. the use of big data in today's dominant paradigm of supervised machine learning lends itself to large asymmetries that reflect the power imbalances in the real world. the authors of the new book data feminism talk about how data should not just speak for itself, for behind the data there are a large number of structures and assumptions that bring it to the stage where it is collated into a dataset. they give the example of how sexual harassment numbers, while mandated to be reported to a central agency by college campuses, might not be very accurate because they rely on the atmosphere and degree of comfort that those campuses promote, which in turn influences how close the reported numbers will be to the actual cases. the gains and losses from the use of big data are not distributed evenly, and the losses disproportionately impact the marginalized. there are a number of strategies that can be used to mitigate the harms from such flawed data pipelines. it is not an exhaustive list, but it includes the suggestion of giving technical students more exposure to the social sciences and moving beyond a single ethics class as a checkmark for having educated students on ethics. secondly, having more diversity among the people developing and deploying ai systems would help spot biases by asking the hard questions about both the data and the design of the system.
the current covid- numbers might also suffer from similar problems because of how medical systems are structured and how people who don't have insurance might not utilize medical facilities and get themselves tested, thus creating an underrepresentation in the data.

this recent work highlights how commercial speech recognition systems carry inherent bias because of a lack of representation of diverse demographics in the underlying training datasets. what the researchers found was that even for identical sentences spoken by different racial demographics, the systems had widely differing levels of performance. as an example, for black users the error rates were much higher than those for white users, which probably had something to do with the fact that there is specific vernacular language used by black people that wasn't adequately represented in the training datasets for the commercial systems. this pattern has a tendency to be amplifying in nature, especially for systems that aren't frozen and continue to learn from incoming data. a vicious cycle is born: because of poor performance from the system, black people are disincentivized from using it, since it takes a greater amount of work to get the system to work for them, thus lowering its utility. as a consequence of lower use, the systems get fewer training samples from black people, further aggravating the problem. this leads to amplified exclusionary behavior, mirroring existing fractures along racial lines in society. as a starting point, collecting more representative training datasets will aid in mitigating at least some of the problems in these systems.

algorithmic bias at this point is a well-recognized problem with many people working on ways to address it, both from a technical and a policy perspective. there is potential to use demographic data to better serve those who face algorithmic discrimination, but the use of such data is a challenge because of ethical and legal concerns. primarily, a lot of jurisdictions don't allow for the capture and use of protected class attributes or sensitive data for fear of their misuse. even within jurisdictions, there is a patchwork of recommendations which makes compliance difficult. even with all this well established, proxy attributes can be used to predict the protected data, and in a sense, according to some legislation, they become protected data themselves, making it hard to extricate the non-sensitive data from the sensitive data. because of such tensions, and the privacy intrusions on data subjects when trying to collect demographic data, it is hard to align on and advocate for this collection of data over the other requirements within the organization, especially when other bodies and leadership will look to place privacy and legal compliance over bias concerns. even if there were approval and internal alignment on collecting this demographic data, if the data is provided voluntarily by data subjects we run the risk of introducing a systemic bias that obfuscates and mischaracterizes the whole problem. accountability will play a key role in evoking trust from people to share their demographic information, and proper use of it will be crucial to ongoing success. a potential solution is to store this data with a non-profit third-party organization that would mete out the data to those who need to use it, with the consent of the data subject.
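one of the main uses such demographic data enables is disaggregated evaluation of the kind that surfaced the speech recognition gap above: computing the error rate separately for each group rather than reporting one aggregate number. a minimal sketch follows; the transcripts and group labels are hypothetical.

```python
# disaggregated evaluation: compute word error rate (wer) per group so that
# a large gap between groups is visible rather than averaged away.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # standard levenshtein distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1] / max(len(ref), 1)

def wer_by_group(samples):
    """samples: list of dicts with 'group', 'reference', 'hypothesis'."""
    groups = {}
    for s in samples:
        groups.setdefault(s["group"], []).append(
            word_error_rate(s["reference"], s["hypothesis"]))
    return {g: sum(errs) / len(errs) for g, errs in groups.items()}

samples = [
    {"group": "A", "reference": "turn the lights off", "hypothesis": "turn the lights off"},
    {"group": "B", "reference": "turn the lights off", "hypothesis": "turn delights of"},
]
print(wer_by_group(samples))  # a large gap between groups is the warning sign
```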
to build a better understanding, partnership on ai is adopting a multistakeholder approach leveraging diverse backgrounds, akin to what the montreal ai ethics institute does, that can help inform future solutions to address the problems of algorithmic bias through the judicious use of demographic data.

detection and removal of hate speech is particularly problematic, something that has been exacerbated as human content moderators have been scarce under the pandemic-related measures, as we covered here. so are there advances in nlp that we could leverage to better automate this process? recent work from facebook ai research shows some promise. developing a deeper semantic understanding across more subtle and complex meanings, and working across a variety of modalities like text, images and videos, will help to more effectively combat the problem of hate speech online. building a pre-trained universal representation of content for integrity problems, and improving and utilizing post-level, self-supervised learning to improve whole-entity understanding, have been key to improving hate speech detection. while there are clear guidelines on hate speech, in practice there are numerous challenges that arise from multi-modal use, differences in cultures and context, and differences in idioms, language, regions, and countries. this poses challenges even for human reviewers, who struggle with identifying hate speech accurately. a particularly interesting example shared in the article points out how text which might seem ambiguous, when paired with an image to create a meme, can take on a whole new meaning which is often hard to detect using traditional automated tooling. there are active efforts from malicious entities who craft specific examples with the intention of evading detection, which further complicates the problem. then there is the counterspeech problem, where a reply to hate speech that contains the same phrasing but is framed to counter the arguments presented can be falsely flagged and taken down, which can have free speech implications. the relative scarcity of examples of hate speech in its various forms, in relation to the much larger volume of non-hate-speech content, also poses a challenge for learning, especially when it comes to capturing linguistic and cultural nuances. the new method proposed utilizes focal loss, which aims to minimize the impact of easy-to-classify examples on the learning process, coupled with gradient blending, which computes an optimal blend of modalities based on their overfitting patterns. the technique, called xlm-r, builds on bert by using a new pretraining recipe called roberta that allows training on orders of magnitude more data for longer periods of time. additionally, nlp performance is improved by learning across languages using a single encoder that allows learning to be transferred across languages. since this is a self-supervised method, they can train on large unlabeled datasets and have also found some universal language structures that allow vectors with similar meanings across languages to be closer together.
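for readers unfamiliar with it, here is a minimal sketch of the focal loss idea in its binary form (not facebook's exact implementation): easy, well-classified examples get down-weighted so that training focuses on the hard ones.

```python
# focal loss (binary case): down-weight easy examples by (1 - p_t)^gamma.
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits: raw scores; targets: 0/1 labels of the same shape (floats)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class weighting
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# usage with placeholder tensors:
logits = torch.tensor([2.5, -1.0, 0.3], requires_grad=True)
targets = torch.tensor([1.0, 0.0, 1.0])
binary_focal_loss(logits, targets).backward()
```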
facial recognition technology (frt) continues to get mentions because of the variety of ways that it can be misused across different geographies and contexts. with the most recent case, where frt is used to determine criminality, it brings up an interesting discussion around why techniques that have no basis in science, which have been debunked time and time again, keep resurfacing, and what we can do to better educate researchers on their moral responsibilities when pursuing such work. the author of this article gives some historical context for where phrenology started, pointing to the work of francis galton, who used the "photographic composite method" to try to determine characteristics of one's personality from a picture. prior to that, measuring skull size and other facial features wasn't deemed a moral issue, and such techniques were removed from discussion on the objection that claims about the localization of different brain functions were seen as antithetical to the unity of the soul according to christianity. the authors of the paper discussed in the article saw only empirical concerns with the work they put forth and didn't see any of the moral shortcomings that were pointed out. additionally, they justified the work as being only for scientific curiosity. they also failed to realize the various statistical biases introduced in the collection of data, such as disparate rates of arrests and policing, the perception of different people by law enforcement, juries, and judges, and historical stereotypes and biases that confound the data that is collected; thus, the labeling itself is hardly value-neutral. more so, the authors of the study framed criminality as an innate characteristic rather than a product of the social and other circumstances that lead to crime. especially when a project like this resurrects class structures and inequities, one must be extra cautious about doing such work on the grounds of "academic curiosity". while simply branding this as phrenology isn't enough, the author mentions that identifying and highlighting the concerns will lead to more productive conversations.

an increase in demand for workers for various delivery services and other gig work has accelerated the adoption of vetting technology like that used to do background checks during the hiring process. but a variety of glitches in these systems, such as sourcing out-of-date information to make inferences and a lack of redressal mechanisms to make corrections, among others, has exposed the flaws in an overreliance on automated systems, especially where important decisions need to be made that can have a significant impact on a person's life, such as employment. checkr, the company profiled in this article, claims to use ai to scan resumes, compare criminal records, analyze social media accounts, and examine facial expressions during the interview process. during a pandemic, when organizations are short-staffed and need to make rapid decisions, checkr offers a way to streamline the process, but this comes at a cost. two supposed benefits that they offer are that they are able to assess the match between a criminal record and the person actually concerned, something that can be especially fraught with errors when the person has a common name. secondly, they are also able to correlate and resolve discrepancies in the different terms that may be used for crimes across different jurisdictions.
a person spoke about his experience with another company that did these ai-powered background checks utilizing his public social media information; it bucketed some of his activity into categories that were too coarse and unrepresentative of his behaviour. especially when such automated judgements are made without any recourse to correct them, this can negatively affect a person's prospects of being hired. another point brought up in the article is that social media companies might themselves be unwilling to tolerate scraping of their users' data to do this sort of vetting, which is against their terms of use for access to the apis. borrowing from the credit reporting world, the fair credit reporting act in the us offers some insights when it mentions that people need to be provided with a recourse to correct information that is used about them in making a decision and that due consent needs to be obtained prior to utilizing such tools to do a background check. though it doesn't ask for any guarantees of a favorable outcome after a re-evaluation, at least it does offer the individual a bit more agency and control over the process.

the toxic potential of youtube's feedback loop: on youtube, more than a billion hours of video are watched every day, and approximately % of that watch time is driven by the automated systems that recommend what to watch next in the column on the side. there are more than a billion users on the youtube platform, so this has a significant impact on what the world watches. guillaume had started to notice a pattern in the recommended videos, which tended towards radicalizing, extreme and polarizing content that was underlying the upward trend of watch times on the platform. on raising these concerns with the team, at first there were very few incentives for anyone to address issues of ethics and bias as they related to promoting this type of content, because they feared that it would drive down watch time, the key business metric being optimized for by the team. maximizing engagement thus stood in contrast to the quality of time spent on the platform. the vicious feedback loop that this triggered was that, as such divisive content performed better, the ai systems promoted it to optimize for engagement, and subsequently content creators who saw this kind of content doing better created more of it in the hopes of doing well on the platform. the proliferation of conspiracy theories and extreme, divisive content thus fed its own demand because of a misguided business metric that ignored social externalities. flat earthers, anti-vaxxers and other such content creators perform well because the people behind this content are a very active community that spends a lot of effort in creating these videos, thus meeting high quality standards and further feeding the toxic loop. content from people like alex jones and trump tended to perform well for the above reasons as well. guillaume's project algotransparency essentially clicks through video recommendations on youtube to figure out if there are feedback loops. he started this with the hope of highlighting latent problems in the platform that continue to persist despite policy changes, for example youtube's attempts to automate the removal of reported and offensive videos.
he suggests that the current separation of the policy and engagement algorithms leads to problems like gaming of the platform algorithm by motivated state actors that seek to disrupt the democratic processes of a foreign nation. the platforms, on the other hand, have very few incentives to make changes because the type of content emerging from such activity leads to higher engagement, which ultimately boosts their bottom line. guillaume recommends having a combined system that can jointly optimize for both, thus helping to minimize problems like the above. a lot of the problems are those of algorithmic amplification rather than content curation. many metrics, like the number of views, shares, and likes, don't capture what needs to be captured: for example, the types of comments, the reports filed, and the granularity of why those reports are filed. capturing those would allow for a smarter way to combat the spread of such content. however, the use of such explicit signals, compared to more implicit ones like the number of views, comes at the cost of breaking the seamlessness of the user experience. again we run into the issue of a lack of motivation on the part of the companies to do things that might drive down engagement and hurt revenue streams. the talk gives a few more examples of how people figured out ways to circumvent checks around the reporting and automated take-down mechanisms, for example by disabling comments on videos, which could previously be used to identify suspicious content. an overarching recommendation made by guillaume for better managing a more advanced ai system is to understand the underlying metrics that the system is optimizing for and then envision scenarios of what would happen if the system had access to unlimited data. thinking of self-driving cars, an ideal scenario would be a full conversion of the traffic ecosystem to an autonomous one, leading to fewer deaths, but during the transition phase having the right incentives is key to making a system that works in favor of social welfare. if one were to imagine a self-driving car that shows ads while the passenger is in the car, it would want to have a longer drive time and would presumably favor longer routes and traffic jams, creating a sub-optimal scenario overall for the traffic ecosystem. on the other hand, a system that has the goal of getting from a to b as quickly and safely as possible wouldn't fall into such a trap. ultimately, we need to design ai systems such that they help humans flourish overall rather than optimize for monetary incentives which might run counter to the welfare of people at large.

the article provides a taxonomy of communities that spread misinformation online and how they differ in their intentions and motivations. subsequently, different strategies can be deployed to counter the disinformation originating from these communities. there isn't a one-size-fits-all solution, which there might have been had the distribution and types of the communities been homogenous. the degree of influence that each of the communities wields is a function of several types of capital: economic, social, cultural, time and algorithmic, definitions of which are provided in the article. understanding all these factors is crucial in combating misinformation, where different capital forms can be used in different proportions to achieve the desired results, something that will prove to be useful in addressing disinformation around the current covid- situation.
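to make the engagement-driven feedback loop guillaume describes more tangible, here is a toy simulation under purely illustrative assumptions (two content types with fixed engagement multipliers); it is not a model of youtube's actual recommender, but it shows how exposure drifts toward the more sensational content when engagement alone is optimized and creators respond in kind.

```python
# toy simulation of an engagement-optimizing recommender feedback loop.
# all values are illustrative; divisive content engages slightly more,
# and creators chase whatever is being promoted.
catalog = {"measured": 1.0, "sensational": 1.2}   # engagement multipliers
exposure = {"measured": 0.5, "sensational": 0.5}  # initial exposure shares

for step in range(50):
    # the recommender allocates exposure proportional to past engagement share
    weighted = {k: exposure[k] * catalog[k] for k in catalog}
    total = sum(weighted.values())
    exposure = {k: v / total for k, v in weighted.items()}
    # creators produce more of what is promoted, nudging engagement higher
    catalog["sensational"] *= 1.01

print(exposure)  # exposure share drifts almost entirely to sensational content
```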
the social media platform offers a category of pseudoscience believers which advertisers can purchase and target. according to the markup, this category has million people in it, and attempts to purchase ads targeting this category were approved quite swiftly. there isn't any information available as to who has purchased ads targeting this category. the journalist team was able to find at least one advertiser through the "why am i seeing this ad?" option; when they reached out to that company to investigate, they found that the company hadn't selected the pseudoscience category but that it had been auto-selected by facebook for them. facebook gives users the option to change the interests assigned to them, but it is not something that many people know about and actively monitor. some other journalists had also unearthed controversy-related categories that amplified messages and targeted people who might be susceptible to this kind of misinformation. with the ongoing pandemic, misinformation is propagating at a rapid rate and there are many user groups that continue to push conspiracy theories. ads that spread misinformation about potential cures and remedies for the coronavirus also continue to be approved, raising further concerns. with the human content moderators being asked to stay home (as we covered here) and an increasing reliance on untested automated solutions, it seems that this problem will continue to plague the platform.

there isn't a dearth of information available online; one can find confirmatory evidence for almost any viewpoint, since the creation and dissemination of information has been democratized by the proliferation of the internet and the ease of use of mass-media platforms. so in this deluge of information, what is the key currency that helps us sift through all the noise and identify the signal? this article lays out a well-articulated argument for how reputation, and being able to assess it, is going to be a key skill that people will need in order to navigate the information ecosystem effectively. we increasingly rely on other people's judgement of content (akin to how maiei analyzes the ecosystem of ai ethics and presents you with a selection); coupled with algorithmically-mediated distribution channels, we are paradoxically disempowered by more information, paralyzed into inaction and confusion without a reputable source to curate and guide us. there are many conspiracy theories, famous among them that we never visited the moon, that the earth is flat, and more recently that 5g is causing the spread of the coronavirus. as rational readers, we tend to dismiss these as misinformation, yet we don't really spend time analyzing the evidence that these people present to support their claims. to a certain extent, our belief that we did land on the moon depends on our trust in nasa and other news agencies that covered this event, yet we don't venture to examine the evidence first-hand. more so, with highly specialized knowledge becoming the norm, we don't have the right tools and skills to even be able to analyze the evidence and come to meaningful conclusions. so, we must rely on those who provide us with this information.
instead of analyzing the veracity of a piece of information, the focus of a mature digital citizen needs to be on analyzing the reputation pathway of that information, evaluating the agendas of the people who are disseminating it, and critically analyzing the intentions of the authorities behind the sources. how we rank the different pieces of information arriving via our social networks needs to be appraised through this lens of reputation and source tracing; in a sense, a second-order epistemology is what we need to prepare people for. in the words of hayek, "civilization rests on the fact that we all benefit from the knowledge that we do not possess." our cyber-world can become civilized by critically evaluating this knowledge that we don't possess, at a time when mis/disinformation can spread just as easily as accurate information.

a very clear way to describe the problem plaguing the us response to the coronavirus, the phenomenon of truth decay is not something new; it has happened many times in the past when trust in key institutions deteriorated and led to a diffused response to the crisis at hand, extending the recovery period beyond what would have been necessary had there been a unified response. in the us, the calls for reopening the economy, following guidance on using personal protective equipment, and other recommendations are falling along partisan lines. the key factor causing this is how the facts and data are being presented differently to different audiences. while this epidemic might have been the perfect opportunity for bringing people together, because it affects different segments of society differently, it hasn't been what everyone expected it to be. at the core is the rampant disagreement between different factions on facts and data. this is exacerbated by the blurring of facts and opinions. in places like newsrooms and tv shows, there is an intermingling of the two which makes it harder for everyday consumers to discern fact from opinion. the volume of opinion has gone up compared to facts, and people's declining trust in public health authorities and other institutions is also aggravating the problem. put briefly, people are having trouble finding the truth and don't know where to go looking for it. this is also the worst time to be losing trust in experts; with a plethora of information available online, people feel unwarrantedly empowered, believing they have the right information, comparable to that of experts. coupled with a penchant for confirming their own beliefs, there is little incentive for people to fact-check and refer to multiple sources of information. when different agencies come out with different recommendations and there are policy changes in the face of new information, something that is expected given that this is an evolving situation, people's trust in these organizations and experts erodes further as they see them as flip-flopping and not knowing what is right. ultimately, effective communication along with a rebuilding of trust will be necessary if we're to emerge from this crisis soon and restore some sense of normalcy.

the deepfake detection challenge: synthetic media is any media (text, image, video, audio) that is generated or synthesized by an ai system. non-synthetic media, on the other hand, is media that is crafted by humans using a panoply of techniques, including tools like photoshop.
detecting synthetic media alone doesn't solve the media integrity challenges, especially as the techniques get more sophisticated and trigger an arms race between detection and evasion methods. these methods need to be paired with other existing techniques that fact checkers and journalists already use in determining whether something is authentic or synthesized. there are also pieces of content that are made through low-tech manipulations, like the nancy pelosi video which appeared to show her drunk but in reality was just slowed down. other such manipulations include simpler things like putting fake and misleading captions below a genuine video; people who don't watch the whole thing are misled into believing what is summarized in the caption. in other cases, the videos might be value-neutral or informative even when they are generated, so merely detecting something as generated doesn't suffice. a meaningful way to utilize automated tools is as a triaging utility that flags content to be reviewed by humans in situations where it is not possible to manually review everything on the platform. while tech platforms can build and utilize tools that help them with these tasks, the needs of the larger ecosystem need to be kept in mind so that they can be served at the same time, especially for those actors that are resource-constrained and don't have the technical capabilities to build such tools themselves. the tools need to be easy to use and shouldn't have high friction that makes them hard to integrate into existing workflows. through open sourcing and licensing, the tools can be made available to the wider ecosystem, but this can create the opportunity for adversaries to strengthen their methods as well; this can be countered by responsible disclosure, as we'll cover below. for any datasets created as a part of this challenge, and otherwise to aid in detection, one must ensure that they capture sufficient diversity in terms of environment and other factors and reflect the type of content that might be encountered in the world. the scoring rules need to be such that they minimize gaming and overfitting and capture the richness of variation that a system might encounter; for example, most datasets today in this domain aim to mitigate the spread of pornographic material. they also need to account for the vastly different frequencies of occurrence of authentic and generated content. solutions in this domain involve an inherent tradeoff between pro-social use and potential malicious use in furthering the quality of inauthentic content. the release of tools should be done in a manner that enhances pro-social use while creating deterrents for malicious use. the systems should be stress-tested through red team-blue team exercises to enhance robustness, because this is an inherently adversarial exercise. such challenges should be held often to encourage the updating of techniques, because this is a fast-evolving domain where progress happens in the span of a few months. results from such detection efforts need to be accessible to the public and stakeholders, and explanations of the research findings should be made available alongside the challenge to encourage better understanding by those who are trying to make sense of digital content. responsible disclosure practices will be crucial in giving the fight against disinformation the right tools while deterring adversaries from utilizing the same tools to gain an advantage.
a delayed release mechanism, where the code is immediately made available to select parties in a non-open-source manner while the research and papers are made public, with the eventual release of the code after a - month delay, would help give the detectors a headstart over the adversaries. such detection challenges can benefit from extensive multi-stakeholder consultations, which require significant time and effort, so budget for that while crafting and building such challenges. some of the prize money should be allocated towards better design from a ux and ui perspective. the challenge should also include explainability criteria so that non-technical users are able to make sense of the interventions and of highlights of fake content, such as bounding boxes around manipulated regions. the process of multi-stakeholder input should happen at an early stage, allowing for meaningful considerations to be incorporated and for dataset design to be done appropriately to counter bias and fairness problems. finally, strong, trusting relationships are essential to the success of the process and require working together over extended periods to have the hard conversations with each other. it is important to have clear readings ahead of meetings that everyone has to complete so that discussions come from an informed place. spending time scoping and coming to clearer agreement about project goals and deliverables at the beginning of the process is also vital to success.

there is a distinction between misinformation and disinformation: misinformation is the sharing of false information unintentionally, where no harm is intended, whereas disinformation is false information that is spread intentionally with the aim of causing harm to its consumers. this is also referred to as information pollution and fake news. it has massive implications that have led to real harms for people in many countries, with one of the biggest examples being the polarization of views in the us presidential elections. meaningful solutions will only emerge when we have researchers from both technical and social sciences backgrounds working together to gain a deeper understanding of the root causes. this isn't a new problem; it has existed for a very long time. it's just that with the advent of technology and more people being connected to each other, we have a much more rapid dissemination of false information, and modern tools enable the creation of convincing fake images, text and videos, thus amplifying the negative effects. some of the features that help in studying how mis/disinformation spreads are:

• democratization of content creation: with practically anyone now having the ability to create and publish content, information flow has increased dramatically and there are few checks on the veracity of content and even fewer mechanisms to limit the flow rate of information.
• rapid news cycle and economic incentives: with content being monetized, there is a strong incentive to distort information to evoke a response from the reader such that they click through and feed the money-generating apparatus.
• wide and immediate reach and interactivity: by virtue of almost the entire globe being connected, content quickly reaches the furthest corners of the planet. more so, content creators are also able, through quantitative experiments, to determine what kind of content performs well and then tailor it to feed the needs of people.
• organic and intentionally created filter bubbles: the selection of who to follow, along with the underlying plumbing of the platforms, permits the creation of echo chambers that further strengthen polarization and do little to encourage people to step out and have a meaningful exchange of ideas.
• algorithmic curation and lack of transparency: the inner workings of platforms are shrouded under the veil of ip protections, and little is well known about the manipulative effects of the platforms on the habits of content consumers.
• scale and anonymity of online accounts: given the weak checks on identity, people are able to mount "sybil" attacks that leverage this lack of strong identity management, and are able to scale their impact through the creation and dispersion of content by automated means like bot accounts on the platform.

what hasn't changed even with the introduction of technology are the cognitive biases which act as attack surfaces for malicious actors to inject mis/disinformation. this vulnerability is of particular importance in the examination and design of successful interventions to combat the spread of false information. for example, confirmation bias shows that people are more likely to believe something that conforms with their world-view even if they are presented with overwhelming evidence to the contrary. in the same vein, the backfire effect demonstrates how people who are presented with such contrary evidence further harden their views and become even more polarized, negating the intention of presenting them with balancing information. in terms of techniques, the adversarial positioning is layered into three tiers: spam bots that push out low-quality content, quasi-bots that have mild human supervision to enhance the quality of content, and pure human accounts that aim to build up a large following before embarking on spreading mis/disinformation. from a structural perspective, the alternative media sources often copy-paste content with source attribution and are tightly clustered together, with a marked separation from mainstream media outlets. on the consumer front, there is research pointing to the impact of structural deficiencies in the platforms, say whatsapp, where the source gets stripped out when information is shared; these not only create challenges for researchers trying to study the ecosystem but also exacerbate the local impact effect, whereby a consumer trusts things coming from friends much more than other, potentially more credible, upstream sources. existing efforts to study the ecosystem require a lot of manual effort, but there is hope in the sense that some tools help automate the analysis. as an example, we have hoaxy, a tool that collects online mis/disinformation articles alongside their fact-checking counterparts. its creators find that the fact-checking articles are shared much less than the original articles and that curbing bots on a platform has a significant impact. there are some challenges with these tools in the sense that they work well on public platforms like twitter, but on closed platforms with limited ability to deploy bots, automation doesn't work very well. additionally, even the metrics that are surfaced need to be interpreted by researchers, and it isn't always clear how to do that.
the term 'deepfake' originated in and since then a variety of tools have been released, such as face2face, that allow for the creation of reanimations of people to forge identity, something that was alluded to in the paper referenced here on the evolution of fraud. while being able to create such forgeries isn't new, what is new is that this can now be done with a fraction of the effort, democratizing information pollution and casting aspersions on legitimate content, since one can always argue something was forged. online tracking of individuals, which is primarily used for serving personalized advertisements and monetizing user behavior on websites, can also be used to target mis/disinformation in a fine-grained manner. there are a variety of ways this is done, from third-party tracking like the embedding of widgets to browser cookies and fingerprinting. this can be used to manipulate vulnerable users and to leverage sensitive attributes gleaned from online behavior, giving malicious actors more ammunition to target individuals specifically. even when platforms provide some degree of transparency on why users are seeing certain content, the information provided is often vague and doesn't do much to improve the user's understanding. earlier attempts at using bots relied on simplistic techniques, such as tweeting at certain users and amplifying low-credibility information to give the impression that something has more support than it really does, but recent attempts have become more sophisticated: social spambots. these slowly build up credibility within a community and then use that trust to sow disinformation, either automatically or in conjunction with a human operator, akin to a cyborg. detection and measurement of this problem is a very real concern, and researchers have tried using techniques like the social network graph structure, account data and posting metrics, nlp on content, and crowdsourced analysis. from a platform perspective, one can choose to analyze the amount of time spent browsing posts vs. the time spent posting things. there is an arms race between detection and evasion of bot accounts: sometimes even humans aren't able to detect sophisticated social bots. additionally, there are instances of positive and beneficial bots, such as those that aggregate news or help coordinate disaster response, which further complicates the detection challenge. there is also a potential misalignment in incentives, since the platforms have an interest in reporting higher numbers of accounts and activity, which helps boost their valuations, while they are the ones with the maximum amount of information to be able to combat the problem.

this problem of curbing the spread of mis/disinformation can be broken down into two parts: enabling detection at the platform level and empowering readers to select the right sources. we need a good definition of what fake news is; one of the most widely accepted definitions is that it is something that is factually false and intentionally misleading. framing a machine learning approach here as an end-to-end task is problematic because it requires large amounts of labelled data, and with neural network based approaches there is little explanation offered, which makes downstream tasks harder. so we can approach this by breaking it down into subtasks, one of which is verifying the veracity of information. most current approaches use human fact-checkers, but this isn't a scalable approach, and automated means using nlp aren't quite proficient at this task yet.
there are attempts to break the problem down even further, such as using stance detection to see whether the information presented agrees with, disagrees with, or is unrelated to what is mentioned in the source. other approaches include misleading-style detection, whereby we try to determine whether the style of an article offers clues to the intent of the author; this is riddled with problems, since style does not necessarily correlate strongly with misleading intent: the style may simply pander to hyperpartisanship, and a neutral style does not mean the content is not misleading. metadata analysis, looking at the social graph structure, the attributes of the sharer and the propagation path of the information, can lend some clues as well. all of these attempts have their own challenges, and in the arms-race framing there is a constant battle between attack and defense; even if the technical problem were solved, human cognitive biases would still muddle the impact of these techniques. ux and ui interventions might help provide more information to counter those biases.

as a counter to the problems encountered in marking content as "disputed", which triggers the implied truth effect and creates larger negative externalities, one approach is to show "related" articles when something is disputed and use that as an intervention to link to fact-checking websites like snopes. other in-platform interventions include whatsapp's change to show "forwarded" next to messages, giving people a bit more insight into the provenance of a message, because a lot of misinformation was being spread through private messaging. there are also third-party tools like surfsafe that check images as people browse against other websites where they might have appeared; if an image hasn't appeared in many places, including verified sources, the user can infer that it might be doctored. education initiatives by the platform companies to help users spot misinformation are another method for getting people to become more savvy. there have also been attempts to assign nutrition labels to sources, listing the slant, the tone of the article, its timeliness and the experience of the author, which would allow a user to make a better decision on whether or not to trust an article. platforms have also attempted to limit the spread of mis/disinformation by flagging posts that game the sharing mechanisms on the platform, for example downweighting posts that are "clickbait". the biggest challenge with interventions created by the platforms themselves is that they don't provide sufficient information to make the results scientifically reproducible.

given the variety of actors and motivations, interventions need to be tailored to combat them: erecting barriers to the rate of transmission of mis/disinformation and demonetization work against actors with financial incentives, but for state actors, detection and attribution might be more important. along with challenges in defining the problem, one must look at socio-technical solutions, because the problem has more than just a technical component, including human cognitive biases. in an inherently adversarial setting, it is important to recognise that not all techniques used by attackers are sophisticated; some simple techniques, when scaled, are just as problematic and require attention.
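to make the "nutrition label" idea above concrete, here is a minimal sketch of what such a label might capture for a given article; the fields, values and scoring scale are hypothetical, not taken from any deployed system.

```python
from dataclasses import dataclass

@dataclass
class NutritionLabel:
    """hypothetical per-article label a reader could inspect before trusting it."""
    source: str
    slant: str                     # e.g. "left", "right", "centre"
    tone: str                      # e.g. "neutral", "emotional", "sensational"
    days_since_event: int          # timeliness: how stale is the reporting
    author_years_experience: int

    def summary(self) -> str:
        return (f"{self.source}: slant={self.slant}, tone={self.tone}, "
                f"{self.days_since_event} days old, "
                f"author experience {self.author_years_experience} yrs")

# example usage with made-up values
label = NutritionLabel("example-news.com", "centre", "neutral", 2, 12)
print(label.summary())
```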
but, given that this is constantly evolving, detecting disinformation today doesn't mean that we can do so successfully tomorrow. additionally, disinformation is becoming more personalized, more realistic and more widespread. there is a misalignment in incentives, as explored earlier, between what the platforms want and what's best for users; but empowering users to the point of them being skeptical of everything isn't good either. we need to be able to trigger legitimate and informed trust in authentic content and dissuade people from fake content.

among the recommendations proposed by the authors are: being specific about what a particular technological or design intervention is meant to achieve, and breaking the technological problems down into smaller, concrete subproblems that have tractable solutions before recombining them into the larger pipeline. we must also continue to analyze the state of the ecosystem and tailor defenses so that they can combat the actors at play. additionally, rethinking the monetary incentives on the platforms can help dissuade some of the financially-motivated actors. educational interventions are crucial: building up knowledge so that there is healthy skepticism, learning how to detect markers of bots, understanding the capabilities of today's technology to create fakes, and holding discussions in "public squares" on the subject. yet we mustn't place so much of a burden on end-users that it distracts them from their primary task, which is interacting with others on the social network; if that happens, people will just abandon the effort. additionally, designing for everyone is crucial: if the interventions, such as installing a browser extension, are complicated, then one can only reach technically-literate people and everyone else gets left out.

on the platform end, apart from the suggestions made above, platforms should look at design affordances that aid the user in judging the veracity, provenance and other measures needed to discern legitimate information from mis/disinformation. teaming up with external organizations that specialize in ux/ui research will aid in understanding the impacts of the various features within the platform. results from such research efforts need to be made public and accessible to non-technical audiences. proposed solutions also need to be interdisciplinary to reach a fuller understanding of the root causes of the problem. also, just as we need tailoring for the different kinds of adversaries, it is important to tailor interventions to the various user groups, who may have different needs and abilities. the paper also makes recommendations for policymakers, most importantly that work on regulation and legislation be grounded in the technical realities facing the ecosystem so that it doesn't undershoot or overshoot what is needed to successfully combat mis/disinformation. for users, a variety of recommendations are provided in the references, but the most important are being aware of our own cognitive biases, maintaining a healthy degree of skepticism, and checking information against multiple sources before accepting it as legitimate.

disinformation is harmful even in times when we aren't going through large-scale changes, but this year the us has elections, the once-in-a-decade census, and the covid-19 pandemic. malicious agents are having a field day spreading false information, overwhelming people with a mixture of true and untrue pieces of content.
the article gives the example of a rumor about a potential lockdown, with people reflecting on their experience of the boston marathon bombings and stockpiling essentials out of panic; the rumor was later uncovered to have originated with conspiracy theorists. in an environment where contact with the outside world has become limited and local touch points, such as speaking with your neighbor, have dwindled, we're struggling in our ability to combat this infodemic. social media is playing a critical role in getting information to people, but if that information is untrue we end up risking lives, especially when it is falsehoods about how to protect yourself from contracting a disease. but wherever there is a challenge there lies a corresponding opportunity: social media companies have a unique window into the issues a local population is concerned about, and this can, if used effectively, be a source for providing crisis response to those most in need, with resources that are specific and meaningful.

when it comes to disinformation spreading, there isn't a more opportune time than now, with the pandemic raging and people juggling several things to manage and cope with lifestyle and work changes. this has increased people's susceptibility to sharing news and other information about how to protect themselves and their loved ones from covid-19. as the who has pointed out, we are combating both a pandemic and an infodemic at the same time. what's more, this might be the time to test out design and other interventions that might help curb the spread of disinformation. one study highlighted how people shared disinformation more readily than their belief in its veracity would suggest: when people share content, they care more about what they stand to gain (social reward cues) than about whether the content is accurate. to combat this, the researchers ran an experiment to see whether asking users to check whether something was true before sharing, a light accuracy nudge, would change their behaviour. while there was only a small positive effect in terms of sharing disinformation less when prompted to check for accuracy, the researchers pointed out that the downstream effects could be much larger because of the amplification effects of how content propagates on social media networks. it points to a potentially scalable solution that could help fight the spread of disinformation.

the who has named the infodemic as one of the causes exacerbating the pandemic, as people follow differing advice on what to do. communication by authorities has been persistent but at times ineffective, and this article dives into how one could enhance the visibility of credible information from governments, health authorities and scientists so that the negative impacts of the infodemic can be curbed. but spewing scientific facts from a soapbox alone isn't enough: one is competing for attention with all the other pieces of information and entertainment, and that needs to be taken into account. one of the key findings is that starting a dialogue helps more than just sending a one-way communiqué. good science communication relies on the pillars of storytelling, cutting through the jargon and making the knowledge accessible.
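as a toy illustration of the accuracy-nudge idea described above, the sketch below inserts a single accuracy prompt into a share flow; the prompt wording, flow and data are hypothetical and not taken from the study itself.

```python
def accuracy_nudge_share(post_text: str, ask) -> bool:
    """ask the user to reflect on accuracy before letting them share.

    `ask` is any callable that poses a question and returns the user's answer;
    here it stands in for a ui dialog.
    """
    answer = ask(f"to the best of your knowledge, is this accurate?\n\n'{post_text}'\n(yes/no/unsure): ")
    if answer.strip().lower() != "yes":
        # the nudge doesn't block sharing outright; it just adds friction
        confirm = ask("you indicated you're not sure this is accurate. share anyway? (yes/no): ")
        return confirm.strip().lower() == "yes"
    return True

# example usage with canned answers standing in for user input
scripted = iter(["unsure", "no"])
shared = accuracy_nudge_share("drinking hot water cures the virus", lambda _q: next(scripted))
print("post shared:", shared)
```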
while online platforms are structured such that polarization is encouraged through the algorithmic underpinnings of the system, we should not engage only when there is something we disagree with; taking the time to amplify good science is equally important. using platform-appropriate messaging, tailoring content to the audience and not squabbling over petty details, especially when they don't make a significant impact on the overall content, helps to push out good science signals in the ocean of information pollution. clickbait-style headlines do a great job of hooking people in, but when you lead people into making a certain assumption and then debunk it, you run the risk of spreading misinformation if someone doesn't read the whole thing; so in trying to make headlines engaging, it is important to consider the unintended consequences if someone doesn't read past the subtitle. science isn't just about the findings: the process is only complete when the results are effectively communicated to the larger audience, and now more than ever we need accurate information to overpower the pool of misinformation out there.

there is potential for ai to automate repetitive tasks and free up scarce resources for more value-added work. with a declining business model and a tough revenue situation, newsrooms and journalism at large are facing an existential crisis. cutting costs while keeping up high standards of reporting will require innovation on the part of newsrooms to adopt emerging technologies like ai. for example, routine tasks like reporting sports scores and giving updates on company earnings calls are already being done by ai systems in several newsrooms around the world. this frees up time for journalists to spend their efforts on long-form journalism, data-driven and investigative journalism, analysis and feature pieces, which require human depth and creativity. machine translation also offers a handy tool, making the work of journalists accessible to a wider audience without their having to invest a lot of resources in doing the translations themselves. this also opens up the possibility for smaller, resource-constrained newsrooms to use their limited resources for in-depth pieces while reaching a wider audience by relying on automation. transcription of audio interviews, so that reporters can focus on fact-checking and other associated work, also helps bring stories to fruition faster, which can be a boon in a rapidly changing environment. in evolving situations like the pandemic, there is also the possibility of using ai to parse through large reams of data to find anomalies and alert the journalist to potential areas to cover. complementing human skills is the right way to adopt ai, rather than thinking of it as a tool that replaces human labor.

the article gives an explanation for why truth labels on stories are not as effective as we might think, because of something called the implied truth effect. essentially, when some stories are explicitly marked as false and other false stories aren't, people believe the unlabelled ones to be true, even if they are outright false, because of the lack of a label. fact-checking all stories manually is an insurmountable task for any platform, and the authors of the study mention a few approaches that could potentially mitigate the spread of false content, but none is a silver bullet.
there is an ongoing and active community researching how we might more effectively dispel disinformation, but it is nascent, and with the proliferation of ai systems more work needs to be done in this arms race between building defensive tools and the increasing capability of systems to generate believable fake content.

this paper by xiao ma and taylor w. brown puts forth a framework that extends the well-studied social exchange theory (set) to study human-ai interactions via mediation mechanisms. the authors make a case that current research needs more interdisciplinary collaboration between technical and social science scholars, since the lack of a shared taxonomy places research in similar areas on separate grounds. they propose two axes, human/ai and micro/macro perspectives, to visualize how researchers might better collaborate with each other. additionally, they make a case for how ai agents can mediate transactions between humans and create potential social value as an emergent property of those mediated transactions. as the pace of research progress quickens and more people from different fields engage in work on the societal impacts of ai, it is essential that we build on top of each other's work rather than duplicating efforts. additionally, because of conventional differences in how research is published and publicized in the social sciences and technical domains, there's often a shallowness in awareness of the latest work being done at the intersection of these two domains. what that means is that we need a shared taxonomy that allows us to better position research so that potential gaps can be discovered and areas of collaboration identified. the proposed two-axes structure in the paper goes some distance in helping to bridge this gap. ai systems are becoming ever more pervasive in many aspects of our everyday lives, and we already see a great many transactions between humans that are mediated by automated agents. in some scenarios they lead to a net positive for society, as when they enable faster discovery of research content, as might be the case for medical research being done to combat covid-19; but there can be negative externalities as well, as when they create echo chambers that wall off content from a subset of your network on social media platforms, polarizing discussions and viewpoints. a better understanding of how these interactions can be engineered to skew positive will be crucial as ai agents are inserted into ever more aspects of our lives, especially those where they will have a significant impact. we also foresee the emergence of tighter interdisciplinary collaboration that can shed light on these inherently socio-technical issues, which don't have unidimensional solutions. with rising awareness and interest from both the social and technical sciences, the emerging work will be both timely and relevant to addressing the challenges of the societal impacts of ai head on. as part of the work being done at maiei, we push for each of our undertakings to have an interdisciplinary team as a starting point towards achieving this mandate.

most concerns about using technology within healthcare are along the lines of replacing human labor, while technologies used to aid humans in delivering care don't receive as much attention.
with the ongoing pandemic, we've seen this come into the spotlight as well, and this paper sets the stage for some of the ethical issues to watch out for when thinking about using ai-enabled technologies in the healthcare domain, and for how to have a discussion grounded in concrete moral principles. an argument put forth against the use of ai solutions is that they can't "care" deeply enough about patients, and that is a valid concern; after all, machines don't have empathy and the other abilities required for an emotional exchange with humans. but a lot of the care work in hospitals is routine, and professionalism asks for maintaining a certain emotional distance in the care relationship. additionally, in places where the ratio of patients to carers is high, carers are unable to provide personalized attention and care anyway. in that respect, human-provided care is already "shallow", and the author cites research showing that care that is too deep actually hurts the carer when patients get better and move out of their care, or die. thus, if this is the argument, then we need to examine our current care practices more deeply. the author also posits that if this is indeed the state of care today, then it is morally less degrading to be distanced by a machine than by a human. in fact, the use of ai to automate routine tasks in the rendering of medical care could allow human carers to focus more on the emotional and human aspects of care.

good healthcare, supposedly the kind provided by humans, doesn't have a firm grounding in the typical literature on the ethics of healthcare and technology; that literature is more a list of things not to do than positive guidance on what good healthcare looks like. the author therefore takes the view that good healthcare must, at the very least, respect, promote and preserve the dignity of the patient. yet this doesn't provide concrete enough guidance, and we can expand on it to say that dignity involves a) treating the patient as a human, b) treating them as part of a culture and community, and c) treating them as a unique human. to add even more concreteness, the author borrows from work done in economics on the capabilities approach. the capabilities approach states that having the following capabilities in their entirety is necessary for a human to experience dignity in their living: life, bodily health, bodily integrity, being able to use one's senses, imagination and thought, emotions, practical reasoning, affiliation, other species, play, and control over one's environment. this list, applied to healthcare, gives us a good guideline for what might constitute the kind of healthcare we deem should be provided by humans, with or without the use of technology. now, the above list might seem too onerous for healthcare professionals, but we need to keep in mind that achieving a good life, as highlighted by the capabilities approach, depends on things beyond just healthcare professionals, and thus the needs mentioned above should be distributed accordingly. the threshold for meeting them should be high, but not so high that they are unachievable. principles only give us some guidance for how to act in difficult situations or ethical dilemmas; they don't determine the outcome, because they are only one element in the decision-making process. we have to rely on the context of the situation and its moral surroundings.
the criteria proposed are to be used in moral deliberation and should address whether the criterion applies to the situation, whether it is satisfied, and whether it is sufficiently met (in reference to the threshold). with the use of ai-enabled technology, privacy is usually cited as a major concern, but the rendering of care is decidedly a non-private affair. imagine a scenario where a connection facilitated by technology allows the social and emotional needs of a terminal patient to be met; where the use of technology allows for a better and longer life, there can be an argument for sacrificing privacy to meet the needs of the patient. ultimately, a balance needs to be struck between privacy requirements and other healthcare requirements, and privacy should not be blindly touted as the most important requirement.

framing the concept of the good life in terms of restoring, maintaining and enhancing the capabilities of the human, one mustn't view eudaimonia as happiness but rather as the achievement of the capabilities listed, because happiness in this context would fall outside the domain of ethics. additionally, the author proposes the care experience machine thought experiment: a machine that can meet all the care needs of a patient, with the question of whether it would be morally wrong to plug a patient into such a machine. while intuitively it might seem wrong, we struggle when trying to come up with concrete objections. as long as the patient feels cared for and has, from an objective standpoint, their care needs met, it becomes hard to say how such virtual care differs from real care provided by humans. if one can achieve real capabilities, such as freedom of movement and interaction with peers outside of care confinement, and virtual reality technology enables that, then the virtual good life enhances the real good life, a distinction that becomes increasingly blurred as technology progresses.

another moral argument put forward in determining whether to use technology-assisted healthcare is whether it is too paternalistic to determine what is best for the patient. in some cases, where the patient is unable to make decisions that restore, maintain and enhance their capabilities, such paternalism might be required, but it must always be balanced against other ethical concerns and weighed against the capabilities it enables for the patient. when we talk about felt care and how to evaluate whether the care rendered is good or not, we should look not only at the outcomes with which the patient exits the healthcare context but also at the realization of some of the capabilities during the healthcare process. to that end, when thinking about felt care, we must also take into account the concept of reciprocity of feeling, which is not explicitly defined in the capabilities approach but nonetheless forms an important aspect of experiencing healthcare in a positive manner from the patient's perspective. in conclusion, it is important to have an in-depth evaluation of technology-assisted healthcare that is based on moral principles and philosophy, yet rests on concrete arguments rather than high-level abstractions, as the latter provide little practical guidance in evaluating different solutions and how they might be chosen for use in different contexts.
an a priori dismissal of technology in the healthcare domain, even when based on very real concerns such as breaches of privacy by ai solutions that require a lot of personal data, deserves further examination before arriving at a conclusion.

the article brings up some interesting points around how we bond with things that are not necessarily sentient, and how our emotions are not discriminating when it comes to reducing loneliness and imprinting on inanimate objects. people experience surges in oxytocin as a consequence of such bonding experiences, which further reinforces the relationship. this has implications for how increasingly sentient-appearing ai systems might be used to manipulate humans into a "relationship" and potentially steer them towards making purchases, for example via chatbot interfaces that evoke a sense of trust. the article also makes the point that such behaviour is akin to animism and in a sense forms a response to loneliness in the digital realm, allowing us to continue to hone our empathy skills for where they really matter, with other human beings.

with more and more of our conversations being mediated by ai-enabled systems online, it is important to see whether robots can be harnessed to effect positive behaviour change in our interactions with each other. while there have been studies demonstrating the positive impact robots can have on individual behaviour, this study highlighted how the presence of robots can influence human-to-human interactions as well. the researchers found that a robot displaying positive and affective behavior triggered more empathy from humans towards other humans, as well as other positive behaviors like listening more and splitting speaking time among members more fairly. this is a great demonstration of how robots can be used to improve our interactions with each other. another researcher pointed out that a future direction of interest would be to see how repeated exposure to such robot interactions influences behaviour, and whether the effects produced are long-lasting even when the robot is absent from the interactions.

since time immemorial there has been a constant tussle between making predictions and being able to understand the underlying fundamentals of how those predictions work. in the era of big data, those tensions are exacerbated as machines become more inscrutable while making predictions from ever higher-dimensional data that lies beyond the intuitive understanding of humans. we try to reason about some of that high-dimensional data using techniques that either reduce the dimensions or visualize it in two or three dimensions, which by definition will tend to lose some fidelity. bacon proposed that humans should use tools to gain a better understanding of the world around them; until recently, when the physical processes of the world matched quite well with our internal representations, this wasn't a big concern. but a growing reliance on tools means that we rely more on what is made possible by the tools as they measure and model the world. statistical intelligence and models often get things right, but they are often hostile to reconstructing how they arrived at a given prediction. models provide abstractions of the world and often don't need to follow their real-world equivalents exactly. for example, while the telescope allows us to peer far into the distance, its construction doesn't completely mimic a biological eye.
more so, radio telescopes that don't follow optics at all give us a unique view into distant objects, a view that is simply not possible if we rely solely on optical observations. illusions present us with a window into the limits of our perceptual systems and bring into focus the tension between reality and what we think is reality. "in just the same way that prediction is fundamentally bounded by sensitivity of measurement and the shortcomings of computation, understanding is both enhanced and diminished by the rules of inference." in language models, we've seen that end-to-end deep learning systems that are opaque to our understanding perform quite a bit better than traditional machine translation approaches resting on decades of linguistic research. this bears some resemblance to searle's chinese room experiment: if we just look at the inputs and the outputs, there is no guarantee that the internal workings of the system operate in exactly the way we expect them to. "the most successful forms of future knowledge will be those that harmonise the human dream of understanding with the increasingly obscure echoes of the machine."

abhishek gupta (founder of the montreal ai ethics institute) was featured in fortune, where he detailed his views on ai safety concerns in rl systems, the "token human" problem, and automation surprise, among other points to pay attention to when developing and deploying ai systems. especially where these systems are going to be used in critical scenarios, humans operating in tandem with them and using them as decision inputs need to gain a deeper understanding of the inherently probabilistic nature of the predictions coming from these systems, and make decisions that take this into consideration rather than blindly trusting recommendations from an ai system because it has been accurate in a high percentage of past scenarios.

with the increasing capabilities of ai systems, and established research demonstrating that human-machine combinations operate better than either in isolation, this paper presents a timely discussion of how we can craft better coordination between human and machine agents with the aim of arriving at the best possible understanding between them. this will enhance trust levels between the agents, and it starts with effective communication. the paper discusses how framing this from a human-computer interaction (hci) approach will lead to achieving this goal, with intention-, context-, and cognition-awareness as the critical elements responsible for the success of effective communication between human and machine agents.

intelligibility is a notion worked on by many people in the technical community who seek to shed light on the inner workings of systems that are becoming more and more complex. especially in the domains of medicine, warfare, credit allocation, judicial systems and other areas where they have the potential to impact human lives in significant ways, we seek to create explanations that might illuminate how a system works and address potential issues of bias and fairness. however, there is a large problem in the current approach: not enough is being done to meet the needs of a diverse set of stakeholders, who require different kinds of intelligibility that are understandable to them and help them meet their needs and goals.
one might argue that a deeply technical explanation ought to suffice and that other kinds of explanations can be derived from it, but that makes explanations inaccessible to those who can't parse the technical details well, often the people most impacted by such systems. the paper offers a framework for situating the different kinds of explanations so that they meet stakeholders where they are, help them meet their needs, and ultimately engender a higher level of trust by better highlighting both the capabilities and limitations of the systems.

ai value alignment is typically mentioned in the context of long-term agi systems, but it also applies to the narrow ai systems we have today. optimizing for the wrong metric leads to things like unrealistic and penalizing work schedules, the hacking of attention on video platforms, charging poorer people more money to boost the bottom line, and other unintended consequences. yet there are attempts by product design and development teams to capture human well-being in metrics to optimize for. "how does someone feel about how their life is going?" is a powerful question that gives a surprising amount of insight into well-being, distanced from whatever might be influencing the person at the moment, because it makes them pause and reflect on what matters to them. but capturing this subjective sentiment as a metric in the inherently quantitative world of algorithms is, unsurprisingly, littered with landmines. a study conducted by facebook and supported by external efforts found that passive use of social media triggered feelings of ennui and envy, while active use, including interactions with others on the network, led to more positive feelings. using this as a guiding light, facebook strove to make an update geared more towards enabling meaningful engagement rather than simply measuring the number of likes, shares and comments. they used user panels as an input source to determine what constituted meaningful interactions on the platform and tried to distill this into well-being metrics. yet this suffered from several flaws, namely that the evaluation of the change was not made publicly available and was based on the prior work comparing passive versus active use of social media.

this idea of well-being optimization extends to algorithmic systems beyond social media platforms, for example to how gig work might be better distributed on a platform so that income fluctuations are minimized for workers who rely on it as a primary source of earnings. another place could be amending product recommendations to also capture environmental impacts, so that consumers can incorporate these into their purchasing decisions alongside the best price deals they can find. participatory design is going to be a key factor in the development of these metrics, especially with the philosophy of "nothing about us without us" as a north star to ensure that there isn't an inherent bias in how well-being is optimized for. often we'll find that proxies need to stand in for actual well-being, in which case it is important to ensure that the metrics are not static and are revised in consultation with users at periodic intervals. tapping into the process of double-loop learning, an organization can optimize not only for value to its shareholders but also for value to all its other stakeholders.
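as a toy illustration of using a proxy metric alongside periodic user input, the sketch below combines hypothetical engagement signals with a survey-based well-being score and re-weights them over time; the signal names and weights are made up for illustration and are not facebook's metrics.

```python
def wellbeing_score(signals: dict, weights: dict) -> float:
    """weighted combination of per-user signals into a single proxy score.

    signals and weights are hypothetical; real metrics would be chosen and
    revised in consultation with users.
    """
    return sum(weights[name] * value for name, value in signals.items())

# hypothetical signals for one user: a survey response plus behavioural proxies
signals = {
    "life_satisfaction_survey": 0.7,   # "how do you feel about how your life is going?" (0..1)
    "meaningful_interactions": 0.4,    # e.g. conversations rather than passive scrolling
    "passive_consumption": 0.8,        # fraction of time spent passively scrolling
}

# initial weights; passive consumption counts against well-being
weights = {"life_satisfaction_survey": 0.6, "meaningful_interactions": 0.3, "passive_consumption": -0.2}
print("initial score:", round(wellbeing_score(signals, weights), 3))

# periodic revision: suppose a user panel suggests survey answers should matter more
weights["life_satisfaction_survey"] = 0.7
weights["meaningful_interactions"] = 0.25
print("revised score:", round(wellbeing_score(signals, weights), 3))
```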
while purely quantitative metrics have obvious limitations when trying to capture something that is inherently subjective and qualitative, we need to attempt something in order to start, and then iterate as we go along.

in a world where increasing automation of cognitive labor due to ai-enabled systems will dramatically change the future of labor, it is now more important than ever that we start to move away from a traditional mindset when it comes to education. while universities in the previous century rightly provided great value in preparing students for jobs, jobs are bundles of tasks, and those tasks are changing rapidly as some are automated; we need to focus more on training students for the things that will take much longer to automate, for example working with other humans, creative and critical thinking, and driving innovation by aggregating insights and knowledge across a diversity of fields. lifelong learning serves as a useful model that can impart some of these skills by breaking education up into modules that can be taken on an "at will" basis, allowing people to continuously update their skills as the landscape changes. students will go in and out of universities over many years, which will bring a diversity of experiences to the student body and encourage a closer alignment with the actual skills needed in the market. while this will pose significant challenges to the university system, innovations like online learning, and certifications based on the replenishment of skills as in medicine, could overcome some of those challenges for the education ecosystem.

individual actions are powerful: they create bottom-up change and empower advocates with the ability to catalyze larger change. but when we look at products and services with millions of users, where inherently unethical designs become part of everyday practice and are met with a slight shrug of the shoulders as we resign ourselves to our fates, we need a more systematized approach that is standardized and widely practiced. ethics in ai is having its moment in the spotlight, with talks and conferences focusing on it as a core theme, yet it falls short of putting the espoused principles into practice. more often than not, it is individuals, rank-and-file employees, who go out of their way, often on personal time, to advocate for ethics, safety and inclusivity in the design of systems, sometimes even at the risk of their employment. while such efforts are laudable, they lack the widespread impact and awareness necessary to move the needle; we need leaders at the top who can effect sweeping changes, adopting these guidelines not just in letter but in spirit, and then transmitting them as actionable policies to their workforce. it needs to arrive at a point where people advocating for this change don't have to do so from a place of moral and ethical obligation, which customers can dispute, but from a place of policy decisions which force disengagement for non-adherence. we need to move from talk to action, not just at a micro but at a macro scale.

the wrong kind of ai? artificial intelligence and the future of labor demand

do increasing efficiency and social benefits stand in opposition to each other when it comes to automation technology? with the development of the "right" kind of ai, this doesn't have to be the case.
ai is a general-purpose technology with wide applications, and being offered as a platform, it allows others to build advanced capabilities on top of existing systems, creating an increasingly powerful abstraction with every layer. according to the standard approach in economics, a rise in productivity is often accompanied by an increase in the demand for labor and hence a rise in wages along with standards of living. but when there is a decoupling between the deployment of technology and the associated productivity gains, we can see more output without a corresponding increase in standards of living, as the benefits accrue to capital owners rather than wage-earning labor, which is distanced from the production lifecycle. this unevenness in the distribution of gains causes job losses in one sector while increasing productivity in others, often masking the effects at an aggregate level when purely economics-focused indicators like gdp growth rates are used.

the authors expound on how the current wave of automation is highly focused on labor replacement, driven by a number of factors. when this comes in the form of automation that is just as good as labor but not significantly better, we get the negative effects mentioned before, that is, a replacement of labor without substantial increases in standards of living. most of these effects are felt by those on the lower rungs of the socio-economic ladder, who don't have alternate avenues for employment and ascent. a common message is that we just have to wait, as in the case of the industrial revolution, and new jobs that we couldn't have envisioned will emerge and continue to fuel economic prosperity for all. this is an egregious comparison that overlooks the fact that the current wave of automation is not creating simultaneous advances in technology that allow the emergence of a new class of tasks within jobs for which humans are well-suited. instead, it is increasingly moving into domains that were strongholds of human skills that are not easily repeatable or reproducible. what we saw in the past was an avenue made available to workers to move out of low-skill tasks in agriculture into higher-skill tasks in manufacturing and services.

some examples of how ai development can be done the "right" way to create social benefits:

• in education, we haven't seen a significant shift in the way things are done for a very long time. it has been shown that different students have different learning styles and can benefit from personalized attention. while this is infeasible in a traditional classroom model, ai offers the potential to track metrics on how a student interacts with different material, where they make mistakes, etc., offering insights to educators on how to deliver a better educational experience. this is accompanied by an increase in the demand for teachers who can deliver different teaching styles to match the learning styles of students and create better outcomes.

• a similar argument can be made in the field of healthcare, where ai systems can allow medical staff to spend more time with patients, offering them personalized attention for longer, while removing onerous drudgery in the form of menial tasks like data entry.

• industrial robots are being used to automate the manufacturing line, often cordoning off humans for safety reasons.
humans are also decoupled from the process because of the difference in the level of precision that machines can achieve compared to humans. but we can get the best of both worlds, combining human flexibility and critical thinking for addressing problems in an uncertain environment with the high degree of precision of machines, by creating novel interfaces, for example using augmented reality. an important distinction the authors point out in the above examples is that humans are not merely enablers, used to train machines in a transitory fashion, but genuinely complement machine skills.

there are market failures when it comes to innovation, and in the past governments have helped mitigate those failures via public-private partnerships that led to the creation of fundamental technologies like the internet. but this has decreased over the past two decades because of the smaller amount of resources being invested by government in basic research, and because the technology revolution has become centered in silicon valley, which has a core focus on automation that replaces labor; with that bias, and with their funding of university and academic studies, they are causing the best minds of the next generation to adopt the same mindset. markets are also known to struggle when there are competing paradigms: once one pulls ahead, it is hard to switch to another paradigm even if it might be more productive, leading to an entrenchment of the dominant paradigm. the social opportunity cost of replacing labor is lower than the cost of labor, pushing the ecosystem towards labor-replacing automation. without accounting for these externalities, the ecosystem has little incentive to move towards the right kind of ai. this is exacerbated by tax incentives that impose costs on labor while providing a break on the use of capital. additionally, areas where the right kind of ai could be developed don't necessarily fall into the cool domains of research and thus aren't prioritized by the research and development community. suppose large advances were made in ai for health care: this would require accompanying retraining of support staff aside from doctors, and the high-level bodies regulating the field would impose resistance, slowing down the adoption of this kind of innovation. ultimately, we need to lean on a holistic understanding of the way automation is going to impact the labor market, and it will require human ingenuity to shape the social and economic ecosystems such that they create net positive benefits that are as widely distributed as possible. relying on the market to figure this out on its own is a recipe for failure.

the labor impacts of ai require nuance in discussion rather than fear-mongering that veers between over-hyping and downplaying concerns, when the truth lies somewhere in the middle. in the current paradigm of supervised machine learning, ai systems need a lot of data before becoming effective at their automation tasks. the bottom rung of this ladder consists of robotic process automation, which merely tracks how humans perform a task (say, by tracking their clicks as they go about their work) and apes it step by step for simple tasks like copying and pasting data between different places.
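as a rough sketch of the record-and-replay style of robotic process automation described above, the snippet below replays a recorded list of steps for a copy-paste task; the step format and the toy "systems" are hypothetical, and real rpa tools drive actual ui elements rather than dictionaries.

```python
# hypothetical "recording" of the steps a human performs when copying a value
# from one system to another; a real rpa tool would capture ui events instead.
recorded_steps = [
    {"action": "read",  "source": "crm",     "field": "customer_email"},
    {"action": "write", "target": "billing", "field": "contact_email"},
]

# toy stand-ins for two back-office systems
crm = {"customer_email": "jane@example.com"}
billing = {}

def replay(steps, systems):
    """replay the recorded steps, carrying the last value read to the next write."""
    clipboard = None
    for step in steps:
        if step["action"] == "read":
            clipboard = systems[step["source"]][step["field"]]
        elif step["action"] == "write":
            systems[step["target"]][step["field"]] = clipboard

replay(recorded_steps, {"crm": crm, "billing": billing})
print(billing)  # {'contact_email': 'jane@example.com'}
```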
the article gives an example of an organization that was able to reduce employee churn by more than half because of a reduction in data-drudgery tasks like copying and pasting data across different systems to meet legal and compliance obligations. economists point out that white-collar jobs like these, middle-tier in terms of skills and requiring little training, are at the highest risk of automation. while we're still a ways away from ai taking up all jobs, there is a slow march starting from the automation of the most menial tasks, potentially freeing us up to do more value-added work.

with a rising number of people relying on social media for news, the potential for hateful content and misinformation to spread has never been higher. content moderation on platforms like facebook and youtube is still largely a human endeavor, with legions of contract workers spending their days reviewing whether different pieces of content meet the community guidelines of the platform. due to the spread of the pandemic and offices closing down, a lot of these workers have been asked to leave (they can't do this work from home, the platform companies explained, for privacy and legal reasons), leaving the platforms in the hands of automated systems. the efficacy of these systems has always been questionable, and as some examples in the article point out, they've run amok, taking down innocuous and harmful content alike and seeming not to have very fine-tuned abilities. the problem with this is that legitimate sources of information, especially on subjects like covid-19, are being discouraged because their content is taken down and they have to go through laborious review processes to have it approved again. while this is the perfect opportunity to experiment with the potential of automated systems for content moderation, given the traumatic experience that humans have to undergo as part of this job, the chasms that need to be bridged between what humans have to offer and what the machines are currently capable of remain large.

workplace time management and accounting are common practices, but for those who work in places where schedules are determined by automated systems, they can have many negative consequences, a lot of which could be avoided if employers paid more attention to the needs of their employees. clopening is the practice whereby an employee at a retail location is asked not only to close the location at the end of the day but also to arrive early the next day to open it. this, among other practices like breaks scheduled down to the minute and on-call scheduling (something previously present only in the realm of emergency services), wreaks havoc on the physical and mental health of employees. in fact, surveyed employees have even expressed willingness to take pay cuts to have greater control over their schedules. in some places with ad-hoc scheduling, employees are forced to be spontaneous with their home responsibilities like taking care of their children, errands, etc.; while some employees try to swap shifts with each other, often even that becomes hard because others are in similar situations. some systems track customer demand and tie pay for hours worked to it, leading to added uncertainty even in their paychecks.
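a minimal sketch of how a scheduling system could flag "clopening" assignments before publishing a schedule; the shift format and the 11-hour rest threshold are illustrative assumptions, not taken from any particular system or jurisdiction.

```python
from datetime import datetime

# hypothetical published shifts for one employee (start, end), local time
shifts = [
    ("2020-06-01 14:00", "2020-06-01 23:00"),  # closing shift
    ("2020-06-02 06:00", "2020-06-02 14:00"),  # opening shift the next morning
]

MIN_REST_HOURS = 11  # assumed minimum rest period between shifts

def find_clopenings(shifts, min_rest_hours=MIN_REST_HOURS):
    """return consecutive shift pairs separated by less than the rest threshold."""
    fmt = "%Y-%m-%d %H:%M"
    parsed = sorted((datetime.strptime(s, fmt), datetime.strptime(e, fmt)) for s, e in shifts)
    flagged = []
    for (_, prev_end), (next_start, _) in zip(parsed, parsed[1:]):
        rest_hours = (next_start - prev_end).total_seconds() / 3600
        if rest_hours < min_rest_hours:
            flagged.append((prev_end, next_start, rest_hours))
    return flagged

for prev_end, next_start, rest in find_clopenings(shifts):
    print(f"clopening: only {rest:.1f}h rest between {prev_end} and {next_start}")
```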
during rush seasons, employees might be scheduled for back-to-back shifts, ignoring their need to be with their families, something a human manager could empathize with and accommodate. companies supplying this kind of software hide behind the disclaimer that they don't take responsibility for how their customers use these systems, which are often black-box and inscrutable to human analysis. this is a worrying trend that hurts those who are marginalized and those who need support while juggling several jobs just to make ends meet. relying on automation doesn't absolve employers of their responsibility towards their employees.

while the dominant form of discussion around the impacts of automation has been that it will cause job losses, this work from kevin scott offers a different lens on how jobs might be created by ai in the us rust belt, where automation and outsourcing have gradually been stripping away jobs. examples abound of how entrepreneurs and small business owners with an innovative mindset have been able to leverage advances in ai, coupling them with human labor to repurpose their businesses from areas that are no longer feasible into profitable ones. precision farming uses things like drones with computer vision capabilities to detect hotspots of pests, disease, etc. on big farms, something that would otherwise require extensive manual labor and limit the size of the farms. self-driving tractors and other automated tools also augment human effort to scale operations. the farm owners, though, highlight the opaqueness and complexity of such systems, which make them hard to debug and fix themselves, sometimes taking away from the gains. on the other hand, in places like nursing homes, which get reimbursed based on the resource utilization rates of their residents, tools using ai can help minimize the human effort spent compiling data and let staff spend more of their effort on human contact, which is not something ai succeeds at yet. while automation has progressed rapidly, the gains haven't been distributed equally. in other places, where old factories were shut down, some are now being used by ingenious entrepreneurs to bring back manufacturing jobs that cleverly combine human labor with automation to deliver high-quality, custom products to large enterprises. thus, there will be job losses from automation, but the onus lies with us to steer the progress of ai towards the economic and ethical values we believe in.

what's next for ai ethics, policy, and governance? a global overview

in this ongoing piece of work, the authors present the landscape of ethics documents, which has been flooded with guidelines and recommendations coming from a variety of sectors including government, private organizations, and ngos. starting with a dive into the stated and unstated motivations behind the documents, the reader is provided with a systematic breakdown of the different documents, prefaced with the caveat that where the motivations are not made explicit, one can only make a best guess based on the source of origin and the people involved in a document's creation. the majority of the documents from governmental agencies were from the global north and western countries, which led to a homogeneity in the issues tackled, and the recommendations often touted areas of interest specific to their industry and economic make-up.
this left research and development areas of interest like tourism and agriculture, which continue to play a significant role in the global south, largely ignored. the documents from the former category were also starkly focused on gaining a competitive edge, which was often stated explicitly, with a potential underlying goal of attracting scarce, high-quality ai talent, which could trigger brain drain from countries that are not currently dominant players in the ai ecosystem. often they were also positioning themselves to gain an edge and define a niche, especially in the case of non-dominant countries, and thus overemphasized the benefits while downplaying certain negative consequences that might arise from widespread ai use, like the displacement and replacement of labor. documents from private organizations mostly focused on self- and collective regulation in an effort to pre-empt stringent regulations from taking effect. they also strove to tout the economic benefits to society at large as a way of de-emphasizing the unintended consequences. a similar dynamic to the government documents played out here, where the interests of startups and small and medium-sized businesses were ignored and certain proposed mechanisms would be too onerous for such smaller organizations to implement, further entrenching the competitive advantage of larger firms. the ngos, on the other hand, seemed to have the largest diversity both in the participatory process of creation and in the scope, granularity, and breadth of issues covered, giving technical, ethical, and policy implementation details that made them actionable. some documents, like the montreal declaration for responsible ai, were built through an extensive public consultation process and an iterative, ongoing approach that the montreal ai ethics institute contributed to as well. the ieee document leverages a more formal standards-making approach, with experts and concerned citizens from different parts of the world contributing to its creation and ongoing updating.

the social motivation is clearly oriented towards creating larger societal benefits; internal motivation is geared towards bringing about change in the organizational structure; external strategic motivation is often about signaling to showcase leadership in the domain, and is also interventional, aiming to shape policy making to match the interests of those organizations. judging whether a document has been successful is complicated by a couple of factors: discerning what the motivations and goals for the document were, and the fact that most implementation and use of the documents is done in a pick-and-choose manner, complicating attribution and weight allocation to specific documents. some create internal impacts in terms of the adoption of new tools, changes in governance, etc., while external impacts often relate to changes in policy and regulations made by different agencies. an example would be how the stem education system needs to be overhauled to better prepare for the future of work. some other impacts include altering customer perception of the organization as a responsible one, which can ultimately help it differentiate itself. at present, we believe that this proliferation of ethics documents represents a healthy ecosystem which promotes a diversity of viewpoints and helps to raise a variety of issues and suggestions for potential solutions.
while the sheer number of documents can overwhelm people looking for the right set of guidelines to meet their needs, efforts such as the study in this paper, among others, can act as guideposts leading people to a smaller subset from which they can pick and choose the guidelines most relevant to them.

the white paper starts by highlighting the existing tensions in the definitions of ai, as there are many parties working to advance definitions that meet their needs. one of the most commonly accepted ones frames ai systems as those that are able to adapt their behavior in response to interactions with the world, independent of human control. another popular framing is that ai is something that mimics human intelligence, a constantly shifting goalpost, since what was once perceived as ai becomes everyday technology once it is sufficiently integrated and accepted in society. one thing that really stands out in the definitions section is how ethics is defined, which is a departure from a lot of other such documents. the authors talk about ethics as a set of principles of morality, where morality is an assemblage of rules and values that guide human behavior and principles for evaluating that behavior. they take a neutral stance on the definition, a far cry from framing it as a positive inclination of human conduct, to allow for diversity in embedding ethics into ai systems in concordance with local context and culture.

ai systems present many advantages with which most readers are already familiar, given the ubiquity of ai benefits touted in everyday media. one of the risks of ai-enabled automation is the potential loss of jobs; the authors make a comparison with historical cases highlighting how some tasks and jobs were eliminated, creating new jobs, while some were permanently lost. many reports give varying estimates of the labor impacts, and there isn't yet a clear consensus on the actual impact this might have on the economy. from a liability perspective, there is still debate as to how to account for the damage that might be caused to human life, health and property by such systems. in a strict product liability regime like europe's, there might be some guidance on how to account for this, but most regimes don't have specific liability allocations for independent events and decisions, meaning users face coverage gaps that can expose them to significant harms. by virtue of the complexity of deep learning systems, their internal representations are not human-understandable and hence lack transparency, which is also called the black-box effect. this is harmful because it erodes trust from the user perspective, among other negative impacts. social relations are altered as more and more human interactions are mediated and governed by machines; we see examples of that in how our newsfeeds are curated, in the toys children play with, and in robots taking care of the elderly. this decreased human contact, along with the increasing capability of machine systems, examples of which we see in how disinformation spreads, will tax humans with constantly having to evaluate their interactions for authenticity, or worse, lead to a relegation of control to machines to the point of apathy.
since the current dominant paradigm in machine learning is supervised learning, access to data is crucial to the success of these systems, and in cases where there aren't sufficient protections in place for personal data, this can lead to severe privacy abuses. self-determination theory holds that autonomy is important for proper human functioning and fulfillment, so an overreliance on ai systems to do our work can lead to a loss of personal autonomy and a sense of digital helplessness. digital dementia is the cognitive equivalent, where relying on devices for things like storing phone numbers and looking up information will over time lead to a decline in cognitive abilities. the echo chamber effect is fairly well studied, owing to the successful use of ai technologies to promulgate disinformation to the masses during the us presidential elections of . due to the easy scalability of these systems, the negative effects are multiplicative in nature and have the potential to become runaway problems. given that ai systems are built on top of existing software and hardware, errors in the underlying systems can still cause failures at the level of the ai system. more so, given the statistical nature of ai systems, behaviour is inherently stochastic, and that can cause variability in responses which is difficult to account for; flash crashes in the financial markets are an example of this. for critical systems that require safety and robustness, there is a lot that needs to be done to ensure reliability. building ethics compliance by design can take a bottom-up or a top-down approach; the risk with a bottom-up approach is that by observing examples of human behaviour and extracting ethics principles from them, instead of getting what is good for people, you get what is common. hence, the report advocates for a top-down approach where desired ethical behavior is directly programmed into the system. casuistic approaches to embedding ethics into systems would work well in simple scenarios, such as in healthcare when the patient has a clear do-not-resuscitate directive. but in cases where there isn't one and where it is not possible to seek a directive from the patient, such an approach can fail, and it requires that programmers either embed rules in a top-down manner or that the system learns from examples. in a high-stakes domain like healthcare, though, it might not be ideal to rely on learning from examples because of skewed and limited numbers of samples. a dogmatic approach would also be ill-advised, where a system slavishly follows a particular school of ethical beliefs and might therefore make decisions that are unethical in certain scenarios. ethicists draw on several schools of thought when addressing a particular situation to arrive at a balanced decision, so it will be crucial to consult with a diversity of stakeholders so that the nuances of different situations can be captured well. the wef is working with partners on an "ethical switch" that would give a system the flexibility to operate with different schools of thought based on the demands of the situation. the report also proposes the potential of a guardian ai system that can monitor other ai systems to check for compliance with different sets of ai principles.
given that ai systems operate in a larger socio-technical ecosystem, we need to tap into fields like law and policy making to come up with effective ways of integrating ethics into ai systems, part of which can involve creating binding legal agreements that tie in with economic incentives. while policy making and law are often seen as slow to adapt to fast-changing technology, there are a variety of benefits to be had, for example higher customer trust in services that adhere to stringent regulations regarding privacy and data protection. this can serve as a competitive advantage and counter some of the innovation barriers imposed by regulations. another point of concern with these instruments is that they are limited by geography, which leads to a patchwork of regulation applying to a product or service that spans several jurisdictions. some other instruments to consider include self-regulation, certification, bilateral investment treaties, contract law, soft law, and agile governance. the report highlights the initiatives by ieee and wef in creating standards documents. the public sector, through its enormous spending power, can enhance the widespread adoption of these standards, for example by utilizing them in procurement for ai systems that are used to interact with and serve citizens. the report also advocates for the creation of an ethics board or a chief values officer as a way of enhancing the adoption of ethical principles in the development of products and services. for vulnerable segments of the population, for example children, there need to be higher standards of data protection and transparency that can help parents make informed decisions about which ai toys to bring into their homes. regulators might play an added role of enforcing certain ethics principles as part of their responsibility, and there also needs to be broader ai ethics education for people in technical roles. the fact that there are many negative applications of ai shouldn't preclude us from using ai systems for positive use cases; a risk assessment and prudent evaluation prior to use is a meaningful compromise. that said, there are certain scenarios where ai shouldn't be used at all, and these can be surfaced through the risk or impact assessment process. there is a diversity of ethical principles that have been put forth by various organizations, most of which are in some degree of accordance with local laws, regulations, and value sets, yet they share certain universal principles. one concern highlighted by the report is that even widely accepted and stated principles of human rights can become controversial when translated into specific mandates for an ai system. looking at ai-enabled toys as an example, while they raise many privacy and surveillance issues, in countries where there isn't adequate access to education these toys could be seen as a medium to impart precision education and increase literacy rates. thus, the regulator's job becomes harder in terms of figuring out how to balance the positive and negative impacts of any ai product; a lot of it depends on the context and the surrounding socio-economic system as well. given the diversity in ethical values and needs across communities, one approach might be for these groups to develop and apply non-binding certifications that indicate whether a product meets the ethical and value system of that community.
since there isn't a one-size-fits-all model that works, we should aim for a graded governance structure whose instruments are in line with the risk and severity profile of the applications. regulation in the field of ai thus presents a tough challenge, especially given the interrelatedness of these factors, and decisions need to be made in light of various competing and sometimes contradictory fundamental values. given the rapid pace of technological advances, the regulatory framework needs to be agile and strongly integrated into the product development lifecycle. the regulatory approach needs to balance the speed required to mitigate potential harms against an overzealousness that might lead to ineffective regulations which stifle innovation and fail to understand the technology in question. ai is currently enjoying a summer after a couple of winters of disenchantment, with massive interest and investment from researchers, industry, and everyone else. there are many uses of ai to create societal benefits, but they aren't without their socio-ethical implications: ai systems are prone to bias, unfairness, and adversarial attacks on their robustness, among other real-world deployment concerns. even when ethical ai systems are deployed for fostering social good, there are risks that they cater to only a particular group to the detriment of others. moral relativism would argue for a diversity of definitions as to what constitutes good ai, depending on time, context, culture, and more. this would be reflected in market decisions by consumers who choose products and services that align with their moral principles, but it poses a challenge for those trying to create public governance frameworks for these systems. this dilemma pushes regulators towards moral objectivism, which advocates for a single, universal set of values, making the process of coming up with a shared governance framework easier. the consensus-based approach utilized in crafting the ec trustworthy ai guidelines settled on human rights as something that everyone can get on board with. given the ubiquity of human rights in their applicability, especially with their legal enshrinement in various charters and constitutions, they serve as a foundation for creating legal, ethical, and robust ai, as highlighted in the ec trustworthy ai guidelines. stressing the importance of protecting human rights, the guidelines advocate for a trustworthy ai assessment whenever an ai system has the potential to negatively impact the human rights of an individual, much like the better-established data protection impact assessment requirement under the gdpr. additional requirements are imposed in terms of ex-ante oversight, traceability, auditability, stakeholder consultations, and mechanisms of redress in case of mistakes, harms, or other infringements. the universal applicability of human rights and their legal enshrinement also bring the benefits of established institutions like courts, whose function is to monitor and enforce these rights without prejudice across the populace. but human rights don't stand uncontested when it comes to building good ai systems; they are often seen as too western, individualistic, narrow in scope, and too abstract to be concrete enough for the developers and designers of these systems.
some argue against human rights on the grounds that they go against the plurality of value sets and are a continued form of imperialism, imposing a specific set of values in a hegemonic manner. but this can be rebutted by pointing to the original universal declaration of human rights, which was signed by nations across the world through an international diplomatic process, and despite numerous infringements since, there is a normative justification that these rights ought to be universal and enforced. while human rights might be branded as too focused on the individual, potentially creating a tension between protecting individual rights and the societal good, this is a weak argument, because stronger protection of individual rights has knock-on social benefits: free, healthy, and well-educated individuals (among other individual benefits) create a net positive for society, as such individuals are better aware of and more willing to be concerned about the societal good. while there are some exceptions to the absolute nature of human rights, most are well balanced in terms of providing for the societal good and the good of others while enforcing protections of those rights. given the long history of enforcement and the exercises in balancing these rights in legal instruments, there is a rich jurisprudence on which people can rely when trying to assess ai systems. human rights create a social contract between the individual and the state, putting obligations on the state towards the individual, and some argue that they don't apply horizontally between individuals or between an individual and a private corporation. increasingly, though, that's not the case, as we see many examples where the state intervenes and enforces these rights and obligations between an individual and a private corporation, since this falls within its mandate to protect rights in its jurisdiction. the abstract nature of human rights, as with any set of principles rather than rules, allows them to be applied to a diversity of situations, including hitherto unseen ones. but they rely on ad-hoc interpretation when enforced and are thus subjective in nature, which might lead to uneven enforcement across different cases. under the eu, this margin of appreciation is often criticized as leading to a weakening and twisting of different principles, but this deferment to those who are closer to the case actually allows for a nuanced approach that would otherwise be lost. on the other hand, we have rules, which are much more concrete formulations and thus have a rigid definition and limited applicability; this allows for uniformity but suffers from inflexibility in the face of novel scenarios. yet rules and principles are complementary approaches, and often the exercise of principles over time leads to their concretization into rules under existing and novel legal instruments. while human rights can thus provide a normative, overarching direction for the governance of ai systems, they don't provide the actual constituents of an applicable ai governance framework. for those who come from a non-legal background, often the technical developers and designers of ai systems, it is essential to understand their legal and moral obligations to codify and protect these rights in the applications that they build. the same argument cuts the other way, requiring a technical understanding of how ai systems work from legal practitioners, such that they can meaningfully identify when breaches might have occurred.
this is also important for those looking to contest claims of breaches of their rights when interacting with ai systems. this kind of enforcement requires a wide public debate to ensure that it falls within accepted democratic and cultural norms and values in its context. while human rights will continue to remain relevant in an ai systems environment, there might be novel ways in which breaches occur that need to be protected against, which requires a more thorough understanding of how ai systems work. growing the powers of regulators won't be sufficient if there isn't an understanding of the intricacies of the systems and where breaches can happen; there is thus a greater need to enshrine some of those responsibilities in law so that they are enforced by the developers and designers of the systems. given the large public awareness and momentum that has built up around ethics, safety, and inclusion issues in ai, we will certainly see more concrete actions around this in . the article gives a few examples of congressional hearings on these topics and advocates for the industry to come up with standards and definitions to aid the development of meaningful regulations. currently, there isn't a consensus on these definitions, which leads to varying approaches addressing the issues at different levels of granularity and from different angles. what this does is create a patchwork of incoherent regulations across domains and geographies that will ultimately leave gaps in effectively mitigating potential harms from ai systems, which can span international borders. while there are efforts underway to map all the different attempts at defining principle sets, we need a more coordinated approach to bring forth regulations that will ultimately protect consumer safety. in containing an epidemic, the most important steps include quarantine and contact tracing for more effective testing. while this process of contact tracing used to be hard and fraught with errors and omissions, relying on the memories of individuals, we now carry around smartphones which allow for ubiquitous and highly accurate tracking. but such ubiquity comes with an invasion of privacy and possible limits on the freedoms of citizens. such risks need to be balanced with the public interest in mind, using enhanced privacy-preserving techniques and any other measures that center citizen welfare in both a collective and an individual sense. for infections that can be asymptomatic in the early days, like covid- , it is essential to have contact tracing, which identifies all the people who came into close contact with an infected person and might spread the infection further. this becomes especially important in a pandemic, when the healthcare system is burdened and testing every person is infeasible. an additional benefit of contact tracing is that it mitigates a resurgence of infection peaks. r determines how quickly a disease will spread and depends on three factors (period of infection, contact rate, and mode of transmission), of which the first and third are fixed, so we're only left with control over the contact rate. with uptake of an application that facilitates contact tracing, the reduction in contact rate yields increasing returns because of the number of people who might come in contact with an infected person; thus we get a greater percentage reduction of r compared to the percentage uptake of the application in the population.
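to make the role of r concrete, here is a minimal numerical sketch (with illustrative numbers that are not drawn from the report) of how a reproduction number above or below one changes the trajectory of new cases:

```python
# minimal sketch: how the reproduction number r drives growth or decay of cases.
# initial case count, r values and generation count are illustrative assumptions.

def project_cases(initial_cases: int, r: float, generations: int) -> list[float]:
    """project expected new cases per generation under a constant reproduction number."""
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r)  # each case produces r new cases on average
    return cases

if __name__ == "__main__":
    # r above one: cases compound; r below one: the outbreak dies out.
    print(project_cases(100, r=1.5, generations=5))  # grows: 100, 150, 225, ...
    print(project_cases(100, r=0.8, generations=5))  # shrinks: 100, 80, 64, ...
```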
ultimately, reducing r below one leads to a slowdown in the spread of the infection, helping the healthcare system cope with the sudden stresses brought on by pandemic peaks. one of the techniques that governments or agencies responsible for public health use is broadcasting, in which the information of diagnosed carriers is made public via various channels; it carries severe issues, like exposing private information about individuals and the businesses they might have visited, which can trigger stigma, ostracization, and unwarranted punitive harm. it also suffers from the problem that people need to access this source of information of their own volition and then self-identify (and remember correctly) whether they have been in recent contact with a diagnosed carrier. selective broadcasting is a more restricted form of the above, where information about diagnosed carriers is shared with a select group of individuals based on location proximity, in which case the user's location privacy has to be compromised; in another vector of dissemination, messages are sent to all users but filtered on device for their specific location, which is not reported back to the broadcaster. the other second-order negative effects remain the same as for broadcasting, and both require the download of an application, which might decrease uptake. unicasting is when messages are tailored specifically to each user; it requires the download of an app that needs to be able to track timestamps and location, and it has severe consequences in terms of government surveillance and abuse. participatory sharing is a method where diagnosed carriers voluntarily share their information and thus retain more data control, but it still relies on individual action by both the sender and the receiver, and its efficacy is questionable at best. there is also a risk of abuse by malicious actors to spread misinformation and seed chaos in society via false alarms. private kit: safe paths is an open-source solution developed by mit that allows for contact tracing in a privacy-preserving way. it utilizes the encrypted location trail of a diagnosed carrier who chooses to share it with public health agencies; other users of the solution can pull this data and, via their own logged location trail, learn whether they have been in close contact with a diagnosed carrier. in later phases of development, the developers will enable a mix of participatory sharing and unicasting to further prevent possible data access by third parties, including governments, for surveillance purposes. risks of contact tracing include possible public identification of the diagnosed carrier and the severe social stigma that arises as a part of that; online witch hunts to identify the individual can worsen the harassment and include the spreading of rumors about their personal lives. the privacy risks for both individuals and businesses have the potential for severe harm, which can be especially troublesome during times of financial hardship. privacy risks also extend to non-users, because of proximal information that can be derived from location trails, such as employees who work at particular businesses visited by a diagnosed carrier; the same stigma and ostracization can also fall upon the family members of these people.
without meaningful alternatives, especially in health and risk assessment during a pandemic, obtaining truly informed consent is a real challenge that doesn't yet have any clear solutions. along with information, delivered through any of the methods identified above, it is very important to provide appropriate context and background to the alerts, to prevent misinformation and panic from spreading, especially among those with low health, digital, and media literacy. on the other hand, some might not take such alerts seriously and increase the risk to public health by not following required measures such as quarantine and social distancing. given the nature of such solutions, there is a significant risk of data theft by crackers, as is the case for any application that collects sensitive information like health status and location data. the solutions can also be used for fraud and abuse, for example by blackmailing business owners with the threat that, if they fail to pay a ransom, false information will be posted claiming that diagnosed carriers have visited their place of business. contact tracing technology requires the use of a smartphone with gps, and some vulnerable populations, like the elderly, the homeless, and people living in low-income countries, might not always have such devices available even though they are at high risk of infection and negative health outcomes; ensuring that the technology works for all will be an important piece of mitigating the spread effectively. there is an inherent tradeoff between the utility derived from the data and the privacy of the data subjects, and compromises may be required for particularly severe outbreaks to manage the spread. the diagnosed carriers are the most vulnerable stakeholders in the ecosystem of contact tracing technology and require the most protection. adopting open-source solutions that are examinable by the wider technology ecosystem can engender public trust. additionally, having proper consent mechanisms in place and excluding the requirement of extensive third-party access to the location data can help allay concerns. lastly, time limits on the storage and use of the location trails will also help address privacy concerns and increase uptake and use of the application in support of public health measures. for geolocation data that might affect businesses, especially in times of economic hardship, information release should be done such that the businesses are informed prior to the release, but there is little else in current methods that can both protect privacy and provide sufficient data utility. for those without access to smartphones with gps, providing some information on contact tracing can still help their communities; but one must present information in a manner that accounts for variation in health literacy levels so that an appropriate response is elicited. alertness about potential misinformation and educational awareness are key during times of crisis, to encourage people to have measured responses that follow the best practices advised by health agencies rather than responses based on fear mongering by ill-informed and/or malicious actors. encryption and other cybersecurity best practices for data security and privacy are crucial for the success of the solution. time limits on holding data for covid- are recommended at - days, the period of infection, but for an evolving pandemic one might need to hold it longer for more analysis.
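as a small illustration of the retention limits discussed above, the following sketch purges location-trail records older than an assumed retention window; the window length, record layout, and function names are hypothetical and not taken from any of the cited solutions:

```python
# minimal sketch of enforcing a retention window on a stored location trail.
import time
from typing import Optional

RETENTION_SECONDS = 28 * 24 * 60 * 60  # assumed retention window, not a figure from the report

def purge_expired(trail: list[dict], now: Optional[float] = None) -> list[dict]:
    """keep only location records newer than the retention window."""
    now = time.time() if now is None else now
    return [point for point in trail if now - point["timestamp"] <= RETENTION_SECONDS]

# usage: run periodically (e.g. whenever the app syncs) so stale trail entries are dropped
trail = [{"timestamp": time.time(), "lat": 45.5, "lon": -73.6},
         {"timestamp": time.time() - 60 * 24 * 60 * 60, "lat": 45.5, "lon": -73.6}]
print(len(purge_expired(trail)))  # 1: the 60-day-old record is discarded
```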
tradeoffs need to be made between privacy concerns and public health utility. different agencies and regions are taking different approaches with varying levels of efficacy, and only time will tell how this change will be best managed. it does present an opportunity, though, for creating innovative solutions that both allow for public sharing of data and reduce privacy intrusions. while the insights presented in this piece of work are ongoing and will continue to be updated, we felt it important to highlight the techniques and considerations compiled by the openmined team, as it is one of the few places that adequately captures, in a single place, most of the technical requirements needed to build a solution that respects fundamental rights while balancing them with public health outcomes, as people rush to make ai-enabled apps to combat covid- . most articles and research work coming out elsewhere are scant and abstract on the technical details needed to meet the ideals of respecting privacy while enabling health authorities to curb the spread of the pandemic. the four key techniques that will help preserve and respect rights as more and more people develop ai-enabled applications to combat covid- are: on-device data storage and computation, differential privacy, encrypted computation, and privacy-preserving identity verification. the primary use cases, from a user perspective, for which apps are being built are: proximity alerts, exposure alerts, information on planning trips, symptom analysis, and demonstrating proof of health. from a government and health authorities' perspective, they are looking for fast contact tracing, high-precision self-isolation requests, high-precision self-isolation estimation, high-precision symptomatic citizen estimation, and demonstration of proof of health. while public health outcomes are top of mind for everyone, the above use cases are trying to achieve the best possible tradeoff between economic impacts and epidemic spread; using the techniques highlighted in this work, it is possible to do so without having to erode the rights of citizens. this living body of work is meant to serve as a high-level guide, along with resources, to enable both app developers and verifiers to implement and check for privacy preservation, which has been the primary pushback from citizens and civil activists. evoking a high degree of trust from people will improve adoption of the apps developed and hopefully allow society and the economy to return to normal sooner while mitigating the harmful effects of the epidemic. there is a fair amount of alignment in the goals of individuals and governments, with the difference being that the government is looking at aggregate outcomes for society. some of the goals shared by governments across the world include: preventing the spread of the disease, eliminating the disease, protecting the healthcare system, protecting the vulnerable, adequately and appropriately distributing resources, preventing secondary breakouts, and minimizing economic impacts and panic. digital contact tracing is important because manual interventions are usually highly error prone and rely on human memory to reconstruct whom an infected person might have come in contact with.
the requirement for high-precision self-isolation requests avoids the need for geographic quarantines, in which everyone in an area is forced to self-isolate, leading to massive disruptions to the economy and potentially stalling the delivery of essential services like food, electricity, and water. an additional benefit of high-precision self-isolation is that it can help strike an appropriate balance between economic harms and epidemic spread. high-precision symptomatic citizen estimation is a useful application in that it allows for a more fine-grained estimate of the number of people who might be affected beyond what test results indicate, which can further strengthen the precision of other measures that are undertaken. a restoration of normalcy in society is going to be crucial as the epidemic starts to ebb; in this case, having proof of health that helps to identify the lowest-risk individuals will allow them to participate in public spaces again, further bolstering the supply of essential services and relieving the burden on the small subset of workers currently participating. to serve the needs of both users and governments, we need to be able to collect the following data: historical and current absolute location, historical and current relative position, and verified group identity, where group refers to any demographic that the government might be interested in, for example age or health status. to create an application that meets these needs, we need to collect data from a variety of sources, compute aggregate statistics on that data, and then set up a messaging architecture that communicates the results to the target population. the toughest challenges lie in the first and second parts of this process, especially doing the second part in a privacy-preserving manner. for historical and current absolute location, one of the first options considered by app developers is to record gps data in the background. unfortunately, this doesn't work on ios devices, and even where it does it has several limitations, including coarseness in dense urban areas and usefulness only after the app has been running on the user's device for some time, because historical data cannot be sourced otherwise. an alternative is to use wi-fi router information, which can give more accurate information as to whether someone has been self-isolating, based on whether they are connected to their home router. historical data can be available here, which makes it more useful, though there are concerns with the lack of widespread wi-fi connectivity in rural areas and with tracking when people are outside their homes. other ways of obtaining location data include existing apps and services that a user uses - for example, the history of movements on google maps, which can be parsed to extract location history. historical location data could also be pieced together from payments history, cars that record location information, and personal cell tower usage data. historical and current relative data is even more important for mapping the spread of the epidemic, and in this case some countries like singapore have deployed bluetooth broadcasting as a means of determining whether people have been in close proximity. the device broadcasts a random number (which can change frequently) that is recorded by devices passing close to each other, and in case someone tests positive, this can be used to alert people who were in close proximity to them.
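a minimal sketch of the rotating-token idea described above (in the spirit of singapore-style bluetooth proximity logging) might look like the following; the class, token sizes, and names are illustrative assumptions rather than details of any deployed app:

```python
# minimal sketch: devices broadcast short-lived random tokens, remember tokens heard
# nearby, and later check them against tokens published for diagnosed carriers.
import secrets

class ProximityDevice:
    def __init__(self) -> None:
        self.broadcast_tokens: list[str] = []  # tokens this device has emitted
        self.observed_tokens: set[str] = set()  # tokens heard from nearby devices

    def next_token(self) -> str:
        """generate a fresh random token to broadcast (rotated frequently)."""
        token = secrets.token_hex(16)
        self.broadcast_tokens.append(token)
        return token

    def observe(self, token: str) -> None:
        """record a token broadcast by a device that came into close proximity."""
        self.observed_tokens.add(token)

    def check_exposure(self, published_carrier_tokens: set[str]) -> bool:
        """compare local observations against tokens released for diagnosed carriers."""
        return bool(self.observed_tokens & published_carrier_tokens)

# usage: alice and bob pass each other; bob later tests positive and shares his tokens.
alice, bob = ProximityDevice(), ProximityDevice()
alice.observe(bob.next_token())
print(alice.check_exposure(set(bob.broadcast_tokens)))  # True: alice gets an exposure alert
```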
another potential approach highlighted in the article is to utilize gyroscope and ambient audio hashes to determine whether two people might have been close together, though bluetooth will provide more consistent results. the reason to use multiple approaches is the benefit of getting more accurate information overall, since it would be harder to fake multiple signals. group membership is another important aspect, where the information can be used to finely target messaging and calculate aggregate statistics. but for some types of group membership we might not be able to rely completely on self-reported data; for example, health status related to the epidemic would require verification from an external third party such as a medical institution or testing facility to minimize false information. there are several privacy-preserving techniques that could be applied to an application, given that you have: confirmed covid- patient data in a cloud, all other user data on each individual's device, and data on both the patients and the users including historical and current absolute and relative locations and group identifier information. private set intersection can be used to calculate whether two people were in proximity to each other based on their relative and absolute location information. private set intersection operates similarly to normal set intersection, finding elements that are common between two sets, but does so without disclosing any private information from either set. this is important because performing analysis even on pseudonymized data, without using privacy preservation, can leak a lot of information. differential privacy is another critical technique to be utilized: dp consists of providing mathematical guarantees (even against future data disclosures) that analysis on the data will not reveal whether or not your data was part of the dataset. it asserts that, from the analysis, one is not able to learn anything about your data that they wouldn't have been able to learn from other data about you. google's battle-tested c++ library is a great resource to start with, along with the python wrapper created by the openmined team. to address the need for verified group identification, one can utilize the concept of a private identity server. it essentially functions as a trusted intermediary between a user who wants to provide a claim and another party that wants to verify the claim. it works by querying a service from which it can verify whether the claim is true and then serving that information up to the party wishing to verify the claim, without giving away personal data. while it might be hard to trust a single intermediary, this role can be decentralized to obtain a higher degree of trust by relying on a consensus mechanism.
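here is a toy sketch of a diffie-hellman style private set intersection, the kind of primitive described above that would let two location trails be compared without revealing either set; the modulus, secrets, and encoding are purely illustrative and far from production-grade cryptography:

```python
# toy sketch of commutative-exponentiation private set intersection (illustration only).
import hashlib

P = 2**127 - 1  # small illustrative prime modulus (not secure in practice)

def hash_to_group(item: str) -> int:
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(items: set[str], secret: int) -> set[int]:
    """each party raises its hashed items to its own secret exponent."""
    return {pow(hash_to_group(x), secret, P) for x in items}

def reblind(blinded: set[int], secret: int) -> set[int]:
    """the other party applies its secret on top; exponentiation commutes."""
    return {pow(y, secret, P) for y in blinded}

# usage: both trails end up as h(x)^(a*b) mod p, so only common elements collide.
a_secret, b_secret = 0x1234567, 0x7654321  # illustrative secrets
alice_trail = {"cell:412|t:10", "cell:77|t:11"}
bob_trail = {"cell:412|t:10", "cell:9|t:12"}
alice_double = reblind(blind(alice_trail, a_secret), b_secret)
bob_double = reblind(blind(bob_trail, b_secret), a_secret)
print(len(alice_double & bob_double))  # 1 shared location-time bucket, nothing else revealed
```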
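and a minimal sketch of the laplace mechanism, the basic building block behind the differential privacy guarantees mentioned above; epsilon and the count query are illustrative assumptions, and production use should rely on a vetted library such as the ones referenced above rather than hand-rolled noise:

```python
# minimal sketch of an epsilon-differentially-private count via the laplace mechanism.
import math
import random

def laplace_noise(scale: float) -> float:
    """sample from laplace(0, scale) via inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values: list[bool], epsilon: float) -> float:
    """count of true values with epsilon-dp; the sensitivity of a count query is 1."""
    return sum(values) + laplace_noise(scale=1.0 / epsilon)

# usage: report how many users were flagged as exposed without exposing any individual.
exposed_flags = [True, False, True, True, False]
print(private_count(exposed_flags, epsilon=0.5))  # noisy count near 3
```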
building on theory from management studies by christensen et al., the authors of this article dive into how the leaders of tech organizations, especially upstarts that rapidly disrupt incumbents, should approach the responsibilities that accompany a push to displace existing paradigms of how an industry works. when different parts of the value chain in how a service is delivered become decoupled, the protections that applied to the entire pipeline often fall by the wayside, because of distancing from the end user and a diffusion of responsibility across the multiple stakeholders in the value chain. while end-user-driven innovation will seek to reinforce such models, regulations and protections are never at the top of those demands, and they become a burden on consumers once they realize that things can go wrong and negatively affect them. the authors advocate for the leaders of these companies to proactively employ a systems thinking approach: identify the different parts of the value chain they are disrupting, how that might affect users, and what would happen if they become the dominant player in the industry, and then apply the lessons learned from such an exercise to pre-emptively design safeguards into the system to mitigate unintended consequences. many countries are looking at utilizing existing surveillance and counter-terrorism tools to help track the spread of the coronavirus and are urging tech companies and carriers to assist with this. the us is looking at how it can tap into location data from smartphones, following on the heels of israel and south korea, which have deployed similar measures. while extraordinary measures might be justified given the time of crisis we're going through, we mustn't lose sight of what behaviors we are normalizing as a part of this response to the pandemic. russia and china are also using facial recognition technologies to track the movements of people, while iran is endorsing an app that might be used as a diagnosis tool. expansion of the boundaries of surveillance capabilities and government powers is hard to rein back in once a crisis is over. in some cases, as with the signing of the freedom act in the usa, government agencies' data collection abilities that had been expanded under the patriot act were reduced; but that's not always the case, and even so, the powers today exceed those that existed prior to the enactment of the patriot act. what's most important is to ensure that the decisions policy makers take today keep in mind time limits on such expansions of power and don't trigger a future privacy crisis. while no replacement for social distancing, a virus tracking tool putting the technique of contact tracing into practice is largely unpalatable to western democracies because of expectations of privacy and freedom of movement. a british effort is underway to create an app that meets democratic ideals of privacy and freedom while also being useful in collecting geolocation data to aid virus containment efforts. it is based on the notion of participatory sharing, relying on people's sense of civic duty to contribute their data in case they test positive. while in the usa discussions between the administration and technology companies have focused on large-scale aggregate data collection, in a place like the uk, with a centralized healthcare system, there might be higher levels of trust in sharing data with the government. the app doesn't require uptake by everyone to be effective, but a majority of people would need to use it to bring down the rate of spread. the efficacy of the solution will rely on being able to collect granular location data from multiple sources including bluetooth, wi-fi, cell tower data, and app check-ins. many high-level cdc officials are advising that if people in the usa don't follow best practices of social distancing, sheltering in place, and washing hands regularly, the outbreak will not have peaked and the infection will continue to spread, especially hitting those who are the most vulnerable, including the elderly and those with pre-existing conditions.
on top of the public health impacts, there are also concerns about growing tech-enabled surveillance, which is being seriously explored as an additional measure to curb the spread. while privacy and freedom rights are enshrined in the constitution, during times of crisis government and justice powers are expanded to allow extraordinary measures to be adopted to restore public safety. this is one of those times, and the us administration is actively exploring options in partnership with various governments on how to effectively combat the spread of the virus, including the use of facial recognition technology. this comes shortly after the techlash and a potential bipartisan movement to curb the degree of data collection by large firms, both of which seem to have come to a halt as everyone scrambles to battle the coronavirus. regional governments are being imbued with escalated powers to override local legislation in an effort to curb the spread of the virus. the article provides details on efforts by various countries across the world, yet we only have preliminary data on the efficacy of each of those measures and require more time before being able to judge which of them is most effective. that said, in a fast-spreading pandemic we don't have the luxury of time and must make decisions as quickly as possible with the information at hand, perhaps using guidance from prior crises. but what we've seen so far is minimal coordination between agencies across the world, and that's leading to ad-hoc, patchy data use policies that will leave the marginalized more vulnerable. strategies that publicly disclose, in the interest of public health, who has tested positive are causing harm to those individuals and to others close to them, such as their families. as experienced by a family in new york, online vigilantes attempted to harass the individuals even as the family pleaded with them and communicated the measures they had taken to isolate themselves and safeguard others. unfortunately, the virus might be bringing out the worst in all of us. an increasing number of tools and techniques are being used to track our behaviour online, and while some may have potential benefits, for example the use of contact tracing to potentially improve public health outcomes, if this is not done in a privacy-preserving manner there can be severe implications for your data rights. barring special circumstances like the current pandemic, there are a variety of simple steps you can take to protect your privacy online. these range from using an incognito browser window, which doesn't store any local information about your browsing on your device, to using things like vpns, which protect your browsing patterns from snooping even by your isp. when it comes to the incognito function of your browser, it does prevent cookies from being stored on your device, but if you're logged into a service online there isn't any protection. with vpns, there is an implicit trust placed in the provider of that service not to store logs of your browsing activity. an even more secure option is to use a privacy-first browser like tor, which routes your traffic requests through multiple locations, making tracking hard. there is also an os built around this, called tails, that offers tracking protection from the device perspective as well, leaving no trace on the host machine and allowing you to boot from a usb drive.
the eff also provides a list of tools that you can use to get a better grip on your privacy as you browse online. under the children's online privacy protection act, the ftc levied its largest fine yet of $ m on youtube last year for failing to meet requirements of limiting personal data collection for children under the age of . yet, as many advocates of youth privacy point out, the fines, though they appear to be large, don't do enough to deter such personal data collection. they advocate for a stronger version of the act while requiring more stringent enforcement from the ftc which has been criticized for slow responses and a lack of sufficient resources. while the current act requires parental consent for children below to be able to utilize a service that might collect personal data, there is no verification performed on the self-declared age provided at the time of sign up which weakens the efficacy of this requirement. secondly, the sharp threshold of years old immediately thrusts children into an adult world once they cross that age and some people are advocating for a more graduated approach to the application of privacy laws. given that such a large part of the news cycle is dominated by the coronavirus, we tend to forget that there might be censors at work that are systematically suppressing information in an attempt to diminish the seriousness of the situation. some people are calling github the last piece of free land in china and have utilized access to it to document news stories and people's first hand experiences in fighting the virus before they are scrubbed from local platforms like wechat and weibo. they hope that such documentation efforts will not only shed light on the reality and on the ground situation as it unfolds but also give everyone a voice and hopefully provide data to others who could use it to track the movement of the virus across the country. such times of crisis bring out creativity and this attempt highlights our ability as a species to thrive even in a severely hostile environment. there is a clear economic and convenience case to be made (albeit for the majority, not for those that are judged to be minorities by the system and hence get subpar performance from the system) where you get faster processing and boarding times when trying to catch a flight. yet, for those that are more data-privacy minded, there is an option to opt-out though leveraging that option doesn't necessarily mean that the alternative will be easy, as the article points out, travelers have experienced delays and confusion from the airport staff. often, the alternatives are not presented as an option to travelers giving a false impression that people have to submit to facial recognition systems. some civil rights and ethics researchers tested the system and got varying mileage out of their experiences but urge people to exercise the option to push back against technological surveillance. london is amongst a few cities that has seen public deployment of live facial recognition technology by law enforcement with the aim of increasing public safety. but, more often than not, it is done so without public announcement and an explanation as to how this technology works, and what impacts it will have on people's privacy. as discussed in an article by maiei on smart cities, such a lack of transparency erodes public trust and affects how people go about their daily lives. 
several artists in london, as a way of regaining control over their privacy and raising awareness, are using the technique of painting adversarial patterns on their faces to confound facial recognition systems. they employ highly contrasting colors to mask the highlights and shadows of their faces, practicing pattern use as created and disseminated by the cvdazzle project, which advocates many different styles to give the more fashion-conscious among us a way to express ourselves while preserving our privacy. such projects showcase a rising awareness of the negative consequences of ai-enabled systems and also how people can use creative solutions to combat problems where laws and regulations fail them. there is mounting evidence that organizations are taking seriously the threats arising from malicious actors geared towards attacking ml systems. this is supported by the fact that organizations like iso and nist are building up frameworks for guidance on securing ml systems, that working groups from the eu have put forth concrete technical checklists for evaluating the trustworthiness of ml systems, and that ml systems are becoming key to the functioning of organizations, which are hence inclined to protect their crown jewels. the organizations surveyed as part of this study spanned a variety of domains and were limited to those with mature ml development. the focus was on two personas: ml engineers who build these systems and security incident responders whose task is to secure the software infrastructure, including the ml systems. depending on the size of the organization, these could be people in different teams, in the same team, or even the same person. the study was also limited to intentional malicious attacks and didn't investigate the impacts of naturally occurring adversarial examples, distributional shifts, common corruptions, and reward hacking. most organizations surveyed were found to be focused primarily on traditional software security and didn't have the right tools or know-how to secure against ml attacks; they also indicated that they were actively seeking guidance in the space. most organizations clustered around concerns regarding data poisoning attacks, probably because of the cultural significance of the tay chatbot incident. additionally, privacy breaches were another significant concern, followed by concerns around model stealing attacks that can lead to the loss of intellectual property. other attacks, such as attacking the ml supply chain and adversarial examples in the physical domain, didn't catch the attention of the people surveyed. one of the gaps between reality and expectations was that security incident responders and ml engineers expected the libraries they use for ml development to be battle-tested before being put out by large organizations, as is the case in traditional software. they also pushed the responsibility for security upstream in cases where they were using ml as a service from cloud providers. yet this ignores the fact that this is an emergent field and that many of the concerns need to be addressed in the downstream tasks being performed with these tools. they also didn't have a clear understanding of what to expect when something does go wrong and what the failure mode would look like.
in traditional software security, mitre maintains a curated repository of attacks, along with detection cues, reference literature, and tell-tale signs of the malicious entities, including nation-state attackers, known to use those attacks. the authors call for a similar compilation in the emergent field of adversarial machine learning, whereby researchers and practitioners register their attacks and other information in a curated repository that gives everyone a unified view of the existing threat environment. while programming languages often have well-documented guidelines on secure coding, guidance for doing so with popular ml frameworks like pytorch, keras, and tensorflow is sparse; amongst these, tensorflow is the only one that provides some tools for testing against adversarial attacks and some guidance on secure coding in the ml context. the security development lifecycle (sdl) provides guidance on how to secure systems, scores vulnerabilities, and offers some best practices, but applying it to ml systems might allow imperfect solutions to persist. instead of treating guidelines as providing a strong security guarantee, the authors advocate for code examples that showcase what constitutes security-compliant and non-security-compliant ml development. in traditional software security, there are tools for static code analysis that provide guidance on security vulnerabilities before the code is committed to a repository or executed, while dynamic code analysis finds security vulnerabilities by executing the different code paths and detecting vulnerabilities at runtime. there are some tools, like mlsec and cleverhans, that provide white- and black-box testing; one potential future direction for research is to extend this to model stealing, model inversion, and membership inference attacks. including these tools as part of the ide would further naturalize secure coding practices in the ml context for developers. adapting the audit and logging requirements needed for a security information and event management (siem) system to the field of ml, one can execute the attacks specified in the literature and ensure that the logging artifacts generated as a consequence can be traced back to an attack. having these incident logs in a format that is exportable and integrable with siem systems will then allow forensic experts to analyze them post hoc for hardening and analysis. standardizing reporting, logging, and documentation, as done by the sigma format in traditional software security, will allow the insights from one analyst to feed into defenses for many others. automating the possible attacks and including them as part of the mlops pipeline will enhance the security posture of these systems and make such checks standard practice in the sdl. red teaming, as done in security testing, can be applied to assess the business impacts and likelihood of threats, something that is considered best practice and is often a requirement for supplying critical software to organizations like the us government.
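to make concrete the kind of white-box probing that the testing tools mentioned above automate, here is a minimal sketch of the fast gradient sign method against a toy pytorch classifier; the model, input, and epsilon are illustrative assumptions, not part of any cited framework:

```python
# minimal sketch: craft an adversarial example by stepping along the sign of the input gradient.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor, eps: float) -> torch.Tensor:
    """fast gradient sign method: perturb x to increase the loss on the given label."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# usage: probe a toy classifier; a test harness would flag models whose predictions
# flip under small eps as fragile against this class of attack.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.randn(1, 4)
label = torch.tensor([1])
x_adv = fgsm_perturb(model, x, label, eps=0.1)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```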
transparency centers that allow for deep code inspection and help create assurance about the security posture of a software product or service can be extended to ml, where they would have to cover three modalities: that the ml platform is implemented in a secure manner, that ml as a service meets basic security and privacy requirements, and that ml models embedded on edge devices meet basic security requirements. tools that build on formal verification methods will help enhance this practice. tracking and scoring ml vulnerabilities, akin to software security practice where identified vulnerabilities are registered in a common database like cve and then assigned an impact score like the cvss, also needs to be done for the field of ml. while the common database part is easy to set up, scoring the vulnerabilities isn't something that has been figured out yet. additionally, on being alerted that a new vulnerability has been discovered, it isn't clear how the ml infrastructure can be scanned to see whether the system is vulnerable to it. because of the deep integration of ml systems within the larger product or service, the typical practice of identifying a blast radius and containment strategy, as applied to traditional software infrastructure when alerted to a vulnerability, is hard to define and apply here; prior research work from google has identified some ways to qualitatively assess the impacts in a sprawling infrastructure. from a forensic perspective, the authors put forth several questions that one can ask to guide post-hoc analysis; the primary problem is that only some of the learnings from traditional software protection and analysis apply here, and there are many new artifacts and paradigmatic and environmental aspects that need to be taken into consideration. from a remediation perspective, we need to develop metrics and ways to ascertain that patched models and ml systems maintain prior levels of performance while mitigating the attacks they were vulnerable to; the other thing to pay attention to is that no new attack surfaces are opened up. given that ml is going to be the new software, we need to think seriously about inheriting some of the security best practices from the world of traditional cybersecurity to harden defenses in the field of ml. all technology has implications for civil liberties and human rights; the paper opens with an example of how low-clearance bridges between new york and long island were supposedly created with the intention of preventing public buses from crossing via the underpasses, discouraging users of public transportation, primarily disadvantaged groups, from accessing certain areas. in the context of adversarial machine learning, taking the case of facial recognition technology (frt), the authors demonstrate that harm can fall on the most vulnerable, harm which is not theoretical and is growing in scope, and that the analysis extends beyond frt systems. the notion of legibility, borrowed from prior work, explains how governments seek to centrally categorize information about their subjects through customs, conventions, and other mechanisms. legibility of faces is enabled through frt, something that previously was only a human skill; combined with the scale offered by machine learning, this makes frt a potent tool for authoritarian states to exert control over their populations.
from a cybersecurity perspective, attackers are those who compromise the confidentiality, integrity, and availability of a system, yet they are not always malicious; sometimes they may be pro-democracy protestors trying to resist identification and arrest through the use of frt. when we frame the challenges of building robust ml systems, we must also pay attention to the social and political implications of who the system is being made safe for and at what cost. attacks with positive intent might also be carried out by academics trying to learn about and address some of the ethical, safety, and inclusivity issues around frt systems. in other examples, hardening systems against membership inference means that researchers can't determine whether an image was included in the dataset, and someone looking to use this as evidence in a court of law is deterred from doing so. detection perturbation algorithms permit an image to be altered such that faces in it can't be recognized; for example, this can be used by a journalist to take a picture of a protest scene without giving away the identities of the people in it. but defensive measures that disarm such techniques hinder these positive use cases. defense measures against model inversion attacks don't allow researchers and civil liberty defenders to peer into black box systems, especially those that might be biased against minorities in cases like credit allocation, parole decision-making, etc. the world of security is always an arms race, whether in physical space or cyberspace. it is not far-fetched to imagine how a surveillance state might deploy frt to identify protestors, who as a defense might start to wear face masks for occlusion; the state could then deploy techniques that bypass this and utilize other scanning and recognition techniques, to which the people might respond by wearing adversarial clothing and eyeglasses to throw off the system, at which point the state might choose to use other biometric identifiers like iris scanning and gait detection. this constant arms race, especially when defenses and offenses are constructed without a sense of the societal impacts, leads to harm whose burden is mostly borne by those who are the most vulnerable and are fighting for their rights and liberties. this is not the first time technology has run up against civil liberties and human rights; there are lessons to be learned from the commercial spyware industry and how civil society organizations and other groups came together to create "human rights by design" principles that helped set some ground rules for using that technology responsibly. researchers and practitioners in the field of ml security can borrow from these principles. we've got a learning community at the montreal ai ethics institute centered around these ideas that brings together academics and others from around the world to blend the social sciences with the technical sciences. recommendations for countering some of the harms centre around holding the vendors of these systems to the business standards set by the un, implementing transparency measures during the development process, utilizing human rights by design approaches, logging ml system uses along with the possible nature and forms of attacks, and pushing the development team to think about both the positive and negative use cases for the systems, such that informed trade-offs can be made when hardening these systems against external attacks.
in this insightful op-ed, two pioneers in technology shed light on how to think about ai systems and their relation to existing power and social structures. borrowing the last line in the piece, " … all that is necessary for the triumph of an ai-driven, automation-based dystopia is that liberal democracy accept it as inevitable.", aptly captures the current mindset surrounding ai systems and how they are discussed in the western world. tv shows like black mirror perpetuate narratives showcasing the magical power of ai-enabled systems, hiding the fact that there are millions, if not billions, of hours of human labor that undergird the success of modern ai systems, which largely fall under the supervised learning paradigm that requires massive amounts of data to work well. the chinese ecosystem is a bit more transparent in the sense that the shadow industry of data labellers is known, and workers are compensated for their efforts. this makes them a part of the development lifecycle of ai while sharing economic value with people other than the tech-elite directly developing ai. on the other hand, in the west, we see that such efforts go largely unrewarded because we trade in that effort of data production for free services. the authors give the example of audrey tang and taiwan, where citizens have formed a data cooperative and have greater control over how their data is used. contrasting that, we have highly-valued search engines standing over community-run efforts like wikipedia, which create the actual value for the search results, given that a lot of the highly placed search results come from wikipedia. ultimately, this gives us some food for thought as to how we portray ai today and its relation to society, and why it doesn't necessarily have to be that way.

mary shelley had created an enduring fiction which, unbeknownst to her, has today manifested itself in the digital realm with layered abstractions of algorithms that are increasingly running multiple aspects of our lives. the article dives into the world of black box systems that have become opaque to analysis because of their stratified complexity, leading to situations with unpredictable outcomes. this was exemplified when an autonomous vehicle crashed into a crossing pedestrian and it took months of post-hoc analysis to figure out what went wrong. when we talk about intelligence in the case of these machines, we're using it in a very loose sense, like the term "friend" on facebook, which has a range of interpretations from your best friend to a random acquaintance. both terms convey a greater sense of meaning than is actually true. when such systems run amok, they have the potential to cause significant harm, case in point being the flash crashes the financial markets experienced because of the competitive behaviour of high-frequency trading firms' algorithms facing off against each other in the market. something similar has happened on amazon, where items get priced in an unrealistic fashion because of buying and pricing patterns triggered by automated systems. while in a micro context the algorithms and their workings are transparent and explainable, when they come together in an ecosystem, like finance, they lead to an emergent complexity whose behaviour can't be predicted ahead of time with a great amount of certainty. but such justifications can't be used as a cover for evading responsibility when it comes to mitigating harms.
existing laws need to be refined and amended so that they can better meet the demands of new technology where allocation of responsibility is a fuzzy concept. ai systems are different from other software systems when it comes to security vulnerabilities. while traditional cybersecurity mechanisms rely heavily on securing the perimeter, ai security vulnerabilities run deeper and they can be manipulated through their interactions with the real world -the very mechanism that makes them intelligent systems. numerous examples of utilizing audio samples from tv commercials to trigger voice assistants have demonstrated new attack surfaces for which we need to develop defense techniques. visual systems are also fooled, especially in av systems where, according to one example, manipulating stop signs on the road with innocuous stripes of tape make it seem like the stop sign is a speed indicator and can cause fatal crashes. there are also examples of hiding these adversarial examples under the guise of white noise and other imperceptible changes to the human senses. we need to think of ai systems as inherently socio-technical to come up with effective protection techniques that don't just rely on technical measures but also look at the human factors surrounding them. some other useful insights are to utilize abusability testing, red-teaming, white hacking, bug bounty programs, and consulting with civic society advocates who have deep experience with the interactions of vulnerable communities with technology. reinforcement systems are increasingly moving from applications to beating human performance in games to safety-critical applications like self-driving cars and automated trading. a lack of robustness in the systems can lead to catastrophic failures like the $ m lost by knight capital and the harms to pedestrian and driver safety in the case of autonomous vehicles. rl systems that perform well under normal conditions can be vulnerable to adversarial agents that can exploit the brittleness of the systems when it comes to natural shifts in distributions and more carefully crafted attacks. in prior threat models, the assumptions for the adversary are that they can modify directly the inputs going into the rl agent but that is not very realistic. instead, here the authors focus more on a shared environment through which the adversary creates indirect impact on the target rl agent leading to undesirable behavior. for agents that are trained through self-play (which is a rough approximation of nash equilibrium), they are vulnerable to adversarial policies. as an example, masked victims are more robust to modifications in the natural observations by the adversary but that lowers the performance in the average case. furthermore, what the researchers find is that there is a non-transitive behavior between self-play opponent, masked victim, adversarial opponent and normal victim in that cyclic order. self-play being normally transitive in nature, especially when mimicking real-world scenarios is then no doubt vulnerable to these non-transitive styled attacks. thus, there is a need to move beyond self-play and apply iteratively adversarial training defense and population based training methods so that the target rl agent can become robust to a wider variety of scenarios. vehicle safety is something of paramount importance in the automotive industry as there are many tests conducted to test for crash resilience and other physical safety features before it is released to people. 
but, the same degree of scrutiny is not applied to the digital and connected components of cars. researchers were able to demonstrate successful proof of concept hacks that compromised vehicle safety. for example, with the polo, they were able to access the controller area network (can) which sends signals and controls a variety of aspects related to driving functions. given how the infotainment systems were updated, researchers were able to gain access into the personal details of the driver. they were also able to utilize the shortcomings in the operation of the key fob to gain access to the vehicle without leaving a physical trace. other hacks that were tried included being able to access and influence the collision monitoring radar system and the tire-pressure monitoring system which both have critical implications for passenger safety. on the focus, they found wifi details including the password for their production line in detroit, michigan. on purchasing a second-hand infotainment unit for purposes of reverse-engineering the firmware, they found the previous owner's home wifi details, phone contacts and a host of other personal information. cars store a lot of personal information including tracking information which, as stated on the privacy policy, can be shared with affiliates which can have other negative consequences like changes in insurance premiums based on driving behaviour. europe will have some forthcoming regulations for connected car safety but those are currently slated for release in . we've all experienced specification gaming even if we haven't really heard the term before. in law, you call it following the law to the letter but not in spirit. in sports, it is called unsportsman-like to use the edge cases and technicalities of the rules of the game to eke out an edge when it is obvious to everyone playing the game that the rules intended for something different. this can also happen in the case of ai systems, for example in reinforcement learning systems where the agent can utilize "bugs" or poor specification on the part of the human creators to achieve the high rewards for which it is optimizing without actually achieving the goal, at least in the way the developers intended them to and this can sometimes lead to unintended consequences that can cause a lot of harms. "let's look at an example. in a lego stacking task, the desired outcome was for a red block to end up on top of a blue block. the agent was rewarded for the height of the bottom face of the red block when it is not touching the block. instead of performing the relatively difficult maneuver of picking up the red block and placing it on top of the blue one, the agent simply flipped over the red block to collect the reward. this behaviour achieved the stated objective (high bottom face of the red block) at the expense of what the designer actually cares about (stacking it on top of the blue one)". this isn't because of a flaw in the rl system but more so a misspecification of the objective. as the agents become more capable, they find ever-more clever ways of achieving the rewards which can frustrate the creators of the system. this makes the problem of specification gaming very relevant and urgent as we start to deploy these systems in a lot of real-world situations. in the rl context, task specification refers to the design of the rewards, the environment and any other auxiliary rewards. 
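the sketch below is a tiny illustration of the reward-misspecification point in the lego example: an agent rewarded for "height of the red block's bottom face" can earn the same reward by flipping the block as by stacking it. the environment, units and numbers are invented for illustration and are not the system described in the summary.

```python
# toy illustration of reward misspecification in the lego-stacking example:
# the designer rewards the height of the red block's bottom face, intending
# that the only way to raise it is to stack the red block on the blue one.

BLUE_BLOCK_TOP = 1.0   # height of the blue block's top surface (illustrative units)
RED_BLOCK_SIZE = 1.0   # edge length of the red block

def proxy_reward(red_bottom_face_height: float) -> float:
    """Misspecified objective: height of the red block's bottom face."""
    return red_bottom_face_height

behaviours = {
    # intended: pick up the red block and place it on the blue block
    "stack on blue":   {"red_bottom_face_height": BLUE_BLOCK_TOP, "stacked": True},
    # gaming: just flip the red block over so its bottom face points up
    "flip red block":  {"red_bottom_face_height": RED_BLOCK_SIZE, "stacked": False},
}

for name, outcome in behaviours.items():
    r = proxy_reward(outcome["red_bottom_face_height"])
    print(f"{name}: proxy reward = {r}, actually stacked = {outcome['stacked']}")

# both behaviours earn the same proxy reward, so a reward-maximizing agent has
# no incentive to perform the harder manoeuvre the designer actually wanted
```

the fix is not a smarter agent but a better-specified objective, which is exactly the discernment the next part of the summary describes.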
when done correctly, we get true ingenuity out of these systems like move from the alphago system that baffled humans and ushered a new way of thinking about the game of go. but, this requires discernment on the part of the developers to be able to judge when you get a case like lego vs. move . as an example in the real-world, reward tampering is an approach where the agent in a traffic optimization system with an interest in achieving a high reward can manipulate the driver into going to alternate destinations instead of what they desired just to achieve a higher reward. specification gaming isn't necessarily bad in the sense that we want the systems to come up with ingenious ways to solve problems that won't occur to humans. sometimes, the inaccuracies can arise in how humans provide feedback to the system while it is training. ''for example, an agent performing a grasping task learned to fool the human evaluator by hovering between the camera and the object." incorrect reward shaping, where an agent is provided rewards along the way to achieving the final reward can also lead to edge-case behaviours when it is not analyzed for potential side-effects. we see such examples happen with humans in the real-world as well: a student asked to get a good grade on the exam can choose to copy and cheat and while that achieves the goal of getting a good grade, it doesn't happen in the way we intended for it to. thus, reasoning through how a system might game some of the specifications is going to be an area of key concern going into the future. the ongoing pandemic has certainly accelerated the adoption of technology in everything from how we socialize to buying groceries and doing work remotely. the healthcare industry has also been rapid in adapting to meet the needs of people and technology has played a role in helping to scale care to more people and accelerate the pace with which the care is provided. but, this comes with the challenge of making decisions under duress and with shortened timelines within which to make decisions on whether to adopt a piece of technology or not. this has certainly led to issues where there are risks of adopting solutions that haven't been fully vetted and using solutions that have been repurposed from prior uses that were approved to now combat covid- . especially with ai-enabled tools, there are increased risks of emergent behavior that might not have been captured by the previous certification or regulatory checks. the problems with ai solutions don't just go away because there is a pandemic and shortcutting the process of proper due diligence can lead to more harm than the benefits that they bring. one must also be wary of the companies that are trying to capitalize on the chaos and pass through solutions that don't really work well. having technical staff during the procurement process that can look over the details of what is being brought into your healthcare system needs to be a priority. ai can certainly help to mitigate some of the harms that covid- is inflicting on patients but we must keep in mind that we're not looking to bypass privacy concerns that come with processing vast quantities of healthcare data. in the age of adversarial machine learning (maiei has a learning community on machine learning security if you'd like to learn more about this area) there are enormous concerns with protecting software infrastructure as ml opens up a new attack surface and new vectors which are seldom explored. 
from the perspective of insurance, there are gaps in terms of what cyber-insurance covers today, most of it being limited to the leakage of private data. there are two kinds of attacks that are possible on ml systems: intentional and unintentional. intentional attacks are those that are executed by malicious agents who attempt to steal the models, infer private data or get the ai system to behave in a way that favors their end goals. for example, when tumblr decided to not host pornographic content, creators bypassed that by using green screens and pictures of owls to fool the automated content moderation system. unintended attacks can happen when the goals of the system are misaligned with what the creators of the system actually intended, for example, the problem of specification gaming, something that abhishek gupta discussed here in this fortune article. in interviewing several officers in different fortune companies, the authors found that there are key problems in this domain at the moment: the defenses provided by the technical community have limited efficacy, existing copyright, product liability, and anti-hacking laws are insufficient to capture ai failure modes. lastly, given that this happens at a software level, cyber-insurance might seem to be the way to go, yet current offerings only cover a patchwork of the problems. business interruptions and privacy leaks are covered today under cyber-insurance but other problems like bodily harm, brand damage, and property damage are for the most part not covered. in the case of model recreation, as was the case with the openai gpt- model, prior to it being released, it was replicated by external researchers -this might be covered under cyber-insurance because of the leak of private information. researchers have also managed to steal information from facial recognition databases using sample images and names which might also be covered under existing policies. but, in the case with uber where there was bodily harm because of the self-driving vehicle that wasn't able to detect the pedestrian accurately or similar harms that might arise if conditions are foggy, snowy, dull lighting, or any other out-of-distribution scenarios, these are not adequately covered under existing insurance terms. brand damage that might arise from poisoning attacks like the case with the tay chatbot or confounding anti-virus systems as was the case with an attack mounted against the cylance system, cyber-insurance falls woefully short in being able to cover these scenarios. in a hypothetical situation as presented in a google paper on rl agents where a cleaning robot sticks a wet mop into an electric socket, material damage that occurs from that might also be considered out of scope in cyber-insurance policies. traditional software attacks are known unknowns but adversarial ml attacks are unknown unknowns and hence harder to guard against. current pricing reflects this uncertainty, but as the ai insurance market matures and there is a deeper understanding for what the risks are and how companies can mitigate the downsides, the pricing should become more reflective of the actual risks. 
the authors also offer some recommendations on how to prepare the organization for these risks -for example by appointing an officer that works closely with the ciso and chief data protection officer, performing table-top exercises to gain an understanding of potential places where the system might fail and evaluating the system for risks and gaps following guidelines as put forth in the eu trustworthy ai guidelines. there are no widely accepted best practices for mitigating security and privacy issues related to machine learning (ml) systems. existing best practices for traditional software systems are insufficient because they're largely based on the prevention and management of access to a system's data and/or software, whereas ml systems have additional vulnerabilities and novel harms that need to be addressed. for example, one harm posed by ml systems is to individuals not included in the model's training data but who may be negatively impacted by its inferences. harms from ml systems can be broadly categorized as informational harms and behavioral harms. informational harms "relate to the unintended or unanticipated leakage of information." the "attacks" that constitute informational harms are: behavioral harms "relate to manipulating the behavior of the model itself, impacting the predictions or outcomes of the model." the attacks that constitute behavioral harms are: • poisoning: inserting malicious data into a model's training data to change its behavior once deployed • evasion: feeding data into a system to intentionally cause misclassification without a set of best practices, ml systems may not be widely and/or successfully adopted. therefore, the authors of this white paper suggest a "layered approach" to mitigate the privacy and security issues facing ml systems. approaches include noise injection, intermediaries, transparent ml mechanisms, access controls, model monitoring, model documentation, white hat or red team hacking, and open-source software privacy and security resources. finally, the authors note, it's important to encourage "cross-functional communication" between data scientists, engineers, legal teams, business managers, etc. in order to identify and remediate privacy and security issues related to ml systems. this communication should be ongoing, transparent, and thorough. beyond near-and long-term: towards a clearer account of this paper dives into how researchers can clearly communicate about their research agendas given ambiguities in the split of the ai ethics community into near and long term research. often a sore and contentious point of discussion, there is an artificial divide between the two groups that seem to take a reductionist approach to the work being done by the other. a major problem emerging from such a divide is a hindrance in being able to spot relevant work being done by the different communities and thus affecting effective collaboration. the paper highlights the differences arising primarily along the lines of timescale, ai capabilities, deeper normative and empirical disagreements. 
the paper provides for a helpful distinction between near-and long-term by describing them as follows: • near term issues are those that are fairly well understood and have concrete examples and relate to rêvent progress in the field of machine learning • long term issues are those that might arise far into the future and due to much more advanced ai systems with broad capabilities, it also includes long term impacts like international security, race relations, and power dynamics what they currently see is that: • issues considered 'near-term' tend to be those arising in the present/near future as a result of current/foreseeable ai systems and capabilities, on varying levels of scale/severity, which mostly have immediate consequences for people and society. • issues considered 'long-term' tend to be those arising far into the future as a result of large advances in ai capabilities (with a particular focus on notions of transformative ai or agi), and those that are likely to pose risks that are severe/large in scale with very long-term consequences. • the binary clusters are not sufficient as a way to split the field and not looking at underlying beliefs leads to unfounded assumptions about each other's work • in addition there might be areas between the near and long term that might be neglected as a result of this artificial fractions unpacking these distinctions can be done along the lines of capabilities, extremity, certainty and impact, definitions for which are provided in the paper. a key contribution aside from identifying these factors is that they lie along a spectrum and define a possibility space using them as dimensions which helps to identify where research is currently concentrated and what areas are being ignored. it also helps to well position the work being done by these authors. something that we really appreciated from this work was the fact that it gives us concrete language and tools to more effectively communicate about each other's work. as part of our efforts in building communities that leverage diverse experiences and backgrounds to tackle an inherently complex and muti-dimensional problem, we deeply appreciate how challenging yet rewarding such an effort can be. some of the most meaningful public consultation work done by maiei leveraged our internalized framework in a similar vein to provide value to the process that led to outcomes like the montreal declaration for responsible ai. the rise of ai systems leads to an unintended conflict between economic pursuits which seek to generate profits and value resources appropriately with the moral imperatives of promoting human flourishing and creating societal benefits from the deployment of these systems. this puts forth a central question on what the impacts of creating ai systems that might surpass humans in a general sense which might leave humans behind. technological progress doesn't happen on its own, it is driven by conscious human choices that are influenced by the surrounding social and economic institutions. we are collectively responsible for how these institutions take shape and thus impact the development of technology -submitting to technological-fatalism isn't a productive way to align our ethical values with this development. we need to ensure that we play an active role in the shaping of the most consequential piece of technology. while the economic system relies on the market prices to gauge what people place value on, by no means is that a comprehensive evaluation. 
for example, it misses out on the impact of externalities which can be factored in by considering ethical values as a complement in guiding our decisions on what to build and how to place value on it. when thinking about losses from ai-enabled automation, an outright argument that economists might make is that if replacing labor lowers the costs of production, then it might be market-efficient to invest in technology that achieves that. from an ethicist's perspective, there are severe negative externalities from job loss and thus it might be unethical to impose labor-saving automation on people. when unpacking the economic perspective more, we find that job loss actually isn't correctly valued by wages as price for labor. there are associated social benefits like the company of workplace colleagues, sense of meaning and other social structural values which can't be separately purchased from the market. thus, using a purely economic perspective in making automation technology adoption decisions is an incomplete approach and it needs to be supplemented by taking into account the ethical perspective. market price signals provide useful information upto a certain point in terms of the goods and services that society places value on. suppose that people start to demand more eggs from chickens that are raised in a humane way, then suppliers will shift their production to respond to that market signal. but, such price signals can only be indicated by consumers for the things that they can observe. a lot of unethical actions are hidden and hence can't be factored into market price signals. additionally, several things like social relations aren't tradable in a market and hence their value can't be solely determined from the market viewpoint. thus, both economists and ethicists would agree that there is value to be gained in steering the development of ai systems keeping in mind both kinds of considerations. pure market-driven innovation will ignore societal benefits in the interest of generating economic value while the labor will have to make unwilling sacrifices in the interest of long-run economic efficiency. economic market forces shape society significantly, whether we like it or not. there are professional biases based on selection and cognition that are present in either side making its arguments as to which gets to dominate based on their perceived importance. the point being that bridging the gap between different disciplines is crucial to arriving at decisions that are grounded in evidence and that benefit society holistically. there are also differences fundamentally between the economic and ethical perspective -namely that economic indicators are usually unidimensional and have clear quantitative values that make them easier to compare. on the other hand, ethical indicators are inherently multi-dimensional and are subjective which not only make comparison hard but also limit our ability to explain how we arrive at them. they are encoded deep within our biological systems and suffer from the same lack of explainability as decisions made by artificial neural networks, the so-called black box problem. why is it then, despite the arguments above, that the economic perspective dominates over the ethical one? this is largely driven by the fact that economic values provide clear, unambiguous signals which our brains, preferring ambiguity aversion, enjoy and ethical values are more subtle, hidden, ambiguous indicators which complicate decision making. 
secondly, humans are prosocial only upto a point, they are able to reason between economic and ethical decisions at a micro-level because the effects are immediate and observable, say for example polluting the neighbor's lawn and seeing the direct impact of that activity. on the other hand, for things like climate change where the effects are delayed and not directly observable (as a direct consequence of one's actions) that leads to behaviour where the individual prioritizes economic values over ethical ones. cynical economists will argue that there is a comparative advantage in being immoral that leads to gains in exchange, but that leads to a race to the bottom in terms of ethics. externalities are an embodiment of the conflict between economic and ethical values. welfare economics deals with externalities via various mechanisms like permits, taxes, etc. to curb the impacts of negative externalities and promote positive externalities through incentives. but, the rich economic theory needs to be supplemented by political, social and ethical values to arrive at something that benefits society at large. from an economic standpoint, technological progress is positioned as expanding the production possibilities frontier which means that it raises output and presumably standards of living. yet, this ignores how those benefits are distributed and only looks at material benefits and ignores everything else. prior to the industrial revolution, people were stuck in a malthusian trap whereby technological advances created material gains but these were quickly consumed by population growth that kept standards of living stubbornly low. this changed post the revolution and as technology improvement outpaced population growth, we got better quality of life. the last decades have had a mixed experience though, whereby automation has eroded lower skilled jobs forcing people to continue looking for jobs despite displacement and the lower demand for unskilled labor coupled with the inelastic supply of labor has led to lower wages rather than unemployment. on the other hand, high skilled workers have been able to leverage technological progress to enhance their output considerably and as a consequence the income and wealth gaps between low and high skilled workers has widened tremendously. typical economic theory points to income and wealth redistribution whenever there is technological innovation where the more significant the innovation, the larger the redistribution. something as significant as ai leads to crowning of new winners who own these new factors of production while also creating losers when they face negative pecuniary externalities. these are externalities because there isn't explicit consent that is requested from the people as they're impacted in terms of capital, labor and other factors of production. the distribution can be analyzed from the perspective of strict utilitarianism (different from that in ethics where for example bentham describes it as the greatest amount of good for the greatest number of people). here it is viewed as tolerating income redistribution such that it is acceptable if all but one person loses income as long as the single person making the gain has one that is higher than the sum of the losses. this view is clearly unrealistic because it would further exacerbate inequities in society. 
the other is looking at the idea of lump sum transfers in which the idealized scenario is redistribution, for example by compensating losers from technology innovation, without causing any other market distortions. but, that is also unrealistic because such a redistribution never occurs without market distortions and hence it is not an effective way to think about economic policy. from an ethics perspective, we must make value judgments on how we perceive dollar losses for a lower socio-economic person compared to the dollar gains made by a higher socio-economic person and if that squares with the culture and value set of that society. we can think about the tradeoff between economic efficiency and equality in society, where the level of tolerance for inequality varies by the existing societal structures in place. one would have to also reason about how redistribution creates more than proportional distortions as it rises and how much economic efficiency we'd be willing to sacrifice to make gains in how equitably income is distributed. thus, steering progress in ai can be done based on whether we want to pursue innovation that we know is going to have negative labor impacts while knowing full well that there aren't going to be any reasonable compensations offered to the people based on economic policy. given the pervasiveness of ai and by virtue of it being a general-purpose technology, the entrepreneurs and others powering innovation need to take into account that their work is going to shape larger societal changes and have impacts on labor. at the moment, the economic incentives are such that they steer progress towards labor-saving automation because labor is one of the most highly-taxed factors of production. instead, shifting the tax-burden to other factors of production including automation capital will help to steer the direction of innovation in other directions. government, as one of the largest employers and an entity with huge spending power, can also help to steer the direction of innovation by setting policies that encourage enhancing productivity without necessarily replacing labor. there are novel ethical implications and externalities that arise from the use of ai systems, an example of that would be (from the industrial revolution) that a factory might lead to economic efficiency in terms of production but the pollution that it generates is so large that the social cost outweighs the economic gain. biases can be deeply entrenched in the ai systems, either from unrepresentative datasets, for example, with hiring decisions that are made based on historical data. but, even if the datasets are well-represented and have minimal bias, and the system is not exposed to protected attributes like race and gender, there are a variety of proxies like zipcode which can lead to unearthing those protected attributes and discriminating against minorities. maladaptive behaviors can be triggered in humans by ai systems that can deeply personalize targeting of ads and other media to nudge us towards different things that might be aligned with making more profits. examples of this include watching videos, shopping on ecommerce platforms, news cycles on social media, etc. conversely, they can also be used to trigger better behaviors, for example, the use of fitness trackers that give us quantitative measurements for how we're taking care of our health. 
an economics equivalent of the paper clip optimizer from bostrom is how human autonomy can be eroded over time as economic inequality rises which limits control of those who are displaced over economic resources and thus, their control over their destinies, at least from an economic standpoint. this is going to only be exacerbated as ai starts to pervade into more and more aspects of our lives. labor markets have features built in them to help tide over unemployment with as little harm as possible via quick hiring in other parts of the economy when the innovation creates parallel demands for labor in adjacent sectors. but, when there is large-scale disruption, it is not possible to accommodate everyone and this leads to large economic losses via fall in aggregate demand which can't be restored with monetary or fiscal policy actions. this leads to wasted economic potential and welfare losses for the workers who are displaced. whenever there is a discrepancy between ethical and economic incentives, we have the opportunity to steer progress in the right direction. we've discussed before how market incentives trigger a race to the bottom in terms of morality. this needs to be preempted via instruments like technological impact assessments, akin to environmental impact assessments, but often the impacts are unknown prior to the deployment of the technology at which point we need to have a multi-stakeholder process that allows us to combat harms in a dynamic manner. political and regulatory entities typically lag technological innovation and can't be relied upon solely to take on this mantle. the author raises a few questions on the role of humans and how we might be treated by machines in case of the rise of superintelligence (which still has widely differing estimates for when it will be realized from the next decade to the second half of this century). what is clear is that the abilities of narrow ai systems are expanding and it behooves us to give some thought to the implications on the rise of superintelligence. the potential for labor-replacement in this superintelligence scenario, from an economic perspective, would have significant existential implications for humans, beyond just inequality, we would be raising questions of human survival if the wages to be paid to labor fall below subsistence levels in a wide manner. it would be akin to how the cost of maintaining oxen to plough fields was outweighed by the benefits that they brought in the face of mechanization of agriculture. this might be an ouroboros where we become caught in the malthusian trap again at the time of the industrial revolution and no longer have the ability to grow beyond basic subsistence, even if that would be possible. authors of research papers aspire to achieving any of the following goals when writing papers: to theoretically characterize what is learnable, to obtain understanding through empirically rigorous experiments, or to build working systems that have high predictive accuracy. to communicate effectively with the readers, the authors must: provide intuitions to aid the readers' understanding, describe empirical investigations that consider and rule out alternative hypotheses, make clear the relationship between theoretical analysis and empirical findings, and use clear language that doesn't conflate concepts or mislead the reader. 
the authors of this paper find that there are areas where there are concerns when it comes to ml scholarship: failure to distinguish between speculation and explanation, failure to identify the source of empirical gains, the use of mathematics that obfuscates or impresses rather than clarifies, and misuse of language such that terms with other connotations are used or by overloading terms with existing technical definitions. flawed techniques and communication methods will lead to harm and wasted resources and efforts hindering the progress in ml and hence this paper provides some very practical guidance on how to do this better. when presenting speculations or opinions of authors that are exploratory and don't yet have scientific grounding, having a separate section that quarantines the discussion and doesn't bleed into the other sections that are grounded in theoretical and empirical research helps to guide the reader appropriately and prevents conflation of speculation and explanation. the authors provide the example of the paper on dropout regularization that made comparisons and links to sexual reproduction but limited that discussion to a "motivation" section. using mathematics in a manner where natural language and mathematical expositions are intermixed without a clear link between the two leads to weakness in the overall contribution. specifically, when natural language is used to overcome weaknesses in the mathematical rigor and conversely, mathematics is used as a scaffolding to prop up weak arguments in the prose and give the impression of technical depth, it leads to poor scholarship and detracts from the scientific seriousness of the work and harms the readers. additionally, invoking theorems with dubious pertinence to the actual content of the paper or in overly broad ways also takes away from the main contribution of a paper. in terms of misuse of language, the authors of this paper provide a convenient ontology breaking it down into suggestive definitions, overloaded terminology, and suitcase words. in the suggestive definitions category, the authors coin a new technical term that has suggestive colloquial meanings and can slip through some implications without formal justification of the ideas in the paper. this can also lead to anthropomorphization that creates unrealistic expectations about the capabilities of the system. this is particularly problematic in the domain of fairness and other related domains where this can lead to conflation and inaccurate interpretation of terms that have well-established meanings in the domains of sociology and law for example. this can confound the initiatives taken up by both researchers and policymakers who might use this as a guide. overloading of technical terminology is another case where things can go wrong when terms that have historical meanings and they are used in a different sense. for example, the authors talk about deconvolutions which formally refers to the process of reversing a convolution but in recent literature has been used to refer to transpose convolutions that are used in auto-encoders and gans. once such usage takes hold, it is hard to undo the mixed usage as people start to cite prior literature in future works. additionally, combined with the suggestive definitions, we run into the problem of concealing a lack of progress, such as the case with using language understanding and reading comprehension to now mean performance on specific datasets rather than the grand challenge in ai that it meant before. 
another case that leads to overestimation of the ability of these systems is in using suitcase words which pack in multiple meanings within them and there isn't a single agreed upon definition. interpretability and generalization are two such terms that have looser definitions and more formally defined ones, yet because papers use them in different ways, it leads to miscommunication and researchers talking across each other. the authors identify that these problems might be occurring because of a few trends that they have seen in the ml research community. specifically, complacency in the face of progress where there is an incentive to excuse weak arguments in the face of strong empirical results and the single-round review process at various conferences where the reviewers might not have much choice but to accept the paper given the strong empirical results. even if the flaws are noticed, there isn't any guarantee that they are fixed in a future review cycle at another conference. as the ml community has experienced rapid growth, the problem of getting high-quality reviews has been exacerbated: in terms of the number of papers to be reviewed by each reviewer and the dwindling number of experienced reviewers in the pool. with the large number of papers, each reviewer has less time to analyze papers in depth and reviewers who are less experienced can fall easily into some of the traps that have been identified so far. thus, there are two levers that are aggravating the problem. additionally, there is the risk of even experienced researchers resorting to a checklist-like approach under duress which might discourage scientific diversity when it comes to papers that might take innovative or creative approaches to expressing their ideas. a misalignment in incentives whereby lucrative deals in funding are offered to ai solutions that utilize anthropomorphic characterizations as a mechanism to overextend their claims and abilities though the authors recognize that the causal direction might be hard to judge. the authors also provide suggestions for other authors on how to evade some of these pitfalls: asking the question of why something happened rather than just relying on how well a system performed will help to achieve the goal of providing insights into why something works rather than just relying on headline numbers from the results of the experiments. they also make a recommendation for insights to follow the lines of doing error analysis, ablation studies, and robustness checks and not just be limited to theory. as a guideline for reviewers and journal editors, making sure to strip out extraneous explanations, exaggerated claims, changing anthropomorphic naming to more sober alternatives, standardizing notation, etc. should help to curb some of the problems. encouraging retrospective analysis of papers is something that is underserved at the moment and there aren't enough strong papers in this genre yet despite some avenues that have been advocating for this work. flawed scholarship as characterized by the points as highlighted here not only negatively impact the research community but also impact the policymaking process that can overshoot or undershoot the mark. an argument can be made that setting the bar too high will impede new ideas being developed and slow down the cycle of reviews and publication while consuming precious resources that could be deployed in creating new work. 
but, asking basic questions to guide us such as why something works, in which situations it does not work, and have the design decisions been justified will lead to a higher quality of scholarship in the field. the article summarizes recent work from several microsoft researchers on the subject of making ai ethics checklists that are effective. one of the most common problems identified relate to the lack of practical applicability of ai ethics principles which sound great and comprehensive in the abstract but do very little to aid engineers and practitioners from applying them in their day to day work. the work was done by interviewing several practitioners and advocating for a co-design process that brings in intelligence on how to make these tools effective from other disciplines like healthcare and aviation. one of the things emerging from the interviews is that often engineers are few and far between in raising concerns and there's a lack of top-down sync in enforcing these across the company. additionally, there might be social costs to bringing up issues which discourages engineers from implementing such measures. creating checklists that reduce friction and fit well into existing workflows will be key in their uptake. for a lot of people who are new to the field of artificial intelligence and especially ai ethics, they see existential risk as something that is immediate. others dismiss it as something to not be concerned about at all. there is a middle path here and this article sheds a very practical light on that. using the idea of canaries in a coal mine, the author goes on to highlight some potential candidates for a canary that might help us judge better when we ought to start paying attention to these kinds of risks posed by artificial general intelligence systems. the first one is the automatic formulation of learning problems, akin to how humans have high-level goals that they align with their actions and adjust them based on signals that they receive on the success or failure of those actions. ai systems trained in narrow domains don't have this ability just yet. the second one mentioned in the article is achieving fully autonomous driving, which is a good one because we have lots of effort being directed to make that happen and it requires a complex set of problems to be addressed including the ability to make real-time, life-critical decisions. ai doctors are pointed out as a third canary, especially because true replacement of doctors would require a complex set of skills spanning the ability to make decisions about a patient's healthcare plan by analyzing all their symptoms, coordinating with other doctors and medical staff among other human-centered actions which are currently not feasible for ai systems. lastly, the author points to the creation of conversation systems that are able to answer complex queries and respond to things like exploratory searches. we found the article to put forth a meaningful approach to reasoning about existential risk when it comes to ai systems. a lot of articles pitch development, investment and policymaking in ai as an arms race with the us and china as front-runners. while there are tremendous economic gains to be had in deploying and utilizing ai for various purposes, there remain concerns of how this can be used to benefit society more than just economically. 
a lot of ai strategies from different countries are thus focused on issues of inclusion, ethics and more that can drive better societal outcomes yet they differ widely in how they seek to achieve those goals. for example, ai has put forth a national ai strategy that is focused on economic growth and social inclusion dubbed #aiforall while the strategy from china has been more focused on becoming a global dominant force in ai which is backed by state investments. some countries have instead chosen to focus on creating strong legal foundations for the ethical deployment of ai while others are more focused on data protection rights. canada and france have entered into agreements to work together on ai policy which places talent, r&d and ethics at the center. the author of the article makes a case for how global coordination of ai strategies might lead to even higher gains but also recognizes that governments will be motivated to tailor their policies to best meet the requirements of their countries first and then align with others that might have similar goals. reproducibility is of paramount importance to doing rigorous research and a plethora of fields have suffered from a crisis where scientific work hasn't met muster in terms of reproducibility leading to wasted time and effort on the part of other researchers looking to build upon each other's work. the article provides insights from the work of a researcher who attempted a meta-science approach to trying to figure out what constitutes good, reproducible research in the field of machine learning. there is a distinction made early on in terms of replicability which hinges on taking someone else's code and running that on the shared data to see if you get the same results but as pointed out in the article, that suffers from issues of source and code bias which might be leveraging certain peculiarities in terms of configurations and more. the key tenets to reproducibility are being able to simply read a scientific paper and set up the same experiment, follow the steps prescribed and arrive at the same results. arriving at the final step is dubbed as independent reproducibility. the distinction between replicability and reproducibility also speaks to the quality of the scientific paper in being able to effectively capture the essence of the contribution such that anyone else is able to do the same. some of the findings from this work include that having hyperparameters well specified in the paper and its ease of readability contributed to the reproducibility. more specification in terms of math might allude to more reproducibility but it was found to not necessarily be the case. empirical papers were inclined to be more reproducible but could also create perverse incentives and side effects. sharing code is not a panacea and requires other accompanying factors to make the work really reproducible. cogent writing was found to be helpful along with code snippets that were either actual or pseudo code though step code that referred to other sections hampered reproducibility because of challenges in readability. simplified examples while appealing didn't really aid in the process and spoke to the meta-science process calling for data-driven approaches to ascertaining what works and what doesn't rather than relying on hunches. also, posting revisions to papers and being reachable over email to answer questions helped the author in reproducing the research work. 
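one low-effort practice implied by these findings (well-specified hyperparameters, readable papers, authors who can answer questions) is to dump every hyperparameter, seed, and environment detail into a single artifact alongside the reported results. the sketch below is illustrative only; the parameter names, values, and file name are invented, and the training run is a stand-in.

```python
import json
import platform
import random
import sys
import time

def run_experiment(config: dict) -> dict:
    """Stand-in for a training run; returns whatever metrics you would report."""
    random.seed(config["seed"])                      # seed everything you rely on
    return {"accuracy": round(random.random(), 4)}   # placeholder metric

config = {
    "seed": 1234,              # illustrative values, not from the article
    "learning_rate": 3e-4,
    "batch_size": 64,
    "epochs": 10,
}

record = {
    "config": config,
    "metrics": run_experiment(config),
    "environment": {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    },
}

# a single self-describing artifact that a reader of the paper could diff
# against their own attempt at independent reproduction
with open("experiment_record.json", "w") as f:
    json.dump(record, f, indent=2)
```

none of this replaces clear writing, but it removes one class of ambiguity that the meta-science work found to be a common blocker.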
finally, the author also pointed out that given this was a single initiative and was potentially biased in terms of their own experience, background and capabilities, they encourage others to tap into the data being made available but these guidelines provide good starting points for people to attempt to make their scientific work more rigorous and reproducible. the push has been to apply ai to any new problem that we face, hoping that the solution will magically emerge from the application of the technique as if it is a dark art. yet, the more seasoned scientists have seen these waves come and go and in the past, a blind trust in this technology led to ai winters. taking a look at some of the canaries in the coal mine, the author cautions that there might be a way to judge whether ai will be helpful with the pandemic situation. specifically, looking at whether domain experts, like leading epidemiologists endorse its use and are involved in the process of developing and utilizing these tools will give an indication as to whether they will be successful or not. data about the pandemic depends on context and without domain expertise, one has to make a lot of assumptions which might be unfounded. all models have to make assumptions to simplify reality, but if those assumptions are rooted in domain expertise from the field then the model can mimic reality much better. without context, ai models assume that the truth can be gleaned solely from the data, which though it can lead to surprising and hidden insights, at times requires humans to evaluate the interpretations to make meaning from them and apply them to solve real-world problems. this was demonstrated with the case where it was claimed that ai had helped to predict the start of the outbreak, yet the anomaly required the analysis from a human before arriving at that conclusion. claims of extremely high accuracy rates will give hardened data scientists reason for caution, especially when moving from lab to real-world settings as there is a lot more messiness with real-world data and often you encounter out-of-distribution data which hinders the ability of the model to make accurate predictions. for ct scans, even if they are sped up tremendously by the use of ai, doctors point out that there are other associated procedures such as the cleaning and filtration and recycling of air in the room before the next patient can be passed through the machine which can dwindle the gains from the use of an unexplainable ai system analyzing the scans. concerns with the use of automated temperature scanning using thermal cameras also suffers from similar concerns where there are other confounding factors like the ambient temperature, humidity, etc. which can limit the accuracy of such a system. ultimately, while ai can provide tremendous benefits, we mustn't blindly be driven by its allure to magically solve the toughest challenges that we face. offering an interesting take on how to shape the development and deployment of ai technologies, mhlambi utilizes the philosophy of ubuntu as a guiding light in how to build ai systems that better empower people and communities. the current western view that dominates how ai systems are constructed today and how they optimize for efficiency is something that lends itself quite naturally to inequitable outcomes and reinforcing power asymmetries and other imbalances in society. embracing the ubuntu mindset which puts people and communities first stands in contrast to this way of thinking. 
it gives us an alternative conception of personhood and has the potential to surface some different results. while being thousands of years old, the concept has been seen in practice over and over again, for example, in south africa, after the end of the apartheid, the truth and reconciliation program forgave and integrated offenders back into society rather than embark on a kantian or retributive path to justice. this restorative mindset to justice helped the country heal more quickly because the philosophy of ubuntu advocates that all people are interconnected and healing only happens when everyone is able to move together in a harmonious manner. this was also seen in the aftermath of the rwanda genocide, where oppressors were reintegrated back into society often living next to the people that they had hurt; ubuntu believes that no one is beyond redemption and everyone deserves the right to have their dignity restored. bringing people together through community is important, restorative justice is a mechanism that makes the community stronger in the long run. current ai formulation seeks to find some ground truth but thinking of this in the way of ubuntu means that we try to find meaning and purpose for these systems through the values and beliefs that are held by the community. ubuntu has a core focus on equity and empowerment for all and thus the process of development is slow but valuing people above material efficiency is more preferable than speeding through without thinking of the consequences that it might have on people. living up to ubuntu means offering people the choice for what they want and need, rooting out power imbalances and envisioning the companies as a part of the communities for which they are building products and services which makes them accountable and committed to the community in empowering them. ethics in the context of technology carries a lot of weight, especially because the people who are defining what it means will influence the kinds of interventions that will be implemented and the consequences that follow. given that technology like ai is used in high-stakes situations, this becomes even more important and we need to ask questions about the people who take this role within technology organizations, how they take corporate and public values and turn them into tangible outcomes through rigorous processes, and what regulatory measures are required beyond these corporate and public values to ensure that ethics are followed in the design, development and deployment of these technologies. ethics owners, the broad term for people who are responsible for this within organizations have a vast panel of responsibilities including communication between the ethics review committees and product design teams, aligning the recommendations with the corporate and public values, making sure that legal compliance is met and communicating externally about the processes that are being adopted and their efficacy. ethical is a polysemous word in that it can refer to process, outcomes, and values. the process refers to the internal procedures that are adopted by the firm to guide decision making on product/service design and development choices. the values aspect refers to the value set that is both adopted by the organization and those of the public within which the product/service might be deployed. this can include values such as transparency, equity, fairness, privacy, among others. 
the outcomes refer to desirable properties in the outputs from the system such as equalized odds across demographics and other fairness metrics. in the best case, inside a technology company, there are robust and well-managed processes that are aligned with collaboratively-determined ethical outcomes that achieve the community's and organization's ethical values. from the outside, this takes on the meaning of finding mechanisms to hold firms accountable for the decisions that they take. further expanding on the polysemous meanings of ethics, it can be put into four categories for the discussion here: moral justice, corporate values, legal risk, and compliance. corporate values set the context for the rest of the meanings and provide guidance when tradeoffs need to be made in product/service design. they also help to shape the internal culture, which can have an impact on the degree of adherence to the values. legal risk's overlap with ethics is fairly new, whereas compliance is mainly concerned with minimizing exposure to lawsuits and harm to public reputation. using some of the framing here, the accolades, critiques, and calls to action can be structured more effectively to evoke substantive responses rather than diffusing the energies dedicated to these efforts. framing the metaphor of "data is the new oil" in a different light, this article gives some practical tips on how organizations can reframe their thinking and relationship with customer data so that they take on the role of a data custodian rather than an owner of their customers' personal data. this is put forth with the acknowledgement that customers' personal data is something very valuable that brings business upsides, but it needs to be handled with care in the sense that the organization should act as a custodian taking care of the data rather than exploiting it for value without consent or the best interests of the customer at heart. privacy breaches that compromise this data not only lead to fines under legislation like the gdpr, but also remind us that this is not just data but details of a real human being. as a first step, creating a data accountability report that documents how many times personal data was accessed by various employees and departments will highlight current practice and provide incentives to change behaviour when people see that others might be achieving their job functions without the need to access as much information. secondly, celebrating those who can make do with minimal access will also encourage this behaviour change, all done without judgement or blame but rather as an encouragement tool. pairing employees who need to access personal data for various reasons will help to build accountability and discourage intentional misuse of data and accidents that can lead to leaks of personal data. lastly, an internal privacy committee composed of people across job functions and diverse life experiences, which monitors organization-wide use of private data and provides guidance on improving data use through practical recommendations, is another step that will move the organization's conversation from data entitlement to data custodianship. ultimately, this will be a market advantage that creates more trust with customers and improves the business's bottom line going into the future. towards the systematic reporting of the energy and carbon footprints of machine learning: climate change and environmental destruction are well-documented.
most people are aware that mitigating the risks caused by these is crucial and will be nothing less than a herculean undertaking. on the bright side, ai can be of great use in this endeavour. for example, it can help us optimize resource use, or help us visualize the devastating effects of floods caused by climate change. however, ai models can have excessively large carbon footprints. henderson et al.'s paper details how the metrics needed to calculate environmental impact are severely underreported. to highlight this, the authors randomly sampled one-hundred neurips papers. they found that none reported carbon impacts, only one reported some energy use metrics, and seventeen reported at least some metrics related to compute-use. close to half of the papers reported experiment run time and the type of hardware used. the authors suggest that the environmental impact of ai and relevant metrics are hardly reported by researchers because the necessary metrics can be difficult to collect, while subsequent calculations can be time-consuming. taking this challenge head-on, the authors make a significant contribution by performing a meta-analysis of the very few frameworks proposed to evaluate the carbon footprint of ai systems through compute-and energy-intensity. in light of this meta-analysis, the paper outlines a standardized framework called experiment-impact-tracker to measure carbon emissions. the authors use metrics to quantify compute and energy use. these include when an experiment starts and ends, cpu and gpu power draw, and information on a specific energy grid's efficiency. the authors describe their motivations as threefold. first, experiment-impact-tracker is meant to spread awareness among ai researchers about how environmentally-harmful ai can be. they highlight that "[w]ithout consistent and accurate accounting, many researchers will simply be unaware of the impacts their models might have and will not pursue mitigating strategies". second, the framework could help align incentives. while it is clear that lowering one's environmental impact is generally valued in society, this is not currently the case in the field of ai. experiment-impact tracker, the authors believe, could help bridge this gap, and make energy efficiency and carbon-impact curtailment valuable objectives for researchers, along with model accuracy and complexity. third, experiment-impact-tracker can help perform cost-benefit analyses on one's ai model by comparing electricity cost and expected revenue, or the carbon emissions saved as opposed to those produced. this can partially inform decisions on whether training a model or improving its accuracy is worth the associated costs. to help experiment-impact-tracker become widely used among researchers, the framework emphasizes usability. it aims to make it easy and quick to calculate the carbon impact of an ai model. through a short modification of one's code, experiment-impact-tracker collects information that allows it to determine the energy and compute required as well as, ultimately, the carbon impact of the ai model. experiment-impact-tracker also addresses the interpretability of the results by including a dollar amount that represents the harm caused by the project. this may be more tangible for some than emissions expressed in the weight of greenhouse gases released or even in co equivalent emissions (co eq). 
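as a very rough, illustrative sketch of the kind of accounting that experiment-impact-tracker automates, the snippet below estimates the energy use, co2-equivalent emissions and a dollar harm figure for a single training run from its duration, average power draw and the carbon intensity of the energy grid; every numeric value here is a placeholder assumption rather than a figure from the paper, and the real framework gathers power draw and grid data automatically instead of taking them as constants.

# back-of-the-envelope carbon estimate for one training run (illustrative placeholder values only)
GPU_POWER_W = 250.0          # assumed average gpu power draw in watts
CPU_POWER_W = 100.0          # assumed average cpu power draw in watts
RUN_HOURS = 48.0             # assumed experiment duration in hours
PUE = 1.5                    # assumed data-centre power usage effectiveness
GRID_KG_CO2_PER_KWH = 0.4    # assumed carbon intensity of the local energy grid
DOLLARS_PER_KG_CO2 = 0.05    # assumed social cost of carbon, used only for interpretability

energy_kwh = (GPU_POWER_W + CPU_POWER_W) * RUN_HOURS / 1000.0 * PUE
co2eq_kg = energy_kwh * GRID_KG_CO2_PER_KWH
harm_dollars = co2eq_kg * DOLLARS_PER_KG_CO2
print(f"energy: {energy_kwh:.1f} kWh, emissions: {co2eq_kg:.1f} kg co2eq, about ${harm_dollars:.2f} of climate harm")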
in addition, the authors strive to: allow other ml researchers to add to experiment-impact-tracker to suit their needs, increase reproducibility in the field by making metrics collection more thorough, and make the framework robust enough to withstand internal mistakes and subsequent corrections without compromising comparability. moreover, the paper includes further initiatives and recommendations to push ai researchers to curtail their energy use and environmental impact. for one, the authors take advantage of the already widespread use of leaderboards in the ai community. while existing leaderboards are largely targeted towards model accuracy, henderson et al. instead put in place an energy efficiency leaderboard for deep reinforcement learning models. they assert that a leaderboard of this kind, that tracks performance in areas indicative of potential environmental impact, "can also help spread information about the most energy and climate-friendly combinations of hardware, software, and algorithms such that new work can be built on top of these systems instead of more energy-hungry configurations". the authors also suggest ai practitioners can take an immediate and significant step in lowering the carbon emissions of their work: run experiments on energy grids located in carbon-efficient cloud regions like quebec, the least carbon-intensive cloud region. especially when compared to very carbon-intensive cloud regions like estonia, the difference in co eq emitted can be considerable: running an experiment in estonia produces up to thirty times as much emissions as running the same experiment in quebec. the important reduction in carbon emissions that follows from switching to energy-efficient cloud regions, according to henderson et al., means there is no need to fully forego building compute-intensive ai as some believe. in terms of systemic changes that accompany experiment-impact-tracker, the paper lists seven. the authors suggest the implementation of a "green default" for both software and hardware. this would make the default setting for researchers' tools the most environmentally-friendly one. the authors also insist on weighing costs and benefits to using compute-and energy-hungry ai models. small increases in accuracy, for instance, can come at a high environmental cost. they hope to see the ai community use efficient testing environments for their models, as well as standardized reporting of a model's carbon impact with the help of experiment-impact-tracker. additionally, the authors ask those developing ai models to be conscious of the environmental costs of reproducing their work, and act as to minimize unnecessary reproduction. while being able to reproduce other researchers' work is crucial in maintaining sound scientific discourse, it is merely wasteful for two departments in the same business to build the same model from scratch. the paper also presents the possibility of developing a badge identifying ai research papers that show considerable effort in mitigating carbon impact when these papers are presented at conferences. lastly, the authors highlight important lacunas in relation to driver support and implementation. systems that would allow data on energy use to be collected are unavailable for certain hardware, or the data is difficult for users to obtain. addressing these barriers would allow for more widespread collection of energy use data, and contribute to making carbon impact measurement more mainstream in the ai community. 
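the cloud-region comparison discussed above lends itself to the same kind of arithmetic; in the sketch below the two grid intensities are invented placeholders chosen only to mirror the roughly thirty-fold gap the authors describe between the most and least carbon-intensive regions, not measured values for any real region.

# emissions for the same hypothetical experiment run on two grids (placeholder values)
energy_kwh = 100.0                            # assumed energy use of one experiment
grid_intensity = {"low-carbon region": 0.02,  # kg co2eq per kwh, illustrative only
                  "high-carbon region": 0.60}

for region, intensity in grid_intensity.items():
    print(f"{region}: {energy_kwh * intensity:.1f} kg co2eq")

ratio = grid_intensity["high-carbon region"] / grid_intensity["low-carbon region"]
print(f"ratio between regions: {ratio:.0f}x")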
the paper highlights four challenges in designing more "intelligent" voice assistant systems that are able to respond to exploratory searches that don't have clear, short answers and require nuance and detail. this is in response to the rising expectations that users have from voice assistants as they become more familiar with them through increased interactions. voice assistants are primarily used for productivity tasks like setting alarms, calling contacts, etc. and they can include gestural and voice-activated commands as a method of interaction. exploratory search is currently not well supported through voice assistants because of them utilizing a fact-based approach that aims to deliver a single, best response whereas a more natural approach would be to ask follow up questions to refine the query of the user to the point of being able to provide them with a set of meaningful options. the challenges as highlighted in this paper if addressed will lead to the community building more capable voice assistants. one of the first challenges is situationally induced impairments as presented by the authors highlights the importance of voice activated commands because they are used when there are no alternatives available to interact with the system, for example when driving or walking down a busy street. there is an important aspect of balancing the tradeoff between smooth user experience that is quick compared to the degree of granularity in asking questions and presenting results. we need to be able to quantify this compared to using a traditional touch based interaction to achieve the same result. lastly, there is the issue of privacy, such interfaces are often used in a public space and individuals would not be comfortable sharing details to refine the search such as clothing sizes which they can discreetly type into the screen. such considerations need to be thought of when designing the interface and system. mixed-modal interactions include combinations of text, visual inputs and outputs and voice inputs and output. this can be an effective paradigm to counter some of the problems highlighted above and at the same time improve the efficacy of the interactions between the user and the system. further analysis is needed as to how users utilize text compared to voice searches and whether one is more informational or exploratory than the other. designing for diverse populations is crucial as such systems are going to be widely deployed. for example, existing research already highlights how different demographics even within the same socio-economic subgroup utilize voice and text search differently. the system also needs to be sensitive to different dialects and accents to function properly and be responsive to cultural and contextual cues that might not be pre-built into the system. differing levels of digital and technical literacy also play a role in how the system can effectively meet the needs of the user. as the expectations from the system increase over time, ascribed to their ubiquity and anthropomorphization, we start to see a gulf in expectations and execution. users are less forgiving of mistakes made by the system and this needs to be accounted for when designing the system so that alternate mechanisms are available for the user to be able to meet their needs. 
in conclusion, it is essential when designing voice-activated systems to be sensitive to user expectations, more so than other traditional forms of interaction where expectations are set over the course of several uses of the system whereas with voice systems, the user comes in with a set of expectations that closely mimic how they interact with each other using natural language. addressing the challenges highlighted in this paper will lead to systems that are better able to delight their users and hence gain higher adoption. the paper highlights how having more translation capabilities available for languages in the african continent will enable people to access larger swathes of the internet and contribute to scientific knowledge which are predominantly english based. there are many languages in africa, south africa alone has official languages and only a small subset are made available on public tools like google translate. in addition, due to the scant nature of research on machine translation for african languages, there remain gaps in understanding the extent of the problem and how they might be addressed most effectively. the problems facing the community are many: low resource availability, low discoverability where language resources are often constrained by institution and country, low reproducibility because of limited sharing of code and data, lack of focus from african society in seeing local languages as primary modes of communication and a lack of public benchmarks which can help compare results of machine translation efforts happening in various places. the research work presented here aims to address a lot of these challenges. they also give a brief background on the linguistic characteristics of each of the languages that they have covered which gives hints as to why some have been better covered by commercial tools than others.in related work, it is evident that there aren't a lot of studies that have made their code and datasets public which makes comparison difficult with the results as presented in this paper. most studies focused on the autshumato datasets and some relied on government documents as well, others used monolingual datasets as a supplement. the key analysis of all of those studies is that there is a paucity in the focus on southern african languages and because apart from one study, others didn't make their datasets and code public, the bleu scores listed were incomparable which further hinders future research efforts. the autshumato datasets are parallel, aligned corpora that have governmental text as its source. they are available for english to afrikaans, isizulu, n. sotho, setswana, and xitsonga translations and were created to build and facilitate open source translation systems. they have sentence level parallels that have been created both using manual and automatic methods. but, it contains a lot of duplicates which were eliminated in the study done in this paper to avoid leakage between training and testing phases. despite these eliminations, there remain some issues of low quality, especially for isizulu where the translations don't line up between source and target sentences. from a methodological perspective, the authors used convs s and transformer models without much hyperparameter tuning since the goal of the authors was to provide a benchmark and the tuning is left as future work. additional details on the libraries, hyperparameter values and dataset processing are provided in the paper along with a github link to the code. 
in general the transformer model outperformed convs2s for all the languages, sometimes even by points on the bleu scores. performance on the target language depended on both the number of sentences and the morphological typology of the language. poor data quality along with small dataset size plays an important role, as evidenced in the poor performance on the isizulu translations where a lowly . bleu score was achieved. the morphological complexity of the language also played a role in its performance compared to the other target languages. for each of the target languages studied, the paper includes some randomly selected sentences to show qualitative results and how the differing structures and rules of each language affect the degree of accuracy and meaning in the translations. there are also some attention visualizations provided in the paper for the different architectures, demonstrating both correct and incorrect translations and thus shedding light on potential areas for dataset and model improvements. the paper also shows results from ablation studies that the authors performed on the byte pair encodings (bpe) to analyze the impact on the bleu scores; they found that for datasets with a smaller number of samples, like isizulu, having a smaller number of bpe tokens increased the bleu scores. as potential future directions for the work, the authors point out the need for more data collection and for incorporating unsupervised learning, meta learning and zero-shot techniques as potential options to provide translations for all official languages in south africa. this work provides a great starting point for others who want to help preserve languages and improve machine translation for low-resource languages. such efforts are crucial to empower everyone to access and contribute to the scientific knowledge of the world. providing code and data in an open source manner will enable future research to build upon it, and we need more such efforts that capture the diversity of human expression through various languages. for the first time, there's a call for the technical community to include a social impact statement with their work, which has sparked a debate between camps that argue for leaving such a declaration to experts who study ethics in machine learning and those that see this as a positive step in bridging the gap between the social sciences and the technical domains. specifically, we see this as a great first step in bringing accountability closer to the origin of the work. additionally, it would be a great way to build a shared vernacular across the human and technical sciences, easing future collaboration. we are impressed with all the work that the research and practice community has been doing in the domain of ai ethics.
there are many unsolved and open problems that are yet to be addressed but our overwhelming optimism in the power of what diversity and interdisciplinarity can help to achieve makes us believe that there is indeed room for finding novel solutions to the problems that face the development and deployment of ai systems. we see ourselves as a public square, gathering people from different walks of life who can have meaningful exchanges with each other to create the solutions we need for a better future. let's work together and share the mic with those who have lived experiences, lifting up voices that will help us better understand the contexts within which technology resides so that we can truly build something that is ethical, safe, and inclusive for all. see you here next quarter, the state of ai ethics, june suckers list: how allstate's secret auto insurance algorithm squeezes big spenders algorithmic injustices towards a relational ethics social biases in nlp models as barriers for persons with disabilities the second wave of algorithmic accountability the unnatural ethics of ai could be its undoing this dating app exposes the monstrous bias of algorithms ( arielle pardes data is never a raw, truthful input -and it is never neutral racial disparities in automated speech recognition working to address algorithmic bias? don't overlook the role of demographic data ai advances to better detect hate speech algorithms associating appearance and criminality have a dark past beware of these futuristic background checks go deep: research summaries the toxic potential of youtube's feedback loop study: facebook's fake news labels have a fatal flaw research summaries capabilities, and ai assistive technologies go wide: article summaries ancient animistic beliefs live on in our intimacy with tech humans communicate better after robots show their vulnerable side at the limits of thought engineers should spend time training not just algorithms, but also the humans who use them using multimodal sensing to improve awareness in human-ai interaction different intelligibility for different folks aligning ai to human values means picking the right metrics why lifelong learning is the international passport to success (pierre vandergheynst and isabelle vonèche cardia you can't fix unethical design by yourself research summaries the wrong kind of ai? artificial intelligence and the future of labor demand ai is coming for your most mind-numbing office tasks tech's shadow workforce sidelined, leaving social media to the machines here's what happens when an algorithm determines your work schedule automation will take jobs but ai will create them research summaries what's next for ai ethics, policy, and governance? a global overview (daniel schiff a holistic approach to implement ethics in ai beyond a human rights based approach to ai governance: promise, pitfalls and plea go wide: article summaries this is the year of ai regulations apps gone rogue: maintaining personal privacy in an epidemic maximizing privacy and effectiveness in covid- apps article summaries who's allowed to track my kids online? chinese citizens are racing against censors to preserve coronavirus memories on github can i opt out of facial scans at the airport? 
with painted faces, artists fight facial recognition tech research summaries adversarial machine learning -industry perspectives article summaries the deadly consequences of unpredictable code adversarial policies: attacking deep reinforcement learning specification gaming: the flip side of ai ingenuity doctors are using ai to triage covid- patients. the tools may be here to stay the future of privacy and security in the age of machine learning research summaries towards a clearer account of research priorities in ai ethics and society integrating ethical values and economic value to steer progress in ai machine learning scholarship go wide: article summaries microsoft researchers create ai ethics checklist with ml practitioners from a dozen tech companies why countries need to work together on ai quantifying independently reproducible machine learning be a data custodian, not a data owner research summaries towards the systematic reporting of the energy and carbon footprints of machine learning challenges in supporting exploratory search through voice assistants a focus on neural machine translation for african languages radioactive data: tracing through training using deep learning at scale in twitter's timeline (nicolas koumchatzky neurips requires ai researchers to account for societal impact and financial conflicts of interest in modern ai systems, we run into complex data and processing pipelines that have several stages and it becomes challenging to trace the provenance and transformations that have been applied to a particular data point. this research from the facebook ai research team proposes a new technique called radioactive data that borrows from medical science where compounds like baso are injected to get better results in ct scans. this technique applies minor, imperceptible perturbations to images in a dataset by causing shifts within the feature space making them "carriers".different from other techniques that rely on poisoning the dataset that harms classifier accuracy, this technique instead is able to detect such changes even when the marking and classification architectures are different. it not only has potential to trace how data points are used in the ai pipeline but also has implications when trying to detect if someone claims not to be using certain images in their dataset but they actually are. the other benefit is that such marking of the images is difficult to undo thus adding resilience to manipulation and providing persistence. prior to relevance based timeline, the twitter newsfeed was ordered in reverse chronological order but now it uses a deep learning model underneath to display the most relevant tweets that are personalized according to the user's interactions on the platform. with the increasing use of twitter as a source of news for many people, it's a good idea for researchers to gain an understanding of the methodology that is used to determine the relevance of tweets, especially as one looks to curb the spread of disinformation online. the article provides some technical details in terms of the deep learning infrastructure and the choices made by the teams in deploying computationally heavy models which need to be balanced with the expediency of the refresh times for a good experience on the platform. 
but, what's interesting from an ai ethics perspective are the components that are used to arrive at the ranking, which constantly evolves based on the user's interaction with different kinds of content. the ranked timeline consists of a handful of the tweets that are most relevant to the user, followed by others in reverse chronological order. additionally, based on the time since one's last visit to the platform, there might be an icymi ("in case you missed it") section as well. the key factors in ranking the tweets are their recency, the presence of media cards, total interactions, the history of engagement with the creator of the tweet, the user's strength of connection with the creator, and the user's usage pattern of twitter. from these factors, one can deduce why filter bubbles and echo chambers form on the platform and where designers and technologists can intervene to make the platform a more holistic experience for users that doesn't create polarizing factions which can promote the spread of disinformation. key: cord- - ah w o authors: sakurai, mihoko; adu-gyamfi, bismark title: disaster-resilient communication ecosystem in an inclusive society – a case of foreigners in japan date: - - journal: int j disaster risk reduct doi: . /j.ijdrr. . sha: doc_id: cord_uid: ah w o the number of foreign residents and tourists in japan has been dramatically increasing in recent years. despite the fact that japan is prone to natural disasters, with each climate-related event turning into an emergency such as with record rainfalls, floods and mudslides almost every year, non-japanese communication infrastructure and everyday disaster drills for foreigners have received little attention. this study aims to understand how a resilient communication ecosystem forms in various disaster contexts involving foreigners. within a framework of information ecology we try to get an overview of the communication ecosystem in literature and outline its structure and trends in social media use. our empirical case study uses the twitter api and r programming software to extract and analyze tweets in english during typhoon (hagibis) in october . it reveals that many information sources transmit warnings and evacuation orders through social media but do not convey a sense of locality and precise instructions on how to act. for future disaster preparedness, we argue that the municipal government, as a responsible agent, should ( ) make available instructional information in foreign languages on social media, ( ) transfer such information through collaboration with transmitters, and ( ) examine the use of local hashtags in social media to strengthen non-japanese speakers' capacity to adapt. the geographic characteristics of japan make the country highly vulnerable to disasters such as earthquakes, typhoons, volcanic eruptions, flash floods and landslides [ ] . therefore, disaster risk reduction and resilience measures are enshrined in the everyday activities of the japanese people, including school curricula, building regulations and design, as well as corporate organization setup [ ] . disaster risk reduction drills often take place in schools, workplaces and homes, and include the publication of detailed evacuation plans and procedures in all local communities [ ] . however, these efforts at building the capacity of residents facing emergencies could be falling short in terms of coverage and results due to the influx of foreigners who may neither be enrolled in schools nor engaged with japanese establishments, or who are staying only for a short period of time [ ] . the population of foreign residents in japan has been increasing exponentially in recent years according to a report by mizuho research institute [ ] , with their number reaching a record high of . million people by january , . the report reveals an increase of , residents from january to december in . additionally, the japan national tourism organization reports that the number of foreign tourists visiting japan is rapidly increasing.
it recorded million visitors in (https://www.jnto.go.jp/jpn/statistics/visitor_trends/, last accessed on march , ), which was four times as many as visiting in , and six times the number visiting in . this trend of increased foreign tourists may partly stem from the influence of numerous sporting and other events across the country, which include the formula grand prix, the fivb volleyball women's championship and the japan tennis open [ ] . the tokyo olympic and paralympic games, which have been postponed to , as well as the push by government stakeholders to amend bills to relax immigration laws, can also be expected to boost the number of foreign residents and tourists in the country [ , ] . even though some foreigners become permanent residents, the qualification for permanent residence status requires a ten-year continuous stay in japan including five years of work, but does not necessarily require the acquisition of japanese language skills [ ] , although it may matter in some respects. living in japan briefly or as a permanent resident does not necessarily mean the person speaks japanese or is accustomed to disaster risk reduction culture and procedures in japan. although the composition and dynamics of the japanese population are gradually changing in terms of the non-japanese population, existing infrastructure and systems that support non-japanese speakers and foreigners as a whole during disasters seem to be inadequate or lacking. the current japanese disaster management system is composed of both vertical and horizontal coordination mechanisms and, depending on the scale of the disaster, activities flow vertically from the national level, through the prefectural to the municipal level, before getting down to the communities themselves [ ] . the municipal disaster management council, together with other parties, is close to disaster victims and responsible for municipal disaster management plans and information [ ] . when necessary, the municipality is also responsible for issuing evacuation orders to residents during disasters [ ] . again, when it comes to emergency alerts or risk information, there is an existing mechanism where warning messages proceed from the national government (the cabinet, japan meteorological agency (jma)) to local administrations (prefecture, municipalities) and from there to the people. messages may also be transmitted directly top-down through base stations and received on cell phones. however, reports by news outlets suggest that language barriers, coupled with the inexperience of many foreigners with regard to japanese disaster procedure protocols, create a huge sense of panic and confusion during disasters [ , ] . a lack of appropriate risk information and of procedural evacuation or alert guidance creates this confusion amongst foreigners. therefore, most foreigners access news outlets in their respective countries or rely on social media for disaster risk particulars, alert instructions, or evacuation information [ ] . the use of social media is seen as a significant trend in accessing swift, precise and easy feedback information in critical situations [ ] [ ] [ ] . social media has become the most sourced avenue for information during disasters [ ] . this applies to both japanese and non-japanese users. irrespective of the chaotic nature or confusion that erupts due to limited risk information accessibility and delivery, a certain level of resilience is achieved through instinctive user responses.
thus, in extreme events, a spontaneous system of resilience is often formed within the context of all the actors involved in the situation [ ] . this process usually develops because the participating actors share a common interest or find themselves in a situation that requires urgent solutions. the characteristics of this phenomenon create a state of ecosystem which becomes unique to the area affected and to the nature of the event; it generates interactions and communication procedures or methods to reduce vulnerabilities to the disaster [ , ] . the interactions within this ecosystem could be formal or informal, depending on the situation and different communication structures, modes and tools employed. the high frequency of disaster occurrences in japan in conjunction with a booming foreign population up to covid- , and an anticipated, fast rising population again thereafter, provides an ideal case for trying to understand how this ecosystem is established within the context of resilience and communication. resilience in this study is defined as a system's ability to absorb disturbances [ ] . to analyze this ability, we define the system in terms of the information ecology framework. we regard disaster resilience in the information ecology framework to encompass the efforts of collaboration and communication dependencies that exist amongst stakeholders engaged in the situation within a local context. we want to investigate how foreigners in japan obtain disaster related information while facing a language barrier and inexperience with disaster management procedures. this will guide us to understand the characteristics of the communication ecosystem for a foreign population. this paper is divided into the following parts: first, we introduce the information ecology framework as the theoretical premise of a resilient communication ecosystem. a literature review should give us insights into current studies on the topic and their deployment. a key question is how a resilient communication ecosystem is formed in different disaster contexts. to better understand this question, we seek cases amongst the literature which highlight disaster resilience through collaboration, communication and stakeholder participation. cases with such attributes are selected, reviewed and discussed from the view point of information ecology. following the case review, we use the twitter api (application programming interface) application and the r programing language to collect and analyze english language tweets shared on the twitter platform during typhoon hagibis which hit japan in . this gives us an empirical understanding of a resilient communication ecosystem. it is assumed that information shared on twitter in english is meant for consumption by non-japanese. again, media coverage of the typhoon is also monitored to serve as supplementary information to our analysis. based on insights from literature review and findings from tweet analysis, we try to describe a structure which guides our understanding of a communication system applicable to foreigners during a disaster. we conclude the paper with observed limitations and future research directions. information ecology is composed of a framework which is defined as "a system of people, practices, values, and technologies in a particular local environment" [ ]. previous research uses the notion of ecosystem to describe the nature of resilience in a societal context [ ] . 
the information ecology framework, which is an extended notion of the ecosystem, helps us to examine the capabilities of each element within the system. the framework contains five components: system, diversity, coevolution, keystone species and locality. table summarizes the elements of information ecology. not shown but also included is the perspective of the role of technology, which focuses on human activities that are enabled by the given technology implementation. the presence of keystone species is crucial to the survival of the ecology itself, e.g., skilled people/groups whose presence is necessary to support the effective use of technology. locality refers to local settings or attributes that give people the meaning of the ecology. given that resilience is defined as a system's ability to absorb disturbances [ ] , information ecology holds that this ability emerges through interrelationships or dependencies between system entities [ ] . these interrelationships exist between a diversity of players equipped with relevant technologies. keystone species play a central role in creating interrelationships, which, in turn, may lead to new forms of coevolution. the notion of locality reminds us that interrelationships and coevolution emerge in a local setting [ ] . the information ecology framework comes down to a set of key elements providing a systematic view of resilience. it enables us to understand who within a network of relationships is key to realizing resilience, and what relations can be observed within a system and under which local contexts. coevolution is a driving force for forming the resilient ecosystem in times of disaster and is produced by the collaboration of systems with diverse species that provide local context or knowledge [ ] . to this we need to add the role of technology in supporting human activities and the formation of coevolution. we note that a similar framework exists in the form of communication ecology [ ] . it refers to individual or socio-demographic group connections that strengthen neighborhood communication infrastructure. it helps identify communication patterns of local communities or groups of people. in a disaster context, communication ecology includes community organizations, local media and disaster-specific communication sources [ ] . such communication fosters community resilience [ ] , which can be described as the ability of a neighborhood or geographically defined area to collectively optimize their connected interactions for their benefit [ ] . collective ability is required to deal with stressors and resume daily life through cooperation following unforeseen shocks. hence, community resilience and its collective ability aim to empower individuals in a given community [ ] . resilience is enhanced through economic development, social capital, information and communication, and community competence [ ] . providing or ensuring access to vital information through proper communication systems is essential to strengthen community capacity towards the unexpected. in this regard, the resilient communication ecosystem that this paper tries to understand encompasses the dynamically evolving processes that empower the collective ability of information gathering and provision, as well as the interactions and collective communication structures among individuals, communities, and local organizations.
a literature review in the following section will help us extract the essential elements of a resilient communication ecosystem based on information ecology. this study follows a systematic mapping approach [ ] as it has the flexibility, robustness and ability to chronologically categorize and classify related materials in research contexts. the process was based on papers which applied the same approach, i.e., [ ] [ ] [ ] . we set the following research question (rq): how is a resilient communication ecosystem formed in time of a disaster? we want to understand how resilience mechanisms are able to evolve spontaneously during disasters through communication, collaboration and the roles of the different stakeholders affected by them. to accomplish this, this study categorized the process into three main activities [ ] . they are ( ) search for relevant publications, ( ) definition of a classification scheme, and ( ) review of relevant journal papers. the search for relevant publications was undertaken in two steps. first, an online database search was conducted with the keywords "collaboration in disaster", "communication in disaster" and "institutional role in disaster" across a number of journal databases such as sciencedirect (elsevier), springerlink, and emerald. this procedure resulted in the selection of papers from all the sampled papers. the classification used in this research was to read, clearly identify and arrange the contents of the potential papers to extract the following content: . research titles, . source (type of journal database), . research question of the article, . purpose (which keyword is dominant), . type of disaster, . country or region of disaster, . identified stakeholders, . communication tools used, and . communication structure of the case. this stage identified the contents relevant to this study. the communication ecosystem found to be prevalent within the reviewed cases highlights the presence of institutions, individuals and other actors who become involved in disaster events and perform various tasks or activities aimed at reducing either their own risk or that of the stakeholders. in most cases, a system evolved consisting of actors receiving and sharing information when a disaster or crisis took place [ , ] . information ecology promotes sharing and learning, particularly about the use of new technologies, and helps to reduce given levels of confusion, frustration and ambiguity [ ] . in this review, social media emerges as a new trend in technology and becomes the medium for sharing information with the aim of reducing anxiety about a disaster situation that could negatively affect the people involved [ ] . actors include government agencies, non-government agencies, and other actors who evolve to become part of the resilient structure of the framework, matching the nature of each situation. actors can be clearly distinguished based on the roles they play within the system. they include ( ) remote actors, ( ) responsible agents and ( ) transmitters. remote actors contribute from outside the affected area, as in a case reviewed from austria [ ] . the world health organization also supported local communities by giving constant guidance and recommendations as a remote actor during the ebola outbreak in west africa [ ] . responsible agents have the highest authority and are responsible for issuing needed guidance for people to stay out of risk zones, advising rescue efforts, and providing information on how to reduce risk [ ] .
they are national agencies, such as those at the level of a ministry or governmental organization [ ] , including security agencies and fire brigades [ ] . transmitters support the communication and information flow between them and other parties [ ] . keystone species identified in the described disasters represent the groups or actors whose presence is crucial for all resilience activities. although they are usually described as highly skilled actors, evidence from the reviewed cases also proves that their activities would not be successful without the cooperation and support of other entities [ ] . for instance, staff of the oslo university hospital can be described as "keystone species" in the case of the terrorist attacks in norway [ ] . however, actors such as blood donors to the hospital and patients, news agencies who covered all events, individual volunteers who helped spread the hospital's "blood request" alerts on social media, as well as others who sent real-time situational information on social media to update others, can be said to be the auxiliary actors who complemented the "keystone species" effort. they give rise to the notion that resilience efforts are often created by a system of actors or groups who are connected through information sharing and play specific roles in a cohesive and coordinated manner during events or disasters [ ] . furthermore, the key to all these systems is the space or context in which their actions take place [ , ] . in information ecology, it is the "locality" or the setting that initiates all processes. this gives the specification of the context and guides the actions that follow. in some cases, factors such as the geographical location, messaging or information content, characteristics of actors, the medium of information delivery, duration of the event, and the actions taken give the "local identity" of the system [ , , ] . during the ebola crisis, this meant the names of the affected countries [ ], while for earthquakes and disasters such as fires it refers to their exact locations. for the purpose of this study, locality refers to a system of individuals, agencies and local communities that refer to local information and actions to be taken in order to reduce risk within a given area. table summarizes insights from our case review which illustrate the structure of a resilient communication ecosystem. as stated previously, the japanese governance system is composed of three tiers: national, prefectural, and municipal. among these, municipal government is closest to people at large and is therefore in charge of issuing evacuation orders, opening and operating evacuation centers, and managing relief efforts [ ] . when an evacuation order is issued by a local municipality, residents who live in specific zones are supposed to evacuate. data collection was done using the twitter api in an r package to scout tweets in the english language. to extract the tweets, the following steps were used: a. registration with the twitter api to secure api credentials; b. integrating the twitteR package (from cran) into the r statistical environment; c. searching for tweets, conducted by searching the hashtags "#hagibis" and "#typhoon_ ". to restrict tweets to the english language only, the code "en" was added. typhoon hagibis made landfall on october , . therefore, the tweet search was restricted to tweets made on that day.
the entire code used for the search was: hagibis <- searchTwitter("#hagibis OR #typhoon_ ", n = 5000, lang = "en", since = "2019-10-12", until = "2019-10-13"). five thousand tweets were collected from this search. d. export and further analysis: the results were exported to microsoft excel for further analysis. data analysis was based on the framework of a resilient communication ecosystem. the first aim was to find keystone species supporting the communication ecosystem. therefore, we focused on the number of tweets and the twitter accounts with a high number of retweets. as a result of the retweet number analysis, we established the formation of coevolution within the ecosystem. we also aimed to explore the kind of information content exchanged in the ecosystem. hence we conducted thematic analysis [ ] , which allowed us to apply key elements of a communication ecosystem. this shows how the english-language communication ecosystem was formed when the typhoon hit japan. in order to extract a sense of locality, we picked up the name of a region or city, and searched for local information such as evacuation orders or emergency alerts within a certain tweet. diversity in this case reflects the number of tweets and the different account ids that posted or retweeted with #hagibis or #typhoon_ during the sampled time frame. a retweet captured this way is not necessarily the first retweet within the time frame, but could be the n-th retweet for that particular tweet. from our findings, the highest number of original tweets from a single account was out of the original tweets. they were sent by @nhkworld_news, a public broadcaster in japan (figure ). this was followed by tweets created through a twitter handle named @earthuncuttv, which is a leading online portal documenting extreme natural events or disasters in the asia pacific region and around the world. around half of the original tweets were made by individuals. this was determined based on their handle ids. the total aggregated number of retweets we found within our time frame was , . amongst them, % were retweets which originated from @nhkworld_news ( figure ). @earthuncuttv covered % of total retweets while % came from other origins. the results show that within our sample framework, @nhkworld_news is the most dominant twitter account. the ranking of retweets exceeding one hundred per source is shown in figure . the most retweeted tweet recorded , retweets. the original tweet was posted by @nhkworld_news saying "hagibis is expected to bring violent winds to some areas. a maximum wind speed of kilometers for the kanto area, and kilometers for the tohoku region." the second most retweeted one was posted by an individual account saying "ghost town //shibuya at pm before a typhoon" with a photo taken in shibuya city. the top ten retweeted tweets are shown in table . names in italics stand for region or city names. the th and th tweets mentioned evacuation orders issued by local municipalities, while the other tweets shared the event as it happened in different places around japan or reported on its effects. we found only three among the original tweets that captured the emergency alert issued by jma (figure ). those three were tweeted by individuals and generated only a few retweets. the emergency alert was written in japanese, which translates as "special alert at level for heavy rain has been issued to the tokyo area. a level alert means the highest level of warning. take action for saving your life. some areas have been newly added under the special alert.
check tv, radio, or local municipalities' information," as of october , : pm. the alert requested people to refer to information from local municipalities, but all of it was in japanese. our monitoring of the event on the internet also reveals that jma was giving constant updates of the real-time weather status and alerts on its website. this information was provided in english. however, most of it did not appear in our sample frame as jma did not distribute it through twitter in english. as the japanese disaster management system also grants the population direct access to emergency information from jma, an emergency alert was sent to people's cell phones from jma, but this was written in japanese ( figure ) . hence, people who do not speak japanese may not have understood this, and the real-time information could only be obtained by the few who already knew about such a website or the information provided. on the other hand, much evacuation information is issued by municipal governments. we found five tweets mentioning evacuation orders. two of those were tweets from @nhkworld_news (italics refer to location). even with these, non-japanese residents struggled to work out what actions to take [ ] . for example, a dutch man, who had been living in japan for half a year, received an emergency message but could not find information on where to evacuate. in another case, an american woman was supported by a friend who translated disaster information for her. emergency situations generate exceptional communication features which contravene the norm [ ] . therefore, a communication ecosystem forms and breaks up every time a disaster occurs. we observed dynamic characteristics of the crisis communication structure with the involvement of various organizations [ ] and tools [ ] . the tools greatly hinge on social media, which have become essential for digital crisis communication [ ] . however, more research is required to better capture the nature of social media use in crisis communication [ ] . several studies reported that local governments, especially in thailand and the u.s., are reluctant to utilize social media in disaster situations [ , ] . they prefer to use traditional media such as tv and radio, though those mass media tend to allow only one-way communication during a crisis [ ] . as our study shows, information through mass media becomes limited or challenged in meeting specific needs due to its broader audience base [ ] , compared to the multiple-interaction communication style that characterizes social media [ ] . government, as a responsible agency, needs to understand the characteristics of mass media and social media communication and create an appropriate communication strategy or collaboration scheme when preparing for future catastrophes. in the recent covid- crisis, the need for more targeted health information within a community and the importance of strong partnerships across authorities and trusted organizations are being discussed [ ] . we hold that more consideration should be paid to the provision of targeted information to those who do not share a common context or background of disaster preparedness. in terms of day-to-day practice in japan, almost all local municipal governments prepare a so-called "hazard map" and distribute it to residents. ordinarily, municipalities only issue evacuation orders and do not provide residents with specific information on where to evacuate to. through regular everyday disaster drills, trained residents are expected to know where the evacuation centers are in their area.
as people interpret new information based on previously acquired knowledge [ ] , a responsible agent should also consider how to provide information to people who haven't had any disaster training. besides the interactive communication characterizing social media, the pull of multiple information sources and agents creates a sense of assurance and security in coping with or adjusting to events. this was experienced in the haiti earthquake, when social media contributed swiftly to creating a common operating picture of the situation through the collection of information from individuals rather than from a hierarchical communication structure [ ] . in addition, what is important in crisis communication is who distributes a message and how it is mediated in a population [ ] . our study shows that news sources which already seem to have gained social trust played the role of a keystone species in foreign-language communication. it has also been reported that minorities prioritize personal information sources over the media [ ] . these cases suggest that social media promote effective resilience in communication, and that the delivery of information to foreigners in japan from different language backgrounds and cultures further creates traits where personal connection contributes to information accessibility choices. it is also true that social media-based communication raises situation awareness [ ] through the receipt of warning alerts as well as emotionally-oriented messages such as those from friends and relatives [ ] . the language barrier is likely to have been an obstacle that actually helped to shape the ecosystem identified in this study. during typhoon hagibis, responsible agents sent warning messages only in the japanese language, as shown in figure , and this could have been a major factor limiting how fully the potential of social media for disaster communication could be harnessed. results from this study reveal a lack of local knowledge or reference points across twitter communication, which shapes discussion within the ecosystem. for instance, in the case of the chennai flooding in india, tweets covered subjects such as road updates, shelters, family searches and emergency contact information [ ] , none of which were much referred to in our sampled tweets. in the and philippine flooding events, the most tweeted topic was prayer and rescue, followed by traffic updates and weather reports, with more than half the tweets written in english [ ] . moreover, around % of those tweets were secondary information, i.e., retweets and information from other sources [ ] . our study recognizes the importance of secondary information. information exchange between agencies and affected residents and tourists assists in reducing risks, as seen, for example, during hurricane sandy in , where the top five twitter accounts with a high number of followers were storm-related organizations posting relevant news and the status of the hurricane [ ] . as a remote actor becomes a source of secondary information, however, such information should guide people in what they should do next, rather than just point to the disaster situation. in this context, we note the importance of instructional messages whose content guides risk reduction behaviors [ ] . an example of such a message is "do not drive a car", which was tweeted by police during a snowstorm in lexington [ ] . it tells what specific life-saving action should be taken [ ] . generally, however, there appears to be little investigation of instructional information in crisis communication [ ] .
The findings of this study imply that there may be two types of disaster information: (a) risk information, which refers to the potential effect of the disaster, i.e., an emergency alert or warning, and (b) action-oriented information, which carries instructions for reducing the risk, i.e., an evacuation order and the itinerary to be followed. Risk information is published by remote actors, while action-oriented information is provided by responsible agents. Both types of information must contain local knowledge and must be easy to reach when people search for them [ ]. To make sure people can reach instructional messages, it is not enough simply to point to information on organizational websites [ ], as JMA did during Typhoon Hagibis. Instead, "localized hashtags" [ , , ] can help people find life-saving information as quickly as possible. A previously cited study argues that social media can be useful for sending requests for help [ ]. "Typhoon damage of Nagano" is an example of a localized hashtag; it assisted a fire brigade in Nagano prefecture in using Twitter to respond to calls for help from residents during Typhoon Hagibis (last accessed on July , , http://www.nhk.or.jp/politics/articles/lastweek/ .html). A localized hashtag can also serve as a point of reference where people can find relevant information [ ].

In future disaster communication, local information should be distributed together with geographical information and instructions on what to do next [ ]. Responsible agencies should take this into account, while also analyzing which information was shared across social media in previous disaster cases [ ]. Cultural differences should also be considered: Western culture emphasizes individual action [ ], while Asian cultures prioritize collective action or community commitment. As this may generate behavioral differences between Japanese and foreign residents or tourists, further investigation will be beneficial.

Based on the information ecology framework and a literature review, the structure of a resilient communication ecosystem was proposed and verified through empirical data analysis. The resilient communication ecosystem is structured around heterogeneous actors who can be a driving force for collaboration or coevolution. The empirical case revealed that a media source might transmit warnings and evacuation orders through social media, but that such information does not contain points of reference, and its limited delivery, particularly in English, causes confusion among the non-Japanese communities who need it most. Based on these results and discussions, we suggest that in any disaster some form of ecosystem is spontaneously generated. Municipalities, which are often the responsible agents, should (1) produce instructional information in foreign languages on social media, (2) transfer such information through collaboration with transmitters who have a strong base on social media and can assist in translating it to reach a wider audience, and (3) examine, together with the mass media and weather forecast agencies, the use of local hashtags in social media communication for future disaster preparation. Such an approach brings the various actors together, creates a sense of locality, and supplements individual efforts at risk reduction.

Our empirical data are limited to some extent, as we observed Twitter for only one day, October , and we intended to extract English-language tweets from just before and after Typhoon Hagibis hit Japan; hence, we chose the hashtags #hagibis and "#tyhoon_" when crawling tweets (see the sketch below).
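For readers who want to see what this sampling step could look like in practice, the following is a minimal filtering sketch, not the authors' collection pipeline. It assumes tweets have already been exported to a JSON-lines file whose records carry at least "text", "lang", and "created_at" fields (a common export layout); the file name, the field names, the one-day window, and the use of only #hagibis (the study's second hashtag is not reproduced here) are all assumptions made for illustration.

```python
# Minimal filtering sketch under stated assumptions; not the authors' pipeline.
# Each line of the input file is assumed to be a JSON object with "text",
# "lang", and "created_at" (ISO 8601) fields.
import json
from datetime import datetime, timezone

HASHTAGS = {"#hagibis"}  # second study hashtag omitted; placeholder set for the sketch
WINDOW_START = datetime(2019, 10, 12, tzinfo=timezone.utc)  # assumed observation window
WINDOW_END = datetime(2019, 10, 13, tzinfo=timezone.utc)

def keep(tweet: dict) -> bool:
    """Keep English tweets inside the window that carry a target hashtag."""
    if tweet.get("lang") != "en":
        return False
    created = datetime.fromisoformat(tweet["created_at"].replace("Z", "+00:00"))
    if not (WINDOW_START <= created < WINDOW_END):
        return False
    text = tweet.get("text", "").lower()
    return any(tag in text for tag in HASHTAGS)

with open("hagibis_tweets.jsonl", encoding="utf-8") as src:  # hypothetical file name
    sample = [json.loads(line) for line in src if line.strip()]

sample = [t for t in sample if keep(t)]
print(f"{len(sample)} tweets retained for analysis")
```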
We are also aware that our data are based on only a single disaster case and require further data sets from other disaster events for corroboration. Future research may consider different characteristics of information and actors in the resilient communication ecosystem. Nevertheless, our findings provide insights into disaster communication that may guide the direction of future work. As instructional information sometimes raises ethics and privacy issues [ ], rules for dealing with specific local or personal information should be taken into account. Behavioral differences are also worth investigating, given the increasing human diversity of Japanese society. The case of Japan reveals the importance and potential of communication as a main mechanism for offering information and response activities that promote resilience and reduce the risk of disasters. Social media serve as a major platform that brings numerous stakeholders together; this platform creates the main avenue for sharing information about events and offers feedback mechanisms for assessment and improvement. For future disaster preparedness, then, we may benefit from a closer understanding of the nature of a resilient communication ecosystem, its structure, and the knowledge that it allows us to share.

References:
Resiliency in tourism transportation: case studies of Japanese railway companies preparing for the Tokyo Olympics
A study of the disaster management framework of Japan
Effect of tsunami drill experience on evacuation behavior after the onset of the Great East Japan Earthquake
A framework for regional disaster risk reduction for foreign residents (written in Japanese)
Japan's foreign population hitting a record high
The tourism nation promotion basic plan
Japan's immigration policies put to the test, in Nippon
Japan's immigration chief optimistic asylum and visa woes will improve in
Immigration Services Agency of Japan, guidelines for permission for permanent residence
Sustaining life during the early stages of disaster relief with a frugal information system: learning from the Great East Japan Earthquake. Communications Magazine, IEEE
Who would be willing to lend their public servants to disaster-impacted local governments? An empirical investigation into public attitudes in
Disaster message not getting through to foreign residents
Hokkaido quake reveals Japan is woefully unprepared to help foreign tourists in times of disaster
Media preference, information needs, and the language proficiency of foreigners in Japan after the Great East Japan Earthquake
Social media communication during disease outbreaks: findings and recommendations, in Social Media Use in Crisis and Risk Communication, H. Harald and B. Klas, editors
Social media use in crises and risks: an introduction to the collection, in Social Media Use in Crisis and Risk Communication, H. Harald and B. Klas, editors
Social media, trust, and disaster: does trust in public and nonprofit organizations explain social media use during a disaster?
Social media use during disasters: a review of the knowledge base and gaps. National Consortium for the Study of Terrorism and Responses to Terrorism
Building resilience through effective disaster management: an information ecology perspective
Information ecologies: using technology with heart
Resilience and stability of ecological systems
Social-ecological resilience to coastal disasters
Understanding communication ecologies to bridge communication research and community action
Disaster communication ecology and community resilience perceptions following the central Illinois tornadoes
The centrality of communication and media in fostering community resilience: a framework for assessment and intervention
Building resilience: social capital in post-disaster recovery
Community disaster resilience and the rural resilience index
Community resilience as a metaphor, theory, set of capacities, and strategy for disaster readiness
Guidelines for conducting systematic mapping studies in software engineering: an update. Information and Software Technology
Software product line testing - a systematic mapping study. Information and Software Technology
Systematic literature reviews in software engineering - a systematic literature review. Information and Software Technology
Microblogging during two natural hazards events: what Twitter may contribute to situational awareness
An analysis of the Norwegian Twitter-sphere during and in the aftermath of the
Information ecology of a university department
Understanding social media data for disaster management. Natural Hazards
Flows of water and information: reconstructing online communication during the European floods in Austria
Lessons from Ebola-affected communities: being prepared for future health crises
Information and communication technology for disaster risk management in Japan: how digital solutions are leveraged to increase resilience through improving early warnings and disaster information sharing
Government-communities collaboration in disaster management activity: investigation in the current flood disaster management policy in Thailand
Twitter in the cross fire - the use of social media in the Westgate Mall terror attack in Kenya
Municipal government communications: the case of local government communications. Strategic Communications Management
Media coverage of the Ebola virus disease: a content analytical study of The Guardian and Daily Trust newspapers, in The Power of the Media in Health Communication
Blood and security during the Norway attacks: authorities' Twitter activity and silence, in Social Media Use in Crisis and Risk Communication, H. Harald and B. Klas, editors
Social media and disaster communication: a case study of Cyclone Winston
Challenges and obstacles in sharing and coordinating information during multi-agency disaster response: propositions from field exercises. Information Systems Frontiers
Indigenous institutions and their role in disaster risk reduction and resilience: evidence from the tsunami in American Samoa
Emergent use of social media: a new age of opportunity for disaster resilience
The role of data and information exchanges in transport system disaster recovery: a New Zealand case study
Institutional vs. non-institutional use of social media during emergency response: a case of Twitter in Australian bush fire
Social movements as information ecologies: exploring the coevolution of multiple internet technologies for activism
Crowdsourced mapping in crisis zones: collaboration, organisation and impact
Role of women as risk communicators to enhance disaster resilience of Bandung, Indonesia. Natural Hazards
Providing real-time assistance in disaster relief by leveraging crowdsourcing power. Personal and Ubiquitous Computing
Digitally enabled disaster response: the emergence of social media as boundary objects in a flooding disaster
Why we twitter: understanding microblogging usage and communities
Social media usage patterns during natural hazards
Securing communication channels in severe disaster situations - lessons from a Japanese earthquake. In Information Systems for Crisis Response and Management
Using thematic analysis in psychology
Emergency warnings and expat confusion in Typhoon Hagibis, in NHK
The design of a dynamic emergency response management information system. Journal of Information Technology Theory and Application
Social media in disaster risk reduction and crisis management
Social media for knowledge-sharing: a systematic literature review
A community-based approach to sharing knowledge before, during, and after crisis events: a case study from Thailand
Thor visits Lexington: exploration of the knowledge-sharing gap and risk management learning in social media during multiple winter storms
The role of media in crisis management: a case study of Azarbayejan earthquake
Social media and disasters: a functional framework for social media use in disaster planning, response, and research
Using social and behavioural science to support COVID-19 pandemic response
Crisis communication, race, and natural disasters
Emergency knowledge management and social media technologies: a case study of the Haitian earthquake
Exploring the use of social media during the flood in Malaysia
Understanding the efficiency of social media based crisis communication during Hurricane Sandy
The role of social media for collective behavior development in response to natural disasters
Understanding the behavior of Filipino Twitter users during disaster
Communicating on Twitter during a disaster: an analysis of tweets during Typhoon Haiyan in the Philippines
The instructional dynamic of risk and crisis communication: distinguishing instructional messages from dialogue. Review of Communication
Conceptualizing crisis communication, in Handbook of Risk and Crisis Communication
Social media and disaster management: case of the North and South Kivu regions in the Democratic Republic of the Congo
Social media and crisis management: CERC, search strategies, and Twitter content

This work was supported by JSPS KAKENHI grant number JP K.

key: cord- -hsryei b  authors: Samy, Michael; Abdelmalak, Rebecca; Ahmed, Amna; Kelada, Mary  title: Social media as a source of medical information during COVID-19  date: - -  journal: Medical Education Online  doi: . / . .  sha:  doc_id:  cord_uid: hsryei b

The COVID-19 pandemic has been progressing rapidly, with guidelines and advice constantly evolving in the light of emerging and updated evidence. This has warranted the rapid dissemination of information. The internet is becoming an increasingly popular source of medical information, especially at a time when data can be incomplete and originate from many different sources. Social media offers a platform where all individuals can freely access medical information, from evidence-based data produced by medical professionals to opinion-based accounts of laypersons' first-hand experiences of health and illness.
Guidance issued by the General Medical Council uses the term 'social media' to encompass blogs and microblogs (such as Twitter), internet forums (such as Doctors.net), content communities (such as YouTube), and social networking sites (such as Facebook and LinkedIn) [ ]. All of these have been employed in attempts to spread information regarding the COVID-19 pandemic. For example, official government channels on Twitter and Instagram regularly published the latest government statistics and advice whilst encouraging the public to follow guidelines, and slogans such as 'stay at home, protect the NHS, save lives' extended their reach through social media. The government has also advertised on platforms frequented primarily by teenagers, such as TikTok, enabling information to reach audiences of all ages in a way that may be more difficult with traditional advertising methods. Hand hygiene is of central importance in preventing COVID-19 transmission [ ], and YouTube videos were consequently created to demonstrate proper handwashing techniques to the public; YouTube videos were also utilised for the education of health professionals, for example to demonstrate the proper donning of personal protective equipment. Clinicians' forums, such as Facebook's COVID Doctors Forum (UK), have provided doctors with a community in which they can reflect on emerging evidence and guidelines whilst offering support to colleagues.

Practical issues arise around the accuracy of information and the reliability of those publishing content. Ethical concerns range from digital identity to proper professional behaviour in the use of social media, particularly confidentiality, defamation, and doctor-patient boundaries [ ]. Furthermore, unverified medical anecdotes often evoke fear, and the desire to protect loved ones can quickly lead to hoax messages going viral; news outlets and social media platforms ought to be cautious about the information they disseminate. There are also concerns about the distribution of advice by health professionals or people in authority, as illustrated by President Trump's comments promoting the use of hydroxychloroquine for COVID-19 prevention without US Food and Drug Administration approval [ ]. Moreover, the urgent need to disseminate information regarding COVID-19 may have led some medical journals to adopt an expedited peer-review process, perhaps compromising the quality of the evidence published.

Social media plays a key role in making medical knowledge widely available. Information can be constantly updated and disseminated, which is vital in the early days of a rapidly evolving outbreak. However, the unregulated nature of the internet can result in unvalidated or unproven information being spread, which can have severe and in some cases life-threatening consequences. The public therefore need to be able to discern between trustworthy and unreliable sources. We would also argue that, despite the challenges involved, social media platforms should play a greater role in the regulation and fact-checking of information distributed on their sites.

References:
Doctors' use of social media
Interim recommendations on obligatory hand hygiene against transmission of COVID-19
Practicing medicine in the age of Facebook
Donald Trump is taking hydroxychloroquine to ward off COVID-19. Is that wise?

All authors contributed equally.