key: cord-0982436-3e1s4k1t authors: nan title: COVID-19 Control by Computer Vision Approaches: A Survey date: 2020-09-29 journal: IEEE Access DOI: 10.1109/access.2020.3027685 sha: 74d8100d69e82153cbb3f3c03ebc9e7f3f3ba375 doc_id: 982436 cord_uid: 3e1s4k1t

The COVID-19 pandemic has triggered an urgent call to contribute to the fight against an immense threat to the human population. Computer vision, as a subfield of artificial intelligence, has enjoyed recent success in solving various complex problems in health care and has the potential to contribute to the fight against COVID-19. In response to this call, computer vision researchers are putting their knowledge to the test to devise effective ways to counter the COVID-19 challenge and serve the global community. New contributions are being shared with every passing day. This motivated us to review the recent work, collect information about available research resources, and indicate future research directions. We want to make it possible for computer vision researchers to find existing and future research directions. This survey article presents a preliminary review of the literature on research community efforts against the COVID-19 pandemic.

COVID-19 is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1] and is called a coronavirus because of its visual resemblance (under an electron microscope) to the solar corona (i.e., a crown) [2]. The fight against COVID-19 has motivated researchers worldwide to explore, understand, and devise new diagnostic and treatment techniques to end this threat to our generation. In this article, we discuss how the computer vision community is fighting this menace by proposing new approaches and by improving the efficiency and speed of existing efforts.

The scientific response to COVID-19 has been remarkably quick and widespread. A keyword search on PubMed and the major open-access preprint repositories (arXiv, bioRxiv and medRxiv) revealed that in 2019, 735 published papers included the word ''coronavirus''. FIGURE 1 illustrates our findings. During the first half of 2020, this number increased roughly thirty-fold, rising to an astounding 21,806 articles. For comparison, the SARS pandemic, with less than 10,000 confirmed infections and <1,000 deaths, led to roughly a four-fold increase over two years (2002: 221 and 2004: 822). After the occurrence of MERS in 2012 (less than 3,000 confirmed infections and 1,000 deaths to date), a doubling of coronavirus-related papers over four years (2011 to 2015) was observed. The Economist has dubbed the current Herculean task the science of the times, with the hope that such efforts would help speed up the development of a COVID-19 vaccine [3].

Numerous computer vision approaches have been proposed so far, dealing with different aspects of combating the COVID-19 pandemic. These approaches vary in how they address the following fundamental questions:
• How can medical imaging facilitate faster and more reliable diagnosis of COVID-19?
• Which image features correctly distinguish bacterial, viral, and COVID-19 pneumonia?
• What can we learn from imaging data acquired from disease survivors to screen critically and non-critically ill patients?
• How can computer vision be used to enforce social distancing and enable early screening of infected people?
• How can 3D computer vision help to maintain the supply of healthcare equipment and guide the development of a COVID-19 vaccine?

The answers to these questions are being explored, and preliminary work has been done. The contribution of this review article is as follows: it classifies COVID-19-related computer vision methods into broad categories and provides salient descriptions of representative methods in each group. We aspire to give readers the ability to understand the baseline efforts and to kickstart their work where others have left off. Furthermore, we aim to highlight new trends and innovative ideas to build a more robust and well-planned strategy during this war of our times. Our survey also includes research articles in pre-print format due to the time urgency imposed by this disease. However, one limitation of this review is the resulting risk of including lower-quality work without due validation. Many of the works have not undergone clinical trials, as these are time-consuming. Nevertheless, our intention here is to share ideas from a single platform while highlighting the computer vision community's efforts, and we hope that our readers are aware of these contemporary challenges. This article is an extended and revised version of an earlier preprint survey [4].

We follow a top-down approach to describe the research problems that require urgent attention. We start with disease diagnosis, discuss disease prevention and control, and then cover treatment-related computer vision research. The paper is organised as follows: Section 2 describes the overall taxonomy of computer vision research areas by classifying these efforts into three classes. Section 3 provides a detailed description of each research area, the relevant papers, and a brief description of representative work. Section 4 describes available resources, including research datasets and their links, deep learning models, and code. Section 5 provides the discussion and future work directions, followed by concluding remarks and references.

The novel coronavirus SARS-CoV-2 is the seventh member of the Coronaviridae family of viruses, which are enveloped, non-segmented, positive-sense RNA viruses [5]. The mortality rate of COVID-19 is lower than that of the severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) coronavirus diseases (10% for SARS-CoV and 37% for MERS-CoV). However, it is highly infectious, and the number of cases is on a continuous rise [6]. The disease outbreak was first reported in Wuhan, in the Hubei province of China, after several cases of pneumonia with unknown causes were reported on 31 December 2019. A novel coronavirus was identified as the causative organism through in-depth sequencing analysis of patients' respiratory tract samples at Chinese facilities on 7 January 2020 [6]. The outbreak was announced as a Public Health Emergency of International Concern on 30 January 2020. On 11 February 2020, the World Health Organization (WHO) announced a name for the new coronavirus disease: COVID-19. It was officially considered a pandemic after the 11 March announcement by the WHO [7].

In this section, we describe a classification of the computer vision techniques that try to counter the menace of COVID-19. For better comprehensibility, we have classified them into three key areas of research: (i) diagnosis and prognosis, (ii) disease prevention and control, and (iii) disease treatment and management. FIGURE 2 shows this taxonomy.
In the following subsections, we discuss the research fields and the relevant papers, and present brief descriptions of representative works.

An essential step in this fight is a reliable, fast, and affordable diagnostic process that can be readily accessible and available to the global community. According to the Cambridge Dictionary [8], diagnosis is ''the making of a judgment about the exact character of a disease or other problem, especially after an examination, or such a judgment'', and prognosis is ''a doctor's judgment of the likely or expected development of a disease or of the chances of getting better''.

Currently, reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) tests are considered the gold standard for diagnosing COVID-19 [9]. During such a test, small amounts of viral RNA are extracted from a nasal swab, amplified, and quantified, with virus detection performed using a fluorescent dye. Although accurate, the test is time-consuming, manual, and requires biomolecular testing facilities, which limits its availability at large scale and in third-world countries. Care has to be taken in interpreting negative test results: a meta-study estimated the sensitivity over the course of the disease and found a maximal sensitivity of 80%, eight days after infection [10]. Some studies have also shown false-positive PCR results [11].

An alternative approach is a radiological examination using computed tomography (CT) imaging [12]. A chest CT scan is a non-invasive test conducted to obtain a precise image of a patient's chest. It uses an enhanced form of X-Ray technology, providing more detailed images of the chest than a standard X-Ray. It produces images that include bones, fat, muscles, and organs, giving physicians a better view, which is crucial when making accurate diagnoses. Chest CT scans are of two types, namely high-resolution and spiral [13]. The high-resolution chest CT scan provides more than one slice (or image) per rotation of the X-Ray tube. The spiral chest CT scan involves a table that continuously moves through a tunnel-like hole while the X-Ray tube follows a spiral path; its advantage is that it is capable of producing a three-dimensional image of the lungs.

Important CT features include ground-glass opacity, consolidation, reticulation/thickened interlobular septa, nodules, and lesion distribution (left, right or bilateral lungs) [14]-[17]. The most observable CT features in COVID-19 pneumonia include bilateral and subpleural areas of ground-glass opacification and consolidation affecting the lower lobes. Within the intermediate stage (4-14 days from symptom onset), a crazy-paving pattern and a possibly observable halo sign become important features as well [6], [11], [12], [14]-[18]. One set of CT images is shown in FIGURE 3, which illustrates ground-glass opacities and halo features. As the identification of disease features is time-consuming, even for expert radiologists, computer vision can help by automating this process. To date, various automated CT-based approaches have been proposed [8], [12]-[16], [18]-[27]. To discuss the approach and performance of computer vision CT-based disease diagnosis, we have selected some recent representative works that provide an overview of their effectiveness. It is worth noting that they report different performance metrics and use diverse numbers of images and datasets.
These practices make direct comparison very challenging. Commonly reported metrics include accuracy, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), area under the curve (AUC), and F1 score. A quick elucidation of their definitions is useful. In terms of true/false positives and negatives (TP, FP, TN, FN): accuracy = (TP + TN) / (TP + TN + FP + FN) is the overall fraction of correct predictions; sensitivity (recall) = TP / (TP + FN) indicates how many of the positive cases are detected; specificity = TN / (TN + FP) indicates how many of the negative cases are correctly ruled out; PPV (precision) = TP / (TP + FP) and NPV = TN / (TN + FN) indicate how trustworthy positive and negative predictions are, respectively; and the F1 score is the harmonic mean of precision and recall, giving a single balanced figure.

The first class of work discussed here approaches diagnosis as a segmentation problem. Chen et al. [22] proposed a CT image dataset of 46,096 images of both healthy and infected patients, labelled by expert radiologists. It was collected from 106 admitted patients: 51 with confirmed COVID-19 pneumonia and 55 controls. The work used deep learning models for segmentation only, so that it could identify the infected areas in CT images and thereby separate healthy from infected patients. It was based on the UNet++ semantic segmentation model [23], used to extract valid areas in the images. The model was trained on 289 randomly selected CT images and tested on another 600 randomly selected CT images. It achieved a per-patient sensitivity of 100%, a specificity of 93.55%, an accuracy of 95.24%, a PPV (positive predictive value) of 84.62%, and an NPV (negative predictive value) of 100%. On the retrospective dataset, it achieved a per-image sensitivity of 94.34% and a specificity of about 99%.

The second type of work considers COVID-19 diagnosis as a binary classification problem. Li et al. [24] proposed COVNet to extract visual features from volumetric chest CT using transfer learning on ResNet50, with lung segmentation performed as a pre-processing task using the U-Net model. It used 4,356 chest CT exams from 3,322 patients, collected from six hospitals between August 2016 and February 2020. The sensitivity and specificity for COVID-19 were 90% (114 of 127; p-value < 0.001) with a 95% confidence interval (CI) of [83%, 94%] and 96% (294 of 307; p-value < 0.001) with a 95% CI of [93%, 98%], respectively. The model was also made available online for public use at https://github.com/bkong999/COVNet.

The diagnosis problem has also been approached as a 3-category classification task: distinguishing healthy patients from those with other types of pneumonia and those with COVID-19. Li et al. [24] used data from 88 patients diagnosed with COVID-19, 101 patients infected with bacterial pneumonia, and 86 healthy individuals. They proposed DRE-Net (Relation Extraction neural network) based on ResNet50, into which a Feature Pyramid Network (FPN) [25] and an attention module were integrated to represent more fine-grained aspects of the images. An online server is available for diagnosis from CT images at http://biomed.nsccgz.cn/server/Ncov2019.

A recent landmark study was published by Mei et al. [27] in Nature Medicine. In a cohort of 906 RT-PCR-tested patients (419 COVID-positive), a two-stage CNN was combined with an MLP on clinical features (age, sex, exposure history, symptoms), and the diagnostic performance was compared to that of senior radiologists. A ''slice selection CNN'' was used to select abnormal CT scans, which were subsequently classified by the ''disease diagnosis CNN''. Interestingly, fusing a 512-dimensional feature vector of the CT scans with the clinical features yielded a joint model that significantly outperformed the CNN-only model in ROC-AUC and specificity.
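To make this fusion strategy concrete, the following is a minimal PyTorch sketch of feature-level fusion of CT image features with a clinical-feature MLP. It is illustrative only and not the implementation of Mei et al. [27]; the ResNet-18 backbone, layer sizes, and the four clinical inputs are assumptions chosen merely to mirror the description above.

```python
import torch
import torch.nn as nn
from torchvision import models


class JointCovidModel(nn.Module):
    """Illustrative late-fusion model: CT image features + clinical features.

    Assumption: a ResNet-18 backbone produces the 512-d image descriptor
    mentioned in the text; the real system used a two-stage CNN pipeline.
    """

    def __init__(self, n_clinical: int = 4):
        super().__init__()
        backbone = models.resnet18(pretrained=True)
        backbone.fc = nn.Identity()          # keep the 512-d penultimate features
        self.cnn = backbone
        self.clinical_mlp = nn.Sequential(   # MLP on age, sex, exposure, symptoms
            nn.Linear(n_clinical, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.head = nn.Sequential(           # joint classifier on fused features
            nn.Linear(512 + 64, 128), nn.ReLU(),
            nn.Linear(128, 2),               # COVID-positive vs. negative
        )

    def forward(self, ct_slice: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        img_feat = self.cnn(ct_slice)               # (B, 512)
        clin_feat = self.clinical_mlp(clinical)     # (B, 64)
        return self.head(torch.cat([img_feat, clin_feat], dim=1))


# Example forward pass with dummy data
model = JointCovidModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4))
print(logits.shape)  # torch.Size([2, 2])
```

The essential design choice, as described above, is that imaging and clinical information are fused at the feature level inside a single network rather than combined post hoc from two separately trained predictors.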
On a test set of 279 patients, the joint model of Mei et al. surpassed senior radiologists in ROC-AUC (0.92 vs. 0.84), while showing worse specificity (83% vs. 94%) and a statistically insignificant improvement in sensitivity (84% vs. 75%). The model also correctly identified 68% of the positive patients who exhibited normal CT scans according to the radiologists. This hints at the potential of deep learning to pick up complex, disease-relevant patterns that may remain indiscernible to radiologists.

Due to the limited time available for annotation and labelling, weakly-supervised deep learning approaches using 3D CT volumes have also been developed to detect COVID-19. Zheng et al. [26] proposed a 3D deep convolutional neural network (DeCoVNet) to detect COVID-19 from CT volumes. This weakly supervised deep learning model can accurately predict the probability of COVID-19 infection in chest CT volumes without the need for lesion annotations during training. The CT images were segmented using a pre-trained UNet. The work used 499 CT volumes for training, collected from 13 December 2019 to 23 January 2020, and 131 CT volumes for testing, collected from 24 January 2020 to 6 February 2020. The authors chose a probability threshold of 0.5 to classify COVID-positive and COVID-negative cases. The algorithm obtained an accuracy of 0.901, a positive predictive value of 0.840, and a high negative predictive value of 0.982. The developed deep learning model is available at https://github.com/sydney0zq/covid-19-detection.

One drawback of CT imaging is the high patient radiation dose and increased cost [43]. Additional challenges include the low availability of CT in remote areas, the need for patient relocation, and the exhaustive disinfection of scanner rooms (several hours per day), which risks contagion for staff and other patients [44]. These disadvantages bring chest X-Ray radiography (CXR) into play as a preferred first-line imaging modality, with lower cost and wider availability for detecting chest pathology. Computer-aided diagnosis of digital X-Ray imagery is used for different diseases, including osteoporosis [45], cancer [46] and cardiac disease [39]. However, as it is hard to distinguish soft tissue with poor contrast in X-Ray imagery, contrast enhancement is used as a pre-processing step [47], [48]. Lung segmentation of chest X-Rays is a crucial step for identifying lung nodules, and various segmentation approaches have been proposed in the literature [49]-[52].

CXR examinations show consolidation in COVID-19 infected patients. In one study in Hong Kong [41], three different patients had daily CXRs; two of them showed progression of lung consolidation over 3-4 days, with improvement over the subsequent two days, while the third patient showed no significant variation over eight days. However, a similar study showed that ground-glass opacities visible in the right lower lobe periphery on CT were not visible on the chest radiograph taken one hour apart. FIGURE 4 shows three chest radiographs chosen from the daily CXRs of one patient: consolidation in the right lower zone on day 0 persists into day four, followed by new consolidative changes in the right mid-zone periphery and perihilar region, which improve on the day-seven film (image adapted from [41]).
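As noted above, contrast enhancement is a common pre-processing step before segmenting or classifying CXR images [47], [48]. The snippet below is a generic OpenCV sketch of CLAHE-based enhancement; the file name, clip limit, tile size, and output resolution are illustrative placeholders rather than settings taken from any surveyed paper.

```python
import cv2
import numpy as np


def preprocess_cxr(path: str, clip_limit: float = 2.0, tile: int = 8) -> np.ndarray:
    """Load a chest X-ray, apply CLAHE contrast enhancement, and resize."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
    enhanced = clahe.apply(img)                  # local histogram equalisation
    resized = cv2.resize(enhanced, (224, 224))   # common CNN input size
    return resized.astype(np.float32) / 255.0    # scale to [0, 1]


# Usage (hypothetical file name):
# x = preprocess_cxr("patient_001_cxr.png")
```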
In a large-scale study of 636 ambulatory COVID-19 patients, Weinstock et al. found that 58% of CXRs were normal and 89% were normal or only mildly abnormal [53]. Interstitial changes (24%) and GGOs (19%) were the most prominent findings, and abnormalities were most prevalent in the lower lobe (34%). While the sensitivity of CXR is significantly lower than that of CT, the American College of Radiology (ACR) recommends conducting CXR with portable devices and only if ''medically necessary''. It moreover firmly advises against using any imaging technique for COVID-19 diagnosis and instead suggests biomolecular tests [54].

In the realm of AI, various automated CXR-based approaches have been proposed. The following paragraphs discuss the most salient work, while TABLE 2 presents a more systematic overview of such methods. To date, many deep learning-based computer vision models for COVID-19 detection from X-Rays have been proposed. One of the most significant developments is the COVID-Net model [58] proposed by DarwinAI, Canada. In this work, human-driven principled network design prototyping is combined with machine-driven design exploration to produce a network architecture for the detection of COVID-19 cases from chest X-Rays. The first stage of the human-machine collaborative design strategy is based on residual architecture design principles. The dataset used to train and evaluate COVID-Net is referred to as COVIDx [58] and comprises a total of 16,756 chest radiography images across 13,645 patient cases. The proposed model achieved 92.4% accuracy and 80% sensitivity for COVID-19 diagnosis. The initial network design prototype predicts one of three classes: a) no infection (normal), b) non-COVID-19 infection (viral or bacterial), and c) COVID-19 viral infection. The goal is to help clinicians decide which treatment strategy to employ depending on the cause of infection, since COVID-19 and non-COVID-19 infections require different treatment plans. In the second stage, the data, along with human-specified design requirements, guides a design exploration strategy to learn and identify the optimal macro- and micro-architecture designs with which to construct the final tailor-made deep neural network architecture. The COVID-Net network diagram is shown in FIGURE 5, and the model is publicly available at https://github.com/lindawangg/COVID-Net.

Hemdan et al. [59] proposed COVIDX-Net, based on seven different DCNN architectures, namely VGG19, DenseNet201 [60], InceptionV3, ResNetV2, InceptionResNetV2, Xception, and MobileNetV2 [61]. These models were trained on COVID-19 cases provided by Dr Joseph Cohen and Dr Adrian Rosebrock, available at https://github.com/ieee8023/covid-chestxray-dataset [62]. The best model combination resulted in F1-scores of 0.89 and 0.91 for normal and COVID-19 cases, respectively. Similarly, Abbas et al. [63] proposed a Decompose, Transfer, and Compose (DeTraC) approach for the classification of COVID-19 chest X-Ray images. The authors applied CNN features of ImageNet pre-trained models and ResNet to perform the diagnosis. The dataset consisted of 80 samples of normal CXRs (of 4020 x 4892 pixels) from the Japanese Society of Radiological Technology (JSRT) together with the COVID-19 image data collection of Cohen JP, available at https://github.com/ieee8023/covid-chestxray-dataset [62]. This model achieved an accuracy of 95.12% (with a sensitivity of 97.91%, a specificity of 91.87%, and a precision of 93.36%).
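Most of the CXR classifiers above share a common transfer-learning recipe: take an ImageNet-pretrained backbone, replace its final layer with a head for the target classes (e.g., normal, non-COVID-19 pneumonia, COVID-19), and fine-tune on the available CXR data. The sketch below illustrates this generic recipe in PyTorch; it is not the COVID-Net, COVIDX-Net, or DeTraC implementation, and the backbone choice, learning rate, and layer-freezing policy are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # normal, non-COVID-19 pneumonia, COVID-19

# ImageNet-pretrained backbone with a new classification head
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optionally freeze early layers and fine-tune only the last block and head
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()


def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a mini-batch of CXR images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the small number of available COVID-19 images relative to the other classes is the main limitation noted throughout this section, which is why these works rely on pre-trained weights rather than training from scratch.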
The DeTraC code is available at https://github.com/asmaa4may/DeTraC COVId19. Ghoshal and Tucker [57] introduced an uncertainty-aware COVID-19 classification and referral model with a proposed Dropweights technique based on Bayesian Convolutional Neural Networks (BCNN). For COVID-19 detection to be meaningful, two types of predictive uncertainty in deep learning were considered, as in subsequent work [64]. The first is epistemic (model) uncertainty, which accounts for uncertainty in the model parameters arising from a lack of training data or from the model not capturing all aspects of the data. The second is aleatoric uncertainty, which accounts for noise inherent in the observations due to class overlap, label noise, and homoscedastic and heteroscedastic noise, and which cannot be reduced even if more data were collected. Bayesian Active Learning by Disagreement (BALD) [65] is based on mutual information: it maximizes the information between the model posterior and the predictive density, approximated as the difference between the entropy of the predictive distribution and the mean entropy of predictions across samples. A BCNN model was trained on 68 posterior-anterior (PA) lung X-Ray images of COVID-19 cases from Dr Joseph Cohen's GitHub repository [62], with the dataset augmented by Kaggle's Chest X-Ray Images (Pneumonia) from healthy patients. It achieved 88.39% accuracy on the available dataset. This work additionally recommended the visualisation of distinct features as an additional insight, beyond point predictions, for a more informed decision-making process. It used saliency maps produced by various state-of-the-art methods, e.g., Class Activation Maps (CAM) [66], Guided Backpropagation, Guided Gradients, and Gradients, to show more distinct features in the CXR images.

A capsule network-based framework called COVID-CAPS [67] has been proposed for the identification of COVID-19 cases from X-Ray images. A lightweight deep neural network (DNN) based mobile app that can process noisy chest X-Ray (CXR) images for point-of-care COVID-19 screening is proposed in [68] and is available at https://github.com/xinli0928/COVID-Xray. A three-step approach to fine-tuning a pre-trained ResNet-50 architecture to improve model performance is proposed in [58]. Other similar works have been proposed recently [69]-[71]. To the best of our knowledge, [72] reported the largest dataset, including 144,167 images from 750 patients (400 COVID-19 patients). As deep-learning models are data-hungry and most other projects perform transfer learning on extremely small datasets (often < 1000 images), this is a remarkable project and a first step towards more realistic and clinically relevant performance estimates. The classifier achieves a sensitivity of 95% and a specificity of 93%. In addition, a segmentation model is trained with a deep supervision strategy and shown to identify lesion areas for the positive predictions. One drawback of the work is that the two models operate independently, so the lesions identified by the segmentation model may by no means have been relevant for the positive prediction of the classifier.

Lung ultrasound (LUS) has evolved over the last few years in both its theoretical and operative aspects. One of the characteristic features of LUS is its ability to capture alterations affecting the ratio between tissue and air in the superficial lung [55], [78].
The practical advantages of LUS are numerous: ultrasound (US) devices are portable, bringing the salient benefit of performing a point-of-care LUS at the patient's bedside or even at home, which can easily be repeated for monitoring purposes. LUS minimizes the need to transfer the patient, controlling the potential risk of further infection and of spreading it among health care personnel. In contrast to CT and X-Ray, US is non-irradiating, and the instruments are cheap and thus highly available even outside developed countries [79]. However, ultrasound is operator-dependent, and experienced technicians are needed to follow standardized LUS protocols such as the BLUE protocol [80]. This is both boon and bane: while conducting a full LUS can take only a few minutes and produces significantly more data than other modalities, the auto-correlation is exceptionally high and diagnostic patterns are visible in only a few frames. LUS has repeatedly been shown to be superior to CXR for diagnosing pulmonary diseases (for a review see [81]), especially in resource-limited settings [82]. For COVID-19, LUS patterns correlate with disease stage, comorbidities and severity of pulmonary injury [83] and most prominently include B-lines, vertical artifacts that extend from the pleural line deep into the lung [84]. Importantly, LUS was recently reported to have higher sensitivity and equal specificity compared to CXR in diagnosing COVID-19 [85]. In a comparison of LUS to CT, it was shown that for all typical LUS features in COVID-19 patients, analogues to known patterns in CT scans could be found [86]. FIGURE 6 illustrates the detection of COVID-19 from ultrasound images (adapted from [55]); since ultrasound imagery is widely available and accessible throughout the world, it can be a valuable tool for monitoring disease progression. While LUS is commonly used as a first-line examination method in European countries like Italy [87], it is not mentioned in the ACR recommendations as a clinical practice for COVID-19 [54]. Moreover, some articles argued that LUS can assist early diagnosis and assessment of COVID-19 and even found better sensitivity of LUS in detecting certain features [88]. This has caused a vivid debate on the role of LUS in the COVID-19 pandemic [89]-[92]. Since LUS is a less established practice for examining COVID-19 patients, less clinical data is recorded and publicly available. This is presumably a primary reason why fewer computer vision projects focus on it, despite the recent advocacy in medicine (see above). TABLE 3 presents a more systematic overview of such methods. Preliminary investigations to clarify the diagnostic and prognostic role of LUS in COVID-19 are underway.

Computer vision on ultrasound imaging has become increasingly popular in recent years [93], but comparatively little work has been done on LUS. The first work to apply computer vision to ultrasound data of COVID-19 patients was POCOVID-Net, a deep convolutional neural network with a VGG backbone [94]. POCOVID-Net introduced an LUS dataset that initially consisted of 1,103 images (654 COVID-19, 277 bacterial pneumonia, and 172 healthy controls), sampled from 64 videos. As of July 2020, the dataset contains ∼150 videos and ∼50 images, constituting the largest publicly available LUS dataset: https://github.com/jannisborn/covid19_pocus_ultrasound. In addition, the trained models are deployed and can be used freely at https://pocovidscreen.org. On the initial dataset, POCOVID-Net reports a video accuracy of 92%, with a sensitivity of 96% and a specificity of 79% for COVID-19.
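Several of the works in this survey, including the Bayesian CXR classifier of [57] and the uncertainty-aware extension of POCOVID-Net discussed next, estimate epistemic uncertainty by keeping dropout stochastic at test time and aggregating repeated forward passes (Monte Carlo dropout). The sketch below shows the general idea in PyTorch; it is illustrative only, assumes the model contains dropout layers, and is not taken from either implementation.

```python
import torch
import torch.nn as nn


def enable_dropout(model: nn.Module) -> None:
    """Keep only dropout layers stochastic; other layers stay in eval mode."""
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Return mean class probabilities and predictive entropy as a simple
    epistemic-uncertainty estimate, via Monte Carlo dropout."""
    model.eval()
    enable_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )                                # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)   # averaged prediction
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=1)
    return mean_probs, entropy


# A referral policy could then flag high-entropy cases for expert review,
# e.g. needs_review = entropy > threshold, for some validated threshold.
```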
These POCOVID-Net results provide a preliminary proof-of-concept that COVID-19 can be automatically distinguished from other pulmonary conditions through LUS, and they open a line of work on the granularity of this differentiation. On the updated POCOVID-Net dataset, performance was improved to an accuracy of 94%, with a sensitivity of 98% and a specificity of 91%, in 5-fold cross-validation on LUS videos [95]. This work utilizes Bayesian deep learning to compute uncertainty estimates, which are deemed crucial for medical imaging [96]. The authors of [95] then demonstrated how epistemic uncertainty estimates (measured by Monte Carlo dropout) can let the model self-recognize low-confidence situations. Additionally, the authors computed and validated CAMs with the help of medical experts and found that the model learns, in a completely unsupervised fashion, to highlight lung consolidations (94% sensitivity) and, to a lesser extent, A-lines (62%). The CAMs were overall found helpful for diagnosis by the experts, although they leave room for improvement in B-line detection. Interestingly, the performance could be mildly improved when the classifier was coupled with the segmentation model of [97].

The work of [97] introduced a rich stack of CNN models for segmentation and severity assessment of COVID-19 patients. Based on ∼1,000 images from convex probes of 33 patients, an ensemble of three segmentation models (UNet, UNet++ and DeepLabv3+) is shown to reliably extract both A-lines and COVID-19 biomarkers (accuracy 96%, binary Dice score 0.75). In addition, they classify COVID-19 severity on four levels (0 to 3). They introduce a so-called regularised spatial transformer network that performs weak localization by extracting two transformed image sections that, ideally, should contain pathological artifacts. Their model achieves a precision of 70% and a recall of 60% on the four-class classification. However, although the authors claim to release a dataset of 277 LUS videos from 35 patients with a total of almost 60,000 frames, to date only 60 videos can be accessed (after an account request is manually approved). No annotations are available for those videos, rendering validation of the results effectively impossible.

As B-lines are perhaps the most critical LUS feature in COVID-19 patients, [98] presented a specialized approach for line-artifact quantification that utilizes a non-convex regularization technique dubbed Cauchy proximal splitting. This technique outperforms state-of-the-art B-line identification [99] and detects 87% of the B-lines in 9 COVID-19 patients, reducing the error margin by 40% compared to [99]. Since ultrasound equipment is small and portable options are available (POCUS devices), the impact of web-independent, on-device analysis is high, especially since LUS belongs to the standard repertoire even of remote medical facilities. Future projects could, for example, improve the mediocre results found in an ablation study with mobile-friendly CNNs [95] to facilitate on-device processing.

The WHO has provided guidelines on infection prevention and control (IPC) strategies for use when infection with a novel coronavirus is suspected [104].
Major IPC measures try to control transmission in health care settings and include early recognition, source control, and the application of standard precautions for all patients. They also include the implementation of additional empiric precautions, such as airborne precautions for suspected COVID-19 cases, the implementation of administrative controls, and the use of environmental and engineering controls. Computer vision applications are providing valuable support for the implementation of IPC strategies.

Protective techniques to control the spread of the virus in the early stage of the epidemic, such as the use of masks, were considered very early. Some countries, like China, implemented mask wearing as a control strategy at the start of the epidemic, and computer vision-based systems greatly facilitated such implementation. Wang et al. [100] proposed a masked face recognition approach using a multi-granularity masked face recognition model, achieving 95% accuracy on a masked face image dataset. The data was made public for research and provides three types of masked face datasets (detailed in Section 4).

A similar strategy is the use of infrared thermography. It can be used for the early detection of infected people, especially in crowds such as passengers at an airport. Various medical applications of infrared thermography, including fever screening, are summarised by Lahiri et al. [56]. Somboonkaew et al. [107] introduced a mobile platform for an automatic fever-screening system based on forehead temperature. Ghassemi et al. [108] discussed best practices for the standardized performance testing of infrared thermographs intended for fever screening. An infection screening system based on thermography and a CCD camera, offering good stability and swiftness for non-contact vital-sign measurement via feature matching and the MUSIC algorithm, is proposed by Negishi et al. [109]. Earlier, for SARS spread control, a computer vision system to help in fever screening was used by Chiu et al. [102] during the 2003 SARS outbreak. From 13 April to 12 May 2003, 72,327 patients and visitors passed through the only permitted entrance at TMU-WFH, where a thermography station was in operation. FIGURE 7 illustrates the use of thermal imagery for temperature screening.

Additional miscellaneous approaches for prevention and control are also worth noting. An example is pandemic drones using remote sensing and digital imagery, which have been recommended for identifying infected people; Al-Naji et al. [110] have used such a system for remote life-sign monitoring in disaster management in the past. A similar application is the use of vision-guided robot control for 3D object recognition and manipulation. Moreover, 3D modelling and 3D printers are helping to maintain the supply of healthcare equipment in this troubled time: Pearce [101] discusses RepRap-class 3D printers and open-source microcontrollers, applications that are relevant since mass distributed manufacturing of ventilators has the potential to overcome medical supply shortages. Lastly, germ scanning is an essential step in combating COVID-19. Hay and Parthasarathy [103] proposed a convolutional neural network for germ scanning, identifying bacteria in light-sheet microscopy image data with more than 90% accuracy.
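The fever-screening systems above combine face (or forehead) localisation with a temperature threshold on a radiometric thermal image. The sketch below is a deliberately simplified illustration, assuming the thermal camera already provides a calibrated per-pixel temperature array aligned with a visible-light frame and using a stock OpenCV face detector; real systems such as those in [107]-[109] are considerably more careful about calibration, distance, and ambient conditions, and the threshold here is illustrative, not a clinical recommendation.

```python
import cv2
import numpy as np

FEVER_THRESHOLD_C = 38.0  # illustrative cut-off, not a clinical recommendation

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)


def screen_frame(visible_gray: np.ndarray, temperature_c: np.ndarray) -> list:
    """Return a list of (bounding_box, max_temp, flagged) per detected face.

    Assumes `temperature_c` is a radiometric map aligned pixel-for-pixel
    with `visible_gray` (a strong simplification of real deployments).
    """
    alerts = []
    for (x, y, w, h) in face_detector.detectMultiScale(visible_gray, 1.1, 5):
        forehead = temperature_c[y : y + h // 3, x : x + w]  # rough forehead band
        max_temp = float(forehead.max())
        alerts.append(((x, y, w, h), max_temp, max_temp >= FEVER_THRESHOLD_C))
    return alerts
```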
Although various attempts at and claims of vaccine development have been announced in the media, there is currently no agreed and widely used treatment for the disease caused by the virus. However, many of the COVID-19 symptoms can be treated, depending on the clinical condition of the patient. An improvement in clinical management practices is possible by automating various practices with the help of computer vision. One example is the classification of patients based on the severity of the disease so that they can be advised appropriate medical care. FIGURE 8 provides a scenario of progression and severity monitoring using different saliency maps that provide additional diagnostic insight (adapted from [57]); these maps help to identify the areas of activation, which can support disease progression monitoring and severity detection. FIGURE 9 illustrates the corona score calculation on a 3D model of a patient's CT images for monitoring disease progression; it is one of the ways in which infected areas can be visualised and disease severity predicted for better disease management and patient care. TABLE 4 presents a more systematic overview of such methods.

An essential part of the fight against the virus is clinical management, which requires identifying critically ill patients so that they get immediate medical attention or ventilator support. A disease progression score for classifying different types of infected patients is recommended in [35]. Called the ''corona score'', it is calculated from measurements of the infected areas and the severity of disease in CT images. The corona score measures the progression of a patient over time and is computed by a volumetric summation of the network-activation maps. MacLaren et al. [111] support the view that radiological evidence can also be an essential tool for distinguishing critically ill patients. Wang et al. [112] used a depth camera and deep learning to classify abnormal respiratory patterns, which may contribute to accurate and unobtrusive large-scale screening of people infected with the virus. A Respiratory Simulation Model (RSM) was developed to bridge the gap between scarce real-world data and the large amount of training data required, and a GRU-based neural network was proposed for the classification task.

The CoV spike (S) glycoprotein is the main target for vaccines, therapeutic antibodies, and diagnostics that can guide future decisions. The virus attaches to host cells through its trimeric spike glycoprotein. Using biophysical assays, Wrapp et al. [113] showed that this protein binds to the common host cell receptor at least ten times more tightly than the corresponding spike protein of severe acute respiratory syndrome (SARS)-CoV. Protein X-ray crystallography can reveal the atomic structure of molecules and their functions, and can further help scientists design new drugs targeted at those functions. The MAchine Recognition of Crystallization Outcomes (MARCO) initiative [114] has introduced deep convolutional networks that achieve an accuracy of more than 94% on the visual recognition task of identifying protein crystals, uncovering the potential of computer vision and deep learning for drug discovery. Quantitative structure-activity relationship (QSAR) analysis offers perspectives on drug discovery and toxicology [115].
QSAR analysis employs structural, quantum-chemical, and physicochemical features calculated from molecular geometry as explanatory variables for predicting physiological activity. Deep feature representation learning can be used for QSAR analysis by incorporating 360° images of molecular conformations. Uesawa [116] proposed QSAR analysis using deep learning based on a novel molecular image input technique. Similar techniques can be used in drug discovery to pave the way for COVID-19 vaccine development.

• COVID-CT-Dataset [117] - The University of California San Diego has released a dataset with 349 CT images containing clinical findings of COVID-19, which it claims to be the largest of its kind. To demonstrate its potential, an AI model was trained on it, achieving 85% accuracy. The dataset is available at https://github.com/UCSD-AI4H/COVID-CT.
• An image-based model working with CTs for COVID-19 diagnosis can be found at https://github.com/JordanMicahBennett/SMART-CT-SCAN_BASED-COVID19_VIRUS_DETECTOR/.
• Masked Face Datasets [100] - These datasets were released for the masked face recognition task. The RMFRD dataset includes 5,000 pictures of 525 people wearing masks and 90,000 images of the same 525 subjects without masks; to the best of our knowledge, this is currently the world's largest real-world masked face dataset. SMFRD is a simulated masked face dataset covering 500,000 face images of 10,000 subjects. These datasets are available at https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset.
• Thermal Image Datasets - There is no public dataset of thermal images for fever screening. However, a fully annotated thermal face database and its application to thermal facial expression recognition were proposed by Kopaczka [120]. Information on further related data that can be obtained using such systems is available at http://www.flir.com.au/discover/public-safety/thermalimaging-for-detecting-elevated-body-temperature/.

Overall, it is encouraging that the computer vision research community has responded massively to the call to fight the COVID-19 epidemic. Data was collected and shared in a short time, and researchers proposed various approaches to address different challenges related to disease control. This became possible due to recent successes in the fields of deep learning and artificial intelligence. Web repositories like GitHub and arXiv have contributed significantly to the rapid sharing of information. However, the impact of this research work is limited by the lack of clinical testing, fair evaluation, and appropriate imaging datasets. We note that the COVID-19 research landscape is quite broad, covering much more than imaging and extending beyond the scope of computer vision research. Similarly, we did not include any machine learning or signal processing work that does not involve an imaging modality. Most of the research work is performed on the disease diagnosis problem, with varied performance metrics and without clinical trials, which makes it hard to compare performance. Similarly, various research datasets have been released for research purposes since the outbreak of the epidemic; however, these datasets offer only a limited scope and limited problem domains. For instance, disease progression studies often require multiple images of a single patient along a timeline. Similarly, to evaluate different imaging modalities, researchers require multimodal imaging data for the same patient, which is not yet available for research purposes.
Future work includes a fair performance comparison of different approaches and the collection of a vast universal dataset and benchmark. We hope that collective efforts of the computer vision community, like the ImageNet challenge, can fill this gap.

In this article, we presented an extensive survey of computer vision efforts and methods to combat the COVID-19 pandemic challenge and gave a brief review of the representative work to date. We divided the described methods into four categories based on their role in disease control: computed tomography (CT) scans, X-Ray imagery, ultrasound imaging, and prevention and control. We provided detailed summaries of preliminary representative work, including available resources to facilitate further research and development. We hope that this first survey on computer vision methods for COVID-19 control, with its extensive bibliography, gives valuable insight into this domain and encourages new research. However, this work can be considered only an early review, since many computer vision approaches are currently being proposed and tested to control the COVID-19 pandemic. We believe that such efforts will have a far-reaching positive impact both during and after the COVID-19 pandemic.

DOUGLAS PINTO SAMPAIO GOMES (Member, IEEE) received the B.E. degree in electrical engineering from the Federal University of Mato Grosso, Brazil, in 2013, the M.Sc. degree in power systems from the University of Sao Paulo, Brazil, in 2016, and the Ph.D. degree from Victoria University, Melbourne, Australia. His research interests include power systems, protection, power quality, and artificial intelligence systems.

SUBRATA CHAKRABORTY (Senior Member, IEEE) received the Ph.D. degree in decision support systems from Monash University, Australia. He has worked as an academician with the University of Southern Queensland, Charles Sturt University, the Queensland University of Technology, and Monash University. He is currently a Senior Lecturer with the Faculty of Engineering and Information Technology, School of Information, Systems and Modelling, University of Technology Sydney (UTS), Australia. He is also a core member of the Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), UTS. His current research interests include optimization models, data analytics, machine learning, and image processing with decision support applications in diverse domains, including business, agriculture, transport, health, and education. He is a Certified Professional Senior Member of ACS.
Coronavirus infec-tions|more than just the common cold Emerging coronaviruses: Genome structure, replication, and pathogenesis Computer vision for COVID-19 control: A survey The continuing 2019-nCoV epidemic threat of novel coronaviruses to global health-The latest 2019 novel coronavirus outbreak in Wuhan, China Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: A descriptive study WHO Director-General's Opening Remarks at the Media Briefing on COVID-19-11 Explore the Cambridge Dictionary Detection of SARS-CoV-2 in different types of clinical specimens Variation in false-negative rate of reverse transcriptase polymerase chain reaction-based SARS-CoV-2 tests by time since exposure SARS-CoV-2-positive sputum and feces after conversion of pharyngeal samples in patients with COVID-19 Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases Computed Tomography (CT)-Chest Coronavirus disease 2019 (COVID-19): Role of chest CT in diagnosis and management Spectrum of chest CT findings in a familial cluster of COVID-19 infection Chest computed tomography images of early coronavirus disease (COVID-19),'' Can Development and evaluation of an AI system for COVID-19 diagnosis COVID-19 infection presenting with CT halo sign Chest CT for typical 2019-nCoV pneumonia: Relationship to negative RT-PCR testing Time course of lung changes on chest CT during recovery from 2019 novel coronavirus (COVID-19) pneumonia,'' Radiology Sensitivity of chest CT for COVID-19: Comparison to RT-PCR Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: A prospective study UNet++: A nested u-net architecture for medical image segmentation,'' in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT Feature pyramid networks for object detection Deep learning-based detection for COVID-19 from chest CT using weak label Artificial intelligence-enabled rapid diagnosis of patients with COVID-19 A deep learning algorithm using CT images to screen for corona virus disease (COVID-19),'' MedRxiv Rethinking the inception architecture for computer vision Deep learning system to screen coronavirus disease 2019 pneumonia Medical imaging 2019: Imaging informatics for healthcare, research, and applications Automatic multi-organ segmentation on abdominal CT with dense V-networks Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images Yet another accelerated SGD: ResNet-50 training on ImageNet in 74.7 seconds Rapid AI development cycle for the coronavirus (COVID-19) pandemic: Initial results for automated detection & patient monitoring using deep learning CT image analysis U-Net: Convolutional networks for biomedical image segmentation Lung infection quantification of COVID-19 in CT images with deep learning Coronavirus (COVID-19) classification using CT images by machine learning methods Scanning-beam digital X-ray (SBDX) technology for interventional and diagnostic cardiac angiography AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system in four weeks Imaging profile of the COVID-19 infection: Radiologic findings and literature review COVID-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images Added value of ultra-low-dose computed 
tomography, dose equivalent to chest X-ray radiography, for diagnosing chest pathology Infection control for CT equipment and radiographers' personal protection during the coronavirus disease (COVID-19) outbreak in China Screening and early diagnosis of osteoporosis through X-ray and ultrasound based techniques A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification Region based adaptive contrast enhancement of medical X-ray images Adaptive enhancement of X-ray images Lung segmentation in digital radiographs Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration SCAN: Structure correcting adversarial network for organ segmentation in chest X-rays Attention U-Net based adversarial architectures for chest X-ray lung segmentation Chest X-ray findings in 636 ambulatory patients with COVID-19 presenting to an urgent care center: A normal chest X-ray is no guarantee ACR Recommendations for the use of Chest Radiography and Computed Tomography (CT) for Suspected COVID-19 Infection Is there a role for lung ultrasound during the COVID-19 pandemic? Medical applications of infrared thermography: A review Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection COVID-ResNet: A deep learning framework for screening of COVID19 from radiographs COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images Utilization of DenseNet201 for diagnosis of breast abnormality MobileNetV2: Inverted residuals and linear bottlenecks COVID-19 image data collection Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning Efficient Bayesian active learning and matrix modelling Grad-CAM: Visual explanations from deep networks via gradient-based localization COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images On-device COVID-19 patient triage and follow-up using chest X-rays Towards an effective and efficient deep learning model for COVID-19 patterns detection in X-ray images X-ray image based COVID-19 detection using pre-trained deep learning models COVID-19 detection through transfer learning using multimodal imaging data JCS: An explainable COVID-19 diagnosis system by joint classification and segmentation Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks Automated methods for detection and classification pneumonia based on X-ray images using deep learning Detection of coronavirus disease (COVID-19) based on deep features Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks COVID-19 outbreak: Less stethoscope, more ultrasound Use of ultrasound in the developing world Relevance of lung ultrasound in the diagnosis of acute respiratory failure * : The BLUE protocol Lung ultrasound compared to chest X-ray for diagnosis of pediatric pneumonia: A meta-analysis Diagnostic use of lung ultrasound compared to chest radiograph for suspected pneumonia in a resource-limited setting Point-of-care lung ultrasound in patients with COVID-19-a narrative review Frequency of abnormalities detected by point-of-Care lung ultrasound in symptomatic 
COVID-19 patients: Systematic review and meta-analysis Point-of-care lung ultrasound is more sensitive than chest radiograph for evaluation of COVID-19 Findings of lung ultrasonography of novel corona virus pneumonia during the 2019-2020 epidemic Our Italian experience using lung ultrasound for identification, grading and serial follow-up of severity of lung involvement for management of patients with COVID-19 Lung ultrasonography versus chest CT in COVID-19 pneumonia: A two-centered retrospective comparison study from China Is there a role for lung ultrasound during the COVID-19 pandemic? COVID-19 outbreak: Less stethoscope, more ultrasound Covid-19 diagnostic imaging: Caution need before the end of the game Lung ultrasound in COVID-19 pneumonia: Prospects and limitations Deep learning in medical ultrasound analysis: A review Automatic detection of COVID-19 from a new lung ultrasound imaging dataset (POCUS),'' 2020 Accelerating COVID-19 differential diagnosis with explainable ultrasound image analysis Leveraging uncertainty information from deep neural networks for disease detection Deep learning for classification and localization of COVID-19 markers in point-of-care lung ultrasound Detection of line artefacts in lung ultrasound images of COVID-19 patients via non-convex regularization Line detection as an inverse problem: Application to lung ultrasound imaging Masked face recognition dataset and application A review of open source ventilators for COVID-19 and future pandemics Infrared thermography to mass-screen suspected SARS patients with fever Performance of convolutional neural networks for identification of bacteria in 3d microscopy datasets Rational use of Personal Protective Equipment for Coronavirus Disease (COVID-19) and Considerations During Severe Shortages: Interim Guidance Open Source Face Mask Detection Data + Model + Code + Online Web Experience, All Open Source Mobile-platform for automatic fever screening system based on infrared forehead temperature Best practices for standardized performance testing of infrared thermographs intended for fever screening Infection screening system using thermography and ccd camera with good stability and swiftness for non-contact vital-signs measurement by feature matching and music algorithm Life signs detector using a drone in disaster zones Preparing for the most critically ill patients with COVID-19: The potential role of extracorporeal membrane oxygenation Abnormal respiratory patterns classifier may contribute to largescale screening of people infected with COVID-19 in an accurate and unobtrusive manner Cryo-EM structure of the 2019-nCoV spike in the prefusion conformation Classification of crystallization outcomes using deep convolutional neural networks Quantitative structureactivity relationship methods: Perspectives on drug discovery and toxicology Quantitative structure-activity relationship analysis using deep learning based on a novel molecular image input technique COVID-CTdataset: A CT scan dataset about COVID-19 Can AI help in screening viral and COVID-19 pneumonia? ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases A fully annotated thermal face database and its application for thermal facial expression recognition