cord-002901-u4ybz8ds 2018 We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. To perform the 2-fold cross-validation, we split the images into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the diagnostic accuracy, comparing it with the dermatologist's and non-expert's evaluations. CONCLUSION: Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful to detect acral melanoma from dermoscopy images of the hands and feet. For group B, after training on group A images, the CNN likewise showed a higher diagnostic accuracy (80.23%) than the non-expert (62.71%) and a similar accuracy to the expert (81.64%). cord-024491-f16d1zov 2020 To achieve simultaneous segmentation and classification, we present a Faster R-CNN based model that has been customized to handle ECG data. Since deep learning methods can produce feature maps from raw data, heartbeat segmentation can be conducted simultaneously with classification by a single neural network. To achieve simultaneous segmentation and classification, we present a Faster R-CNN [2] based model that has been customized to handle ECG sequences. In our method, we present a modified Faster R-CNN for arrhythmia detection which works in only two steps: preprocessing, and simultaneous heartbeat segmentation and classification. The architecture of our model is shown in Fig. 2, which takes a 1-D ECG sequence as its input and conducts heartbeat segmentation and classification simultaneously. Different from most deep learning methods, which compute feature maps for a single heartbeat, our backbone model takes a long ECG sequence as its input. cord-027732-8i8bwlh8 2020 Given its high temporal resolution, portability and reasonable cost, the present work focuses on hypo-vigilance detection by analyzing the EEG signal of various brain functionalities using fourteen electrodes placed on the participant's scalp. On the other hand, deep learning networks offer great potential for biomedical signal analysis through the simplification of raw input signals (i.e., through various steps including feature extraction, denoising and feature selection) and the improvement of classification results. In this paper, we focus on the study of the EEG signal recorded by fourteen electrodes for hypo-vigilance detection by analyzing the various functionalities of the brain from the electrodes placed on the participant's scalp. In this paper, we propose a CNN hypo-vigilance detection method using EEG data in order to classify drowsiness and awakeness states. In the proposed simple CNN architecture for EEG signal classification, we use the Keras deep learning library. cord-028792-6a4jfz94 2020 Traditionally, to deal with such recognition problems, researchers are obliged to precede their human activity recognition algorithms with data preprocessing in order to extract a set of features using different types of descriptors such as HOG3D [1], extended SURF [2] and Space Time Interest Points (STIPs) [3] before feeding them to a specific classification algorithm such as HMM, SVM or Random Forest [4] [5] [6]. In this study, we proposed an advanced human activity recognition method from video sequences using a CNN, where the large-scale ImageNet dataset pretrains the network.
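The simple Keras CNN for EEG-based hypo-vigilance detection mentioned above can be illustrated with a minimal sketch; the window length, filter counts and layer sizes below are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch (assumed hyperparameters): a small Keras CNN that classifies
# fixed-length, 14-channel EEG windows into awake vs. drowsy states.
import numpy as np
from tensorflow.keras import layers, models

N_CHANNELS = 14      # electrodes on the scalp
WINDOW_LEN = 256     # samples per EEG window (assumed)

model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # drowsy vs. awake
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy data just to show the expected tensor shapes.
x = np.random.randn(8, WINDOW_LEN, N_CHANNELS).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)
```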
Finally, all the resulting features have been merged to be fed as input to a simulated annealing multiple instance learning support vector machine (SMILE-SVM) classifier for human activity recognition. We proposed to use a pre-trained CNN approach based on a ResNet model in order to extract spatial and temporal features from consecutive video frames. cord-032684-muh5rwla 2020 Specifically, the authors in [3] used both information-retrieval methodologies and classification methodologies (CNN with crisis word embeddings) to extract the Need and Availability of Resource tweets during the disaster. The main drawback of CNN with crisis embeddings is that it does not work well if the number of training tweets is small and, in the case of information retrieval methodologies, keywords must be given manually to identify the need and availability of resource tweets during the disaster. Initially, the experiment is performed on the SVM classifier based on the proposed domain-specific features for the identification of NAR tweets and compared to the BoW model shown in Table 5. This paper proposes a method named CKS (CNN and KNN are used as base-level classifiers, and SVM is used as a meta-level classifier) for identifying tweets related to the Need and Availability of Resources during the disaster. cord-034614-r429idtl 2020 title: A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks In this study, which aims at early diagnosis of COVID-19 disease using X-ray images, the deep-learning approach, a state-of-the-art artificial intelligence method, was used, and automatic classification of images was performed using convolutional neural networks (CNN). Within the scope of the study, the results were obtained using chest X-ray images directly in the training-test procedures and the sub-band images obtained by applying dual tree complex wavelet transform (DT-CWT) to the above-mentioned images. In the study, experiments were carried out for the use of images directly, using local binary pattern (LBP) as a pre-process and dual tree complex wavelet transform (DT-CWT) as a secondary operation, and the results of the automatic classification were calculated separately. cord-102774-mtbo1tnq 2020 In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system to recognize gestures is proposed. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement-cycles and encodes them into a feature cube. Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as input of a shallow CNN for gesture recognition to reduce the computational complexity. Although [16] projected the range-Doppler measurement-cycles into range-time and Doppler-time to reduce the input dimension of the LSTM layer and achieved good classification accuracy in real time, the proposed algorithms were implemented on a personal computer with powerful computational capability.
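The CKS idea of stacking base-level classifiers under an SVM meta-level classifier can be sketched with scikit-learn; here a logistic-regression model stands in for the CNN base learner so the example stays self-contained, and the features and labels are synthetic.

```python
# Illustrative sketch of "base learners + SVM meta-classifier" stacking.
# The original CKS method uses a CNN and a KNN as base-level classifiers;
# logistic regression is a stand-in for the CNN here.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("cnn_stand_in", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=SVC(kernel="rbf"),  # meta-level classifier
    cv=5,                               # out-of-fold predictions feed the meta-learner
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```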
cord-103297-4stnx8dw 2020 In this work, we present our novel method DeepRC that integrates transformer-like attention, or equivalently modern Hopfield networks, into deep learning architectures for massive MIL such as immune repertoire classification. DeepRC sets out to avoid the above-mentioned constraints of current methods by (a) applying transformer-like attention-pooling instead of max-pooling and learning a classifier on the repertoire rather than on the sequence-representation, (b) pooling learned representations rather than predictions, and (c) using less rigid feature extractors, such as 1D convolutions or LSTMs. In this work, we contribute the following: We demonstrate that continuous generalizations of binary modern Hopfield networks (Krotov & Hopfield, 2016; Demircigil et al., 2017) have an update rule that is known as the attention mechanism of the transformer. We evaluate the predictive performance of DeepRC and other machine learning approaches for the classification of immune repertoires in a large comparative study (Section "Experimental Results"); and we show the exponential storage capacity of continuous-state modern Hopfield networks with transformer attention as the update rule. cord-121200-2qys8j4u 2020 While many previous works have largely studied the problem on a small scale by assuming uni-modality of data, which may not give us faithful results, we propose a novel scalable hybrid model that combines Bidirectional Gated Recurrent Units (BiGRUs) and Convolutional Neural Networks to detect depressed users on social media such as Twitter, based on multi-modal features. To be specific, this work aims to develop a novel deep learning-based solution for improving depression detection by utilizing multi-modal features from the diverse behaviour of the depressed user in social media. To this end, we propose a hybrid model comprising a Bidirectional Gated Recurrent Unit (BiGRU) and a Convolutional Neural Network (CNN) to boost the classification of depressed users using multi-modal features and word embedding features. The most closely related recent work to ours is [23], where the authors propose a CNN-based deep learning model to classify Twitter users based on depression using multi-modal features. cord-127759-wpqdtdjs 2020 In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for improved multi-class classification of COVID-19 from CXR images. In this work we show how local-phase CXR feature-based image enhancement improves the accuracy of CNN architectures for COVID-19 diagnosis. Our proposed method is designed for processing CXR images and consists of two main stages as illustrated in Figure 1: (1) we enhance the CXR images (CXR(x, y)) using a local phase-based image processing method in order to obtain a multi-feature CXR image (MF(x, y)), and (2) we classify CXR(x, y) by designing a deep learning approach where the multi-feature CXR images (MF(x, y)), together with the original CXR data (CXR(x, y)), are used for improving the classification performance. Our proposed multi-feature CNN architectures were trained on a large dataset in terms of the number of COVID-19 CXR scans and have achieved improved classification accuracy across all classes.
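The attention-pooling step that DeepRC uses in place of max-pooling over a bag of sequence embeddings can be sketched in a few lines; the dimensions and the single query vector below are illustrative assumptions rather than the paper's exact parameterization.

```python
# Sketch of attention-pooling over a "bag" of instance embeddings, the
# repertoire-level pooling idea used instead of max-pooling in massive MIL.
import numpy as np

def attention_pool(instances, query, scale=True):
    """instances: (n_instances, d) embeddings; query: (d,) learned query vector."""
    scores = instances @ query                      # (n_instances,)
    if scale:
        scores = scores / np.sqrt(instances.shape[1])
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()               # softmax over the bag
    return weights @ instances, weights             # weighted average + attention weights

rng = np.random.default_rng(0)
bag = rng.normal(size=(1000, 32))                   # e.g. 1000 receptor-sequence embeddings
query = rng.normal(size=32)
repertoire_repr, attn = attention_pool(bag, query)
print(repertoire_repr.shape, attn.sum())            # (32,) 1.0
```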
cord-131094-1zz8rd3h 2020 Despite a higher computational cost, results indicated an overall higher classification accuracy, precision, recall and F1-score brought about by either quantum AF on five of the seven benchmark datasets, thus demonstrating their potential to be the new benchmark or gold-standard AFs in CNNs and to aid image classification tasks involved in critical applications, such as medical diagnoses of COVID-19 and PD. Despite a higher computational cost (four-fold with respect to the other AFs, except for the CReLU's increase being almost three-fold), the results achieved by either or both the proposed QReLU and m-ReLU AFs, assessed on classification accuracy, precision, recall and F1-score, indicate an overall higher generalisation achieved on five of the seven benchmark datasets (Table 2 on the MNIST data, Tables 3 and 5 on PD-related spiral drawings, Tables 7 and 8 on COVID-19 lung US images). cord-133273-kvyzuayp 2020 cord-135296-qv7pacau 2020 We propose a light CNN design based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with respect to other CT images (community-acquired pneumonia and/or healthy images). On the tested datasets, the proposed modified SqueezeNet CNN achieved 83.00% accuracy, 85.00% sensitivity, 81.00% specificity, 81.73% precision and an F1-score of 0.8333 in a very efficient way (7.81 seconds on a medium-end laptop without GPU acceleration). In the present work, we aim at obtaining acceptable performance for an automatic method in recognizing COVID-19 CT images of lungs while, at the same time, dealing with reduced datasets for training and validation and reducing the computational overhead imposed by more complex automatic systems. In this work we developed, trained and tested a light CNN (based on the SqueezeNet) to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images. cord-168974-w80gndka 2020 In this study, a novel method based on fusing and ranking deep features was proposed to detect COVID-19 in the early phase. Within the scope of the proposed method, 3000 patch images have been labelled as COVID-19 and No-finding for use in the training and testing phases. Compared to other pre-trained Convolutional Neural Network (CNN) models used in transfer learning, the proposed method shows high performance on Subset-2 with 98.27% accuracy, 98.93% sensitivity, 97.60% specificity, 97.63% precision, 98.28% F1-score and 96.54% Matthews Correlation Coefficient (MCC). When the studies in the literature are examined, Shan et al. proposed a neural network model called VB-Net in order to segment the COVID-19 regions in CT images. Another study [8] was able to successfully diagnose COVID-19 using deep learning models that could obtain graphical features from CT images. Deep features were obtained with pre-trained Convolutional Neural Network (CNN) models. In the study, deep features were obtained by using pre-trained CNN networks. cord-175846-aguwenwo 2020 We use manually annotated political manifestos as training data to train a local topic Convolutional Neural Network (CNN) classifier; we then apply it to the COVID-19 Press Briefings Corpus to automatically classify sentences in the test corpus. We report on a series of experiments with a CNN trained on top of pre-trained embeddings for sentence-level classification tasks.
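The step of obtaining deep features from pre-trained CNN models, shared by several of the studies above, can be sketched as follows; the choice of ResNet50 and the 224x224 input size are assumptions for illustration, not the exact backbones used in the papers.

```python
# Sketch of "deep features from a pre-trained CNN": a frozen ImageNet-trained
# backbone acts as a fixed feature extractor, and the resulting vectors can be
# ranked, fused, or fed to any downstream classifier.
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_deep_features(batch_of_images):
    """batch_of_images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(batch_of_images), verbose=0)

# Dummy batch standing in for CT/X-ray patches resized to the backbone's input size.
patches = np.random.uniform(0, 255, size=(4, 224, 224, 3)).astype("float32")
features = extract_deep_features(patches)
print(features.shape)   # (4, 2048) feature vectors for a separate classifier
```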
To aid fellow scholars with the systematic study of such a large and dynamic set of unstructured data, we set out to employ a text categorization classifier trained on similar domains (like existing manually annotated sentences from political manifestos) and use it to classify press briefings about the pandemic in a more effective and scalable way. cord-190424-466a35jf 2020 Here we adapt the theory of survival of the fittest in the field of computer vision and machine perception to introduce a new framework of multi-class instance segmentation deep learning, Darwin's Neural Network (DNN), to carry out morphometric analysis and classification of COVID-19 and MERS-CoV collected in vivo and of multiple mammalian cell types in vitro. U-Net with an Inception-ResNet-v2 backbone yielded the highest global accuracy of 0.8346, as seen in Figure 4(E); therefore, Inception-ResNet-v2 was integrated in place of CNN II in the DNN for cells. For overall instance segmentation results, DNN produced both superior global accuracy and Jaccard Similarity Coefficient for cells and viruses. As observed in Figure 6 (C1-C2), the DNN analysis showed statistical significance in area and circularity of the COVID-19 particles in comparison to the MERS virus particles, which aligned with findings in the ground truth data of the viruses. cord-202184-hh7hugqi 2020 In this work, we propose three strategies to improve the capability of EfficientNet, including developing a cropping method called Random Center Cropping (RCC) to retain significant features in the center area of images, reducing the downsampling scale of EfficientNet to accommodate the small-resolution images of the RPCam datasets, and integrating Attention and Feature Fusion mechanisms with EfficientNet to obtain features containing rich semantic information. This work has three main contributions: (1) To the best of our knowledge, we are the first study to explore the power of EfficientNet on MBC classification, and elaborate experiments are conducted to compare the performance of EfficientNet with other state-of-the-art CNN models, which might offer inspiration for researchers who are interested in image-based diagnosis using DL; (2) We propose a novel data augmentation method, RCC, to facilitate the data enrichment of small-resolution datasets; (3) All four of our technological improvements boost the performance of the original EfficientNet. The best accuracy and AUC reach 97.96% and 99.68%, respectively, confirming the applicability of utilizing CNN-based methods for BC diagnosis. cord-249065-6yt3uqyy 2020 To the best of our knowledge, this research is the first comprehensive study of the application of machine learning (ML) algorithms (15 deep CNN visual feature extractors and 6 ML classifiers) for automatic diagnosis of COVID-19 from X-ray and CT images. With extensive experiments, we show that the combination of a deep CNN with a Bagging trees classifier achieves very good classification performance when applied to COVID-19 data, despite the limited number of image samples. Motivated by the success of deep learning models in computer vision, the focus of this research is to provide an extensive, comprehensive study on the classification of COVID-19 pneumonia in chest X-ray and CT imaging using features extracted by state-of-the-art deep CNN architectures and trained with machine learning algorithms.
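One plausible reading of the Random Center Cropping (RCC) augmentation is a crop that always retains the central area of the image while jittering the crop window slightly; the crop size and jitter range below are assumptions for illustration, not the paper's exact definition.

```python
# Illustrative sketch of a "Random Center Cropping"-style augmentation: the
# crop always contains the image centre, but its offset is randomly jittered
# so central features are retained while the view still varies across epochs.
import numpy as np

def random_center_crop(img, crop_size, max_jitter=8, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape[:2]
    ch, cw = crop_size
    top = (h - ch) // 2 + rng.integers(-max_jitter, max_jitter + 1)
    left = (w - cw) // 2 + rng.integers(-max_jitter, max_jitter + 1)
    top = int(np.clip(top, 0, h - ch))
    left = int(np.clip(left, 0, w - cw))
    return img[top:top + ch, left:left + cw]

image = np.zeros((96, 96, 3), dtype=np.uint8)      # stand-in for a small histology patch
crop = random_center_crop(image, crop_size=(64, 64))
print(crop.shape)                                   # (64, 64, 3)
```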
The experimental results on the available chest X-ray and CT datasets demonstrate that the features extracted by the DenseNet121 architecture and trained with a Bagging trees classifier generate very accurate predictions, reaching 99.00% classification accuracy. cord-255884-0qqg10y4 2020 Therefore, the main goal of this study is to bridge this gap by carrying out an in-depth survey with bibliometric analysis on the adoption of machine-learning-based technologies to fight the COVID-19 pandemic from a different perspective, including an extensive systematic literature review and a bibliometric analysis. Moreover, the machine-learning-based algorithm predominantly utilized by researchers in developing diagnostic tools is the CNN, mainly applied to X-ray and CT scan images. We believe that the presented survey with bibliometric analysis can help researchers determine areas that need further development and identify potential collaborators at author, country, and institutional levels to advance research in the focused area of machine learning application for disease control. A 2020 study proposed a joint model comprising a CNN, support vector machine (SVM), random forest (RF), and multilayer perceptron, integrated with chest CT scan results and non-image clinical information, to predict COVID-19 infection in a patient. cord-256756-8w5rtucg 2020 The proposed algorithm employs a convolutional neural network (CNN) to denoise MR images corrupted with Rician noise. Dictionary learning for MRI (DLMRI) provided an effective solution to recover MR images from sparse k-space data [2], but had the drawback of high computational time. The proposed denoising algorithm reconstructs MR images with high visual quality; further, it can be directly employed without optimization and prediction of the Rician noise level. The proposed CNN-based algorithm is capable of denoising Rician-noise-corrupted sparse MR images and also reduces the computation time substantially. This section presents the proposed CNN-based formulation for denoising and reconstruction of MR images from sparse k-space data. The proposed CNN-based denoising algorithm has been compared with various state-of-the-art techniques, namely (1) dictionary learning magnetic resonance imaging (DLMRI) [2] and (2) non-local means (NLM) and its variants, namely unbiased NLM (UNLM), Rician NLM (RNLM), enhanced NLM (ENLM) and the enhanced NLM filter with preprocessing (PENLM) [5]. cord-258170-kyztc1jp 2020 In particular, we make the following contributions: (a) A deep learning-based framework is presented for monitoring social distancing in the context of sustainable smart cities in an effort to curb the spread of COVID-19 or similar infectious diseases; (b) The proposed system leverages state-of-the-art, deep learning-based real-time object detection models for the detection of people in videos, captured with a monocular camera, to implement social distancing monitoring use cases; (c) A perspective transformation is presented, where the captured video is transformed from a perspective view to a bird's eye (top-down) view to identify the region of interest (ROI) in which social distancing will be monitored; (d) A detailed performance evaluation is provided to show the effectiveness of the proposed system on a video surveillance dataset.
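The perspective-to-bird's-eye transformation described in contribution (c) can be sketched with OpenCV: four ground-plane points in the camera view are mapped to a rectangle in the top-down view, and detected foot points are re-projected with the same homography so pairwise distances can be measured in the flat view. The point coordinates below are placeholders.

```python
# Minimal sketch of the perspective-to-top-down mapping for distance checks.
import numpy as np
import cv2

# Four ground-plane points in the camera image (ROI corners), clockwise.
src = np.float32([[300, 200], [980, 210], [1150, 700], [120, 690]])
# Where those points should land in the bird's-eye view.
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

H = cv2.getPerspectiveTransform(src, dst)

# Example detected foot points (image coordinates) from a person detector.
feet = np.float32([[[500, 520]], [[640, 540]]])      # shape (n, 1, 2)
feet_topdown = cv2.perspectiveTransform(feet, H).reshape(-1, 2)

dist = np.linalg.norm(feet_topdown[0] - feet_topdown[1])
print("top-down distance (pixels):", round(float(dist), 1))
```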
cord-266055-ki4gkoc8 2020 title: Deep-learning convolutional neural networks with transfer learning accurately classify COVID-19 lung infection on portable chest radiographs This study employed deep-learning convolutional neural networks to classify COVID-19 lung infections on pCXR against normal and related lung infections to potentially enable more timely and accurate diagnosis. This retrospective study employed a deep-learning convolutional neural network (CNN) with transfer learning to classify COVID-19 pneumonia (N=455) on pCXR against normal (N=532), bacterial pneumonia (N=492), and non-COVID viral pneumonia (N=552). Deep-learning convolutional neural networks with transfer learning accurately classify COVID-19 on portable chest X-ray against normal, bacterial pneumonia or non-COVID viral pneumonia. The goal of this pilot study is to employ deep-learning convolutional neural networks to classify normal, bacterial infection, and non-COVID-19 viral infection (such as influenza). In conclusion, deep-learning convolutional neural networks with transfer learning accurately classify COVID-19 pCXR from pCXR of normal, bacterial pneumonia, and non-COVID viral pneumonia patients in a multiclass model. cord-269270-i2odcsx7 2020 In this paper, we propose an improved hybrid classification approach for COVID-19 images by combining the strengths of CNNs (using a powerful architecture called Inception) to extract features and a swarm-based feature selection algorithm (Marine Predators Algorithm) to select the most relevant features. The proposed COVID-19 X-ray classification approach starts by applying a CNN (specifically, a powerful architecture called Inception, pre-trained on the ImageNet dataset) to extract the discriminant features from raw images (with no pre-processing or segmentation) from the dataset that contains positive and negative COVID-19 images. 1. Propose an efficient hybrid classification approach for COVID-19 using a combination of a CNN and an improved swarm-based feature selection algorithm. 4. Evaluate the proposed approach by performing extensive comparisons to several state-of-the-art feature selection algorithms, the most recent CNN architectures, the most recent relevant works and existing classification methods for COVID-19 images. cord-275258-azpg5yrh 2019 title: Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling This paper presents the first use of force-directed graphs for the visualization of sequence space in two dimensions, and applies them to the choice of suitable RNA-dependent RNA polymerase (RdRP) target-template pairs within human-infective RNA virus genera. Measures of centrality in protein sequence space for each genus were also derived and used to identify centroid nearest-neighbour sequences (CNNs) potentially useful for production of homology models most representative of their genera. We then present the first use of force-directed graphs to produce an intuitive visualization of sequence space, and select target RdRPs without solved structures for homology modelling. The solved structure has 10 other sequences in its proximity in the three-dimensional space (Table 5: homology modelling at the intra-order, inter-family level).
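A toy sketch of the force-directed visualization of sequence space follows: sequences become nodes, pairwise similarity becomes edge weight, and a spring (force-directed) layout places similar sequences close together. The identity-fraction similarity, the cutoff and the toy sequences used here are stand-ins for a proper alignment-based measure, not the paper's pipeline.

```python
# Illustrative force-directed layout of a tiny "sequence space" with networkx.
import networkx as nx

seqs = {
    "A": "MKTLLILAVV",
    "B": "MKTLLLLAVV",
    "C": "MQTPLILGVV",
    "D": "AAAPPPQQQR",
}

def identity(a, b):
    """Fraction of matching positions (toy similarity measure)."""
    return sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

G = nx.Graph()
G.add_nodes_from(seqs)
for u in seqs:
    for v in seqs:
        if u < v and identity(seqs[u], seqs[v]) > 0.3:   # similarity cutoff (assumed)
            G.add_edge(u, v, weight=identity(seqs[u], seqs[v]))

pos = nx.spring_layout(G, weight="weight", seed=0)       # force-directed 2-D embedding
for node, (x, y) in pos.items():
    print(f"{node}: ({x:+.2f}, {y:+.2f})")
```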
cord-286887-s8lvimt3 2020 The proposed model is based on the convolutional neural network (CNN) architecture and can automatically reveal discriminative features on chest X-ray images through convolution with rich filter families and its abstraction and weight-sharing characteristics. In study [5], the authors used Chest Computed Tomography (CT) images and a Deep Transfer Learning (DTL) method to detect COVID-19 and obtained a high diagnostic accuracy. The authors of [6] proposed a novel hybrid method combining the Fuzzy Color technique and deep learning models (MobileNetV2, SqueezeNet) with a Social Mimic optimization method to classify COVID-19 cases and achieved a high success rate in their work. The deep features extracted from deep layers of CNNs have been applied as input to machine learning models to further improve COVID-19 infection detection. Only the number of samples in the COVID-19 class is increased by using the offline data augmentation approach, and then the proposed CNN model is trained and tested. cord-296359-pt86juvr 2020 In this work we propose a light Convolutional Neural Network (CNN) design, based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with respect to other community-acquired pneumonia and/or healthy CT images. Also, the average classification time on a high-end workstation, 1.25 seconds, is very competitive with respect to that of more complex CNN designs, 13.41 seconds, which require pre-processing. We started from the model of the SqueezeNet CNN to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images. In this arrangement, the numbers of images from the Italian dataset used for training, validation and Test-1 are 60, 20 and 20, respectively. For each dataset arrangement, we organized 4 experiments in which we tested different CNN models, transfer learning and the effectiveness of data augmentation. For each attempt, the CNN model has been trained for 20 epochs and evaluated by the accuracy results calculated on the validation dataset. cord-308219-97gor71p 2020 By combining 20% of the samples collected from test subjects into the training data, the calibrated generic models' accuracy was improved and outperformed the generic performance across both the spatial and frequency domain images. Average classification accuracies of 99.6%, 99.9%, and 88.1%, and 99.2%, 97.4%, and 87.6% were obtained for the training set, validation set, and test set, respectively, using the calibrated generic classification-based method for the series of inter-beat interval (IBI) spatial and frequency domain images. The main contribution of this study is the use of frequency domain images that are generated from the spatial domain images of the IBI extracted from the PPG signal to classify the stress state of the individual by building person-specific models and calibrated generic models. In this study, a new stress classification approach is proposed to classify the individual stress state into stressed or non-stressed by converting spatial images of the inter-beat intervals of a PPG signal to frequency domain images, and these images are used to train several CNN models. cord-317643-pk8cabxj 2020 With this motivation, this paper considers eight different fine-tuned pre-trained models to observe how these models classify breast cancers when applied to ultrasound images. Authors in [18] proposed a convolutional neural network leveraging the pre-trained Inception-v3 model to classify breast cancer using breast ultrasound images.
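The conversion of spatial IBI images to frequency domain images can be sketched with a 2-D FFT; the image size and the log-magnitude scaling below are illustrative choices rather than the study's exact pipeline.

```python
# Small sketch of a spatial-to-frequency-domain image conversion: 2-D FFT of
# the IBI "spatial" image, shifted so zero frequency is centred, with a
# log-magnitude scaling so it can be viewed or fed to a CNN.
import numpy as np

def to_frequency_image(spatial_img):
    spectrum = np.fft.fftshift(np.fft.fft2(spatial_img))
    magnitude = np.log1p(np.abs(spectrum))            # compress dynamic range
    return magnitude / magnitude.max()                # normalise to [0, 1]

# Stand-in for a spatial image built from inter-beat intervals of a PPG signal.
rng = np.random.default_rng(0)
ibi_image = rng.normal(size=(64, 64))
freq_image = to_frequency_image(ibi_image)
print(freq_image.shape, float(freq_image.min()), float(freq_image.max()))
```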
Authors in [24] compared three CNN-based transfer learning models, ResNet50, Xception, and InceptionV3, and proposed a base model that consists of three convolutional layers to classify breast cancers from the breast ultrasound images dataset. Authors in [27] proposed a novel deep neural network consisting of a clustering method and a CNN model for breast cancer classification using histopathological images. Then, eight different pre-trained models are fine-tuned and applied to the combined dataset to observe the performance results of breast cancer classification. This study implemented eight pre-trained CNN models with fine-tuning, leveraging transfer learning, to observe the classification performance of breast cancer from ultrasound images. cord-319868-rtt9i7wu 2020 In recent months, much research has come out addressing the problem of COVID-19 detection in chest X-rays using deep learning approaches in general, and convolutional neural networks (CNNs) in particular [3] [4] [5] [6] [7] [8] [9] [10]. [3] built a deep convolutional neural network (CNN) based on ResNet50, InceptionV3 and Inception-ResNetV2 models for the classification of COVID-19 chest X-ray images into normal and COVID-19 classes. In [9], the authors use CT images to predict COVID-19 cases, deploying an Inception transfer-learning model to establish an accuracy of 89.5% with a specificity of 88.0% and a sensitivity of 87.0%. Wang and Wong [2] investigated a dataset that they called COVIDx and a neural network architecture called COVID-Net designed for the detection of COVID-19 cases from open-source chest X-ray radiography images. The deep learning architectures that we used for the purpose of COVID-19 detection from X-ray images are AlexNet, VGG16, VGG19, ResNet18, ResNet50, ResNet101, GoogLeNet, InceptionV3, SqueezeNet, Inception-ResNet-v2, Xception and DenseNet201. cord-325235-uupiv7wh 2020 In this research work, the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the automatic detection of COVID-19 disease from chest X-ray images. A collection of 336 X-ray scans in total from patients with COVID-19 disease, bacterial pneumonia and normal incidents is processed and utilized to train and test the CNNs. Due to the limited available data related to COVID-19, the transfer learning strategy is employed. The proposed CNN is based on pre-trained transfer models (ResNet50, InceptionV3 and Inception-ResNetV2), in order to obtain high prediction accuracy from a small sample of X-ray images. Abbas et al. [22] presented a novel CNN architecture based on transfer learning and class decomposition in order to improve the performance of pre-trained models on the classification of X-ray images. In this research work, the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the detection of COVID-19 disease from chest X-ray images. cord-330239-l8fp8cvz 2020 The proposed model is then applied to the COVID-19 X-ray dataset used in this study, which is the National Institutes of Health (NIH) Chest X-Ray dataset obtained from Kaggle, for the purpose of promoting early detection and screening of coronavirus disease.
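The fine-tuning recipe shared by these transfer-learning studies, a frozen ImageNet-pretrained backbone plus a small new classification head, can be sketched as follows; the VGG16 backbone, input size and head size are assumptions for illustration rather than any single paper's exact configuration.

```python
# Generic transfer-learning sketch: freeze a pretrained backbone and train a
# small classification head for the target classes (e.g. COVID-19 / pneumonia /
# normal). Top backbone blocks can later be unfrozen for further fine-tuning.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 3
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # freeze pretrained convolutional layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```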
Several studies [4, 5, 6, 78, 26, 30] and reviews that have adapted CNNs to the task of detection and classification of COVID-19 have proven that the deep learning model is one of the most popular and effective approaches in the diagnosis of COVID-19 from digitized images. In this paper, we propose the application of a deep learning model in the category of Convolutional Neural Network (CNN) techniques to automate the extraction of important features and the subsequent classification or detection of COVID-19 from digital images, and this may eventually be supportive in overcoming the issue of a shortage of trained physicians in remote communities [24]. cord-337740-8ujk830g 2020 There have been many reviews of the cyclic oligosaccharide cyclodextrin (CD) and CD-based materials used for drug delivery, but the capacity of CDs to complex different agents and their own intrinsic properties suggest they might also be considered for use as active drugs, not only as carriers. The review is divided into lipid-related diseases, aggregation diseases, antiviral and antiparasitic activities, anti-anesthetic agents, function in diet, removal of organic toxins, CDs and collagen, cell differentiation, and finally, their use in contact lenses in which no drug other than CDs is involved. In addition to CDs, another dietary indigestible cyclic oligosaccharide, formed by four D-glucopyranosyl residues linked by alternating α(1→3) and α(1→6) glucosidic linkages, was recently found to have intrinsic bioactivity: cyclic nigerosyl-1,6-nigerose, or cyclotetraglucose (CNN, Figure 1 [21]). The present review will update the most relevant applications mentioned in the review by Braga et al., 2019, including the ability of CDs to combat aggregation diseases, their dietary functions, toxin removal, cell differentiation, and their application in contact lenses. cord-354819-gkbfbh00 2020 title: A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. Therefore, this paper aims to propose a deep learning-based system that combines the CNN and LSTM networks to automatically detect COVID-19 from X-ray images. By analyzing the results, it is demonstrated that a combination of CNN and LSTM has significant effects on the detection of COVID-19 based on the automatic extraction of features from X-ray images. We introduced a deep CNN-LSTM network for the detection of novel COVID-19 from X-ray images. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks; Automated detection of COVID-19 cases using deep neural networks with X-ray images
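One plausible way to realize the combined CNN-LSTM classifier described above is to reshape the convolutional feature maps of a chest X-ray into a sequence of row-wise vectors and summarise that sequence with an LSTM before the final softmax; the layer sizes and this particular reshaping are assumptions, not necessarily the paper's exact configuration.

```python
# Hedged sketch of a CNN-LSTM hybrid: conv blocks extract spatial features,
# the feature map becomes a 32-step sequence of row vectors, and an LSTM
# summarises it for the classification head.
from tensorflow.keras import layers, models

NUM_CLASSES = 2   # e.g. COVID-19 vs. normal (assumed binary setup)

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),                       # -> feature map of shape (32, 32, 64)
    layers.Reshape((32, 32 * 64)),                # rows become a 32-step sequence
    layers.LSTM(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```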