Summary of your 'study carrel'
==============================

This is a summary of your Distant Reader 'study carrel'. The Distant Reader harvested & cached your content into a collection/corpus. It then applied a set of natural language processing and text mining techniques to the collection. The results of this process were reduced to a database file -- a 'study carrel'. The study carrel can then be queried, thus bringing to light specific characteristics of your collection. These characteristics can help you summarize the collection as well as enumerate things you might want to investigate more closely. This report is a terse narrative report; when processing is complete you will be linked to a more complete narrative report.

Eric Lease Morgan
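Because the study carrel is a database file, it can be queried directly with a short script. The sketch below is illustrative only: it assumes an SQLite file named reader.db containing a part-of-speech table named pos with lemma and pos columns; those names are assumptions and may differ in your carrel. It prints a list similar to the "Top 50 lemmatized nouns" section below.

  # Illustrative sketch: list the most frequent lemmatized nouns in a study carrel.
  # The file name (reader.db) and the table/column names (pos, lemma, pos) are
  # assumptions; adjust them to match the schema of your own carrel.
  import sqlite3

  connection = sqlite3.connect("reader.db")
  query = """
      SELECT lemma, COUNT(*) AS frequency
      FROM pos
      WHERE pos IN ('NN', 'NNS')   -- common nouns; proper nouns are reported separately
      GROUP BY lemma
      ORDER BY frequency DESC
      LIMIT 50
  """
  for lemma, frequency in connection.execute(query):
      print(frequency, lemma, sep="\t")
  connection.close()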
Number of items in the collection; 'How big is my corpus?'
----------------------------------------------------------
33

Average length of all items measured in words; "More or less, how big is each item?"
------------------------------------------------------------------------------------
582

Average readability score of all items (0 = difficult; 100 = easy)
------------------------------------------------------------------
52

Top 50 statistically significant keywords; "What is my collection about?"
-------------------------------------------------------------------------
34 CNN
8 COVID-19
6 image
6 covid-19
5 model
4 feature
4 LSTM
3 sequence
2 table
2 figure
2 MIL
2 Hopfield
2 Fig
2 CMV
1 user
1 tweet
1 system
1 robot
1 rician
1 result
1 ray
1 network
1 melanoma
1 learn
1 international
1 heartbeat
1 gesture
1 fast
1 detection
1 datum
1 cholesterol
1 cell
1 base
1 VGG16
1 Twitter
1 September
1 SVM
1 RCC
1 Project
1 Pick
1 November
1 Niemann
1 NAR
1 Manifestos
1 MPA
1 MNIST
1 ICU
1 EEG
1 ECG
1 Doppler

Top 50 lemmatized nouns; "What is discussed?"
---------------------------------------------
1821 image
1582 model
1127 sequence
1071 network
1057 feature
1028 dataset
991 datum
857 method
829 classification
774 %
743 learning
685 result
632 repertoire
624 -
534 training
516 performance
508 number
503 detection
492 study
491 layer
474 value
467 input
463 accuracy
441 algorithm
434 approach
412 case
399 time
381 architecture
370 x
366 attention
364 machine
362 system
362 class
352 size
348 motif
324 ray
324 instance
322 set
318 problem
311 disease
298 cell
298 analysis
290 test
285 object
285 information
274 function
272 signal
267 patient
262 pattern
258 author

Top 50 proper nouns; "What are the names of persons or places?"
--------------------------------------------------------------
1100 CNN
822 al
651 et
639 COVID-19
580 .
288 CT
280 DeepRC
275 Table
262 ±
229 LSTM
214 Fig
146 AUC
142 Hopfield
122 SVM
119 k
114 AI
112 MIL
105 QReLU
103 Deep
101 Convolutional
100 ReLU
90 CD
88 CMV
87 y
85 Neural
84 X
80 R
78 Learning
78 Inception
77 •
77 ResNet
77 CXR
76 MR
76 ML
75 N
73 F
70 GPU
69 K
64 AA
62 Adam
60 m
58 Sect
58 MPA
58 EfficientNet
56 Eq
55 Networks
54 Coronavirus
53 T
53 Figure
52 u

Top 50 personal pronouns; "To whom are things referred?"
--------------------------------------------------------
1585 we
829 it
245 they
215 i
95 them
40 us
25 itself
23 one
21 he
17 you
8 themselves
2 she
1 ζ
1 â
1 yolov2
1 s
1 ourselves
1 ours
1 me
1 icam-5
1 him
1 f
1 d

Top 50 lemmatized verbs; "What do things do?"
---------------------------------------------
8509 be
2224 use
1317 have
887 base
655 propose
590 learn
559 show
435 train
402 apply
345 obtain
324 see
303 consider
289 detect
289 compare
286 give
281 follow
277 provide
262 perform
262 achieve
247 generate
242 extract
241 present
238 reduce
236 implant
235 include
229 do
224 classify
212 contain
211 represent
203 make
199 identify
193 set
183 improve
173 increase
171 indicate
169 predict
169 evaluate
165 take
162 lead
160 describe
159 require
149 develop
149 allow
148 know
145 find
138 calculate
132 combine
125 report
125 demonstrate
122 consist

Top 50 lemmatized adjectives and adverbs; "How are things described?"
---------------------------------------------------------------------
734 deep
719 -
595 not
518 also
500 different
498 neural
493 high
440 other
418 such
413 more
398 well
383 immune
342 large
313 covid-19
302 only
294 first
287 real
247 then
234 good
218 positive
213 low
209 however
203 respectively
200 most
192 specific
192 new
191 same
191 medical
184 non
182 multi
179 pre
178 small
178 available
174 second
168 many
168 as
165 negative
157 computational
154 several
153 social
152 very
150 multiple
149 clinical
146 therefore
145 average
144 thus
143 novel
142 further
136 fast
130 modern

Top 50 lemmatized superlative adjectives; "How are things described to the extreme?"
-------------------------------------------------------------------------------------
157 good
80 high
62 most
25 near
21 least
21 large
17 low
12 bad
11 late
8 short
7 Most
6 simple
6 fast
4 small
4 close
3 trainingt
3 long
3 fit
2 topmost
2 slow
2 great
1 wide
1 weak
1 poor
1 easy
1 deep
1 big
1 ImageNet
1 D(p
1 -t
1 -near
1 -Removal

Top 50 lemmatized superlative adverbs; "How do things do to the extreme?"
-------------------------------------------------------------------------
138 most
16 well
4 least
1 worst
1 fast

Top 50 Internet domains; "What Webbed places are alluded to in this corpus?"
----------------------------------------------------------------------------
52 doi.org
7 github.com
3 www.tensorflow.org
2 www.kaggle.com
2 keras.io
2 clients.adaptivebiotech.com
1 youtu.be
1 www.fda.gov
1 www.ema.europa.eu
1 doi
1 creativecommons.org
1 creat
1 coronavirus.jhu.edu
1 colab.research.google.com

Top 50 URLs; "What is hyperlinked from this corpus?"
----------------------------------------------------
11 http://doi.org/10.1101/2020.09.02.20186759
10 http://doi.org/10
9 http://doi.org/10.1101/2020.11.04.20225698
7 http://doi.org/10.1101
5 http://doi.org/10.1101/2020.10.30.20222786
5 http://doi.org/10.1101/2020.05.22.20110817
4 http://github.com/ml-jku/DeepRC
2 http://www.tensorflow.org/
2 http://www.kaggle.com/paultimothymooney/chest-xray-pneumonia
2 http://keras.io/
2 http://github.com/spro/practical-pytorch
2 http://doi.org/10.1101/2020.05
2 http://clients.adaptivebiotech.com/pub/Emerson-2017-NatGen
1 http://youtu.be/IR5NnZvZBLk
1 http://www.tensorflow.org/datasets/catalog/mnist
1 http://www.fda.gov/Drugs/InformationOnDrugs/ucm113978.htm
1 http://www.ema.europa.eu/en/cyclodextrins
1 http://github.com/AntonisMakris/COVID19-XRay-Dataset
1 http://doi.org/10.1101/2020
1 http://doi.org/10.1016/j.jmgm.2019.07.014
1 http://doi.org/
1 http://doi
1 http://creativecommons.org/licenses/by/4.0/
1 http://creat
1 http://coronavirus.jhu.edu
1 http://colab.research.google.com/

Top 50 email addresses; "Who are you gonna call?"
-------------------------------------------------

Top 50 positive assertions; "What sentences are in the shape of noun-verb-noun?"
--------------------------------------------------------------------------------
50 - trained models
48 - based sequence
36 - generated data
19 - based deep
14 images using deep
13 - based gesture
10 - based walk
9 - based learning
8 - based mil
8 - based model
8 - generated immunosequencing
8 models using spatial
8 networks have exponential
7 - based cnn
7 - based method
7 method does not
6 - based analysis
6 - based approach
6 - learning convolutional
6 algorithm does not
6 models using frequency
5 - based algorithms
5 - based convolutional
5 - based detection
5 - based methods
5 - based models
5 - based protein
5 - generated content
5 - trained convolutional
5 algorithm using ct
5 classification using deep
5 cnn based denoising
5 model is more
5 model was also
4 - based algorithm
4 - based averaging
4 - based covid-19
4 - based denoising
4 - based design
4 - based feature
4 - based framework
4 - based human
4 - based image
4 - based permutation
4 - based prediction
4 - based subsampling
4 cnn is capable
4 dataset is not
4 dataset is publicly
4 features are then

Top 50 negative assertions; "What sentences are in the shape of noun-verb-no|not-noun?"
---------------------------------------------------------------------------------------
4 algorithm does not directly
1 - was not able
1 algorithms are not partial
1 algorithms is not possible
1 approaches are not very
1 data have no output
1 data is not uniform
1 dataset was not available
1 datasets are not always
1 feature is not very
1 image are not equal
1 images are not available
1 learning is not technical
1 model is not general
1 model is not robust
1 models are not able
1 results showing no toxicity
1 results were not interesting
1 sequences are not necessarily
1 sequences do not necessarily
1 study was not significant

A rudimentary bibliography
--------------------------

id = cord-028792-6a4jfz94
author = Basly, Hend
title = CNN-SVM Learning Approach Based Human Activity Recognition
date = 2020-06-05
keywords = CNN; SVM; feature
summary = Traditionally, to deal with such a problem of recognition, researchers are obliged to anticipate their algorithms of human activity recognition by prior data training preprocessing in order to extract a set of features using different types of descriptors such as HOG3D [1], extended SURF [2] and Space Time Interest Points (STIPs) [3] before inputting them to the specific classification algorithm such as HMM, SVM, Random Forest [4] [5] [6]. In this study, we proposed an advanced human activity recognition method from video sequence using CNN, where the large scale dataset ImageNet pretrains the network. Finally, all the resulting features have been merged to be fed as input to a simulated annealing multiple instance learning support vector machine (SMILE-SVM) classifier for human activity recognition. We proposed to use a pre-trained CNN approach based on a ResNet model in order to extract spatial and temporal features from consecutive video frames.
doi = 10.1007/978-3-030-51935-3_29

id = cord-027732-8i8bwlh8
author = Boudaya, Amal
title = EEG-Based Hypo-vigilance Detection Using Convolutional Neural Network
date = 2020-05-31
keywords = CNN; EEG; detection
summary = Given its high temporal resolution, portability and reasonable cost, the present work focuses on hypo-vigilance detection by analyzing the EEG signal of various brain functionalities using fourteen electrodes placed on the participant's scalp.
On the other hand, deep learning networks offer great potential for biomedical signal analysis through the simplification of raw input signals (i.e., through various steps including feature extraction, denoising and feature selection) and the improvement of the classification results. In this paper, we focus on the EEG signal study recorded by fourteen electrodes for hypo-vigilance detection by analyzing the various functionalities of the brain from the electrodes placed on the participant's scalp. In this paper, we propose a CNN hypo-vigilance detection method using EEG data in order to classify drowsiness and awakeness states. In the proposed simple CNN architecture for EEG signal classification, we use the Keras deep learning library.
doi = 10.1007/978-3-030-51517-1_6

id = cord-175846-aguwenwo
author = Chatsiou, Kakia
title = Text Classification of Manifestos and COVID-19 Press Briefings using BERT and Convolutional Neural Networks
date = 2020-10-20
keywords = CNN; Manifestos; Project
summary = We use manually annotated political manifestos as training data to train a local topic Convolutional Neural Network (CNN) classifier; then apply it to the COVID-19 Press Briefings Corpus to automatically classify sentences in the test corpus. We report on a series of experiments with CNN trained on top of pre-trained embeddings for sentence-level classification tasks. To aid fellow scholars with the systematic study of such a large and dynamic set of unstructured data, we set out to employ a text categorization classifier trained on similar domains (like existing manually annotated sentences from political manifestos) and use it to classify press briefings about the pandemic in a more effective and scalable way.
doi = nan

id = cord-255884-0qqg10y4
author = Chiroma, H.
title = Early survey with bibliometric analysis on machine learning approaches in controlling coronavirus
date = 2020-11-05
keywords = CNN; COVID-19; November; international
summary = Therefore, the main goal of this study is to bridge this gap by carrying out an in-depth survey with bibliometric analysis on the adoption of machine-learning-based technologies to fight the COVID-19 pandemic from a different perspective, including an extensive systematic literature review and a bibliometric analysis. Moreover, the machine-learning-based algorithm predominantly utilized by researchers in developing the diagnostic tool is CNN, mainly from X-rays and CT scan images. We believe that the presented survey with bibliometric analysis can help researchers determine areas that need further development and identify potential collaborators at author, country, and institutional levels to advance research in the focused area of machine learning application for disease control. (2020) proposed a joint model comprising CNN, support vector machine (SVM), random forest (RF), and multilayer perceptron integrated with chest CT scan result and non-image clinical information to predict COVID-19 infection in a patient.
doi = 10.1101/2020.11.04.20225698

id = cord-133273-kvyzuayp
author = Christ, Andreas
title = Artificial Intelligence: Research Impact on Key Industries; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2020)
date = 2020-10-05
keywords = CNN; Fig; ICU; base; datum; feature; figure; learn; model; network; result; robot; system
summary =
doi = nan

id = cord-308219-97gor71p
author = Elzeiny, Sami
title = Stress Classification Using Photoplethysmogram-Based Spatial and Frequency Domain Images
date = 2020-09-17
keywords = CNN; image; model
summary = By combining 20% of the samples collected from test subjects into the training data, the calibrated generic models' accuracy was improved and outperformed the generic performance across both the spatial and frequency domain images. The average classification accuracy of 99.6%, 99.9%, and 88.1%, and 99.2%, 97.4%, and 87.6% were obtained for the training set, validation set, and test set, respectively, using the calibrated generic classification-based method for the series of inter-beat interval (IBI) spatial and frequency domain images. The main contribution of this study is the use of the frequency domain images that are generated from the spatial domain images of the IBI extracted from the PPG signal to classify the stress state of the individual by building person-specific models and calibrated generic models. In this study, a new stress classification approach is proposed to classify the individual stress state into stressed or non-stressed by converting spatial images of inter-beat intervals of a PPG signal to frequency domain images, and we use these pictures to train several CNN models.
doi = 10.3390/s20185312

id = cord-354819-gkbfbh00
author = Islam, Md. Zabirul
title = A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images
date = 2020-08-15
keywords = CNN; COVID-19; LSTM
summary = title: A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. Therefore, this paper aims to propose a deep learning based system that combines the CNN and LSTM networks to automatically detect COVID-19 from X-ray images. By analyzing the results, it is demonstrated that a combination of CNN and LSTM has significant effects on the detection of COVID-19 based on the automatic extraction of features from X-ray images. We introduced a deep CNN-LSTM network for the detection of novel COVID-19 from X-ray images. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks Automated detection of COVID-19 cases using deep neural networks with X-ray images
doi = 10.1016/j.imu.2020.100412

id = cord-249065-6yt3uqyy
author = Kassani, Sara Hosseinzadeh
title = Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning-Based Approach
date = 2020-04-22
keywords = CNN; covid-19; image
summary = To the best of our knowledge, this research is the first comprehensive study of the application of machine learning (ML) algorithms (15 deep CNN visual feature extractors and 6 ML classifiers) for automatic diagnosis of COVID-19 from X-ray and CT images.
With extensive experiments, we show that the combination of a deep CNN with a Bagging trees classifier achieves very good classification performance applied on COVID-19 data despite the limited number of image samples. Motivated by the success of deep learning models in computer vision, the focus of this research is to provide an extensive comprehensive study on the classification of COVID-19 pneumonia in chest X-ray and CT imaging using features extracted by the state-of-the-art deep CNN architectures and trained on machine learning algorithms. The experimental results on the available chest X-ray and CT dataset demonstrate that the features extracted by the DenseNet121 architecture and trained by a Bagging tree classifier generate a very accurate prediction of 99.00% in terms of classification accuracy.
doi = nan

id = cord-266055-ki4gkoc8
author = Kikkisetti, S.
title = Deep-learning convolutional neural networks with transfer learning accurately classify COVID19 lung infection on portable chest radiographs
date = 2020-09-02
keywords = CNN; COVID-19; September
summary = title: Deep-learning convolutional neural networks with transfer learning accurately classify COVID19 lung infection on portable chest radiographs This study employed deep-learning convolutional neural networks to classify COVID-19 lung infections on pCXR from normal and related lung infections to potentially enable more timely and accurate diagnosis. This retrospective study employed a deep-learning convolutional neural network (CNN) with transfer learning to classify, based on pCXRs, COVID-19 pneumonia (N=455) from normal (N=532), bacterial pneumonia (N=492), and non-COVID viral pneumonia (N=552). Deep-learning convolutional neural network with transfer learning accurately classifies COVID-19 on portable chest x-ray against normal, bacterial pneumonia or non-COVID viral pneumonia. The goal of this pilot study is to employ deep-learning convolutional neural networks to classify normal, bacterial infection, and non-COVID-19 viral infection (such as influenza). In conclusion, deep learning convolutional neural networks with transfer learning accurately classify COVID-19 pCXR from pCXR of normal, bacterial pneumonia, and non-COVID viral pneumonia patients in a multiclass model.
doi = 10.1101/2020.09.02.20186759

id = cord-190424-466a35jf
author = Lee, Sang Won
title = Darwin's Neural Network: AI-based Strategies for Rapid and Scalable Cell and Coronavirus Screening
date = 2020-07-22
keywords = CNN; DNN; cell; figure
summary = Here we adapt the theory of survival of the fittest in the field of computer vision and machine perception to introduce a new framework of multi-class instance segmentation deep learning, Darwin's Neural Network (DNN), to carry out morphometric analysis and classification of COVID19 and MERS-CoV collected in vivo and of multiple mammalian cell types in vitro. U-Net with Inception ResNet v2 backbone yielded the highest global accuracy of 0.8346, as seen in Figure 4(E); therefore, Inception-ResNet-v2 was integrated in the place of CNN II for DNN for cells. For overall instance segmentation results, DNN produced both superior global accuracy and Jaccard Similarity Coefficient for cells and viruses. As observed in Figure 6 (C1-C2), the DNN analysis showed statistical significance in area and circularity of the COVID19 in comparison to the MERS virus particles, which aligned with findings in the ground truth data of the viruses.
doi = nan

id = cord-032684-muh5rwla
author = Madichetty, Sreenivasulu
title = A stacked convolutional neural network for detecting the resource tweets during a disaster
date = 2020-09-25
keywords = CNN; NAR; tweet
summary = Specifically, the authors in [3] used both information-retrieval methodologies and classification methodologies (CNN with crisis word embeddings) to extract the Need and Availability of Resource tweets during the disaster. The main drawback of CNN with crisis embeddings is that it does not work well if the number of training tweets is small and, in the case of information retrieval methodologies, keywords must be given manually to identify the need and availability of resource tweets during the disaster. Initially, the experiment is performed on the SVM classifier based on the proposed domain-specific features for the identification of NAR tweets and compared to the BoW model shown in Table 5. This paper proposes a method named CKS (CNN and KNN are used as base-level classifiers, and SVM is used as a meta-level classifier) for identifying tweets related to the Need and Availability of Resources during the disaster.
doi = 10.1007/s11042-020-09873-8

id = cord-319868-rtt9i7wu
author = Majeed, Taban
title = Issues associated with deploying CNN transfer learning to detect COVID-19 from chest X-rays
date = 2020-10-06
keywords = CNN; covid-19; ray
summary = In recent months, much research came out addressing the problem of COVID-19 detection in chest X-rays using deep learning approaches in general, and convolutional neural networks (CNNs) in particular [3] [4] [5] [6] [7] [8] [9] [10]. [3] built a deep convolutional neural network (CNN) based on ResNet50, InceptionV3 and Inception-ResNetV2 models for the classification of COVID-19 chest X-ray images into normal and COVID-19 classes. In [9], the authors use CT images to predict COVID-19 cases, where they deployed an Inception transfer-learning model to establish an accuracy of 89.5% with specificity of 88.0% and sensitivity of 87.0%. Wang and Wong [2] investigated a dataset that they called COVIDx and a neural network architecture called COVID-Net designed for the detection of COVID-19 cases from open source chest X-ray radiography images. The deep learning architectures that we used for the purpose of COVID19 detection from X-ray images are AlexNet, VGG16, VGG19, ResNet18, ResNet50, ResNet101, GoogleNet, InceptionV3, SqueezeNet, Inception-ResNet-v2, Xception and DenseNet201.
doi = 10.1007/s13246-020-00934-8

id = cord-325235-uupiv7wh
author = Makris, A.
title = COVID-19 detection from chest X-Ray images using Deep Learning and Convolutional Neural Networks
date = 2020-05-24
keywords = CNN; VGG16; covid-19
summary = In this research work the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the automatic detection of COVID-19 disease from chest X-Ray images. A collection of 336 X-Ray scans in total from patients with COVID-19 disease, bacterial pneumonia and normal incidents is processed and utilized to train and test the CNNs. Due to the limited available data related to COVID-19, the transfer learning strategy is employed. The proposed CNN is based on pre-trained transfer models (ResNet50, InceptionV3 and Inception-ResNetV2), in order to obtain high prediction accuracy from a small sample of X-ray images.
Abbas et al. [22] presented a novel CNN architecture based on transfer learning and class decomposition in order to improve the performance of pre-trained models on the classification of X-ray images. In this research work the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the detection of COVID-19 disease from chest X-Ray images.
doi = 10.1101/2020.05.22.20110817

id = cord-256756-8w5rtucg
author = Manimala, M. V. R.
title = Sparse MR Image Reconstruction Considering Rician Noise Models: A CNN Approach
date = 2020-08-11
keywords = CNN; Fig; rician
summary = The proposed algorithm employs a convolutional neural network (CNN) to denoise MR images corrupted with Rician noise. Dictionary learning for MRI (DLMRI) provided an effective solution to recover MR images from sparse k-space data [2], but had a drawback of high computational time. The proposed denoising algorithm reconstructs MR images with high visual quality; further, it can be directly employed without optimization and prediction of the Rician noise level. The proposed CNN based algorithm is capable of denoising the Rician noise corrupted sparse MR images and also reduces the computation time substantially. This section presents the proposed CNN-based formulation for denoising and reconstruction of MR images from the sparse k-space data. The proposed CNN based denoising algorithm has been compared with various state-of-the-art techniques, namely (1) Dictionary learning magnetic resonance imaging (DLMRI) [2], and (2) Non-local means (NLM) and its variants, namely unbiased NLM (UNLM), Rician NLM (RNLM), enhanced NLM (ENLM) and enhanced NLM filter with preprocessing (PENLM) [5].
doi = 10.1007/s11277-020-07725-0

id = cord-317643-pk8cabxj
author = Masud, Mehedi
title = Convolutional neural network-based models for diagnosis of breast cancer
date = 2020-10-09
keywords = CNN; image; model
summary = With this motivation, this paper considers eight different fine-tuned pre-trained models to observe how these models classify breast cancers applying on ultrasound images. Authors in [18] proposed a convolutional neural network leveraging the Inception-v3 pre-trained model to classify breast cancer using breast ultrasound images. Authors in [24] compared three CNN-based transfer learning models, ResNet50, Xception, and InceptionV3, and proposed a base model that consists of three convolutional layers to classify breast cancers from the breast ultrasound images dataset. Authors in [27] proposed a novel deep neural network consisting of a clustering method and a CNN model for breast cancer classification using histopathological images. Then eight different pre-trained models after fine tuning are applied on the combined dataset to observe the performance results of breast cancer classification. This study implemented eight pre-trained CNN models with fine tuning leveraging transfer learning to observe the classification performance of breast cancer from ultrasound images.
doi = 10.1007/s00521-020-05394-5

id = cord-337740-8ujk830g
author = Matencio, Adrián
title = Cyclic Oligosaccharides as Active Drugs, an Updated Review
date = 2020-09-29
keywords = CNN; Cyclodextrin; Niemann; Pick; cholesterol
summary = There have been many reviews of the cyclic oligosaccharide cyclodextrin (CD) and CD-based materials used for drug delivery, but the capacity of CDs to complex different agents and their own intrinsic properties suggest they might also be considered for use as active drugs, not only as carriers. The review is divided into lipid-related diseases, aggregation diseases, antiviral and antiparasitic activities, anti-anesthetic agent, function in diet, removal of organic toxins, CDs and collagen, cell differentiation, and finally, their use in contact lenses in which no drug other than CDs are involved. In addition to CDs, another dietary indigestible cyclic oligosaccharide formed by four D-glucopyranosyl residues linked by alternating α(1→3) and α(1→6) glucosidic linkages was recently found to have intrinsic bioactivity: cyclic nigerosyl-1,6-nigerose or cyclotetraglucose (CNN, Figure 1 [21]). The present review will update the most relevant applications mentioned in the review made by Braga et al., 2019, including applications such as the ability of CDs to combat aggregation diseases, their dietary functions, toxins removal, cell differentiation, and their application in contact lenses.
doi = 10.3390/ph13100281

id = cord-275258-azpg5yrh
author = Mead, Dylan J.T.
title = Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling
date = 2019-07-26
keywords = CNN; model; sequence; table
summary = title: Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling This paper presents the first use of force-directed graphs for the visualization of sequence space in two dimensions, and applies them to the choice of suitable RNA-dependent RNA polymerase (RdRP) target-template pairs within human-infective RNA virus genera. Measures of centrality in protein sequence space for each genus were also derived and used to identify centroid nearest-neighbour sequences (CNNs) potentially useful for production of homology models most representative of their genera. We then present the first use of force-directed graphs to produce an intuitive visualization of sequence space, and select target RdRPs without solved structures for homology modelling. The solved structure has 10 other sequences in its proximity in the three-dimensional space, roughly Table 5 Homology modelling at intra-order, inter-family level.
doi = 10.1016/j.jmgm.2019.07.014

id = cord-286887-s8lvimt3
author = Nour, Majid
title = A Novel Medical Diagnosis model for COVID-19 infection detection based on Deep Features and Bayesian Optimization
date = 2020-07-28
keywords = CNN; covid-19
summary = The proposed model is based on the convolution neural network (CNN) architecture and can automatically reveal discriminative features on chest X-ray images through its convolution with rich filter families, abstraction, and weight-sharing characteristics. In study [5], they used Chest Computed Tomography (CT) images and a Deep Transfer Learning (DTL) method to detect COVID-19 and obtained a high diagnostic accuracy.
proposed a novel hybrid method called the Fuzzy Color technique + deep learning models (MobileNetV2, SqueezeNet) with a Social Mimic optimization method to classify the COVID-19 cases and achieved a high success rate in their work [6]. (2) The deep features extracted from deep layers of CNNs have been applied as the input to machine learning models to further improve COVID-19 infection detection. Only the number of samples in the COVID-19 class is increased by using the offline data augmentation approach, and then the proposed CNN model is trained and tested.
doi = 10.1016/j.asoc.2020.106580

id = cord-330239-l8fp8cvz
author = Oyelade, O. N.
title = Deep Learning Model for Improving the Characterization of Coronavirus on Chest X-ray Images Using CNN
date = 2020-11-03
keywords = CNN; COVID-19; image
summary = The proposed model is then applied to the COVID-19 X-ray dataset in this study, which is the National Institutes of Health (NIH) Chest X-Ray dataset obtained from Kaggle, for the purpose of promoting early detection and screening of coronavirus disease. Several studies [4, 5, 6, 78, 26, 30] and reviews which have adapted CNN to the task of detection and classification of COVID-19 have proven that the deep learning model is one of the most popular and effective approaches in the diagnosis of COVID-19 from digitized images. In this paper, we propose the application of a deep learning model in the category of Convolutional Neural Network (CNN) techniques to automate the process of extracting important features and then classification or detection of COVID-19 from digital images, and this may eventually be supportive in overcoming the issue of a shortage of trained physicians in remote communities [24].
doi = 10.1101/2020.10.30.20222786

id = cord-168974-w80gndka
author = Ozkaya, Umut
title = Coronavirus (COVID-19) Classification using Deep Features Fusion and Ranking Technique
date = 2020-04-07
keywords = CNN; COVID-19
summary = In this study, a novel method was proposed for fusing and ranking deep features to detect COVID-19 in the early phase. Within the scope of the proposed method, 3000 patch images have been labelled as CoVID-19 and No finding for use in the training and testing phases. Compared to other pre-trained Convolutional Neural Network (CNN) models used in transfer learning, the proposed method shows high performance on Subset-2 with 98.27% accuracy, 98.93% sensitivity, 97.60% specificity, 97.63% precision, 98.28% F1-score and 96.54% Matthews Correlation Coefficient (MCC) metrics. When the studies in the literature are examined, Shan et al. proposed a neural network model called VB-Net in order to segment the COVID-19 regions in CT images. were able to successfully diagnose COVID-19 using deep learning models that could obtain graphical features in CT images [8]. Deep features were obtained with pre-trained Convolutional Neural Network (CNN) models. In the study, deep features were obtained by using pre-trained CNN networks.
doi = nan

id = cord-131094-1zz8rd3h
author = Parisi, L.
title = QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics
date = 2020-10-15
keywords = CNN; COVID-19; MNIST; table
summary = Despite a higher computational cost, results indicated an overall higher classification accuracy, precision, recall and F1-score brought about by either quantum AF on five of the seven benchmark datasets, thus demonstrating its potential to be the new benchmark or gold standard AF in CNNs and aid image classification tasks involved in critical applications, such as medical diagnoses of COVID-19 and PD. Despite a higher computational cost (four-fold with respect to the other AFs except for the CReLU's increase being almost three-fold), the results achieved by either or both the proposed QReLU and m-QReLU AFs, assessed on classification accuracy, precision, recall and F1-score, indicate an overall higher generalisation achieved on five of the seven benchmark datasets (Table 2 on the MNIST data, Tables 3 and 5 on PD-related spiral drawings, Tables 7 and 8 on COVID-19 lung US images).
doi = nan

id = cord-135296-qv7pacau
author = Polsinelli, Matteo
title = A Light CNN for detecting COVID-19 from CT scans of the chest
date = 2020-04-24
keywords = CNN; covid-19
summary = We propose a light CNN design based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with other CT images (community-acquired pneumonia and/or healthy images). On the tested datasets, the proposed modified SqueezeNet CNN achieved 83.00% accuracy, 85.00% sensitivity, 81.00% specificity, 81.73% precision and 0.8333 F1-score in a very efficient way (7.81 seconds on a medium-end laptop without GPU acceleration). In the present work, we aim at obtaining acceptable performances for an automatic method in recognizing COVID-19 CT images of lungs while, at the same time, dealing with reduced datasets for training and validation and reducing the computational overhead imposed by more complex automatic systems. In this work we developed, trained and tested a light CNN (based on the SqueezeNet) to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images.
doi = nan

id = cord-296359-pt86juvr
author = Polsinelli, Matteo
title = A Light CNN for detecting COVID-19 from CT scans of the chest
date = 2020-10-03
keywords = CNN; CNN-2
summary = In this work we propose a light Convolutional Neural Network (CNN) design, based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with respect to other community-acquired pneumonia and/or healthy CT images. Also the average classification time on a high-end workstation, 1.25 seconds, is very competitive with respect to that of more complex CNN designs, 13.41 seconds, which require pre-processing. We started from the model of the SqueezeNet CNN to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images. In this arrangement the number of images from the Italian dataset used to train, validate and Test-1 are 60, 20 and 20, respectively. For each dataset arrangement we organized 4 experiments in which we tested different CNN models, transfer learning and the effectiveness of data augmentation. For each attempt, the CNN model has been trained for 20 epochs and evaluated by the accuracy results calculated on the validation dataset.
doi = 10.1016/j.patrec.2020.10.001

id = cord-127759-wpqdtdjs
author = Qi, Xiao
title = Chest X-ray Image Phase Features for Improved Diagnosis of COVID-19 Using Convolutional Neural Network
date = 2020-11-06
keywords = CNN; COVID-19; CXR
summary = In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for multi-class improved classification of COVID-19 from CXR images. In this work we show how local phase CXR features based image enhancement improves the accuracy of CNN architectures for COVID-19 diagnosis. Our proposed method is designed for processing CXR images and consists of two main stages, as illustrated in Figure 1: (1) we enhance the CXR images (CXR(x, y)) using a local phase-based image processing method in order to obtain a multi-feature CXR image (MF(x, y)), and (2) we classify CXR(x, y) by designing a deep learning approach where multi-feature CXR images (MF(x, y)), together with the original CXR data (CXR(x, y)), are used for improving the classification performance. Our proposed multi-feature CNN architectures were trained on a large dataset in terms of the number of COVID-19 CXR scans and have achieved improved classification accuracy across all classes.
doi = nan

id = cord-024491-f16d1zov
author = Qiu, Xi
title = Simultaneous ECG Heartbeat Segmentation and Classification with Feature Fusion and Long Term Context Dependencies
date = 2020-04-17
keywords = CNN; ECG; heartbeat
summary = To achieve simultaneous segmentation and classification, we present a Faster R-CNN based model that has been customized to handle ECG data. Since deep learning methods can produce feature maps from raw data, heartbeat segmentation can be simultaneously conducted with classification with a single neural network. To achieve simultaneous segmentation and classification, we present a Faster R-CNN [2] based model that has been customized to handle ECG sequences. In our method, we present a modified Faster R-CNN for arrhythmia detection which works in only two steps: preprocessing, and simultaneous heartbeat segmentation and classification. The architecture of our model is shown in Fig. 2, which takes a 1-D ECG sequence as its input and conducts heartbeat segmentation and classification simultaneously. Different from most deep learning methods which compute feature maps for a single heartbeat, our backbone model takes a long ECG sequence as its input.
doi = 10.1007/978-3-030-47436-2_28

id = cord-269270-i2odcsx7
author = Sahlol, Ahmed T.
title = COVID-19 image classification using deep features and fractional-order marine predators algorithm
date = 2020-09-21
keywords = CNN; MPA; feature; image
summary = In this paper, we propose an improved hybrid classification approach for COVID-19 images by combining the strengths of CNNs (using a powerful architecture called Inception) to extract features and a swarm-based feature selection algorithm (Marine Predators Algorithm) to select the most relevant features. The proposed COVID-19 X-ray classification approach starts by applying a CNN (especially, a powerful architecture called Inception, which is pre-trained on the ImageNet dataset) to extract the discriminant features from raw images (with no pre-processing or segmentation) from the dataset that contains positive and negative COVID-19 images. 1. Propose an efficient hybrid classification approach for COVID-19 using a combination of CNN and an improved swarm-based feature selection algorithm. 4.
Evaluate the proposed approach by performing extensive comparisons to several state-of-the-art feature selection algorithms, most recent CNN architectures and most recent relevant works and existing classification methods of COVID-19 images.
doi = 10.1038/s41598-020-71294-2

id = cord-258170-kyztc1jp
author = Shorfuzzaman, Mohammad
title = Towards the sustainable development of smart cities through mass video surveillance: A response to the COVID-19 pandemic
date = 2020-11-05
keywords = CNN; COVID-19; fast
summary = In particular, we make the following contributions: (a) A deep learning-based framework is presented for monitoring social distancing in the context of sustainable smart cities in an effort to curb the spread of COVID-19 or similar infectious diseases; (b) The proposed system leverages state-of-the-art, deep learning-based real-time object detection models for the detection of people in videos, captured with a monocular camera, to implement social distancing monitoring use cases; (c) A perspective transformation is presented, where the captured video is transformed from a perspective view to a bird's eye (top-down) view to identify the region of interest (ROI) in which social distancing will be monitored; (d) A detailed performance evaluation is provided to show the effectiveness of the proposed system on a video surveillance dataset.
doi = 10.1016/j.scs.2020.102582

id = cord-102774-mtbo1tnq
author = Sun, Yuliang
title = Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform
date = 2020-05-20
keywords = CNN; Doppler; LSTM; gesture
summary = In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system to recognize gestures is proposed. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement-cycles and encodes them into a feature cube. Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as input of a shallow CNN for gesture recognition to reduce the computational complexity. [16] projected the range-Doppler-measurement-cycles into range-time and Doppler-time to reduce the input dimension of the LSTM layer and achieved a good classification accuracy in real time; the proposed algorithms were implemented on a personal computer with powerful computational capability.
doi = 10.1109/jsen.2020.2994292

id = cord-202184-hh7hugqi
author = Wang, Jun
title = Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Network
date = 2020-10-10
keywords = AUC; CNN; RCC
summary = In this work, we propose three strategies to improve the capability of EfficientNet, including developing a cropping method called Random Center Cropping (RCC) to retain significant features on the center area of images, reducing the downsampling scale of EfficientNet to facilitate the small resolution images of RPCam datasets, and integrating Attention and Feature Fusion mechanisms with EfficientNet to obtain features containing rich semantic information.
This work has three main contributions: (1) To our limited knowledge, we are the first study to explore the power of EfficientNet on MBCs classification, and elaborate experiments are conducted to compare the performance of EfficientNet with other state-of-the-art CNN models, which might offer inspirations for researchers who are interested in image-based diagnosis using DL; (2) We propose a novel data augmentation method RCC to facilitate the data enrichment of small resolution datasets; (3) All of our four technological improvements boost the performance of original EfficientNet. The best accuracy and AUC achieve 97.96% and 99.68%, respectively, confirming the applicability of utilizing CNN-based methods for BC diagnosis.
doi = nan

id = cord-103297-4stnx8dw
author = Widrich, Michael
title = Modern Hopfield Networks and Attention for Immune Repertoire Classification
date = 2020-08-17
keywords = CMV; CNN; Hopfield; LSTM; MIL; sequence
summary = In this work, we present our novel method DeepRC that integrates transformer-like attention, or equivalently modern Hopfield networks, into deep learning architectures for massive MIL such as immune repertoire classification. DeepRC sets out to avoid the above-mentioned constraints of current methods by (a) applying transformer-like attention-pooling instead of max-pooling and learning a classifier on the repertoire rather than on the sequence-representation, (b) pooling learned representations rather than predictions, and (c) using less rigid feature extractors, such as 1D convolutions or LSTMs. In this work, we contribute the following: We demonstrate that continuous generalizations of binary modern Hopfield networks (Krotov & Hopfield, 2016; Demircigil et al., 2017) have an update rule that is known as the attention mechanism in the transformer. We evaluate the predictive performance of DeepRC and other machine learning approaches for the classification of immune repertoires in a large comparative study (Section "Experimental Results"). Exponential storage capacity of continuous state modern Hopfield networks with transformer attention as update rule.
doi = 10.1101/2020.04.12.038158

id = cord-034614-r429idtl
author = Yasar, Huseyin
title = A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks
date = 2020-11-04
keywords = CNN; covid-19
summary = title: A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks In this study, which aims at early diagnosis of Covid-19 disease using X-ray images, the deep-learning approach, a state-of-the-art artificial intelligence method, was used, and automatic classification of images was performed using convolutional neural networks (CNN). Within the scope of the study, the results were obtained using chest X-ray images directly in the training-test procedures and the sub-band images obtained by applying dual tree complex wavelet transform (DT-CWT) to the above-mentioned images. In the study, experiments were carried out for the use of images directly, using local binary pattern (LBP) as a pre-process and dual tree complex wavelet transform (DT-CWT) as a secondary operation, and the results of the automatic classification were calculated separately.
doi = 10.1007/s10489-020-02019-1

id = cord-002901-u4ybz8ds
author = Yu, Chanki
title = Acral melanoma detection using a convolutional neural network for dermoscopy images
date = 2018-03-07
keywords = CNN; image; melanoma
summary = We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. To perform the 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis comparing it with the dermatologist's and non-expert's evaluation. CONCLUSION: Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful to detect acral melanoma from dermoscopy images of the hands and feet. In the result of group B by the training of group A images, CNN also showed a higher diagnostic accuracy (80.23%) than that of the non-expert (62.71%) but was similar to that of the expert (81.64%).
doi = 10.1371/journal.pone.0193321

id = cord-121200-2qys8j4u
author = Zogan, Hamad
title = Depression Detection with Multi-Modalities Using a Hybrid Deep Learning Model on Social Media
date = 2020-07-03
keywords = CNN; Twitter; feature; model; user
summary = While many previous works have largely studied the problem on a small scale by assuming uni-modality of data, which may not give us faithful results, we propose a novel scalable hybrid model that combines Bidirectional Gated Recurrent Units (BiGRUs) and Convolutional Neural Networks to detect depressed users on social media such as Twitter, based on multi-modal features. To be specific, this work aims to develop a novel deep learning-based solution for improving depression detection by utilizing multi-modal features from diverse behaviour of the depressed user in social media. To this end, we propose a hybrid model comprising a Bidirectional Gated Recurrent Unit (BiGRU) and a Convolutional Neural Network (CNN) model to boost the classification of depressed users using multi-modal features and word embedding features. The most closely related recent work to ours is [23], where the authors propose a CNN-based deep learning model to classify Twitter users based on depression using multi-modal features.
doi = nan
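A pattern recurring throughout the items above is transfer learning: a CNN pre-trained on ImageNet (ResNet, Inception, SqueezeNet, EfficientNet, etc.) is reused as a feature extractor and fine-tuned on a small medical image dataset. The sketch below is a generic, minimal illustration of that pattern using Keras and ResNet50; the folder layout, class task, and hyperparameters are hypothetical, and it does not reproduce the pipeline of any particular study in this carrel.

  # Generic illustration of the transfer-learning pattern referenced above:
  # freeze a pre-trained CNN and train a small classification head on top.
  # Directory name ("train"), class task, and hyperparameters are hypothetical.
  from tensorflow import keras

  base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                     pooling="avg", input_shape=(224, 224, 3))
  base.trainable = False                       # keep the pre-trained features fixed

  model = keras.Sequential([
      base,
      keras.layers.Dense(128, activation="relu"),
      keras.layers.Dropout(0.5),
      keras.layers.Dense(1, activation="sigmoid"),   # e.g. two-class image labels
  ])
  model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

  # Hypothetical folder layout: train/<class_name>/*.png
  train_data = keras.utils.image_dataset_from_directory(
      "train", image_size=(224, 224), batch_size=32, label_mode="binary")
  model.fit(train_data, epochs=5)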