Carrel name: keyword-cnn-cord
Creating study carrel named keyword-cnn-cord
Initializing database
file: cache/cord-024491-f16d1zov.json key: cord-024491-f16d1zov authors: Qiu, Xi; Liang, Shen; Zhang, Yanchun title: Simultaneous ECG Heartbeat Segmentation and Classification with Feature Fusion and Long Term Context Dependencies date: 2020-04-17 journal: Advances in Knowledge Discovery and Data Mining DOI: 10.1007/978-3-030-47436-2_28 sha: doc_id: 24491 cord_uid: f16d1zov
file: cache/cord-131094-1zz8rd3h.json key: cord-131094-1zz8rd3h authors: Parisi, L.; Neagu, D.; Ma, R.; Campean, F. title: QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics date: 2020-10-15 journal: nan DOI: nan sha: doc_id: 131094 cord_uid: 1zz8rd3h
file: cache/cord-002901-u4ybz8ds.json key: cord-002901-u4ybz8ds authors: Yu, Chanki; Yang, Sejung; Kim, Wonoh; Jung, Jinwoong; Chung, Kee-Yang; Lee, Sang Wook; Oh, Byungho title: Acral melanoma detection using a convolutional neural network for dermoscopy images date: 2018-03-07 journal: PLoS One DOI: 10.1371/journal.pone.0193321 sha: doc_id: 2901 cord_uid: u4ybz8ds
file: cache/cord-027732-8i8bwlh8.json key: cord-027732-8i8bwlh8 authors: Boudaya, Amal; Bouaziz, Bassem; Chaabene, Siwar; Chaari, Lotfi; Ammar, Achraf; Hökelmann, Anita title: EEG-Based Hypo-vigilance Detection Using Convolutional Neural Network date: 2020-05-31 journal: The Impact of Digital Technologies on Public Health in Developed and Developing Countries DOI: 10.1007/978-3-030-51517-1_6 sha: doc_id: 27732 cord_uid: 8i8bwlh8
file: cache/cord-102774-mtbo1tnq.json key: cord-102774-mtbo1tnq authors: Sun, Yuliang; Fei, Tai; Li, Xibo; Warnecke, Alexander; Warsitz, Ernst; Pohl, Nils title: Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform date: 2020-05-20 journal: nan DOI: 10.1109/jsen.2020.2994292 sha: doc_id: 102774 cord_uid: mtbo1tnq
file: cache/cord-175846-aguwenwo.json key: cord-175846-aguwenwo authors: Chatsiou, Kakia title: Text Classification of Manifestos and COVID-19 Press Briefings using BERT and Convolutional Neural Networks date: 2020-10-20 journal: nan DOI: nan sha: doc_id: 175846 cord_uid: aguwenwo
file: cache/cord-034614-r429idtl.json key: cord-034614-r429idtl authors: Yasar, Huseyin; Ceylan, Murat title: A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks date: 2020-11-04 journal: Appl Intell DOI: 10.1007/s10489-020-02019-1 sha: doc_id: 34614 cord_uid: r429idtl
file: cache/cord-135296-qv7pacau.json key: cord-135296-qv7pacau authors: Polsinelli, Matteo; Cinque, Luigi; Placidi, Giuseppe title: A Light CNN for detecting COVID-19 from CT scans of the chest date: 2020-04-24 journal: nan DOI: nan sha: doc_id: 135296 cord_uid: qv7pacau
file: cache/cord-127759-wpqdtdjs.json key: cord-127759-wpqdtdjs authors: Qi, Xiao; Brown, Lloyd; Foran, David J.; Hacihaliloglu, Ilker title: Chest X-ray Image Phase Features for Improved Diagnosis of COVID-19 Using Convolutional Neural Network date: 2020-11-06 journal: nan DOI: nan sha: doc_id: 127759 cord_uid: wpqdtdjs
file: cache/cord-028792-6a4jfz94.json key: cord-028792-6a4jfz94 authors: Basly, Hend; Ouarda, Wael; Sayadi, Fatma Ezahra; Ouni, Bouraoui; Alimi, Adel M. title: CNN-SVM Learning Approach Based Human Activity Recognition date: 2020-06-05 journal: Image and Signal Processing DOI: 10.1007/978-3-030-51935-3_29 sha: doc_id: 28792 cord_uid: 6a4jfz94
file: cache/cord-249065-6yt3uqyy.json key: cord-249065-6yt3uqyy authors: Kassani, Sara Hosseinzadeh; Kassasni, Peyman Hosseinzadeh; Wesolowski, Michal J.; Schneider, Kevin A.; Deters, Ralph title: Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning-Based Approach date: 2020-04-22 journal: nan DOI: nan sha: doc_id: 249065 cord_uid: 6yt3uqyy
file: cache/cord-266055-ki4gkoc8.json key: cord-266055-ki4gkoc8 authors: Kikkisetti, S.; Zhu, J.; Shen, B.; Li, H.; Duong, T. title: Deep-learning convolutional neural networks with transfer learning accurately classify COVID19 lung infection on portable chest radiographs date: 2020-09-02 journal: nan DOI: 10.1101/2020.09.02.20186759 sha: doc_id: 266055 cord_uid: ki4gkoc8
file: cache/cord-202184-hh7hugqi.json key: cord-202184-hh7hugqi authors: Wang, Jun; Liu, Qianying; Xie, Haotian; Yang, Zhaogang; Zhou, Hefeng title: Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Network date: 2020-10-10 journal: nan DOI: nan sha: doc_id: 202184 cord_uid: hh7hugqi
file: cache/cord-168974-w80gndka.json key: cord-168974-w80gndka authors: Ozkaya, Umut; Ozturk, Saban; Barstugan, Mucahid title: Coronavirus (COVID-19) Classification using Deep Features Fusion and Ranking Technique date: 2020-04-07 journal: nan DOI: nan sha: doc_id: 168974 cord_uid: w80gndka
file: cache/cord-275258-azpg5yrh.json key: cord-275258-azpg5yrh authors: Mead, Dylan J.T.; Lunagomez, Simón; Gatherer, Derek title: Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling date: 2019-07-26 journal: J Mol Graph Model DOI: 10.1016/j.jmgm.2019.07.014 sha: doc_id: 275258 cord_uid: azpg5yrh
file: cache/cord-032684-muh5rwla.json key: cord-032684-muh5rwla authors: Madichetty, Sreenivasulu; M., Sridevi title: A stacked convolutional neural network for detecting the resource tweets during a disaster date: 2020-09-25 journal: Multimed Tools Appl DOI: 10.1007/s11042-020-09873-8 sha: doc_id: 32684 cord_uid: muh5rwla
file: cache/cord-121200-2qys8j4u.json key: cord-121200-2qys8j4u authors: Zogan, Hamad; Wang, Xianzhi; Jameel, Shoaib; Xu, Guandong title: Depression Detection with Multi-Modalities Using a Hybrid Deep Learning Model on Social Media date: 2020-07-03 journal: nan DOI: nan sha: doc_id: 121200 cord_uid: 2qys8j4u
file: cache/cord-256756-8w5rtucg.json key: cord-256756-8w5rtucg authors: Manimala, M. V. R.; Dhanunjaya Naidu, C.; Giri Prasad, M. N. title: Sparse MR Image Reconstruction Considering Rician Noise Models: A CNN Approach date: 2020-08-11 journal: Wirel Pers Commun DOI: 10.1007/s11277-020-07725-0 sha: doc_id: 256756 cord_uid: 8w5rtucg
file: cache/cord-258170-kyztc1jp.json key: cord-258170-kyztc1jp authors: Shorfuzzaman, Mohammad; Hossain, M. Shamim; Alhamid, Mohammed F. title: Towards the sustainable development of smart cities through mass video surveillance: A response to the COVID-19 pandemic date: 2020-11-05 journal: Sustain Cities Soc DOI: 10.1016/j.scs.2020.102582 sha: doc_id: 258170 cord_uid: kyztc1jp
file: cache/cord-255884-0qqg10y4.json key: cord-255884-0qqg10y4 authors: Chiroma, H.; Ezugwu, A. E.; Jauro, F.; Al-Garadi, M. A.; Abdullahi, I. N.; Shuib, L. title: Early survey with bibliometric analysis on machine learning approaches in controlling coronavirus date: 2020-11-05 journal: nan DOI: 10.1101/2020.11.04.20225698 sha: doc_id: 255884 cord_uid: 0qqg10y4
file: cache/cord-286887-s8lvimt3.json key: cord-286887-s8lvimt3 authors: Nour, Majid; Cömert, Zafer; Polat, Kemal title: A Novel Medical Diagnosis model for COVID-19 infection detection based on Deep Features and Bayesian Optimization date: 2020-07-28 journal: Appl Soft Comput DOI: 10.1016/j.asoc.2020.106580 sha: doc_id: 286887 cord_uid: s8lvimt3
file: cache/cord-190424-466a35jf.json key: cord-190424-466a35jf authors: Lee, Sang Won; Chiu, Yueh-Ting; Brudnicki, Philip; Bischoff, Audrey M.; Jelinek, Angus; Wang, Jenny Zijun; Bogdanowicz, Danielle R.; Laine, Andrew F.; Guo, Jia; Lu, Helen H. title: Darwin's Neural Network: AI-based Strategies for Rapid and Scalable Cell and Coronavirus Screening date: 2020-07-22 journal: nan DOI: nan sha: doc_id: 190424 cord_uid: 466a35jf
file: cache/cord-317643-pk8cabxj.json key: cord-317643-pk8cabxj authors: Masud, Mehedi; Eldin Rashed, Amr E.; Hossain, M. Shamim title: Convolutional neural network-based models for diagnosis of breast cancer date: 2020-10-09 journal: Neural Comput Appl DOI: 10.1007/s00521-020-05394-5 sha: doc_id: 317643 cord_uid: pk8cabxj
file: cache/cord-308219-97gor71p.json key: cord-308219-97gor71p authors: Elzeiny, Sami; Qaraqe, Marwa title: Stress Classification Using Photoplethysmogram-Based Spatial and Frequency Domain Images date: 2020-09-17 journal: Sensors (Basel) DOI: 10.3390/s20185312 sha: doc_id: 308219 cord_uid: 97gor71p
file: cache/cord-133273-kvyzuayp.json key: cord-133273-kvyzuayp authors: Christ, Andreas; Quint, Franz title: Artificial Intelligence: Research Impact on Key Industries; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2020) date: 2020-10-05 journal: nan DOI: nan sha: doc_id: 133273 cord_uid: kvyzuayp
file: cache/cord-269270-i2odcsx7.json key: cord-269270-i2odcsx7 authors: Sahlol, Ahmed T.; Yousri, Dalia; Ewees, Ahmed A.; Al-qaness, Mohammed A. A.; Damasevicius, Robertas; Elaziz, Mohamed Abd title: COVID-19 image classification using deep features and fractional-order marine predators algorithm date: 2020-09-21 journal: Sci Rep DOI: 10.1038/s41598-020-71294-2 sha: doc_id: 269270 cord_uid: i2odcsx7
file: cache/cord-296359-pt86juvr.json key: cord-296359-pt86juvr authors: Polsinelli, Matteo; Cinque, Luigi; Placidi, Giuseppe title: A Light CNN for detecting COVID-19 from CT scans of the chest date: 2020-10-03 journal: Pattern Recognit Lett DOI: 10.1016/j.patrec.2020.10.001 sha: doc_id: 296359 cord_uid: pt86juvr
file: cache/cord-319868-rtt9i7wu.json key: cord-319868-rtt9i7wu authors: Majeed, Taban; Rashid, Rasber; Ali, Dashti; Asaad, Aras title: Issues associated with deploying CNN transfer learning to detect COVID-19 from chest X-rays date: 2020-10-06 journal: Phys Eng Sci Med DOI: 10.1007/s13246-020-00934-8 sha: doc_id: 319868 cord_uid: rtt9i7wu
file: cache/cord-330239-l8fp8cvz.json key: cord-330239-l8fp8cvz authors: Oyelade, O. N.; Ezugwu, A. E. title: Deep Learning Model for Improving the Characterization of Coronavirus on Chest X-ray Images Using CNN date: 2020-11-03 journal: nan DOI: 10.1101/2020.10.30.20222786 sha: doc_id: 330239 cord_uid: l8fp8cvz
file: cache/cord-354819-gkbfbh00.json key: cord-354819-gkbfbh00 authors: Islam, Md. Zabirul; Islam, Md. Milon; Asraf, Amanullah title: A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images date: 2020-08-15 journal: Inform Med Unlocked DOI: 10.1016/j.imu.2020.100412 sha: doc_id: 354819 cord_uid: gkbfbh00
file: cache/cord-325235-uupiv7wh.json key: cord-325235-uupiv7wh authors: Makris, A.; Kontopoulos, I.; Tserpes, K. title: COVID-19 detection from chest X-Ray images using Deep Learning and Convolutional Neural Networks date: 2020-05-24 journal: nan DOI: 10.1101/2020.05.22.20110817 sha: doc_id: 325235 cord_uid: uupiv7wh
file: cache/cord-337740-8ujk830g.json key: cord-337740-8ujk830g authors: Matencio, Adrián; Caldera, Fabrizio; Cecone, Claudio; López-Nicolás, José Manuel; Trotta, Francesco title: Cyclic Oligosaccharides as Active Drugs, an Updated Review date: 2020-09-29 journal: Pharmaceuticals (Basel) DOI: 10.3390/ph13100281 sha: doc_id: 337740 cord_uid: 8ujk830g
file: cache/cord-103297-4stnx8dw.json key: cord-103297-4stnx8dw authors: Widrich, Michael; Schäfl, Bernhard; Pavlović, Milena; Ramsauer, Hubert; Gruber, Lukas; Holzleitner, Markus; Brandstetter, Johannes; Sandve, Geir Kjetil; Greiff, Victor; Hochreiter, Sepp; Klambauer, Günter title: Modern Hopfield Networks and Attention for Immune Repertoire Classification date: 2020-08-17 journal: bioRxiv DOI: 10.1101/2020.04.12.038158 sha: doc_id: 103297 cord_uid: 4stnx8dw
key: cord-193356-hqbstgg7 authors: Widrich, Michael; Schafl, Bernhard; Ramsauer, Hubert; Pavlovi'c, Milena; Gruber, Lukas; Holzleitner, Markus; Brandstetter, Johannes; Sandve, Geir Kjetil; Greiff, Victor; Hochreiter, Sepp; Klambauer, Gunter title: Modern Hopfield Networks and Attention for Immune
Repertoire Classification date: 2020-07-16 journal: nan DOI: nan sha: doc_id: 193356 cord_uid: hqbstgg7
Reading metadata file and updating bibliographics
=== updating bibliographic database
Building study carrel named keyword-cnn-cord
=== file2bib.sh ===
Traceback (most recent call last):
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2646, in get_loc
    return self._engine.get_loc(key)
  File "pandas/_libs/index.pyx", line 111, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 1619, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 1627, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'cord-193356-hqbstgg7'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/data-disk/reader-compute/reader-cord/bin/file2bib.py", line 64, in <module>
    if ( bibliographics.loc[ escape ,'author'] ) : author = bibliographics.loc[ escape,'author']
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/indexing.py", line 1762, in __getitem__
    return self._getitem_tuple(key)
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/indexing.py", line 1272, in _getitem_tuple
    return self._getitem_lowerdim(tup)
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/indexing.py", line 1389, in _getitem_lowerdim
    section = self._getitem_axis(key, axis=i)
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/indexing.py", line 1965, in _getitem_axis
    return self._get_label(key, axis=axis)
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/indexing.py", line 625, in _get_label
    return self.obj._xs(label, axis=axis)
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/generic.py", line 3537, in xs
    loc = self.index.get_loc(key)
  File "/data-disk/python/lib/python3.8/site-packages/pandas/core/indexes/base.py", line 2648, in get_loc
    return self._engine.get_loc(self._maybe_cast_indexer(key))
  File "pandas/_libs/index.pyx", line 111, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 1619, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 1627, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'cord-193356-hqbstgg7'
=== file2bib.sh ===
id: cord-027732-8i8bwlh8 author: Boudaya, Amal title: EEG-Based Hypo-vigilance Detection Using Convolutional Neural Network date: 2020-05-31 pages: extension: .txt txt: ./txt/cord-027732-8i8bwlh8.txt cache: ./cache/cord-027732-8i8bwlh8.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-027732-8i8bwlh8.txt'
=== file2bib.sh ===
id: cord-028792-6a4jfz94 author: Basly, Hend title: CNN-SVM Learning Approach Based Human Activity Recognition date: 2020-06-05 pages: extension: .txt txt: ./txt/cord-028792-6a4jfz94.txt cache: ./cache/cord-028792-6a4jfz94.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-028792-6a4jfz94.txt'
=== file2bib.sh ===
id: cord-002901-u4ybz8ds author: Yu, Chanki title: Acral melanoma detection using a convolutional neural network for dermoscopy images date: 2018-03-07 pages: extension: .txt txt: ./txt/cord-002901-u4ybz8ds.txt cache: ./cache/cord-002901-u4ybz8ds.txt Content-Encoding UTF-8 Content-Type
text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-002901-u4ybz8ds.txt'
=== file2bib.sh ===
id: cord-024491-f16d1zov author: Qiu, Xi title: Simultaneous ECG Heartbeat Segmentation and Classification with Feature Fusion and Long Term Context Dependencies date: 2020-04-17 pages: extension: .txt txt: ./txt/cord-024491-f16d1zov.txt cache: ./cache/cord-024491-f16d1zov.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-024491-f16d1zov.txt'
=== file2bib.sh ===
id: cord-266055-ki4gkoc8 author: Kikkisetti, S. title: Deep-learning convolutional neural networks with transfer learning accurately classify COVID19 lung infection on portable chest radiographs date: 2020-09-02 pages: extension: .txt txt: ./txt/cord-266055-ki4gkoc8.txt cache: ./cache/cord-266055-ki4gkoc8.txt Content-Encoding ISO-8859-1 Content-Type text/plain; charset=ISO-8859-1 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-266055-ki4gkoc8.txt'
=== file2bib.sh ===
id: cord-354819-gkbfbh00 author: Islam, Md. Zabirul title: A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images date: 2020-08-15 pages: extension: .txt txt: ./txt/cord-354819-gkbfbh00.txt cache: ./cache/cord-354819-gkbfbh00.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-354819-gkbfbh00.txt'
=== file2bib.sh ===
id: cord-296359-pt86juvr author: Polsinelli, Matteo title: A Light CNN for detecting COVID-19 from CT scans of the chest date: 2020-10-03 pages: extension: .txt txt: ./txt/cord-296359-pt86juvr.txt cache: ./cache/cord-296359-pt86juvr.txt Content-Encoding ISO-8859-1 Content-Type text/plain; charset=ISO-8859-1 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 2 resourceName b'cord-296359-pt86juvr.txt'
=== file2bib.sh ===
id: cord-175846-aguwenwo author: Chatsiou, Kakia title: Text Classification of Manifestos and COVID-19 Press Briefings using BERT and Convolutional Neural Networks date: 2020-10-20 pages: extension: .txt txt: ./txt/cord-175846-aguwenwo.txt cache: ./cache/cord-175846-aguwenwo.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-175846-aguwenwo.txt'
=== file2bib.sh ===
id: cord-168974-w80gndka author: Ozkaya, Umut title: Coronavirus (COVID-19) Classification using Deep Features Fusion and Ranking Technique date: 2020-04-07 pages: extension: .txt txt: ./txt/cord-168974-w80gndka.txt cache: ./cache/cord-168974-w80gndka.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-168974-w80gndka.txt'
=== file2bib.sh ===
id: cord-317643-pk8cabxj author: Masud, Mehedi title: Convolutional neural network-based models for diagnosis of breast cancer date: 2020-10-09 pages: extension: .txt txt: ./txt/cord-317643-pk8cabxj.txt cache: ./cache/cord-317643-pk8cabxj.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-317643-pk8cabxj.txt'
=== file2bib.sh ===
id: cord-249065-6yt3uqyy author: Kassani, Sara Hosseinzadeh title: Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning-Based Approach date: 2020-04-22 pages: extension: .txt txt: ./txt/cord-249065-6yt3uqyy.txt cache: ./cache/cord-249065-6yt3uqyy.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 2 resourceName b'cord-249065-6yt3uqyy.txt'
=== file2bib.sh ===
id: cord-127759-wpqdtdjs author: Qi, Xiao title: Chest X-ray Image Phase Features for Improved Diagnosis of COVID-19 Using Convolutional Neural Network date: 2020-11-06 pages: extension: .txt txt: ./txt/cord-127759-wpqdtdjs.txt cache: ./cache/cord-127759-wpqdtdjs.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-127759-wpqdtdjs.txt'
=== file2bib.sh ===
id: cord-286887-s8lvimt3 author: Nour, Majid title: A Novel Medical Diagnosis model for COVID-19 infection detection based on Deep Features and Bayesian Optimization date: 2020-07-28 pages: extension: .txt txt: ./txt/cord-286887-s8lvimt3.txt cache: ./cache/cord-286887-s8lvimt3.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-286887-s8lvimt3.txt'
=== file2bib.sh ===
id: cord-135296-qv7pacau author: Polsinelli, Matteo title: A Light CNN for detecting COVID-19 from CT scans of the chest date: 2020-04-24 pages: extension: .txt txt: ./txt/cord-135296-qv7pacau.txt cache: ./cache/cord-135296-qv7pacau.txt Content-Encoding ISO-8859-1 Content-Type text/plain; charset=ISO-8859-1 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-135296-qv7pacau.txt'
=== file2bib.sh ===
id: cord-258170-kyztc1jp author: Shorfuzzaman, Mohammad title: Towards the sustainable development of smart cities through mass video surveillance: A response to the COVID-19 pandemic date: 2020-11-05 pages: extension: .txt txt: ./txt/cord-258170-kyztc1jp.txt cache: ./cache/cord-258170-kyztc1jp.txt Content-Encoding ISO-8859-1 Content-Type text/plain; charset=ISO-8859-1 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-258170-kyztc1jp.txt'
=== file2bib.sh ===
id: cord-308219-97gor71p author: Elzeiny, Sami title: Stress Classification Using Photoplethysmogram-Based Spatial and Frequency Domain Images date: 2020-09-17 pages: extension: .txt txt: ./txt/cord-308219-97gor71p.txt cache: ./cache/cord-308219-97gor71p.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-308219-97gor71p.txt'
=== file2bib.sh ===
id: cord-202184-hh7hugqi author: Wang, Jun title: Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Network date: 2020-10-10 pages: extension: .txt txt: ./txt/cord-202184-hh7hugqi.txt cache: ./cache/cord-202184-hh7hugqi.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-202184-hh7hugqi.txt'
=== file2bib.sh ===
id: cord-325235-uupiv7wh author: Makris, A. title: COVID-19 detection from chest X-Ray images using Deep Learning and Convolutional Neural Networks date: 2020-05-24 pages: extension: .txt txt: ./txt/cord-325235-uupiv7wh.txt cache: ./cache/cord-325235-uupiv7wh.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 2 resourceName b'cord-325235-uupiv7wh.txt'
=== file2bib.sh ===
id: cord-330239-l8fp8cvz author: Oyelade, O. N. title: Deep Learning Model for Improving the Characterization of Coronavirus on Chest X-ray Images Using CNN date: 2020-11-03 pages: extension: .txt txt: ./txt/cord-330239-l8fp8cvz.txt cache: ./cache/cord-330239-l8fp8cvz.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-330239-l8fp8cvz.txt'
=== file2bib.sh ===
id: cord-275258-azpg5yrh author: Mead, Dylan J.T. title: Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling date: 2019-07-26 pages: extension: .txt txt: ./txt/cord-275258-azpg5yrh.txt cache: ./cache/cord-275258-azpg5yrh.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-275258-azpg5yrh.txt'
=== file2bib.sh ===
id: cord-256756-8w5rtucg author: Manimala, M. V. R. title: Sparse MR Image Reconstruction Considering Rician Noise Models: A CNN Approach date: 2020-08-11 pages: extension: .txt txt: ./txt/cord-256756-8w5rtucg.txt cache: ./cache/cord-256756-8w5rtucg.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-256756-8w5rtucg.txt'
=== file2bib.sh ===
id: cord-102774-mtbo1tnq author: Sun, Yuliang title: Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform date: 2020-05-20 pages: extension: .txt txt: ./txt/cord-102774-mtbo1tnq.txt cache: ./cache/cord-102774-mtbo1tnq.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-102774-mtbo1tnq.txt'
=== file2bib.sh ===
id: cord-034614-r429idtl author: Yasar, Huseyin title: A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks date: 2020-11-04 pages: extension: .txt txt: ./txt/cord-034614-r429idtl.txt cache: ./cache/cord-034614-r429idtl.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-034614-r429idtl.txt'
=== file2bib.sh ===
id: cord-032684-muh5rwla author: Madichetty, Sreenivasulu title: A stacked convolutional neural network for detecting the resource tweets during a disaster date: 2020-09-25 pages: extension: .txt txt: ./txt/cord-032684-muh5rwla.txt cache: ./cache/cord-032684-muh5rwla.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-032684-muh5rwla.txt'
=== file2bib.sh ===
id: cord-190424-466a35jf author: Lee, Sang Won title: Darwin's Neural Network: AI-based Strategies for Rapid and Scalable Cell and Coronavirus Screening date: 2020-07-22 pages: extension: .txt txt: ./txt/cord-190424-466a35jf.txt cache: ./cache/cord-190424-466a35jf.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-190424-466a35jf.txt'
=== file2bib.sh ===
id: cord-319868-rtt9i7wu author: Majeed, Taban title: Issues associated with deploying CNN transfer learning to detect COVID-19 from chest X-rays date: 2020-10-06 pages: extension: .txt txt: ./txt/cord-319868-rtt9i7wu.txt cache: ./cache/cord-319868-rtt9i7wu.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-319868-rtt9i7wu.txt'
=== file2bib.sh ===
id: cord-131094-1zz8rd3h author: Parisi, L. title: QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics date: 2020-10-15 pages: extension: .txt txt: ./txt/cord-131094-1zz8rd3h.txt cache: ./cache/cord-131094-1zz8rd3h.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-131094-1zz8rd3h.txt'
=== file2bib.sh ===
id: cord-269270-i2odcsx7 author: Sahlol, Ahmed T. title: COVID-19 image classification using deep features and fractional-order marine predators algorithm date: 2020-09-21 pages: extension: .txt txt: ./txt/cord-269270-i2odcsx7.txt cache: ./cache/cord-269270-i2odcsx7.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-269270-i2odcsx7.txt'
=== file2bib.sh ===
id: cord-121200-2qys8j4u author: Zogan, Hamad title: Depression Detection with Multi-Modalities Using a Hybrid Deep Learning Model on Social Media date: 2020-07-03 pages: extension: .txt txt: ./txt/cord-121200-2qys8j4u.txt cache: ./cache/cord-121200-2qys8j4u.txt Content-Encoding ISO-8859-1 Content-Type text/plain; charset=ISO-8859-1 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 2 resourceName b'cord-121200-2qys8j4u.txt'
=== file2bib.sh ===
id: cord-337740-8ujk830g author: Matencio, Adrián title: Cyclic Oligosaccharides as Active Drugs, an Updated Review date: 2020-09-29 pages: extension: .txt txt: ./txt/cord-337740-8ujk830g.txt cache: ./cache/cord-337740-8ujk830g.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-337740-8ujk830g.txt'
=== file2bib.sh ===
id: cord-255884-0qqg10y4 author: Chiroma, H. title: Early survey with bibliometric analysis on machine learning approaches in controlling coronavirus date: 2020-11-05 pages: extension: .txt txt: ./txt/cord-255884-0qqg10y4.txt cache: ./cache/cord-255884-0qqg10y4.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 3 resourceName b'cord-255884-0qqg10y4.txt'
=== file2bib.sh ===
id: cord-103297-4stnx8dw author: Widrich, Michael title: Modern Hopfield Networks and Attention for Immune Repertoire Classification date: 2020-08-17 pages: extension: .txt txt: ./txt/cord-103297-4stnx8dw.txt cache: ./cache/cord-103297-4stnx8dw.txt Content-Encoding UTF-8 Content-Type text/plain; charset=UTF-8 X-Parsed-By ['org.apache.tika.parser.DefaultParser', 'org.apache.tika.parser.csv.TextAndCSVParser'] X-TIKA:content_handler ToTextContentHandler X-TIKA:embedded_depth 0 X-TIKA:parse_time_millis 4 resourceName b'cord-103297-4stnx8dw.txt'
=== file2bib.sh ===
OMP: Error #34: System unable to allocate necessary resources for OMP thread:
OMP: System error #11: Resource temporarily unavailable
OMP: Hint Try decreasing the value of OMP_NUM_THREADS.
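The OMP abort above means the host could not create another OpenMP thread for this worker. Following the log's own hint, one plausible workaround is to cap OMP_NUM_THREADS before the numeric libraries spin up their thread pools. This is a sketch, not part of the reader-cord code; the value 1 is a conservative guess, and the cap must be set before numpy/pandas are imported (or exported in the shell that launches file2bib.sh):

```python
import os

# Cap OpenMP threading for this process and its children. Setting the
# variable here, before importing numpy/pandas, has the same effect as
# `export OMP_NUM_THREADS=1` in the launching shell; 1 is a conservative
# guess, not a value recommended by the tool.
os.environ["OMP_NUM_THREADS"] = "1"

print(os.environ["OMP_NUM_THREADS"])
```

Lower thread counts trade throughput for fewer simultaneous native threads, which is usually the right trade on a box that is already failing with EAGAIN.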
/data-disk/reader-compute/reader-cord/bin/file2bib.sh: line 39: 23636 Aborted $FILE2BIB "$FILE" > "$OUTPUT"
Que is empty; done keyword-cnn-cord
=== reduce.pl bib ===
id = cord-024491-f16d1zov
author = Qiu, Xi
title = Simultaneous ECG Heartbeat Segmentation and Classification with Feature Fusion and Long Term Context Dependencies
date = 2020-04-17
pages =
extension = .txt
mime = text/plain
words = 3465
sentences = 245
flesch = 55
summary = Since deep learning methods can produce feature maps from raw data, heartbeat segmentation can be conducted simultaneously with classification within a single neural network. To achieve simultaneous segmentation and classification, we present a Faster R-CNN [2] based model that has been customized to handle ECG sequences. In our method, we present a modified Faster R-CNN for arrhythmia detection which works in only two steps: preprocessing, and simultaneous heartbeat segmentation and classification. The architecture of our model is shown in Fig. 2, which takes a 1-D ECG sequence as its input and conducts heartbeat segmentation and classification simultaneously. Unlike most deep learning methods, which compute feature maps for a single heartbeat, our backbone model takes a long ECG sequence as its input.
cache = ./cache/cord-024491-f16d1zov.txt
txt = ./txt/cord-024491-f16d1zov.txt
=== reduce.pl bib ===
id = cord-131094-1zz8rd3h
author = Parisi, L.
title = QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics
date = 2020-10-15
pages =
extension = .txt
mime = text/plain
words = 7546
sentences = 325
flesch = 48
summary = Despite a higher computational cost, results indicated an overall higher classification accuracy, precision, recall and F1-score brought about by either quantum AF on five of the seven benchmark datasets, thus demonstrating their potential to become the new benchmark or gold-standard AFs in CNNs and to aid image classification tasks involved in critical applications, such as medical diagnoses of COVID-19 and PD. Despite a higher computational cost (four-fold with respect to the other AFs, except for the CReLU's increase, which is almost three-fold), the results achieved by either or both of the proposed QReLU and m-QReLU AFs, assessed on classification accuracy, precision, recall and F1-score, indicate an overall higher generalisation achieved on five of the seven benchmark datasets (Table 2 on the MNIST data, Tables 3 and 5 on PD-related spiral drawings, Tables 7 and 8 on COVID-19 lung US images).
cache = ./cache/cord-131094-1zz8rd3h.txt
txt = ./txt/cord-131094-1zz8rd3h.txt
=== reduce.pl bib ===
id = cord-002901-u4ybz8ds
author = Yu, Chanki
title = Acral melanoma detection using a convolutional neural network for dermoscopy images
date = 2018-03-07
pages =
extension = .txt
mime = text/plain
words = 3513
sentences = 180
flesch = 52
summary = We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. To perform the 2-fold cross-validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis by comparing it with the dermatologist's and the non-expert's evaluations.
CONCLUSION: Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful for detecting acral melanoma from dermoscopy images of the hands and feet. For group B, after training on group A images, the CNN also showed a higher diagnostic accuracy (80.23%) than the non-expert (62.71%), and one similar to that of the expert (81.64%).
cache = ./cache/cord-002901-u4ybz8ds.txt
txt = ./txt/cord-002901-u4ybz8ds.txt
=== reduce.pl bib ===
id = cord-027732-8i8bwlh8
author = Boudaya, Amal
title = EEG-Based Hypo-vigilance Detection Using Convolutional Neural Network
date = 2020-05-31
pages =
extension = .txt
mime = text/plain
words = 2337
sentences = 148
flesch = 49
summary = Given its high temporal resolution, portability and reasonable cost, the present work focuses on hypo-vigilance detection by analyzing the EEG signals of various brain functionalities using fourteen electrodes placed on the participant's scalp. On the other hand, deep learning networks offer great potential for biomedical signal analysis through the simplification of raw input signals (i.e., through various steps including feature extraction, denoising and feature selection) and the improvement of classification results. In this paper, we focus on the study of EEG signals recorded by fourteen electrodes placed on the participant's scalp for hypo-vigilance detection, analyzing the various functionalities of the brain. In this paper, we propose a CNN hypo-vigilance detection method using EEG data in order to classify drowsiness and wakefulness states. For the proposed simple CNN architecture for EEG signal classification, we use the Keras deep learning library.
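The Boudaya et al. record above describes a simple CNN over fourteen-electrode EEG signals. As a minimal illustration of the core operation such a network applies (not the paper's architecture; the shapes, single layer, and ReLU choice are assumptions for the sketch), a multi-channel 1-D convolution can be written directly in NumPy:

```python
import numpy as np

def conv1d_multichannel(x, kernels, bias):
    """Valid 1-D convolution over a multi-channel signal, plus ReLU.

    x       : (channels, time) input, e.g. 14 EEG electrodes over time
              (shapes are illustrative assumptions, not the paper's setup).
    kernels : (filters, channels, width) learnable weights.
    bias    : (filters,) learnable offsets.
    Returns (filters, time - width + 1) feature maps.
    """
    f, c, w = kernels.shape
    t = x.shape[1] - w + 1
    out = np.empty((f, t))
    for i in range(t):
        window = x[:, i:i + w]  # (channels, width) slice under the kernel
        # Sum of elementwise products over channels and width, per filter.
        out[:, i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    out += bias[:, None]
    return np.maximum(out, 0.0)  # ReLU activation
```

A real Keras model would stack several such layers with pooling and a dense softmax head; this loop only shows what one convolutional layer computes.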
cache = ./cache/cord-027732-8i8bwlh8.txt
txt = ./txt/cord-027732-8i8bwlh8.txt
=== reduce.pl bib ===
id = cord-102774-mtbo1tnq
author = Sun, Yuliang
title = Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform
date = 2020-05-20
pages =
extension = .txt
mime = text/plain
words = 6381
sentences = 348
flesch = 60
summary = In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system is proposed to recognize gestures. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement cycles and encodes them into a feature cube. Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as the input of a shallow CNN for gesture recognition to reduce the computational complexity. Although [16] projected the range-Doppler measurement cycles into range-time and Doppler-time to reduce the input dimension of the LSTM layer and achieved good classification accuracy in real time, the proposed algorithms were implemented on a personal computer with powerful computational capability.
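The Sun et al. record above describes encoding range, Doppler, azimuth and elevation estimates over multiple measurement cycles into a feature cube for a shallow CNN. A toy NumPy sketch of that assembly step follows; the dimensions and the per-feature normalisation are illustrative assumptions, not the paper's exact encoding:

```python
import numpy as np

def build_feature_cube(cycles):
    """Stack per-measurement-cycle hand profiles into one feature cube.

    `cycles` is a list of (n_features, n_bins) arrays -- e.g. one row each
    for range, Doppler, azimuth and elevation estimates (an assumption for
    illustration).  The result, shaped (n_cycles, n_features, n_bins), is
    the kind of fixed-size input a shallow CNN classifier can consume.
    """
    cube = np.stack(cycles, axis=0)
    # Normalise each feature across cycles and bins so the CNN sees
    # comparable scales; guard against all-zero features.
    mx = np.abs(cube).max(axis=(0, 2), keepdims=True)
    return cube / np.where(mx == 0, 1, mx)
```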
cache = ./cache/cord-102774-mtbo1tnq.txt
txt = ./txt/cord-102774-mtbo1tnq.txt
=== reduce.pl bib ===
id = cord-034614-r429idtl
author = Yasar, Huseyin
title = A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks
date = 2020-11-04
pages =
extension = .txt
mime = text/plain
words = 7750
sentences = 385
flesch = 60
summary = In this study, which aims at early diagnosis of Covid-19 disease using X-ray images, the deep-learning approach, a state-of-the-art artificial intelligence method, was used, and automatic classification of images was performed using convolutional neural networks (CNN). Within the scope of the study, the results were obtained using chest X-ray images directly in the training-test procedures and using the sub-band images obtained by applying the dual tree complex wavelet transform (DT-CWT) to the above-mentioned images. In the study, experiments were carried out for the use of images directly, using local binary pattern (LBP) as a pre-process and dual tree complex wavelet transform (DT-CWT) as a secondary operation, and the results of the automatic classification were calculated separately.
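The Yasar et al. record above uses local binary pattern (LBP) as a pre-processing step before the CNN. A textbook 3x3 LBP (not necessarily the exact variant the paper uses) is straightforward in NumPy:

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 3x3 local binary pattern.

    Each interior pixel is replaced by an 8-bit code whose bits record
    which of its eight neighbours are >= the centre pixel, giving a
    texture descriptor that is insensitive to monotonic intensity shifts.
    """
    c = img[1:-1, 1:-1]  # centre pixels (interior only)
    # Neighbour offsets in a fixed clockwise order, one bit each.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

On a perfectly flat patch every neighbour ties with the centre, so every bit is set and the code is 255; an isolated bright pixel dominates all its neighbours and codes to 0.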
cache = ./cache/cord-034614-r429idtl.txt
txt = ./txt/cord-034614-r429idtl.txt
=== reduce.pl bib ===
id = cord-175846-aguwenwo
author = Chatsiou, Kakia
title = Text Classification of Manifestos and COVID-19 Press Briefings using BERT and Convolutional Neural Networks
date = 2020-10-20
pages =
extension = .txt
mime = text/plain
words = 3188
sentences = 192
flesch = 47
summary = We use manually annotated political manifestos as training data to train a local topic Convolutional Neural Network (CNN) classifier; we then apply it to the COVID-19 Press Briefings Corpus to automatically classify sentences in the test corpus. We report on a series of experiments with CNNs trained on top of pre-trained embeddings for sentence-level classification tasks. To aid fellow scholars with the systematic study of such a large and dynamic set of unstructured data, we set out to employ a text categorization classifier trained on similar domains (like existing manually annotated sentences from political manifestos) and use it to classify press briefings about the pandemic in a more effective and scalable way.
cache = ./cache/cord-175846-aguwenwo.txt
txt = ./txt/cord-175846-aguwenwo.txt
=== reduce.pl bib ===
id = cord-135296-qv7pacau
author = Polsinelli, Matteo
title = A Light CNN for detecting COVID-19 from CT scans of the chest
date = 2020-04-24
pages =
extension = .txt
mime = text/plain
words = 3833
sentences = 194
flesch = 56
summary = We propose a light CNN design based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images from other CT images (community-acquired pneumonia and/or healthy images). On the tested datasets, the proposed modified SqueezeNet CNN achieved 83.00% accuracy, 85.00% sensitivity, 81.00% specificity, 81.73% precision and an F1-score of 0.8333 in a very efficient way (7.81 seconds on a medium-end laptop without GPU acceleration).
In the present work, we aim at obtaining acceptable performance for an automatic method for recognizing COVID-19 CT images of lungs while, at the same time, dealing with reduced datasets for training and validation and reducing the computational overhead imposed by more complex automatic systems. In this work we developed, trained and tested a light CNN (based on the SqueezeNet) to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images.
cache = ./cache/cord-135296-qv7pacau.txt
txt = ./txt/cord-135296-qv7pacau.txt
=== reduce.pl bib ===
id = cord-127759-wpqdtdjs
author = Qi, Xiao
title = Chest X-ray Image Phase Features for Improved Diagnosis of COVID-19 Using Convolutional Neural Network
date = 2020-11-06
pages =
extension = .txt
mime = text/plain
words = 3896
sentences = 250
flesch = 50
summary = In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for improved multi-class classification of COVID-19 from CXR images. In this work we show how local phase CXR feature based image enhancement improves the accuracy of CNN architectures for COVID-19 diagnosis. Our proposed method is designed for processing CXR images and consists of two main stages, as illustrated in Figure 1: (1) we enhance the CXR images (CXR(x, y)) using a local phase-based image processing method in order to obtain a multi-feature CXR image (MF(x, y)), and (2) we classify CXR(x, y) by designing a deep learning approach where multi-feature CXR images (MF(x, y)), together with the original CXR data (CXR(x, y)), are used for improving the classification performance. Our proposed multi-feature CNN architectures were trained on a large dataset in terms of the number of COVID-19 CXR scans and have achieved improved classification accuracy across all classes.
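The Qi et al. record above feeds the enhanced multi-feature images together with the original CXR into the CNN. A hedged NumPy sketch of that input-assembly idea follows; the channel layout and the min-max normalisation are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def stack_multifeature_input(cxr, enhanced_features):
    """Combine an original CXR with enhanced feature images into one
    multi-channel array, the kind of input a multi-feature CNN consumes.

    cxr               : (H, W) original image.
    enhanced_features : iterable of (H, W) enhanced images.
    Returns (H, W, 1 + n_features), each channel scaled to [0, 1].
    """
    channels = [cxr] + list(enhanced_features)
    stacked = np.stack(channels, axis=-1).astype(np.float64)
    # Min-max normalise each channel independently; constant channels
    # are left at zero instead of dividing by zero.
    mn = stacked.min(axis=(0, 1), keepdims=True)
    mx = stacked.max(axis=(0, 1), keepdims=True)
    return (stacked - mn) / np.where(mx > mn, mx - mn, 1)
```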
cache = ./cache/cord-127759-wpqdtdjs.txt
txt = ./txt/cord-127759-wpqdtdjs.txt
=== reduce.pl bib ===
id = cord-028792-6a4jfz94
author = Basly, Hend
title = CNN-SVM Learning Approach Based Human Activity Recognition
date = 2020-06-05
pages =
extension = .txt
mime = text/plain
words = 3570
sentences = 178
flesch = 49
summary = Traditionally, to deal with such a recognition problem, researchers have been obliged to precede their human activity recognition algorithms with a data preprocessing step in order to extract a set of features using different types of descriptors, such as HOG3D [1], extended SURF [2] and Space Time Interest Points (STIPs) [3], before inputting them to a specific classification algorithm such as HMM, SVM or Random Forest [4] [5] [6]. In this study, we proposed an advanced human activity recognition method from video sequences using a CNN, where the large-scale dataset ImageNet pretrains the network. Finally, all the resulting features have been merged and fed as input to a simulated annealing multiple instance learning support vector machine (SMILE-SVM) classifier for human activity recognition. We proposed to use a pre-trained ResNet-based CNN in order to extract spatial and temporal features from consecutive video frames.
cache = ./cache/cord-028792-6a4jfz94.txt
txt = ./txt/cord-028792-6a4jfz94.txt
=== reduce.pl bib ===
id = cord-249065-6yt3uqyy
author = Kassani, Sara Hosseinzadeh
title = Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning-Based Approach
date = 2020-04-22
pages =
extension = .txt
mime = text/plain
words = 4285
sentences = 229
flesch = 45
summary = To the best of our knowledge, this research is the first comprehensive study of the application of machine learning (ML) algorithms (15 deep CNN visual feature extractors and 6 ML classifiers) for automatic diagnosis of COVID-19 from X-ray and CT images.
• With extensive experiments, we show that the combination of a deep CNN with a Bagging trees classifier achieves very good classification performance on COVID-19 data despite the limited number of image samples. Motivated by the success of deep learning models in computer vision, the focus of this research is to provide an extensive and comprehensive study on the classification of COVID-19 pneumonia in chest X-ray and CT imaging using features extracted by state-of-the-art deep CNN architectures and trained on machine learning algorithms. The experimental results on the available chest X-ray and CT dataset demonstrate that the features extracted by the DenseNet121 architecture and trained by a Bagging tree classifier generate a very accurate prediction of 99.00% in terms of classification accuracy.
cache = ./cache/cord-249065-6yt3uqyy.txt
txt = ./txt/cord-249065-6yt3uqyy.txt
=== reduce.pl bib ===
id = cord-266055-ki4gkoc8
author = Kikkisetti, S.
title = Deep-learning convolutional neural networks with transfer learning accurately classify COVID19 lung infection on portable chest radiographs
date = 2020-09-02
pages =
extension = .txt
mime = text/plain
words = 3433
sentences = 228
flesch = 49
summary = This study employed deep-learning convolutional neural networks to classify COVID-19 lung infections on pCXR from normal and related lung infections to potentially enable more timely and accurate diagnosis. This retrospective study employed a deep-learning convolutional neural network (CNN) with transfer learning to classify COVID-19 pneumonia (N=455) on pCXR against normal (N=532), bacterial pneumonia (N=492), and non-COVID viral pneumonia (N=552).
Deep-learning convolutional neural network with transfer learning accurately classifies COVID-19 on portable chest x-ray against normal, bacterial pneumonia or non-COVID viral pneumonia. The goal of this pilot study is to employ deep-learning convolutional neural networks to classify normal, bacterial infection, and non-COVID-19 viral infection (such as influenza). In conclusion, deep-learning convolutional neural networks with transfer learning accurately classify COVID-19 pCXR from pCXR of normal, bacterial pneumonia, and non-COVID viral pneumonia patients in a multiclass model.
cache = ./cache/cord-266055-ki4gkoc8.txt
txt = ./txt/cord-266055-ki4gkoc8.txt
=== reduce.pl bib ===
id = cord-202184-hh7hugqi
author = Wang, Jun
title = Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Network
date = 2020-10-10
pages =
extension = .txt
mime = text/plain
words = 5291
sentences = 319
flesch = 43
summary = In this work, we propose three strategies to improve the capability of EfficientNet, including developing a cropping method called Random Center Cropping (RCC) to retain significant features in the center area of images, reducing the downsampling scale of EfficientNet to facilitate the small-resolution images of the RPCam datasets, and integrating Attention and Feature Fusion mechanisms with EfficientNet to obtain features containing rich semantic information.
This work has three main contributions: (1) To the best of our knowledge, this is the first study to explore the power of EfficientNet on MBC classification, and elaborate experiments are conducted to compare the performance of EfficientNet with other state-of-the-art CNN models, which might offer inspiration for researchers who are interested in image-based diagnosis using DL; (2) We propose a novel data augmentation method, RCC, to facilitate the data enrichment of small-resolution datasets; (3) All four of our technological improvements boost the performance of the original EfficientNet. The best accuracy and AUC reach 97.96% and 99.68%, respectively, confirming the applicability of utilizing CNN-based methods for BC diagnosis.
cache = ./cache/cord-202184-hh7hugqi.txt
txt = ./txt/cord-202184-hh7hugqi.txt
=== reduce.pl bib ===
id = cord-275258-azpg5yrh
author = Mead, Dylan J.T.
title = Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling
date = 2019-07-26
pages =
extension = .txt
mime = text/plain
words = 6333
sentences = 346
flesch = 53
summary = This paper presents the first use of force-directed graphs for the visualization of sequence space in two dimensions, and applies them to the choice of suitable RNA-dependent RNA polymerase (RdRP) target-template pairs within human-infective RNA virus genera. Measures of centrality in protein sequence space for each genus were also derived and used to identify centroid nearest-neighbour sequences (CNNs) potentially useful for the production of homology models most representative of their genera. We then present the first use of force-directed graphs to produce an intuitive visualization of sequence space, and select target RdRPs without solved structures for homology modelling.
The solved structure has 10 other sequences in its proximity in the three-dimensional space (roughly; see Table 5, homology modelling at the intra-order, inter-family level).
cache = ./cache/cord-275258-azpg5yrh.txt
txt = ./txt/cord-275258-azpg5yrh.txt
=== reduce.pl bib ===
id = cord-168974-w80gndka
author = Ozkaya, Umut
title = Coronavirus (COVID-19) Classification using Deep Features Fusion and Ranking Technique
date = 2020-04-07
pages =
extension = .txt
mime = text/plain
words = 3585
sentences = 254
flesch = 59
summary = In this study, a novel method that fuses and ranks deep features to detect COVID-19 in the early phase was proposed. Within the scope of the proposed method, 3000 patch images were labelled as COVID-19 or No finding for use in the training and testing phases. Compared with other pre-trained Convolutional Neural Network (CNN) models used in transfer learning, the proposed method shows high performance on Subset-2 with 98.27% accuracy, 98.93% sensitivity, 97.60% specificity, 97.63% precision, 98.28% F1-score and 96.54% Matthews Correlation Coefficient (MCC). When the studies in the literature are examined, Shan et al. proposed a neural network model called VB-Net in order to segment the COVID-19 regions in CT images, and the authors of [8] were able to successfully diagnose COVID-19 using deep learning models that could obtain graphical features from CT images. In the study, deep features were obtained with pre-trained Convolutional Neural Network (CNN) models.
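The Ozkaya et al. record above reports accuracy, sensitivity, specificity, precision, F1-score and the Matthews Correlation Coefficient. All of these follow from the four binary confusion-matrix counts; the sketch below (plain Python, with illustrative counts, not the paper's data) shows the standard definitions:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Common binary-classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)                    # a.k.a. recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # MCC balances all four counts; +1 is perfect, 0 is chance level.
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "mcc": mcc}
```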
cache = ./cache/cord-168974-w80gndka.txt
txt = ./txt/cord-168974-w80gndka.txt
=== reduce.pl bib ===
id = cord-032684-muh5rwla
author = Madichetty, Sreenivasulu
title = A stacked convolutional neural network for detecting the resource tweets during a disaster
date = 2020-09-25
pages =
extension = .txt
mime = text/plain
words = 6980
sentences = 418
flesch = 55
summary = Specifically, the authors in [3] used both information-retrieval methodologies and classification methodologies (CNN with crisis word embeddings) to extract the Need and Availability of Resource tweets during a disaster. The main drawback of the CNN with crisis embeddings is that it does not work well if the number of training tweets is small and, in the case of information-retrieval methodologies, keywords must be given manually to identify the need and availability of resource tweets during the disaster. Initially, the experiment is performed on the SVM classifier based on the proposed domain-specific features for the identification of NAR tweets, and compared to the BoW model, as shown in Table 5. This paper proposes a method named CKS (CNN and KNN are used as base-level classifiers, and SVM is used as a meta-level classifier) for identifying tweets related to the Need and Availability of Resources during a disaster.
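The CKS scheme in the Madichetty record above stacks base-level classifiers (CNN and KNN in the paper) under a meta-level classifier (SVM in the paper). Stripped of the actual models, the prediction path of such a stack is just function composition; the callables below are stand-ins for trained classifiers, not the paper's code:

```python
def stacked_predict(base_models, meta_model, x):
    """Two-level stacking prediction.

    Each base-level model scores the input independently; the meta-level
    model then decides from the vector of base outputs.  In CKS the base
    models would be a CNN and a KNN and the meta model an SVM, all trained
    beforehand (the meta model on the base models' outputs over a
    held-out training set).
    """
    meta_features = [model(x) for model in base_models]
    return meta_model(meta_features)
```

For example, with two threshold "classifiers" and a vote-counting meta model, an input that trips either base model is labelled positive.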
cache = ./cache/cord-032684-muh5rwla.txt
txt = ./txt/cord-032684-muh5rwla.txt
=== reduce.pl bib ===
id = cord-121200-2qys8j4u
author = Zogan, Hamad
title = Depression Detection with Multi-Modalities Using a Hybrid Deep Learning Model on Social Media
date = 2020-07-03
pages =
extension = .txt
mime = text/plain
words = 10036
sentences = 521
flesch = 51
summary = While many previous works have largely studied the problem on a small scale by assuming uni-modality of data, which may not give faithful results, we propose a novel scalable hybrid model that combines Bidirectional Gated Recurrent Units (BiGRUs) and Convolutional Neural Networks to detect depressed users on social media such as Twitter, based on multi-modal features. To be specific, this work aims to develop a novel deep learning-based solution for improving depression detection by utilizing multi-modal features from the diverse behaviour of depressed users on social media. To this end, we propose a hybrid model comprising a Bidirectional Gated Recurrent Unit (BiGRU) and a Convolutional Neural Network (CNN) to boost the classification of depressed users using multi-modal features and word embedding features. The most closely related recent work to ours is [23], where the authors propose a CNN-based deep learning model to classify Twitter users based on depression using multi-modal features.
cache = ./cache/cord-121200-2qys8j4u.txt
txt = ./txt/cord-121200-2qys8j4u.txt
=== reduce.pl bib ===
id = cord-256756-8w5rtucg
author = Manimala, M. V. R.
title = Sparse MR Image Reconstruction Considering Rician Noise Models: A CNN Approach
date = 2020-08-11
pages =
extension = .txt
mime = text/plain
words = 6681
sentences = 411
flesch = 55
summary = The proposed algorithm employs a convolutional neural network (CNN) to denoise MR images corrupted with Rician noise.
Dictionary learning for MRI (DLMRI) provided an effective solution to recover MR images from sparse k-space data [2], but had the drawback of high computational time. The proposed denoising algorithm reconstructs MR images with high visual quality; further, it can be directly employed without optimization and prediction of the Rician noise level. The proposed CNN-based algorithm is capable of denoising Rician-noise-corrupted sparse MR images and also reduces the computation time substantially. This section presents the proposed CNN-based formulation for denoising and reconstruction of MR images from sparse k-space data. The proposed CNN-based denoising algorithm has been compared with various state-of-the-art techniques, namely (1) dictionary learning magnetic resonance imaging (DLMRI) [2], and (2) non-local means (NLM) and its variants, namely unbiased NLM (UNLM), Rician NLM (RNLM), enhanced NLM (ENLM) and the enhanced NLM filter with preprocessing (PENLM) [5].
cache = ./cache/cord-256756-8w5rtucg.txt
txt = ./txt/cord-256756-8w5rtucg.txt
=== reduce.pl bib ===
id = cord-103297-4stnx8dw
author = Widrich, Michael
title = Modern Hopfield Networks and Attention for Immune Repertoire Classification
date = 2020-08-17
pages =
extension = .txt
mime = text/plain
words = 14093
sentences = 926
flesch = 57
summary = In this work, we present our novel method DeepRC, which integrates transformer-like attention, or equivalently modern Hopfield networks, into deep learning architectures for massive MIL such as immune repertoire classification. DeepRC sets out to avoid the above-mentioned constraints of current methods by (a) applying transformer-like attention-pooling instead of max-pooling and learning a classifier on the repertoire rather than on the sequence representation, (b) pooling learned representations rather than predictions, and (c) using less rigid feature extractors, such as 1D convolutions or LSTMs.
In this work, we contribute the following: We demonstrate that continuous generalizations of binary modern Hopfield networks (Krotov & Hopfield, 2016; Demircigil et al., 2017) have an update rule that is known as the attention mechanism in the transformer. We evaluate the predictive performance of DeepRC and other machine learning approaches for the classification of immune repertoires in a large comparative study (Section "Experimental Results"). We show the exponential storage capacity of continuous-state modern Hopfield networks with transformer attention as the update rule.
cache = ./cache/cord-103297-4stnx8dw.txt
txt = ./txt/cord-103297-4stnx8dw.txt
=== reduce.pl bib ===
id = cord-258170-kyztc1jp
author = Shorfuzzaman, Mohammad
title = Towards the sustainable development of smart cities through mass video surveillance: A response to the COVID-19 pandemic
date = 2020-11-05
pages =
extension = .txt
mime = text/plain
words = 5371
sentences = 300
flesch = 54
summary = In particular, we make the following contributions: (a) A deep learning-based framework is presented for monitoring social distancing in the context of sustainable smart cities in an effort to curb the spread of COVID-19 or similar infectious diseases; (b) The proposed system leverages state-of-the-art, deep learning-based real-time object detection models for the detection of people in videos, captured with a monocular camera, to implement social distancing monitoring use cases; (c) A perspective transformation is presented, where the captured video is transformed from a perspective view to a bird's eye (top-down) view to identify the region of interest (ROI) in which social distancing will be monitored; (d) A detailed performance evaluation is provided to show the effectiveness of the proposed system on a video surveillance dataset.
cache = ./cache/cord-258170-kyztc1jp.txt
txt = ./txt/cord-258170-kyztc1jp.txt
=== reduce.pl bib ===
id = cord-255884-0qqg10y4
author = Chiroma, H.
title = Early survey with bibliometric analysis on machine learning approaches in controlling coronavirus
date = 2020-11-05
pages =
extension = .txt
mime = text/plain
words = 13197
sentences = 767
flesch = 47
summary = Therefore, the main goal of this study is to bridge this gap by carrying out an in-depth survey with bibliometric analysis on the adoption of machine-learning-based technologies to fight the COVID-19 pandemic from a different perspective, including an extensive systematic literature review and a bibliometric analysis. Moreover, the machine-learning-based algorithm predominantly utilized by researchers in developing diagnostic tools is the CNN, applied mainly to X-ray and CT scan images. We believe that the presented survey with bibliometric analysis can help researchers determine areas that need further development and identify potential collaborators at author, country, and institutional levels to advance research in the focused area of machine learning application for disease control. One study (2020) proposed a joint model comprising a CNN, support vector machine (SVM), random forest (RF), and multilayer perceptron, integrated with chest CT scan results and non-image clinical information, to predict COVID-19 infection in a patient.
cache = ./cache/cord-255884-0qqg10y4.txt
txt = ./txt/cord-255884-0qqg10y4.txt
=== reduce.pl bib ===
id = cord-286887-s8lvimt3
author = Nour, Majid
title = A Novel Medical Diagnosis model for COVID-19 infection detection based on Deep Features and Bayesian Optimization
date = 2020-07-28
pages =
extension = .txt
mime = text/plain
words = 3686
sentences = 250
flesch = 55
summary = The proposed model is based on the convolutional neural network (CNN) architecture and can automatically reveal discriminative features on chest X-ray images through its convolution with rich filter families, abstraction, and weight-sharing characteristics.
In study [5], Chest Computed Tomography (CT) images and a Deep Transfer Learning (DTL) method were used to detect COVID-19, obtaining high diagnostic accuracy. The authors of [6] proposed a novel hybrid method, the Fuzzy Color technique combined with deep learning models (MobileNetV2, SqueezeNet) and a Social Mimic optimization method, to classify COVID-19 cases, and achieved a high success rate. (2) The deep features extracted from deep layers of CNNs have been applied as input to machine learning models to further improve COVID-19 infection detection. Only the number of samples in the COVID-19 class is increased by using the offline data augmentation approach, and then the proposed CNN model is trained and tested.
cache = ./cache/cord-286887-s8lvimt3.txt
txt = ./txt/cord-286887-s8lvimt3.txt
=== reduce.pl bib ===
id = cord-190424-466a35jf
author = Lee, Sang Won
title = Darwin's Neural Network: AI-based Strategies for Rapid and Scalable Cell and Coronavirus Screening
date = 2020-07-22
pages =
extension = .txt
mime = text/plain
words = 5680
sentences = 302
flesch = 51
summary = Here we adapt the theory of survival of the fittest in the field of computer vision and machine perception to introduce a new framework of multi-class instance segmentation deep learning, Darwin's Neural Network (DNN), to carry out morphometric analysis and classification of COVID19 and MERS-CoV collected in vivo and of multiple mammalian cell types in vitro. U-Net with an Inception-ResNet-v2 backbone yielded the highest global accuracy of 0.8346, as seen in Figure 4(E); therefore, Inception-ResNet-v2 was integrated in place of CNN II in the DNN for cells. For overall instance segmentation results, the DNN produced both superior global accuracy and Jaccard Similarity Coefficient for cells and viruses.
As observed in Figure 6(C1-C2), the DNN analysis showed statistical significance in the area and circularity of COVID19 in comparison to the MERS virus particles, which aligned with findings in the ground-truth data of the viruses.
cache = ./cache/cord-190424-466a35jf.txt
txt = ./txt/cord-190424-466a35jf.txt
=== reduce.pl bib ===
id = cord-317643-pk8cabxj
author = Masud, Mehedi
title = Convolutional neural network-based models for diagnosis of breast cancer
date = 2020-10-09
pages =
extension = .txt
mime = text/plain
words = 4149
sentences = 276
flesch = 53
summary = With this motivation, this paper considers eight different fine-tuned pre-trained models to observe how these models classify breast cancers when applied to ultrasound images. The authors in [18] proposed a convolutional neural network leveraging the Inception-v3 pre-trained model to classify breast cancer using breast ultrasound images. The authors in [24] compared three CNN-based transfer learning models, ResNet50, Xception, and InceptionV3, and proposed a base model that consists of three convolutional layers to classify breast cancers from a breast ultrasound images dataset. The authors in [27] proposed a novel deep neural network consisting of a clustering method and a CNN model for breast cancer classification using histopathological images. Then the eight different pre-trained models, after fine-tuning, are applied to the combined dataset to observe the performance results of breast cancer classification. This study implemented eight pre-trained CNN models with fine-tuning, leveraging transfer learning, to observe the classification performance for breast cancer from ultrasound images.
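The Masud et al. record above fine-tunes pre-trained CNNs via transfer learning. A minimal NumPy sketch of the underlying idea follows, assuming the pre-trained backbone is frozen and used purely as a fixed feature extractor, with only a small logistic-regression head trained on its output features (real fine-tuning would also update some backbone layers; the data and hyperparameters are illustrative):

```python
import numpy as np

def train_linear_head(features, labels, lr=0.1, epochs=200, seed=0):
    """Train a logistic-regression classification head by gradient descent.

    `features` stands in for the frozen backbone's output (n_samples,
    n_features); `labels` are 0/1.  Returns the learned weights and bias.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probabilities
        grad = p - labels              # d(cross-entropy)/dz
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b
```

Because only the head is trained, this needs far less data than training a CNN from scratch, which is the usual motivation for transfer learning on small medical datasets.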
cache = ./cache/cord-317643-pk8cabxj.txt txt = ./txt/cord-317643-pk8cabxj.txt === reduce.pl bib === id = cord-308219-97gor71p author = Elzeiny, Sami title = Stress Classification Using Photoplethysmogram-Based Spatial and Frequency Domain Images date = 2020-09-17 pages = extension = .txt mime = text/plain words = 5697 sentences = 312 flesch = 52 summary = By combining 20% of the samples collected from test subjects into the training data, the calibrated generic models' accuracy was improved and outperformed the generic performance across both the spatial and frequency domain images. The average classification accuracy of 99.6%, 99.9%, and 88.1%, and 99.2%, 97.4%, and 87.6% were obtained for the training set, validation set, and test set, respectively, using the calibrated generic classification-based method for the series of inter-beat interval (IBI) spatial and frequency domain images. The main contribution of this study is the use of the frequency domain images that are generated from the spatial domain images of the IBI extracted from the PPG signal to classify the stress state of the individual by building person-specific models and calibrated generic models. In this study, a new stress classification approach is proposed to classify the individual stress state into stressed or non-stressed by converting spatial images of inter-beat intervals of a PPG signal to frequency domain images and we use these pictures to train several CNN models. cache = ./cache/cord-308219-97gor71p.txt txt = ./txt/cord-308219-97gor71p.txt === reduce.pl bib === === reduce.pl bib === id = cord-269270-i2odcsx7 author = Sahlol, Ahmed T. 
title = COVID-19 image classification using deep features and fractional-order marine predators algorithm date = 2020-09-21 pages = extension = .txt mime = text/plain words = 7058 sentences = 437 flesch = 53 summary = In this paper, we propose an improved hybrid classification approach for COVID-19 images by combining the strengths of CNNs (using a powerful architecture called Inception) to extract features and a swarm-based feature selection algorithm (Marine Predators Algorithm) to select the most relevant features. The proposed COVID-19 X-ray classification approach starts by applying a CNN (specifically, a powerful architecture called Inception, pre-trained on the ImageNet dataset) to extract the discriminant features from raw images (with no pre-processing or segmentation) from the dataset that contains positive and negative COVID-19 images. 1. Propose an efficient hybrid classification approach for COVID-19 using a combination of CNN and an improved swarm-based feature selection algorithm. 4. Evaluate the proposed approach by performing extensive comparisons to several state-of-the-art feature selection algorithms, the most recent CNN architectures, the most recent relevant works, and existing classification methods for COVID-19 images. cache = ./cache/cord-269270-i2odcsx7.txt txt = ./txt/cord-269270-i2odcsx7.txt === reduce.pl bib === id = cord-319868-rtt9i7wu author = Majeed, Taban title = Issues associated with deploying CNN transfer learning to detect COVID-19 from chest X-rays date = 2020-10-06 pages = extension = .txt mime = text/plain words = 7666 sentences = 377 flesch = 52 summary = In recent months, much research has come out addressing the problem of COVID-19 detection in chest X-rays using deep learning approaches in general, and convolutional neural networks (CNNs) in particular [3] [4] [5] [6] [7] [8] [9] [10] .
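The hybrid approach summarised above pairs deep features from a pre-trained Inception network with a swarm-based search (the Marine Predators Algorithm) for the most relevant features — a wrapper feature-selection scheme. In this deliberately minimal sketch, plain random search stands in for the swarm optimizer and nearest-centroid training-set accuracy serves as the fitness function; both are simplifying assumptions, not the paper's method:

```python
import numpy as np

def wrapper_feature_selection(X, y, n_iter=200, seed=0):
    """Score random binary feature masks with a nearest-centroid
    classifier (training accuracy as fitness) and keep the best mask."""
    rng = np.random.default_rng(seed)

    def fitness(mask):
        if not mask.any():
            return 0.0
        Xs = X[:, mask]
        c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
        pred = (np.linalg.norm(Xs - c1, axis=1) <
                np.linalg.norm(Xs - c0, axis=1)).astype(int)
        return float((pred == y).mean())

    best_mask, best_fit = np.ones(X.shape[1], bool), -1.0
    for _ in range(n_iter):
        candidate = rng.random(X.shape[1]) < 0.5   # random subset of features
        f = fitness(candidate)
        if f > best_fit:
            best_mask, best_fit = candidate, f
    return best_mask, best_fit

# Hypothetical "deep features": only feature 0 carries the class signal.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 5))
X[:, 0] = 2.0 * y + 0.1 * rng.normal(size=40)
mask, fit = wrapper_feature_selection(X, y)
```

A swarm optimizer such as MPA replaces the blind random draws with guided moves, but the wrapper structure — candidate mask, classifier-based fitness, keep the best — is the same.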
The authors of [3] built a deep convolutional neural network (CNN) based on ResNet50, InceptionV3 and Inception-ResNetV2 models for the classification of COVID-19 Chest X-ray images into normal and COVID-19 classes. In [9], the authors use CT images to predict COVID-19 cases, deploying an Inception transfer-learning model to establish an accuracy of 89.5% with specificity of 88.0% and sensitivity of 87.0%. Wang and Wong [2] investigated a dataset that they called COVIDx and a neural network architecture called COVID-Net designed for the detection of COVID-19 cases from open-source chest X-ray radiography images. The deep learning architectures that we used for the purpose of COVID19 detection from X-ray images are AlexNet, VGG16, VGG19, ResNet18, ResNet50, ResNet101, GoogLeNet, InceptionV3, SqueezeNet, Inception-ResNet-v2, Xception and DenseNet201. cache = ./cache/cord-319868-rtt9i7wu.txt txt = ./txt/cord-319868-rtt9i7wu.txt === reduce.pl bib === id = cord-296359-pt86juvr author = Polsinelli, Matteo title = A Light CNN for detecting COVID-19 from CT scans of the chest date = 2020-10-03 pages = extension = .txt mime = text/plain words = 3887 sentences = 201 flesch = 54 summary = In this work we propose a light Convolutional Neural Network (CNN) design, based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with respect to other community-acquired pneumonia and/or healthy CT images. Also the average classification time on a high-end workstation, 1.25 seconds, is very competitive with respect to that of more complex CNN designs, 13.41 seconds, which require pre-processing. We started from the model of the SqueezeNet CNN to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images. In this arrangement the number of images from the Italian dataset used to train, validate and Test-1 are 60, 20 and 20, respectively.
For each dataset arrangement we organized 4 experiments in which we tested different CNN models, transfer learning and the effectiveness of data augmentation. For each attempt, the CNN model has been trained for 20 epochs and evaluated by the accuracy results calculated on the validation dataset. cache = ./cache/cord-296359-pt86juvr.txt txt = ./txt/cord-296359-pt86juvr.txt === reduce.pl bib === id = cord-330239-l8fp8cvz author = Oyelade, O. N. title = Deep Learning Model for Improving the Characterization of Coronavirus on Chest X-ray Images Using CNN date = 2020-11-03 pages = extension = .txt mime = text/plain words = 6444 sentences = 325 flesch = 50 summary = The proposed model is then applied to the COVID-19 X-ray dataset in this study, which is the National Institutes of Health (NIH) Chest X-Ray dataset obtained from Kaggle for the purpose of promoting early detection and screening of coronavirus disease. Several studies [4, 5, 6, 78, 26, 30] and reviews which have adapted CNN to the task of detection and classification of COVID-19 have proven that the deep learning model is one of the most popular and effective approaches in the diagnosis of COVID-19 from digitized images. In this paper, we propose the application of a deep learning model in the category of Convolutional Neural Network (CNN) techniques to automate the process of extracting important features and then the classification or detection of COVID-19 from digital images, and this may eventually be supportive in overcoming the issue of a shortage of trained physicians in remote communities [24] . cache = ./cache/cord-330239-l8fp8cvz.txt txt = ./txt/cord-330239-l8fp8cvz.txt === reduce.pl bib === id = cord-354819-gkbfbh00 author = Islam, Md.
Zabirul title = A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images date = 2020-08-15 pages = extension = .txt mime = text/plain words = 3669 sentences = 238 flesch = 56 summary = title: A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. Therefore, this paper aims to propose a deep learning based system that combines the CNN and LSTM networks to automatically detect COVID-19 from X-ray images. By analyzing the results, it is demonstrated that a combination of CNN and LSTM has significant effects on the detection of COVID-19 based on the automatic extraction of features from X-ray images. We introduced a deep CNN-LSTM network for the detection of novel COVID-19 from X-ray images. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks Automated detection of COVID-19 cases using deep neural networks with X-ray images cache = ./cache/cord-354819-gkbfbh00.txt txt = ./txt/cord-354819-gkbfbh00.txt === reduce.pl bib === id = cord-325235-uupiv7wh author = Makris, A. title = COVID-19 detection from chest X-Ray images using Deep Learning and Convolutional Neural Networks date = 2020-05-24 pages = extension = .txt mime = text/plain words = 5435 sentences = 304 flesch = 50 summary = In this research work the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the automatic detection of COVID-19 disease from chest X-Ray images. A collection of 336 X-Ray scans in total from patients with COVID-19 disease, bacterial pneumonia and normal incidents is processed and utilized to train and test the CNNs. 
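The transfer-learning recipe running through these X-ray studies — keep a pre-trained backbone frozen and train only a new classification head on the small dataset — can be sketched in plain NumPy. The "backbone" below is a fixed random ReLU projection standing in for a real pre-trained network such as ResNet50, and the bright-vs-dark toy task is an invented stand-in for the X-ray classes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone (e.g. ResNet50 up to its
# pooling layer): a fixed ReLU projection whose weights are never updated.
W_frozen = rng.normal(size=(64 * 64, 32)) / 64.0

def backbone(x):                                    # x: (n, 64, 64) images
    return np.maximum(x.reshape(len(x), -1) @ W_frozen, 0.0)

def train_head(feats, y, lr=0.1, epochs=500):
    """Train only the new classification head (logistic regression)."""
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        g = p - y                                   # gradient of log-loss
        w -= lr * feats.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Hypothetical toy task: dark vs bright images as a two-class stand-in.
x = np.concatenate([rng.random((20, 64, 64)) * 0.4,
                    rng.random((20, 64, 64)) * 0.4 + 0.6])
y = np.repeat([0.0, 1.0], 20)
f = backbone(x)
f = (f - f.mean(0)) / (f.std(0) + 1e-8)             # standardise features
w, b = train_head(f, y)
pred = (f @ w + b > 0).astype(float)
```

With too few COVID-19 images to train a full CNN from scratch, only the small head's parameters are fitted, which is exactly why the summaries above emphasise the limited-data motivation for transfer learning.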
Due to the limited available data related to COVID-19, the transfer learning strategy is employed. The proposed CNN is based on pre-trained transfer models (ResNet50, InceptionV3 and Inception-ResNetV2), in order to obtain high prediction accuracy from a small sample of X-ray images. Abbas et al [22] presented a novel CNN architecture based on transfer learning and class decomposition in order to improve the performance of pre-trained models on the classification of X-ray images. 22.20110817 doi: medRxiv preprint In this research work the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the detection of COVID-19 disease from chest X-Ray images. cache = ./cache/cord-325235-uupiv7wh.txt txt = ./txt/cord-325235-uupiv7wh.txt === reduce.pl bib === id = cord-337740-8ujk830g author = Matencio, Adrián title = Cyclic Oligosaccharides as Active Drugs, an Updated Review date = 2020-09-29 pages = extension = .txt mime = text/plain words = 8089 sentences = 385 flesch = 39 summary = There have been many reviews of the cyclic oligosaccharide cyclodextrin (CD) and CD-based materials used for drug delivery, but the capacity of CDs to complex different agents and their own intrinsic properties suggest they might also be considered for use as active drugs, not only as carriers. The review is divided into lipid-related diseases, aggregation diseases, antiviral and antiparasitic activities, anti-anesthetic agent, function in diet, removal of organic toxins, CDs and collagen, cell differentiation, and finally, their use in contact lenses in which no drug other than CDs are involved. 
In addition to CDs, another dietary indigestible cyclic oligosaccharide formed by four D-glucopyranosyl residues linked by alternating α(1→3) and α(1→6) glucosidic linkages, cyclic nigerosyl-1,6-nigerose or cyclotetraglucose (CNN, Figure 1 [21]), was recently found to have intrinsic bioactivity. The present review will update the most relevant applications mentioned in the review made by Braga et al., 2019, including applications such as the ability of CDs to combat aggregation diseases, their dietary functions, toxin removal, cell differentiation, and their application in contact lenses. cache = ./cache/cord-337740-8ujk830g.txt txt = ./txt/cord-337740-8ujk830g.txt === reduce.pl bib === ===== Reducing email addresses Creating transaction Updating adr table ===== Reducing keywords cord-024491-f16d1zov cord-131094-1zz8rd3h cord-002901-u4ybz8ds cord-027732-8i8bwlh8 cord-102774-mtbo1tnq
cord-034614-r429idtl cord-135296-qv7pacau cord-175846-aguwenwo cord-127759-wpqdtdjs cord-028792-6a4jfz94 cord-249065-6yt3uqyy cord-266055-ki4gkoc8 cord-202184-hh7hugqi cord-275258-azpg5yrh cord-168974-w80gndka cord-032684-muh5rwla cord-121200-2qys8j4u cord-256756-8w5rtucg cord-103297-4stnx8dw cord-258170-kyztc1jp cord-255884-0qqg10y4 cord-286887-s8lvimt3 cord-190424-466a35jf cord-317643-pk8cabxj cord-308219-97gor71p cord-133273-kvyzuayp cord-269270-i2odcsx7 cord-319868-rtt9i7wu cord-296359-pt86juvr cord-330239-l8fp8cvz cord-354819-gkbfbh00 cord-325235-uupiv7wh cord-337740-8ujk830g cord-193356-hqbstgg7 Creating transaction Updating ent table ===== Reducing parts of speech cord-024491-f16d1zov cord-002901-u4ybz8ds cord-027732-8i8bwlh8 cord-175846-aguwenwo cord-131094-1zz8rd3h cord-102774-mtbo1tnq cord-135296-qv7pacau cord-034614-r429idtl cord-028792-6a4jfz94 cord-127759-wpqdtdjs cord-249065-6yt3uqyy cord-266055-ki4gkoc8 cord-275258-azpg5yrh cord-202184-hh7hugqi cord-168974-w80gndka cord-256756-8w5rtucg cord-032684-muh5rwla cord-258170-kyztc1jp cord-286887-s8lvimt3 cord-121200-2qys8j4u cord-190424-466a35jf cord-255884-0qqg10y4 cord-317643-pk8cabxj cord-308219-97gor71p cord-269270-i2odcsx7 cord-296359-pt86juvr cord-354819-gkbfbh00 cord-319868-rtt9i7wu cord-330239-l8fp8cvz cord-325235-uupiv7wh cord-103297-4stnx8dw cord-337740-8ujk830g cord-193356-hqbstgg7 cord-133273-kvyzuayp Creating transaction Updating pos table Building ./etc/reader.txt cord-255884-0qqg10y4 cord-034614-r429idtl cord-330239-l8fp8cvz cord-133273-kvyzuayp cord-255884-0qqg10y4 cord-330239-l8fp8cvz number of items: 33 sum of words: 186,224 average size in words: 5,819 average readability score: 51 nouns: data; images; model; classification; learning; image; dataset; features; networks; results; sequence; models; sequences; training; performance; network; detection; feature; number; methods; accuracy; datasets; input; method; repertoire; study; attention; time; machine; approach; size; analysis; 
information; system; test; cases; ray; layer; class; algorithm; value; chest; section; repertoires; layers; values; disease; validation; problem; table verbs: using; based; propose; learning; showed; trained; applied; obtained; seen; considers; detect; comparing; given; followed; provide; perform; achieving; generated; extracted; presented; reduce; implanted; including; classifying; containing; represented; making; identifies; set; improve; increase; indicating; predicted; evaluated; takes; lead; described; required; developed; allow; known; finds; calculated; combined; reported; demonstrated; consisted; define; produce; results adjectives: deep; different; neural; immune; covid-19; high; real; large; positive; first; specific; medical; new; non; multi; pre; available; many; negative; computational; best; several; second; social; multiple; low; clinical; average; novel; better; small; modern; human; normal; similar; important; single; various; standard; main; possible; random; particular; higher; relevant; early; local; current; able; visual adverbs: also; well; however; respectively; therefore; furthermore; even; finally; randomly; moreover; fully; first; hence; especially; directly; currently; highly; automatically; usually; additionally; often; still; rather; already; specifically; instead; generally; correctly; significantly; previously; now; manually; almost; widely; typically; much; recently; subsequently; just; successfully; namely; better; together; similarly; relatively; easily; less; mostly; mainly; effectively pronouns: we; it; our; their; its; they; i; them; us; itself; one; he; you; his; themselves; your; her; she; ζ; â; yolov2; s; ourselves; ours; me; icam-5; him; f; d proper nouns: CNN; COVID-19; CT; DeepRC; Table; ±; LSTM; Fig; AUC; Hopfield; SVM; AI; MIL; QReLU; Deep; Convolutional; ReLU; CD; CMV; Neural; X; R; Learning; Inception; •; ResNet; CXR; MR; ML; N; F; GPU; K; AA; Adam; Sect; MPA; EfficientNet; Eq; Networks; Coronavirus; T; Figure; 
Emerson; Chest; SARS; Network; J; ECG; Detection keywords: cnn; covid-19; image; model; lstm; feature; sequence; table; mil; hopfield; cmv; vgg16; user; twitter; tweet; system; svm; september; robot; rician; result; rcc; ray; project; pick; november; niemann; network; nar; mpa; mnist; melanoma; manifestos; learn; international; icu; heartbeat; gesture; fast; eeg; ecg; doppler; dnn; detection; datum; cyclodextrin; cxr; cnn-2; cholesterol; cell one topic; one dimension: covid file(s): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206251/ titles(s): Simultaneous ECG Heartbeat Segmentation and Classification with Feature Fusion and Long Term Context Dependencies three topics; one dimension: covid; data; sequences file(s): http://medrxiv.org/cgi/content/short/2020.11.04.20225698v1?rss=1, https://arxiv.org/pdf/2010.16241v1.pdf, titles(s): Early survey with bibliometric analysis on machine learning approaches in controlling coronavirus | Artificial Intelligence: Research Impact on Key Industries; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2020) | five topics; three dimensions: images covid cnn; data based tweets; sequences sequence repertoire; cnn based models; covid 2020 learning file(s): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7609830/, https://arxiv.org/pdf/2010.10267v2.pdf, https://doi.org/10.1101/2020.04.12.038158, https://doi.org/10.3390/ph13100281, https://arxiv.org/pdf/2010.08031v1.pdf titles(s): A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks | Text Classification of Manifestos and COVID-19 Press Briefings using BERT and Convolutional Neural Networks | Modern Hopfield Networks and Attention for Immune Repertoire Classification | Cyclic Oligosaccharides as Active Drugs, an Updated Review | QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics Type: cord title: keyword-cnn-cord date: 2021-05-24 
time: 22:41 username: emorgan patron: Eric Morgan email: emorgan@nd.edu input: keywords:cnn ==== make-pages.sh htm files ==== make-pages.sh complex files ==== make-pages.sh named enities ==== making bibliographics id: cord-028792-6a4jfz94 author: Basly, Hend title: CNN-SVM Learning Approach Based Human Activity Recognition date: 2020-06-05 words: 3570.0 sentences: 178.0 pages: flesch: 49.0 cache: ./cache/cord-028792-6a4jfz94.txt txt: ./txt/cord-028792-6a4jfz94.txt summary: Traditionally, to deal with such a recognition problem, researchers are obliged to precede their human activity recognition algorithms with a data preprocessing step in order to extract a set of features using different types of descriptors such as HOG3D [1] , extended SURF [2] and Space Time Interest Points (STIPs) [3] before inputting them to the specific classification algorithm such as HMM, SVM, Random Forest [4] [5] [6] . In this study, we proposed an advanced human activity recognition method from video sequences using a CNN pretrained on the large-scale ImageNet dataset. Finally, all the resulting features have been merged to be fed as input to a simulated annealing multiple instance learning support vector machine (SMILE-SVM) classifier for human activity recognition. We proposed to use a pre-trained CNN approach based on a ResNet model in order to extract spatial and temporal features from consecutive video frames. abstract: Although it has been encountered for a long time, human activity recognition remains a big challenge to tackle. Recently, several deep learning approaches have been proposed to enhance the recognition performance with different areas of application. In this paper, we aim to combine a recent deep learning-based method and a traditional classifier based on hand-crafted feature extractors in order to replace the artisanal feature extraction method with a new one.
To this end, we used a deep convolutional neural network that offers the possibility of having more powerful extracted features from sequences of video frames. The resulting feature vector is then fed as an input to the support vector machine (SVM) classifier to assign each instance to the corresponding label and thereby recognize the performed activity. The proposed architecture was trained and evaluated on the MSR Daily Activity 3D dataset. Compared to state-of-the-art methods, our proposed technique performs better. url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7340932/ doi: 10.1007/978-3-030-51935-3_29 id: cord-027732-8i8bwlh8 author: Boudaya, Amal title: EEG-Based Hypo-vigilance Detection Using Convolutional Neural Network date: 2020-05-31 words: 2337.0 sentences: 148.0 pages: flesch: 49.0 cache: ./cache/cord-027732-8i8bwlh8.txt txt: ./txt/cord-027732-8i8bwlh8.txt summary: Given its high temporal resolution, portability and reasonable cost, the present work focuses on hypo-vigilance detection by analyzing the EEG signals of various brain functionalities using fourteen electrodes placed on the participant's scalp. On the other hand, deep learning networks offer great potential for biomedical signals analysis through the simplification of raw input signals (i.e., through various steps including feature extraction, denoising and feature selection) and the improvement of the classification results. In this paper, we focus on the EEG signal study recorded by fourteen electrodes for hypo-vigilance detection by analyzing the various functionalities of the brain from the electrodes placed on the participant's scalp. In this paper, we propose a CNN hypo-vigilance detection method using EEG data in order to classify drowsiness and awakeness states. In the proposed simple CNN architecture for EEG signal classification, we use the Keras deep learning library.
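The CNN-SVM pattern described above — a deep network used purely as a feature extractor, with an SVM making the final decision — can be sketched with a hinge-loss linear SVM trained by subgradient descent. The feature vectors here are synthetic stand-ins for the CNN features, and `train_linear_svm` is an illustrative helper, not the SMILE-SVM of the paper:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via subgradient descent on the hinge loss.
    Labels y must take values in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        viol = margins < 1.0                      # margin-violating samples
        grad = lam * w - (y[viol, None] * X[viol]).sum(0) / len(y)
        w -= lr * grad
    return w

# Synthetic "deep features": two separable clusters standing in for two
# activity classes after CNN feature extraction.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.3, size=(20, 2)),
               rng.normal(+2.0, 0.3, size=(20, 2))])
y = np.repeat([-1.0, 1.0], 20)
w = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w) == y).mean())
```

In the papers above the extractor would be a pre-trained ResNet and the SVM would typically come from a library; the NumPy version keeps the sketch dependency-free while preserving the two-stage design.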
abstract: Hypo-vigilance detection is becoming an important active research area in the biomedical signal processing field. For this purpose, electroencephalogram (EEG) is one of the most common modalities in drowsiness and awakeness detection. In this context, we propose a new EEG classification method for detecting fatigue state. Our method makes use of a Convolutional Neural Network (CNN) architecture. We define an experimental protocol using the Emotiv EPOC+ headset. After that, we evaluate our proposed method on a recorded and annotated dataset. The reported results demonstrate high detection accuracy (93%) and indicate that the proposed method is an efficient alternative for hypo-vigilance detection as compared with other methods. url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7313303/ doi: 10.1007/978-3-030-51517-1_6 id: cord-175846-aguwenwo author: Chatsiou, Kakia title: Text Classification of Manifestos and COVID-19 Press Briefings using BERT and Convolutional Neural Networks date: 2020-10-20 words: 3188.0 sentences: 192.0 pages: flesch: 47.0 cache: ./cache/cord-175846-aguwenwo.txt txt: ./txt/cord-175846-aguwenwo.txt summary: We use manually annotated political manifestos as training data to train a local topic Convolutional Neural Network (CNN) classifier; then apply it to the COVID-19 Press Briefings Corpus to automatically classify sentences in the test corpus. We report on a series of experiments with CNN trained on top of pre-trained embeddings for sentence-level classification tasks.
To aid fellow scholars with the systematic study of such a large and dynamic set of unstructured data, we set out to employ a text categorization classifier trained on similar domains (like existing manually annotated sentences from political manifestos) and use it to classify press briefings about the pandemic in a more effective and scalable way. abstract: We build a sentence-level political discourse classifier using existing human-expert-annotated corpora of political manifestos from the Manifestos Project (Volkens et al., 2020a) and applying them to a corpus of COVID-19 Press Briefings (Chatsiou, 2020). We use manually annotated political manifestos as training data to train a local topic Convolutional Neural Network (CNN) classifier; then apply it to the COVID-19 Press Briefings Corpus to automatically classify sentences in the test corpus. We report on a series of experiments with CNN trained on top of pre-trained embeddings for sentence-level classification tasks. We show that CNN combined with transformers like BERT outperforms CNN combined with other embeddings (Word2Vec, GloVe, ELMo) and that it is possible to use a pre-trained classifier to conduct automatic classification on different political texts without additional training. url: https://arxiv.org/pdf/2010.10267v2.pdf doi: nan id: cord-255884-0qqg10y4 author: Chiroma, H. title: Early survey with bibliometric analysis on machine learning approaches in controlling coronavirus date: 2020-11-05 words: 13197.0 sentences: 767.0 pages: flesch: 47.0 cache: ./cache/cord-255884-0qqg10y4.txt txt: ./txt/cord-255884-0qqg10y4.txt summary: Therefore, the main goal of this study is to bridge this gap by carrying out an in-depth survey with bibliometric analysis on the adoption of machine-learning-based technologies to fight the COVID-19 pandemic from a different perspective, including an extensive systematic literature review and a bibliometric analysis.
Moreover, the machine-learning-based algorithm predominantly utilized by researchers in developing the diagnostic tool is the CNN, mainly applied to X-ray and CT scan images. We believe that the presented survey with bibliometric analysis can help researchers determine areas that need further development and identify potential collaborators at author, country, and institutional levels to advance research in the focused area of machine learning application for disease control. (2020) proposed a joint model comprising CNN, support vector machine (SVM), random forest (RF), and multilayer perceptron integrated with chest CT scan results and non-image clinical information to predict COVID-19 infection in a patient. abstract: Background and Objective: The COVID-19 pandemic has caused severe mortality across the globe with the USA as the current epicenter, although the initial outbreak was in Wuhan, China. Many studies successfully applied machine learning to fight the COVID-19 pandemic from different perspectives. To the best of the authors' knowledge, no comprehensive survey with bibliometric analysis has been conducted on the adoption of machine learning for fighting COVID-19. Therefore, the main goal of this study is to bridge this gap by carrying out an in-depth survey with bibliometric analysis on the adoption of machine-learning-based technologies to fight the COVID-19 pandemic from a different perspective, including an extensive systematic literature review and a bibliometric analysis. Methods: A literature survey methodology is applied to retrieve data from academic databases, and a bibliometric technique is subsequently employed to analyze the accessed records. Moreover, the concise summary, sources of COVID-19 datasets, taxonomy, synthesis, and analysis are presented. The convolutional neural network (CNN) is found to be mainly utilized in developing COVID-19 diagnosis and prognosis tools, mostly from chest X-ray and chest computed tomography (CT) scan images.
Similarly, a bibliometric analysis of machine-learning-based COVID-19-related publications in Scopus and Web of Science citation indexes is performed. Finally, a new perspective is proposed to solve the challenges identified as directions for future research. We believe that the survey with bibliometric analysis can help researchers easily detect areas that require further development and identify potential collaborators. Results: The findings in this study reveal that machine-learning-based COVID-19 diagnostic tools received the most considerable attention from researchers. Specifically, the analyses of the results show that energy and resources are more dispensed toward COVID-19 automated diagnostic tools, while COVID-19 drugs and vaccine development remain grossly underexploited. Moreover, the machine-learning-based algorithm predominantly utilized by researchers in developing the diagnostic tool is CNN mainly from X-rays and CT scan images. Conclusions: The challenges hindering practical work on the application of machine-learning-based technologies to fight COVID-19 and a new perspective to solve the identified problems are presented in this study. We believe that the presented survey with bibliometric analysis can help researchers determine areas that need further development and identify potential collaborators at author, country, and institutional levels to advance research in the focused area of machine learning application for disease control. 
url: http://medrxiv.org/cgi/content/short/2020.11.04.20225698v1?rss=1 doi: 10.1101/2020.11.04.20225698 id: cord-133273-kvyzuayp author: Christ, Andreas title: Artificial Intelligence: Research Impact on Key Industries; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2020) date: 2020-10-05 words: nan sentences: nan pages: flesch: nan cache: txt: summary: abstract: The TriRhenaTech alliance presents a collection of accepted papers of the cancelled tri-national 'Upper-Rhine Artificial Intelligence Symposium' planned for 13th May 2020 in Karlsruhe. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students. url: https://arxiv.org/pdf/2010.16241v1.pdf doi: nan id: cord-308219-97gor71p author: Elzeiny, Sami title: Stress Classification Using Photoplethysmogram-Based Spatial and Frequency Domain Images date: 2020-09-17 words: 5697.0 sentences: 312.0 pages: flesch: 52.0 cache: ./cache/cord-308219-97gor71p.txt txt: ./txt/cord-308219-97gor71p.txt summary: By combining 20% of the samples collected from test subjects into the training data, the calibrated generic models' accuracy was improved and outperformed the generic performance across both the spatial and frequency domain images.
The average classification accuracies of 99.6%, 99.9%, and 88.1%, and 99.2%, 97.4%, and 87.6% were obtained for the training set, validation set, and test set, respectively, using the calibrated generic classification-based method for the series of inter-beat interval (IBI) spatial and frequency domain images. The main contribution of this study is the use of the frequency domain images that are generated from the spatial domain images of the IBI extracted from the PPG signal to classify the stress state of the individual by building person-specific models and calibrated generic models. In this study, a new stress classification approach is proposed to classify the individual stress state into stressed or non-stressed by converting spatial images of inter-beat intervals of a PPG signal to frequency domain images, and we use these pictures to train several CNN models. abstract: Stress is subjective and is manifested differently from one person to another. Thus, the performance of generic classification models that classify stress status is crude. Building a person-specific model leads to a reliable classification, but it requires the collection of new data to train a new model for every individual and needs periodic upgrades because stress is dynamic. In this paper, a new binary classification (called stressed and non-stressed) approach is proposed for a subject’s stress state in which the inter-beat intervals extracted from a photoplethysmogram (PPG) were transferred to spatial images and then to frequency domain images according to the number of consecutive inter-beat intervals. Then, the convolutional neural network (CNN) was used to train and validate the classification accuracy of the person’s stress state. Three types of classification models were built: person-specific models, generic classification models, and calibrated-generic classification models.
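The spatial-to-frequency conversion these stress models rely on is, at its core, a 2-D Fourier transform of the IBI image. A minimal sketch (the log-magnitude scaling and the centring of the zero-frequency component are common practice, assumed here rather than taken from the paper):

```python
import numpy as np

def to_frequency_image(spatial, eps=1e-8):
    """Convert a spatial-domain image to a frequency-domain image:
    2-D FFT, zero-frequency component shifted to the centre,
    log-scaled magnitude spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(spatial))
    return np.log(np.abs(spectrum) + eps)

# Hypothetical IBI image: an 8x8 patch of inter-beat-interval values.
ibi_image = np.ones((8, 8))
freq_image = to_frequency_image(ibi_image)
```

For a constant input the whole spectrum collapses into the centred zero-frequency bin, which is a quick sanity check that the shift and magnitude steps are wired correctly.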
The average classification accuracies achieved by person-specific models using spatial images and frequency domain images were 99.9%, 100%, and 99.8%, and 99.68%, 98.97%, and 96.4% for the training, validation, and test sets, respectively. By combining 20% of the samples collected from test subjects into the training data, the calibrated generic models’ accuracy was improved and outperformed the generic performance across both the spatial and frequency domain images. Average classification accuracies of 99.6%, 99.9%, and 88.1%, and 99.2%, 97.4%, and 87.6% were obtained for the training set, validation set, and test set, respectively, using the calibrated generic classification-based method for the series of inter-beat interval (IBI) spatial and frequency domain images. The main contribution of this study is the use of the frequency domain images that are generated from the spatial domain images of the IBI extracted from the PPG signal to classify the stress state of the individual by building person-specific models and calibrated generic models. url: https://www.ncbi.nlm.nih.gov/pubmed/32957479/ doi: 10.3390/s20185312 id: cord-354819-gkbfbh00 author: Islam, Md. Zabirul title: A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images date: 2020-08-15 words: 3669.0 sentences: 238.0 pages: flesch: 56.0 cache: ./cache/cord-354819-gkbfbh00.txt txt: ./txt/cord-354819-gkbfbh00.txt summary: title: A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. Therefore, this paper aims to propose a deep learning based system that combines the CNN and LSTM networks to automatically detect COVID-19 from X-ray images.
By analyzing the results, it is demonstrated that a combination of CNN and LSTM has significant effects on the detection of COVID-19 based on the automatic extraction of features from X-ray images. We introduced a deep CNN-LSTM network for the detection of novel COVID-19 from X-ray images. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks Automated detection of COVID-19 cases using deep neural networks with X-ray images abstract: Nowadays, automatic disease detection has become a crucial issue in medical science due to rapid population growth. An automatic disease detection framework assists doctors in the diagnosis of disease and provides exact, consistent, and fast results and reduces the death rate. Coronavirus (COVID-19) has become one of the most severe and acute diseases in recent times and has spread globally. Therefore, an automated detection system, as the fastest diagnostic option, should be implemented to impede COVID-19 from spreading. This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. In this system, CNN is used for deep feature extraction and LSTM is used for detection using the extracted features. A collection of 4575 X-ray images, including 1525 images of COVID-19, were used as a dataset in this system. The experimental results show that our proposed system achieved an accuracy of 99.4%, AUC of 99.9%, specificity of 99.2%, sensitivity of 99.3%, and F1-score of 98.9%. The system achieved the desired results on the currently available dataset, which can be further improved when more COVID-19 images become available. The proposed system can help doctors to diagnose and treat COVID-19 patients easily.
url: https://www.ncbi.nlm.nih.gov/pubmed/32835084/ doi: 10.1016/j.imu.2020.100412 id: cord-249065-6yt3uqyy author: Kassani, Sara Hosseinzadeh title: Automatic Detection of Coronavirus Disease (COVID-19) in X-ray and CT Images: A Machine Learning-Based Approach date: 2020-04-22 words: 4285.0 sentences: 229.0 pages: flesch: 45.0 cache: ./cache/cord-249065-6yt3uqyy.txt txt: ./txt/cord-249065-6yt3uqyy.txt summary: To the best of our knowledge, this research is the first comprehensive study of the application of machine learning (ML) algorithms (15 deep CNN visual feature extractors and 6 ML classifiers) for the automatic diagnosis of COVID-19 from X-ray and CT images. • With extensive experiments, we show that the combination of a deep CNN with a Bagging trees classifier achieves very good classification performance applied on COVID-19 data despite the limited number of image samples. Motivated by the success of deep learning models in computer vision, the focus of this research is to provide an extensive comprehensive study on the classification of COVID-19 pneumonia in chest X-ray and CT imaging using features extracted by state-of-the-art deep CNN architectures and used to train machine learning algorithms. The experimental results on the available chest X-ray and CT dataset demonstrate that the features extracted by the DenseNet121 architecture and trained by a Bagging tree classifier generate a very accurate prediction of 99.00% in terms of classification accuracy. abstract: The newly identified Coronavirus pneumonia, subsequently termed COVID-19, is highly transmittable and pathogenic with no clinically approved antiviral drug or vaccine available for treatment. The most common symptoms of COVID-19 are dry cough, sore throat, and fever. Symptoms can progress to a severe form of pneumonia with critical complications, including septic shock, pulmonary edema, acute respiratory distress syndrome and multi-organ failure.
While medical imaging is not currently recommended in Canada for primary diagnosis of COVID-19, computer-aided diagnosis systems could assist in the early detection of COVID-19 abnormalities, help to monitor the progression of the disease, and potentially reduce mortality rates. In this study, we compare popular deep learning-based feature extraction frameworks for automatic COVID-19 classification. To obtain the most accurate features, which are an essential component of learning, MobileNet, DenseNet, Xception, ResNet, InceptionV3, InceptionResNetV2, VGGNet, and NASNet were chosen amongst a pool of deep convolutional neural networks. The extracted features were then fed into several machine learning classifiers to classify subjects as either a case of COVID-19 or a control. This approach avoided task-specific data pre-processing methods to support a better generalization ability for unseen data. The performance of the proposed method was validated on a publicly available COVID-19 dataset of chest X-ray and CT images. The DenseNet121 feature extractor with a Bagging tree classifier achieved the best performance with 99% classification accuracy. The second-best learner was a hybrid of a ResNet50 feature extractor trained by LightGBM with an accuracy of 98%. url: https://arxiv.org/pdf/2004.10641v1.pdf doi: nan id: cord-266055-ki4gkoc8 author: Kikkisetti, S.
title: Deep-learning convolutional neural networks with transfer learning accurately classify COVID19 lung infection on portable chest radiographs date: 2020-09-02 words: 3433.0 sentences: 228.0 pages: flesch: 49.0 cache: ./cache/cord-266055-ki4gkoc8.txt txt: ./txt/cord-266055-ki4gkoc8.txt summary: title: Deep-learning convolutional neural networks with transfer learning accurately classify COVID19 lung infection on portable chest radiographs This study employed deep-learning convolutional neural networks to classify COVID-19 lung infections on pCXR from normal and related lung infections to potentially enable more timely and accurate diagnosis. This retrospective study employed a deep-learning convolutional neural network (CNN) with transfer learning to classify COVID-19 pneumonia (N=455) on pCXR against normal (N=532), bacterial pneumonia (N=492), and non-COVID viral pneumonia (N=552). Deep-learning convolutional neural network with transfer learning accurately classifies COVID-19 on portable chest x-ray against normal, bacterial pneumonia or non-COVID viral pneumonia. The goal of this pilot study is to employ deep-learning convolutional neural networks to classify normal, bacterial infection, and non-COVID-19 viral infection (such as influenza). In conclusion, deep learning convolutional neural networks with transfer learning accurately classify COVID-19 pCXR from pCXR of normal, bacterial pneumonia, and non-COVID viral pneumonia patients in a multiclass model. abstract: Portable chest x-ray (pCXR) has become an indispensable tool in the management of Coronavirus Disease 2019 (COVID-19) lung infection. This study employed deep-learning convolutional neural networks to classify COVID-19 lung infections on pCXR from normal and related lung infections to potentially enable more timely and accurate diagnosis.
This retrospective study employed a deep-learning convolutional neural network (CNN) with transfer learning to classify COVID-19 pneumonia (N=455) on pCXR against normal (N=532), bacterial pneumonia (N=492), and non-COVID viral pneumonia (N=552). The data was split into 75% training and 25% testing. A five-fold cross-validation was used. Performance was evaluated using receiver-operating curve analysis. Comparison was made with the CNN operated on the whole pCXR and on segmented lungs. The CNN accurately classified COVID-19 pCXR from those of normal, bacterial pneumonia, and non-COVID-19 viral pneumonia patients in a multiclass model. The overall sensitivity, specificity, accuracy, and AUC were 0.79, 0.93, 0.79, and 0.85, respectively (whole pCXR), and 0.91, 0.93, 0.88, and 0.89 (pCXR of segmented lungs). The performance was generally better using segmented lungs. Heatmaps showed that the CNN accurately localized areas of hazy appearance, ground glass opacity and/or consolidation on the pCXR. A deep-learning convolutional neural network with transfer learning accurately classifies COVID-19 on portable chest x-ray against normal, bacterial pneumonia or non-COVID viral pneumonia. This approach has the potential to help radiologists and frontline physicians by providing more timely and accurate diagnosis.
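The sensitivity, specificity, and accuracy figures reported in the record above follow directly from confusion-matrix counts. A minimal helper (`diagnostic_metrics` is a hypothetical name, not from the paper):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Sensitivity: fraction of true positives among all actual positives.
    sensitivity = tp / (tp + fn)
    # Specificity: fraction of true negatives among all actual negatives.
    specificity = tn / (tn + fp)
    # Accuracy: fraction of all predictions that were correct.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For multiclass models such as the one above, these are typically computed per class in a one-vs-rest fashion and then averaged.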
url: https://doi.org/10.1101/2020.09.02.20186759 doi: 10.1101/2020.09.02.20186759 id: cord-190424-466a35jf author: Lee, Sang Won title: Darwin's Neural Network: AI-based Strategies for Rapid and Scalable Cell and Coronavirus Screening date: 2020-07-22 words: 5680.0 sentences: 302.0 pages: flesch: 51.0 cache: ./cache/cord-190424-466a35jf.txt txt: ./txt/cord-190424-466a35jf.txt summary: Here we adapt the theory of survival of the fittest in the field of computer vision and machine perception to introduce a new framework of multi-class instance segmentation deep learning, Darwin's Neural Network (DNN), to carry out morphometric analysis and classification of COVID19 and MERS-CoV collected in vivo and of multiple mammalian cell types in vitro. U-Net with an Inception-ResNet-v2 backbone yielded the highest global accuracy of 0.8346, as seen in Figure 4(E); therefore, Inception-ResNet-v2 was integrated in the place of CNN II for DNN for cells. For overall instance segmentation results, DNN produced both superior global accuracy and Jaccard Similarity Coefficient for cells and viruses. As observed in Figure 6(C1-C2), the DNN analysis showed statistical significance in area and circularity of the COVID19 in comparison to the MERS virus particles, which aligned with findings in the ground truth data of the viruses. abstract: Recent advances in the interdisciplinary scientific field of machine perception, computer vision, and biomedical engineering underpin a collection of machine learning algorithms with a remarkable ability to decipher the contents of microscope and nanoscope images. Machine learning algorithms are transforming the interpretation and analysis of microscope and nanoscope imaging data through use in conjunction with biological imaging modalities. These advances are enabling researchers to carry out real-time experiments that were previously thought to be computationally impossible.
Here we adapt the theory of survival of the fittest in the field of computer vision and machine perception to introduce a new framework of multi-class instance segmentation deep learning, Darwin's Neural Network (DNN), to carry out morphometric analysis and classification of COVID19 and MERS-CoV collected in vivo and of multiple mammalian cell types in vitro. url: https://arxiv.org/pdf/2007.11653v1.pdf doi: nan id: cord-032684-muh5rwla author: Madichetty, Sreenivasulu title: A stacked convolutional neural network for detecting the resource tweets during a disaster date: 2020-09-25 words: 6980.0 sentences: 418.0 pages: flesch: 55.0 cache: ./cache/cord-032684-muh5rwla.txt txt: ./txt/cord-032684-muh5rwla.txt summary: Specifically, the authors in [3] used both information-retrieval methodologies and classification methodologies (CNN with crisis word embeddings) to extract the Need and Availability of Resource tweets during the disaster. The main drawback of CNN with crisis embeddings is that it does not work well if the number of training tweets is small and, in the case of information retrieval methodologies, keywords must be given manually to identify the need and availability of resource tweets during the disaster. Initially, the experiment is performed on the SVM classifier based on the proposed domain-specific features for the identification of NAR tweets and compared to the BoW model, as shown in Table 5. This paper proposes a method named CKS (CNN and KNN are used as base-level classifiers, and SVM is used as a meta-level classifier) for identifying tweets related to the Need and Availability of Resources during a disaster. abstract: Social media platforms like Twitter are one of the primary sources for sharing real-time information at the time of events such as disasters, political events, etc.
Detecting the resource tweets during a disaster is an essential task because tweets contain different types of information such as infrastructure damage, resources, opinions and sympathies of disaster events, etc. Tweets related to the Need and Availability of Resources (NAR) are posted by humanitarian organizations and victims. Hence, reliable methodologies are required for detecting the NAR tweets during a disaster. Existing works do not focus well on NAR tweet detection and also have poor performance. Hence, this paper focuses on the detection of NAR tweets during a disaster. Existing works often use features and appropriate machine learning algorithms on several Natural Language Processing (NLP) tasks. Recently, Convolutional Neural Networks (CNN) have been widely used in text classification problems. However, they require a large amount of manually labeled data, and no such large labeled dataset is available for NAR tweets during a disaster. To overcome this problem, stacking of Convolutional Neural Networks with traditional feature-based classifiers is proposed for detecting the NAR tweets. In our approach, we propose several informative features (such as aid, need, food, packets, and earthquake) that are used in the classifier and the CNN. The learned features (the output of the CNN and of the classifier with informative features) are utilized in another classifier (meta-classifier) for the detection of NAR tweets. Classifiers such as SVM, KNN, Decision tree, and Naive Bayes are used in the proposed model. From the experiments, we found that the use of KNN (base classifier) and SVM (meta-classifier) in combination with the CNN in the proposed model outperforms the other algorithms. This paper uses the 2015 and 2016 Nepal and Italy earthquake datasets for experimentation. The experimental results proved that the proposed model achieves the best accuracy compared to baseline methods.
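The stacked-generalization pattern this record describes (CNN and KNN as base learners, SVM as meta-learner) reduces to one idea: the base models' outputs on an input become the feature vector fed to the meta-learner. A minimal sketch, with plain callables standing in for trained classifiers (the function name and majority-vote meta-model are illustrative assumptions, not the paper's code):

```python
def stack_predict(base_models, meta_model, x):
    # Stage 1: every base model scores the input; their outputs
    # form the meta-level feature vector.
    meta_features = [m(x) for m in base_models]
    # Stage 2: the meta-model makes the final decision from
    # the base models' outputs.
    return meta_model(meta_features)
```

In practice the meta-model is itself trained (here it would be an SVM) on base-model outputs produced via cross-validation, so that it never sees predictions made on the base models' own training data.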
url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7517055/ doi: 10.1007/s11042-020-09873-8 id: cord-319868-rtt9i7wu author: Majeed, Taban title: Issues associated with deploying CNN transfer learning to detect COVID-19 from chest X-rays date: 2020-10-06 words: 7666.0 sentences: 377.0 pages: flesch: 52.0 cache: ./cache/cord-319868-rtt9i7wu.txt txt: ./txt/cord-319868-rtt9i7wu.txt summary: In recent months, much research came out addressing the problem of COVID-19 detection in chest X-rays using deep learning approaches in general, and convolutional neural networks (CNNs) in particular [3] [4] [5] [6] [7] [8] [9] [10]. [3] built a deep convolutional neural network (CNN) based on ResNet50, InceptionV3 and Inception-ResNetV2 models for the classification of COVID-19 chest X-ray images into normal and COVID-19 classes. In [9], authors use CT images to predict COVID-19 cases where they deployed the Inception transfer-learning model to establish an accuracy of 89.5% with a specificity of 88.0% and a sensitivity of 87.0%. Wang and Wong [2] investigated a dataset that they called COVIDx and a neural network architecture called COVID-Net designed for the detection of COVID-19 cases from open source chest X-ray radiography images. The deep learning architectures that we used for the purpose of COVID-19 detection from X-ray images are AlexNet, VGG16, VGG19, ResNet18, ResNet50, ResNet101, GoogLeNet, InceptionV3, SqueezeNet, Inception-ResNet-v2, Xception and DenseNet201. abstract: COVID-19 first occurred in Wuhan, China in December 2019. Subsequently, the virus spread throughout the world and as of June 2020 the total number of confirmed cases is above 4.7 million with over 315,000 deaths. Machine learning algorithms built on radiography images can be used as a decision support mechanism to aid radiologists to speed up the diagnostic process.
The aim of this work is to conduct a critical analysis to investigate the applicability of convolutional neural networks (CNNs) for the purpose of COVID-19 detection in chest X-ray images and highlight the issues of using CNNs directly on the whole image. To accomplish this task, we use 12 off-the-shelf CNN architectures in transfer learning mode on 3 publicly available chest X-ray databases, together with proposing a shallow CNN architecture that we train from scratch. Chest X-ray images are fed into CNN models without any preprocessing to replicate studies that used chest X-rays in this manner. Then a qualitative investigation was performed to inspect the decisions made by CNNs using a technique known as class activation maps (CAM). Using CAMs, one can map the activations that contributed to the decision of CNNs back to the original image to visualize the most discriminating region(s) on the input image. We conclude that CNN decisions should not be taken into consideration, despite their high classification accuracy, until clinicians can visually inspect and approve the region(s) of the input image used by CNNs that lead to its prediction. url: https://doi.org/10.1007/s13246-020-00934-8 doi: 10.1007/s13246-020-00934-8 id: cord-325235-uupiv7wh author: Makris, A. title: COVID-19 detection from chest X-Ray images using Deep Learning and Convolutional Neural Networks date: 2020-05-24 words: 5435.0 sentences: 304.0 pages: flesch: 50.0 cache: ./cache/cord-325235-uupiv7wh.txt txt: ./txt/cord-325235-uupiv7wh.txt summary: In this research work, the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the automatic detection of COVID-19 disease from chest X-Ray images. A collection of 336 X-Ray scans in total from patients with COVID-19 disease, bacterial pneumonia and normal incidents is processed and utilized to train and test the CNNs.
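The class activation maps (CAM) used in the record above to inspect CNN decisions are computed as a class-weighted sum of the final convolutional layer's feature maps. A minimal numpy sketch; the array shapes and function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    # feature_maps: (K, H, W) activations from the last conv layer.
    # class_weights: length-K weights connecting each feature map to
    # the output unit of the predicted class.
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0)            # keep positive evidence only
    return cam / (cam.max() + 1e-12)    # normalize to [0, 1] for overlay
```

The resulting map is upsampled to the input-image size and overlaid as a heatmap, which is how one checks whether the network attended to lung regions rather than irrelevant areas.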
Due to the limited available data related to COVID-19, the transfer learning strategy is employed. The proposed CNN is based on pre-trained transfer models (ResNet50, InceptionV3 and Inception-ResNetV2), in order to obtain high prediction accuracy from a small sample of X-ray images. Abbas et al [22] presented a novel CNN architecture based on transfer learning and class decomposition in order to improve the performance of pre-trained models on the classification of X-ray images. In this research work, the effectiveness of several state-of-the-art pre-trained convolutional neural networks was evaluated regarding the detection of COVID-19 disease from chest X-Ray images. abstract: The COVID-19 pandemic in 2020 has highlighted the need to pull all available resources towards the mitigation of the devastating effects of such "Black Swan" events. Towards that end, we investigated the option to employ technology in order to assist the diagnosis of patients infected by the virus. As such, several state-of-the-art pre-trained convolutional neural networks were evaluated as to their ability to detect infected patients from chest X-Ray images. A dataset was created as a mix of publicly available X-ray images from patients with confirmed COVID-19 disease, common bacterial pneumonia and healthy individuals. To mitigate the small number of samples, we employed transfer learning, which transfers knowledge extracted by pre-trained models to the model to be trained. The experimental results demonstrate that the classification performance can reach an accuracy of 95% for the best two models. url: http://medrxiv.org/cgi/content/short/2020.05.22.20110817v1?rss=1 doi: 10.1101/2020.05.22.20110817 id: cord-256756-8w5rtucg author: Manimala, M. V. R.
title: Sparse MR Image Reconstruction Considering Rician Noise Models: A CNN Approach date: 2020-08-11 words: 6681.0 sentences: 411.0 pages: flesch: 55.0 cache: ./cache/cord-256756-8w5rtucg.txt txt: ./txt/cord-256756-8w5rtucg.txt summary: The proposed algorithm employs a convolutional neural network (CNN) to denoise MR images corrupted with Rician noise. Dictionary learning for MRI (DLMRI) provided an effective solution to recover MR images from sparse k-space data [2], but had a drawback of high computational time. The proposed denoising algorithm reconstructs MR images with high visual quality; further, it can be directly employed without optimization and prediction of the Rician noise level. The proposed CNN-based algorithm is capable of denoising the Rician noise corrupted sparse MR images and also reduces the computation time substantially. This section presents the proposed CNN-based formulation for denoising and reconstruction of MR images from the sparse k-space data. The proposed CNN-based denoising algorithm has been compared with various state-of-the-art techniques, namely (1) Dictionary learning magnetic resonance imaging (DLMRI) [2] and (2) Non-local means (NLM) and its variants, namely unbiased NLM (UNLM), Rician NLM (RNLM), enhanced NLM (ENLM) and enhanced NLM filter with preprocessing (PENLM) [5]. abstract: Compressive sensing (CS) provides a potential platform for acquiring slow and sequential data, as in magnetic resonance (MR) imaging. However, CS requires high computational time for reconstructing MR images from sparse k-space data, which restricts its usage for high speed online reconstruction and wireless communications. Another major challenge is the removal of Rician noise from magnitude MR images, which changes the image characteristics and thus affects the clinical usefulness. The work carried out so far predominantly models MRI noise as a Gaussian type. The use of advanced noise models, primarily the Rician type, in the CS paradigm is less explored.
In this work, we develop a novel framework to reconstruct MR images with high speed and visual quality from noisy sparse k-space data. The proposed algorithm employs a convolutional neural network (CNN) to denoise MR images corrupted with Rician noise. To extract local features, the algorithm exploits signal similarities by processing similar patches as a group. A substantial reduction in the run time has been achieved, as the CNN has been trained on a GPU with the Convolutional Architecture for Fast Feature Embedding framework, making it suitable for online reconstruction. The CNN-based reconstruction also eliminates the necessity of optimization and prediction of the noise level while denoising, which is the major advantage over existing state-of-the-art techniques. Analytical experiments have been carried out with various undersampling schemes and the experimental results demonstrate high accuracy and consistent peak signal to noise ratio even at 20-fold undersampling. High undersampling rates provide scope for wireless transmission of k-space data and high speed reconstruction provides applicability of our algorithm for remote health monitoring. url: https://www.ncbi.nlm.nih.gov/pubmed/32836885/ doi: 10.1007/s11277-020-07725-0 id: cord-317643-pk8cabxj author: Masud, Mehedi title: Convolutional neural network-based models for diagnosis of breast cancer date: 2020-10-09 words: 4149.0 sentences: 276.0 pages: flesch: 53.0 cache: ./cache/cord-317643-pk8cabxj.txt txt: ./txt/cord-317643-pk8cabxj.txt summary: With this motivation, this paper considers eight different fine-tuned pre-trained models to observe how these models classify breast cancers applying on ultrasound images. Authors in [18] proposed a convolutional neural network leveraging the Inception-v3 pre-trained model to classify breast cancer using breast ultrasound images.
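The "processing similar patches as a group" idea in the MR-denoising record above can be illustrated with a toy numpy routine. This is not the paper's CNN; it only sketches the patch-grouping principle (collect the most similar patches, then aggregate them), with all names and parameters chosen for illustration:

```python
import numpy as np

def denoise_by_patch_grouping(image, patch=3, k=4):
    # For each pixel, gather all overlapping patches in the image,
    # find the k patches most similar to the pixel's own patch
    # (squared-difference distance), and average their center values.
    h, w = image.shape
    r = patch // 2
    pad = np.pad(image, r, mode='reflect')
    patches = np.array([pad[i:i + patch, j:j + patch].ravel()
                        for i in range(h) for j in range(w)])
    centers = patches[:, patches.shape[1] // 2]  # center pixel of each patch
    out = np.empty(h * w)
    for idx in range(h * w):
        d = np.sum((patches - patches[idx]) ** 2, axis=1)
        nearest = np.argsort(d)[:k]          # k most similar patches
        out[idx] = centers[nearest].mean()   # aggregate the group
    return out.reshape(h, w)
```

In the paper, groups of similar patches are fed to a CNN rather than simply averaged, but the grouping step serves the same purpose: exploiting self-similarity in the image.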
Authors in [24] compared three CNN-based transfer learning models, ResNet50, Xception, and InceptionV3, and proposed a base model that consists of three convolutional layers to classify breast cancers from the breast ultrasound images dataset. Authors in [27] proposed a novel deep neural network consisting of a clustering method and a CNN model for breast cancer classification using histopathological images. Then eight different pre-trained models, after fine-tuning, are applied on the combined dataset to observe the performance results of breast cancer classification. This study implemented eight pre-trained CNN models with fine-tuning, leveraging transfer learning, to observe the classification performance of breast cancer from ultrasound images. abstract: Breast cancer is the most prevailing cancer in the world, affecting millions of women each year. It is also the cause of the largest number of cancer deaths among women. During the last few years, researchers have proposed different convolutional neural network models in order to facilitate the diagnostic process of breast cancer. Convolutional neural networks are showing promising results in classifying cancers using image datasets. There is still a lack of standard models that can claim to be the best, because of the unavailability of large datasets that can be used for model training and validation. Hence, researchers are now focusing on leveraging the transfer learning approach, using pre-trained models as feature extractors that are trained over millions of different images. With this motivation, this paper considers eight different fine-tuned pre-trained models to observe how these models classify breast cancers applying on ultrasound images. We also propose a shallow custom convolutional neural network that outperforms the pre-trained models with respect to different performance metrics.
The proposed model shows 100% accuracy and achieves a 1.0 AUC score, whereas the best pre-trained model shows 92% accuracy and a 0.972 AUC score. In order to avoid bias, the model is trained using the fivefold cross-validation technique. Moreover, the model is faster in training than the pre-trained models and requires a small number of trainable parameters. The Grad-CAM heat map visualization technique also shows how well the proposed model extracts important features to classify breast cancers. url: https://www.ncbi.nlm.nih.gov/pubmed/33052172/ doi: 10.1007/s00521-020-05394-5 id: cord-337740-8ujk830g author: Matencio, Adrián title: Cyclic Oligosaccharides as Active Drugs, an Updated Review date: 2020-09-29 words: 8089.0 sentences: 385.0 pages: flesch: 39.0 cache: ./cache/cord-337740-8ujk830g.txt txt: ./txt/cord-337740-8ujk830g.txt summary: There have been many reviews of the cyclic oligosaccharide cyclodextrin (CD) and CD-based materials used for drug delivery, but the capacity of CDs to complex different agents and their own intrinsic properties suggest they might also be considered for use as active drugs, not only as carriers. The review is divided into lipid-related diseases, aggregation diseases, antiviral and antiparasitic activities, anti-anesthetic agent, function in diet, removal of organic toxins, CDs and collagen, cell differentiation, and finally, their use in contact lenses in which no drug other than CDs are involved.
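The fivefold cross-validation protocol mentioned in the breast-cancer record above splits the data so that every sample serves in the validation set exactly once. A minimal index-splitting sketch in plain Python (the helper name is an assumption for illustration):

```python
def kfold_indices(n, k=5):
    # Split n sample indices into k interleaved folds; each fold is
    # used once as the validation set while the remaining indices
    # form the training set.
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        val_set = set(val)
        train = [j for j in idx if j not in val_set]
        splits.append((train, val))
    return splits
```

Reported metrics are then averaged over the k validation folds, which is what makes the evaluation less sensitive to any one train/test split.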
In addition to CDs, another indigestible dietary cyclic oligosaccharide, formed by four D-glucopyranosyl residues linked by alternating α(1→3) and α(1→6) glucosidic linkages, was recently found to have intrinsic bioactivity: cyclic nigerosyl-1,6-nigerose, or cyclotetraglucose (CNN, Figure 1 [21]). The present review will update the most relevant applications mentioned in the review by Braga et al., 2019, including applications such as the ability of CDs to combat aggregation diseases, their dietary functions, toxin removal, cell differentiation, and their application in contact lenses. abstract: There have been many reviews of the cyclic oligosaccharide cyclodextrin (CD) and CD-based materials used for drug delivery, but the capacity of CDs to complex different agents and their own intrinsic properties suggest they might also be considered for use as active drugs, not only as carriers. The aim of this review is to summarize the direct use of CDs as drugs, without using their complexing potential with other substances. The direct application of another oligosaccharide called cyclic nigerosyl-1,6-nigerose (CNN) is also described. The review is divided into lipid-related diseases, aggregation diseases, antiviral and antiparasitic activities, anti-anesthetic agent, function in diet, removal of organic toxins, CDs and collagen, cell differentiation, and finally, their use in contact lenses in which no drug other than CDs are involved. In the case of CNN, its application as a dietary supplement and immunological modulator is explained. Finally, a critical structure–activity explanation is provided. url: https://doi.org/10.3390/ph13100281 doi: 10.3390/ph13100281 id: cord-275258-azpg5yrh author: Mead, Dylan J.T.
title: Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling date: 2019-07-26 words: 6333.0 sentences: 346.0 pages: flesch: 53.0 cache: ./cache/cord-275258-azpg5yrh.txt txt: ./txt/cord-275258-azpg5yrh.txt summary: title: Visualization of protein sequence space with force-directed graphs, and their application to the choice of target-template pairs for homology modelling This paper presents the first use of force-directed graphs for the visualization of sequence space in two dimensions, and applies them to the choice of suitable RNA-dependent RNA polymerase (RdRP) target-template pairs within human-infective RNA virus genera. Measures of centrality in protein sequence space for each genus were also derived and used to identify centroid nearest-neighbour sequences (CNNs) potentially useful for production of homology models most representative of their genera. We then present the first use of force-directed graphs to produce an intuitive visualization of sequence space, and select target RdRPs without solved structures for homology modelling. The solved structure has 10 other sequences in its proximity in the three-dimensional space, roughly as in Table 5 (Homology modelling at intra-order, inter-family level). abstract: The protein sequence-structure gap results from the contrast between rapid, low-cost deep sequencing, and slow, expensive experimental structure determination techniques. Comparative homology modelling may have the potential to close this gap by predicting protein structure in target sequences using existing experimentally solved structures as templates. This paper presents the first use of force-directed graphs for the visualization of sequence space in two dimensions, and applies them to the choice of suitable RNA-dependent RNA polymerase (RdRP) target-template pairs within human-infective RNA virus genera.
Measures of centrality in protein sequence space for each genus were also derived and used to identify centroid nearest-neighbour sequences (CNNs) potentially useful for production of homology models most representative of their genera. Homology modelling was then carried out for target-template pairs in different species, different genera and different families, and model quality assessed using several metrics. Reconstructed ancestral RdRP sequences for individual genera were also used as templates for the production of ancestral RdRP homology models. High quality ancestral RdRP models were consistently produced, as were good quality models for target-template pairs in the same genus. Homology modelling between genera in the same family produced mixed results and inter-family modelling was unreliable. We present a protocol for the production of optimal RdRP homology models for use in further experiments, e.g. docking to discover novel anti-viral compounds. (219 words) url: https://www.sciencedirect.com/science/article/pii/S109332631930333X doi: 10.1016/j.jmgm.2019.07.014 id: cord-286887-s8lvimt3 author: Nour, Majid title: A Novel Medical Diagnosis model for COVID-19 infection detection based on Deep Features and Bayesian Optimization date: 2020-07-28 words: 3686.0 sentences: 250.0 pages: flesch: 55.0 cache: ./cache/cord-286887-s8lvimt3.txt txt: ./txt/cord-286887-s8lvimt3.txt summary: The proposed model is based on the convolution neural network (CNN) architecture and can automatically reveal discriminative features on chest X-ray images through its convolution with rich filter families, abstraction, and weight-sharing characteristics. study [5] , they used Chest Computed Tomography (CT) images and Deep Transfer Learning (DTL) method to detect COVID-19 and obtained a high diagnostic accuracy. 
proposed a novel hybrid method called the Fuzzy Color technique + deep learning models (MobileNetV2, SqueezeNet) with a Social Mimic optimization method to classify the COVID-19 cases and achieved high success rate in their work [6] . (2) The deep features extracted from deep layers of CNNs have been applied as the input to machine learning models to further improve COVID-19 infection detection. Only the number of samples in the COVID-19 class is increased by using the offline data augmentation approach, and then the proposed CNN model is trained and tested. abstract: A pneumonia of unknown causes, which was detected in Wuhan, China, and spread rapidly throughout the world, was declared as Coronavirus disease 2019 (COVID-19). Thousands of people have lost their lives to this disease. Its negative effects on public health are ongoing. In this study, an intelligence computer-aided model that can automatically detect positive COVID-19 cases is proposed to support daily clinical applications. The proposed model is based on the convolution neural network (CNN) architecture and can automatically reveal discriminative features on chest X-ray images through its convolution with rich filter families, abstraction, and weight-sharing characteristics. Contrary to the generally used transfer learning approach, the proposed deep CNN model was trained from scratch. Instead of the pre-trained CNNs, a novel serial network consisting of five convolution layers was designed. This CNN model was utilized as a deep feature extractor. The extracted deep discriminative features were used to feed the machine learning algorithms, which were k-nearest neighbor, support vector machine (SVM), and decision tree. The hyperparameters of the machine learning models were optimized using the Bayesian optimization algorithm. The experiments were conducted on a public COVID-19 radiology database. The database was divided into two parts as training and test sets with 70% and 30% rates, respectively. 
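The pipeline described above (a CNN used as a deep feature extractor, with the extracted features fed to classical classifiers such as k-NN, SVM, and decision trees) can be sketched framework-free; here the k-NN stage is shown over toy "deep feature" vectors, using plain numpy instead of the authors' implementation:

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    """Classify deep-feature vectors with plain k-nearest neighbours.

    Stands in for feeding CNN-extracted features to a classical classifier.
    """
    train_feats = np.asarray(train_feats, float)
    query_feats = np.asarray(query_feats, float)
    labels = np.asarray(train_labels)
    preds = []
    for q in query_feats:
        d = np.linalg.norm(train_feats - q, axis=1)   # Euclidean distances
        nearest = labels[np.argsort(d)[:k]]           # k closest training rows
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])         # majority vote
    return np.array(preds)

# Toy "deep features": two well-separated clusters.
X = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
y = [0, 0, 1, 1]
print(knn_predict(X, y, [[0.05, 0.05], [5.05, 5.05]], k=3))
```

In the paper, the SVM variant (with Bayesian-optimized hyperparameters) performed best; the k-NN above is just the simplest of the three classifiers to show in full.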
As a result, the most efficient results were ensured by the SVM classifier with an accuracy of 98.97%, a sensitivity of 89.39%, a specificity of 99.75%, and an F-score of 96.72%. Consequently, a cheap, fast, and reliable intelligence tool has been provided for COVID-19 infection detection. The developed model can be used to assist field specialists, physicians, and radiologists in the decision-making process. Thanks to the proposed tool, the misdiagnosis rates can be reduced, and the proposed model can be used as a retrospective evaluation tool to validate positive COVID-19 infection cases. url: https://doi.org/10.1016/j.asoc.2020.106580 doi: 10.1016/j.asoc.2020.106580 id: cord-330239-l8fp8cvz author: Oyelade, O. N. title: Deep Learning Model for Improving the Characterization of Coronavirus on Chest X-ray Images Using CNN date: 2020-11-03 words: 6444.0 sentences: 325.0 pages: flesch: 50.0 cache: ./cache/cord-330239-l8fp8cvz.txt txt: ./txt/cord-330239-l8fp8cvz.txt summary: The proposed model is then applied to the COVID-19 X-ray dataset in this study which is the National Institutes of Health (NIH) Chest X-Ray dataset obtained from Kaggle for the purpose of promoting early detection and screening of coronavirus disease. Several studies [4, 5, 6, 78, 26, 30] and reviews which have adapted CNN to the task of detection and classification of COVID-19 have proven that the deep learning model is one of the most popular and effective approaches in the diagnosis of COVID-19 from digitized images. In this paper, we propose the application of a deep learning model in the category of Convolutional Neural Network (CNN) techniques to automate the process of extracting important features and then the classification or detection of COVID-19 from digital images, and this may eventually be supportive in overcoming the issue of a shortage of trained physicians in remote communities [24].
abstract: The novel Coronavirus, also known as Covid19, is a pandemic that has weighed heavily on the socio-economic affairs of the world. Although research into the production of a relevant vaccine is being advanced, there is, however, a need for a computational solution to mediate the process of aiding quick detection of the disease. Different computational solutions comprising natural language processing, knowledge engineering and deep learning have been adopted for this task. However, deep learning solutions have shown interesting performance compared to other methods. This paper therefore aims to advance the application of deep learning techniques to the problem of characterization and detection of novel coronavirus. The approach adopted in this study proposes a convolutional neural network (CNN) model which is further enhanced using the technique of data augmentation. The motive for the enhancement of the CNN model through the latter technique is to investigate the possibility of further improving the performance of deep learning models in the detection of coronavirus. The proposed model is then applied to the COVID-19 X-ray dataset in this study which is the National Institutes of Health (NIH) Chest X-Ray dataset obtained from Kaggle for the purpose of promoting early detection and screening of coronavirus disease. Results obtained showed that our approach achieved a performance of 100% accuracy, recall/precision of 0.85, F-measure of 0.9, and specificity of 1.0. The proposed CNN model and data augmentation solution may be adopted in pre-screening suspected cases of Covid19 to provide support to the use of the well-known RT-PCR testing.
url: https://doi.org/10.1101/2020.10.30.20222786 doi: 10.1101/2020.10.30.20222786 id: cord-168974-w80gndka author: Ozkaya, Umut title: Coronavirus (COVID-19) Classification using Deep Features Fusion and Ranking Technique date: 2020-04-07 words: 3585.0 sentences: 254.0 pages: flesch: 59.0 cache: ./cache/cord-168974-w80gndka.txt txt: ./txt/cord-168974-w80gndka.txt summary: In this study, a novel method was proposed as fusing and ranking deep features to detect COVID-19 in early phase. Within the scope of the proposed method, 3000 patch images have been labelled as CoVID-19 and No finding for using in training and testing phase. According to other pre-trained Convolutional Neural Network (CNN) models used in transfer learning, the proposed method shows high performance on Subset-2 with 98.27% accuracy, 98.93% sensitivity, 97.60% specificity, 97.63% precision, 98.28% F1-score and 96.54% Matthews Correlation Coefficient (MCC) metrics. When the studies in the literature are examined, Shan et al proposed a neural network model called VB-Net in order to segment the COVID-19 regions in CT images. were able to successfully diagnose COVID-19 using deep learning models that could obtain graphical features in CT images [8] . Deep features were obtained with pre-trained Convolutional Neural Network (CNN) models. In the study, deep features were obtained by using pre-trained CNN networks. abstract: Coronavirus (COVID-19) emerged towards the end of 2019. World Health Organization (WHO) was identified it as a global epidemic. Consensus occurred in the opinion that using Computerized Tomography (CT) techniques for early diagnosis of pandemic disease gives both fast and accurate results. It was stated by expert radiologists that COVID-19 displays different behaviours in CT images. In this study, a novel method was proposed as fusing and ranking deep features to detect COVID-19 in early phase. 
16x16 (Subset-1) and 32x32 (Subset-2) patches were obtained from 150 CT images to generate sub-datasets. Within the scope of the proposed method, 3000 patch images have been labelled as COVID-19 and No finding for use in the training and testing phases. Feature fusion and ranking methods have been applied in order to increase the performance of the proposed method. Then, the processed data was classified with a Support Vector Machine (SVM). Compared to other pre-trained Convolutional Neural Network (CNN) models used in transfer learning, the proposed method shows high performance on Subset-2 with 98.27% accuracy, 98.93% sensitivity, 97.60% specificity, 97.63% precision, 98.28% F1-score and 96.54% Matthews Correlation Coefficient (MCC) metrics. url: https://arxiv.org/pdf/2004.03698v1.pdf doi: nan id: cord-131094-1zz8rd3h author: Parisi, L. title: QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics date: 2020-10-15 words: 7546.0 sentences: 325.0 pages: flesch: 48.0 cache: ./cache/cord-131094-1zz8rd3h.txt txt: ./txt/cord-131094-1zz8rd3h.txt summary: Despite a higher computational cost, results indicated an overall higher classification accuracy, precision, recall and F1-score brought about by either quantum AFs on five of the seven benchmark datasets, thus demonstrating their potential to be the new benchmark or gold standard AF in CNNs and aid image classification tasks involved in critical applications, such as medical diagnoses of COVID-19 and PD. Despite a higher computational cost (four-fold with respect to the other AFs except for the CReLU's increase being almost three-fold), the results achieved by either or both the proposed QReLU and m-QReLU AFs, assessed on classification accuracy, precision, recall and F1-score, indicate an overall higher generalisation achieved on five of the seven benchmark datasets (Table 2 on the MNIST data, Tables 3 and 5 on PD-related spiral drawings, Tables 7 and 8 on COVID-19 lung US images).
abstract: The ReLU activation function (AF) has been extensively applied in deep neural networks, in particular Convolutional Neural Networks (CNN), for image classification despite its unresolved dying ReLU problem, which poses challenges to reliable applications. This issue has obvious important implications for critical applications, such as those in healthcare. Recent approaches are just proposing variations of the activation function within the same unresolved dying ReLU challenge. This contribution reports a different research direction by investigating the development of an innovative quantum approach to the ReLU AF that avoids the dying ReLU problem by disruptive design. The Leaky ReLU was leveraged as a baseline on which the two quantum principles of entanglement and superposition were applied to derive the proposed Quantum ReLU (QReLU) and the modified-QReLU (m-QReLU) activation functions. Both QReLU and m-QReLU are implemented and made freely available in TensorFlow and Keras. This original approach is effective and validated extensively in case studies that facilitate the detection of COVID-19 and Parkinson Disease (PD) from medical images. The two novel AFs were evaluated in a two-layered CNN against nine ReLU-based AFs on seven benchmark datasets, including images of spiral drawings taken via graphic tablets from patients with Parkinson Disease and healthy subjects, and point-of-care ultrasound images on the lungs of patients with COVID-19, those with pneumonia and healthy controls. Despite a higher computational cost, results indicated an overall higher classification accuracy, precision, recall and F1-score brought about by either quantum AFs on five of the seven benchmark datasets, thus demonstrating their potential to be the new benchmark or gold standard AF in CNNs and aid image classification tasks involved in critical applications, such as medical diagnoses of COVID-19 and PD.
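The abstract names Leaky ReLU as the baseline from which QReLU and m-QReLU are derived (the quantum variants themselves are released in the paper's TensorFlow/Keras code and their exact formulas are not reproduced here). A framework-free sketch of the baseline and the dying-ReLU behaviour it mitigates:

```python
import numpy as np

def relu(x):
    """Standard ReLU: negative inputs are zeroed (source of 'dying ReLU')."""
    return np.maximum(0.0, np.asarray(x, float))

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU, the paper's baseline: negative inputs keep a small
    slope `alpha`, so gradients survive instead of vanishing."""
    x = np.asarray(x, float)
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))        # negatives clamp to 0
print(leaky_relu(x))  # negatives scaled by alpha
```
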
url: https://arxiv.org/pdf/2010.08031v1.pdf doi: nan id: cord-135296-qv7pacau author: Polsinelli, Matteo title: A Light CNN for detecting COVID-19 from CT scans of the chest date: 2020-04-24 words: 3833.0 sentences: 194.0 pages: flesch: 56.0 cache: ./cache/cord-135296-qv7pacau.txt txt: ./txt/cord-135296-qv7pacau.txt summary: We propose a light CNN design based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with other CT images (community-acquired pneumonia and/or healthy images). On the tested datasets, the proposed modified SqueezeNet CNN achieved 83.00% of accuracy, 85.00% of sensitivity, 81.00% of specificity, 81.73% of precision and 0.8333 of F1Score in a very efficient way (7.81 seconds on a medium-end laptop without GPU acceleration). In the present work, we aim at obtaining acceptable performances for an automatic method in recognizing COVID-19 CT images of lungs while, at the same time, dealing with reduced datasets for training and validation and reducing the computational overhead imposed by more complex automatic systems. In this work we developed, trained and tested a light CNN (based on the SqueezeNet) to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images. abstract: COVID-19 is a world-wide disease that has been declared as a pandemic by the World Health Organization. Computer Tomography (CT) imaging of the chest seems to be a valid diagnosis tool to detect COVID-19 promptly and to control the spread of the disease. Deep Learning has been extensively used in medical imaging and convolutional neural networks (CNNs) have been also used for classification of CT images. We propose a light CNN design based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with other CT images (community-acquired pneumonia and/or healthy images).
On the tested datasets, the proposed modified SqueezeNet CNN achieved 83.00% of accuracy, 85.00% of sensitivity, 81.00% of specificity, 81.73% of precision and 0.8333 of F1Score in a very efficient way (7.81 seconds on a medium-end laptop without GPU acceleration). Besides performance, the average classification time is very competitive with respect to more complex CNN designs, thus allowing its usability also on medium power computers. In the near future we aim at improving the performances of the method along two directions: 1) by increasing the training dataset (as soon as other CT images will be available); 2) by introducing an efficient pre-processing strategy. url: https://arxiv.org/pdf/2004.12837v1.pdf doi: nan id: cord-296359-pt86juvr author: Polsinelli, Matteo title: A Light CNN for detecting COVID-19 from CT scans of the chest date: 2020-10-03 words: 3887.0 sentences: 201.0 pages: flesch: 54.0 cache: ./cache/cord-296359-pt86juvr.txt txt: ./txt/cord-296359-pt86juvr.txt summary: In this work we propose a light Convolutional Neural Network (CNN) design, based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with respect to other community-acquired pneumonia and/or healthy CT images. Also the average classification time on a high-end workstation, 1.25 seconds, is very competitive with respect to that of more complex CNN designs, 13.41 seconds, which require pre-processing. We started from the model of the SqueezeNet CNN to discriminate between COVID-19 and community-acquired pneumonia and/or healthy CT images. In this arrangement the number of images from the Italian dataset used to train, validate and Test-1 are 60, 20 and 20, respectively. For each dataset arrangement we organized 4 experiments in which we tested different CNN models, transfer learning and the effectiveness of data augmentation.
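The "light CNN" records above build on SqueezeNet, whose small footprint comes from fire modules (a 1x1 squeeze convolution feeding parallel 1x1 and 3x3 expand convolutions). A rough weight-count sketch, using standard fire-module shapes from the original SqueezeNet paper rather than these authors' exact configuration:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution (biases ignored for simplicity)."""
    return c_in * c_out * k * k

def fire_module_params(c_in, squeeze, expand1, expand3):
    """SqueezeNet fire module: 1x1 squeeze, then parallel 1x1 + 3x3 expand."""
    return (conv_params(c_in, squeeze, 1)
            + conv_params(squeeze, expand1, 1)
            + conv_params(squeeze, expand3, 3))

# Compare a fire module against a plain 3x3 conv with the same in/out widths.
c_in, c_out = 128, 128
fire = fire_module_params(c_in, squeeze=16, expand1=64, expand3=64)
plain = conv_params(c_in, c_out, 3)
print(fire, plain)  # -> 12288 147456: an order of magnitude fewer weights
```

This weight reduction is what makes the 7.81-second laptop inference times quoted above plausible without GPU acceleration.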
For each attempt, the CNN model has been trained for 20 epochs and evaluated by the accuracy results calculated on the validation dataset. abstract: Computer Tomography (CT) imaging of the chest is a valid diagnosis tool to detect COVID-19 promptly and to control the spread of the disease. In this work we propose a light Convolutional Neural Network (CNN) design, based on the model of the SqueezeNet, for the efficient discrimination of COVID-19 CT images with respect to other community-acquired pneumonia and/or healthy CT images. The architecture achieves an accuracy of 85.03% with an improvement of about 3.2% in the first dataset arrangement and of about 2.1% in the second dataset arrangement. The obtained gain, though modest, can be really important in medical diagnosis and, in particular, for the COVID-19 scenario. Also the average classification time on a high-end workstation, 1.25 seconds, is very competitive with respect to that of more complex CNN designs, 13.41 seconds, which require pre-processing. The proposed CNN can be executed on a medium-end laptop without GPU acceleration in 7.81 seconds: this is impossible for methods requiring GPU acceleration. The performance of the method can be further improved with efficient pre-processing strategies for which GPU acceleration is not necessary. url: https://www.sciencedirect.com/science/article/pii/S0167865520303688?v=s5 doi: 10.1016/j.patrec.2020.10.001 id: cord-127759-wpqdtdjs author: Qi, Xiao title: Chest X-ray Image Phase Features for Improved Diagnosis of COVID-19 Using Convolutional Neural Network date: 2020-11-06 words: 3896.0 sentences: 250.0 pages: flesch: 50.0 cache: ./cache/cord-127759-wpqdtdjs.txt txt: ./txt/cord-127759-wpqdtdjs.txt summary: In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for multi-class improved classification of COVID-19 from CXR images.
In this work we show how local phase CXR features based image enhancement improves the accuracy of CNN architectures for COVID-19 diagnosis. Our proposed method is designed for processing CXR images and consists of two main stages as illustrated in Figure 1 : 1-We enhance the CXR images (CXR(x, y)) using local phase-based image processing method in order to obtain a multi-feature CXR image (M F (x, y)), and 2-we classify CXR(x, y) by designing a deep learning approach where multi feature CXR images (M F (x, y)), together with original CXR data (CXR(x, y)), is used for improving the classification performance. Our proposed multi-feature CNN architectures were trained on a large dataset in terms of the number of COVID-19 CXR scans and have achieved improved classification accuracy across all classes. abstract: Recently, the outbreak of the novel Coronavirus disease 2019 (COVID-19) pandemic has seriously endangered human health and life. Due to limited availability of test kits, the need for auxiliary diagnostic approach has increased. Recent research has shown radiography of COVID-19 patient, such as CT and X-ray, contains salient information about the COVID-19 virus and could be used as an alternative diagnosis method. Chest X-ray (CXR) due to its faster imaging time, wide availability, low cost and portability gains much attention and becomes very promising. Computational methods with high accuracy and robustness are required for rapid triaging of patients and aiding radiologist in the interpretation of the collected data. In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for multi-class improved classification of COVID-19 from CXR images. CXR images are enhanced using a local phase-based image enhancement method. The enhanced images, together with the original CXR data, are used as an input to our proposed CNN architecture. 
Using ablation studies, we show the effectiveness of the enhanced images in improving the diagnostic accuracy. We provide quantitative evaluation on two datasets and qualitative results for visual inspection. Quantitative evaluation is performed on data consisting of 8,851 normal (healthy), 6,045 pneumonia, and 3,323 Covid-19 CXR scans. In Dataset-1, our model achieves 95.57% average accuracy for a three classes classification, 99% precision, recall, and F1-scores for COVID-19 cases. For Dataset-2, we have obtained 94.44% average accuracy, and 95% precision, recall, and F1-scores for detection of COVID-19. Conclusions: Our proposed multi-feature guided CNN achieves improved results compared to single-feature CNN proving the importance of the local phase-based CXR image enhancement. url: https://arxiv.org/pdf/2011.03585v1.pdf doi: nan id: cord-024491-f16d1zov author: Qiu, Xi title: Simultaneous ECG Heartbeat Segmentation and Classification with Feature Fusion and Long Term Context Dependencies date: 2020-04-17 words: 3465.0 sentences: 245.0 pages: flesch: 55.0 cache: ./cache/cord-024491-f16d1zov.txt txt: ./txt/cord-024491-f16d1zov.txt summary: To achieve simultaneous segmentation and classification, we present a Faster R-CNN based model that has been customized to handle ECG data. Since deep learning methods can produce feature maps from raw data, heartbeat segmentation can be simultaneously conducted with classification with a single neural network. To achieve simultaneous segmentation and classification, we present a Faster R-CNN [2] based model that has been customized to handle ECG sequences. In our method, we present a modified Faster R-CNN for arrhythmia detection which works in only two steps: preprocessing, and simultaneous heartbeat segmentation and classification. The architecture of our model is shown in Fig. 2 , which takes 1-D ECG sequence as its input and conducts heartbeat segmentation and classification simultaneously. 
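The Qiu et al. record above describes a feature fusion subroutine that combines average pooling with max-pooling. One plausible realisation (an assumption, not the authors' code) is to pool the same window with both operators and concatenate the results, keeping both the smooth summary and the peak response of each window of the 1-D ECG feature map:

```python
import numpy as np

def fused_pool(feature_map, window=2):
    """Fuse average pooling with max pooling over a 1-D feature map.

    Each non-overlapping window contributes both its mean (smooth summary)
    and its max (peak response); the two pooled maps are concatenated.
    """
    x = np.asarray(feature_map, float)
    n = (len(x) // window) * window      # drop any trailing partial window
    blocks = x[:n].reshape(-1, window)
    avg = blocks.mean(axis=1)
    mx = blocks.max(axis=1)
    return np.concatenate([avg, mx])

sig = [0.0, 2.0, 1.0, 3.0]
print(fused_pool(sig))  # -> [1. 2. 2. 3.]
```
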
Different from most deep learning methods which compute feature maps for a single heartbeat, our backbone model takes a long ECG sequence as its input. abstract: Arrhythmia detection by classifying ECG heartbeats is an important research topic for healthcare. Recently, deep learning models have been increasingly applied to ECG classification. Among them, most methods work in three steps: preprocessing, heartbeat segmentation and beat-wise classification. However, this methodology has two drawbacks. First, explicit heartbeat segmentation can undermine model simplicity and compactness. Second, beat-wise classification risks losing inter-heartbeat context information that can be useful to achieving high classification performance. Addressing these drawbacks, we propose a novel deep learning model that can simultaneously conduct heartbeat segmentation and classification. Compared to existing methods, our model is more compact as it does not require explicit heartbeat segmentation. Moreover, our model is more context-aware, for it takes into account the relationship between heartbeats. To achieve simultaneous segmentation and classification, we present a Faster R-CNN based model that has been customized to handle ECG data. To characterize inter-heartbeat context information, we exploit inverted residual blocks and a novel feature fusion subroutine that combines average pooling with max-pooling. Extensive experiments on the well-known MIT-BIH database indicate that our method can achieve competitive results for ECG segmentation and classification. url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206251/ doi: 10.1007/978-3-030-47436-2_28 id: cord-269270-i2odcsx7 author: Sahlol, Ahmed T. 
title: COVID-19 image classification using deep features and fractional-order marine predators algorithm date: 2020-09-21 words: 7058.0 sentences: 437.0 pages: flesch: 53.0 cache: ./cache/cord-269270-i2odcsx7.txt txt: ./txt/cord-269270-i2odcsx7.txt summary: In this paper, we propose an improved hybrid classification approach for COVID-19 images by combining the strengths of CNNs (using a powerful architecture called Inception) to extract features and a swarm-based feature selection algorithm (Marine Predators Algorithm) to select the most relevant features. The proposed COVID-19 X-ray classification approach starts by applying a CNN (especially, a powerful architecture called Inception which pre-trained on Imagnet dataset) to extract the discriminant features from raw images (with no pre-processing or segmentation) from the dataset that contains positive and negative COVID-19 images. 1. Propose an efficient hybrid classification approach for COVID-19 using a combination of CNN and an improved swarm-based feature selection algorithm. 4. Evaluate the proposed approach by performing extensive comparisons to several state-of-art feature selection algorithms, most recent CNN architectures and most recent relevant works and existing classification methods of COVID-19 images. abstract: Currently, we witness the severe spread of the pandemic of the new Corona virus, COVID-19, which causes dangerous symptoms to humans and animals, its complications may lead to death. Although convolutional neural networks (CNNs) is considered the current state-of-the-art image classification technique, it needs massive computational cost for deployment and training. In this paper, we propose an improved hybrid classification approach for COVID-19 images by combining the strengths of CNNs (using a powerful architecture called Inception) to extract features and a swarm-based feature selection algorithm (Marine Predators Algorithm) to select the most relevant features. 
A combination of fractional-order and marine predators algorithm (FO-MPA) is considered an integration among a robust tool in mathematics named fractional-order calculus (FO). The proposed approach was evaluated on two public COVID-19 X-ray datasets which achieves both high performance and reduction of computational complexity. The two datasets consist of X-ray COVID-19 images by international Cardiothoracic radiologist, researchers and others published on Kaggle. The proposed approach selected successfully 130 and 86 out of 51 K features extracted by inception from dataset 1 and dataset 2, while improving classification accuracy at the same time. The results are the best achieved on these datasets when compared to a set of recent feature selection algorithms. By achieving 98.7%, 98.2% and 99.6%, 99% of classification accuracy and F-Score for dataset 1 and dataset 2, respectively, the proposed approach outperforms several CNNs and all recent works on COVID-19 images. url: https://doi.org/10.1038/s41598-020-71294-2 doi: 10.1038/s41598-020-71294-2 id: cord-258170-kyztc1jp author: Shorfuzzaman, Mohammad title: Towards the sustainable development of smart cities through mass video surveillance: A response to the COVID-19 pandemic date: 2020-11-05 words: 5371.0 sentences: 300.0 pages: flesch: 54.0 cache: ./cache/cord-258170-kyztc1jp.txt txt: ./txt/cord-258170-kyztc1jp.txt summary: In particular, we make the following contributions: (a) A deep learning-based framework is presented for monitoring social distancing in the context of sustainable smart cities in an effort to curb the spread of COVID-19 or similar infectious diseases; (b) The proposed system leverages state-of-the-art, deep learning-based real-time object detection models for the detection of people in videos, captured with a monocular camera, to implement social distancing monitoring use cases; (c) A J o u r n a l P r e -p r o o f perspective transformation is presented, where the captured video is 
transformed from a perspective view to a bird's eye (top-down) view to identify the region of interest (ROI) in which social distancing will be monitored; (d) A detailed performance evaluation is provided to show the effectiveness of the proposed system on a video surveillance dataset. abstract: Sustainable smart city initiatives around the world have recently had great impact on the lives of citizens and brought significant changes to society. More precisely, data-driven smart applications that efficiently manage sparse resources are offering a futuristic vision of smart, efficient, and secure city operations. However, the ongoing COVID-19 pandemic has revealed the limitations of existing smart city deployment; hence, the development of systems and architectures capable of providing fast and effective mechanisms to limit further spread of the virus has become paramount. An active surveillance system capable of monitoring and enforcing social distancing between people can effectively slow the spread of this deadly virus. In this paper, we propose a data-driven deep learning-based framework for the sustainable development of a smart city, offering a timely response to combat the COVID-19 pandemic through mass video surveillance. To implement social distancing monitoring, we used three deep learning-based real-time object detection models for the detection of people in videos captured with a monocular camera. We validated the performance of our system using a real-world video surveillance dataset for effective deployment.
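The bird's-eye transformation described above maps detected people from the image plane to the ground plane, where pixel distances become physically meaningful. A minimal sketch, assuming a 3x3 ground-plane homography `H` is already known (in practice it would come from calibrating four reference points, e.g. with OpenCV's `cv2.getPerspectiveTransform`; the identity matrix below is purely illustrative):

```python
import numpy as np

def to_birds_eye(points, H):
    """Map image-plane (x, y) points to the ground plane with homography H."""
    pts = np.asarray(points, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

def too_close(points, H, min_dist=2.0):
    """Index pairs of detections closer than min_dist in the top-down view."""
    g = to_birds_eye(points, H)
    pairs = []
    for i in range(len(g)):
        for j in range(i + 1, len(g)):
            if np.linalg.norm(g[i] - g[j]) < min_dist:
                pairs.append((i, j))
    return pairs

H = np.eye(3)  # placeholder homography; a real one comes from calibration
people = [[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]]
print(too_close(people, H))  # -> [(0, 1)]
```
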
url: https://www.sciencedirect.com/science/article/pii/S2210670720308003?v=s5 doi: 10.1016/j.scs.2020.102582 id: cord-102774-mtbo1tnq author: Sun, Yuliang title: Real-Time Radar-Based Gesture Detection and Recognition Built in an Edge-Computing Platform date: 2020-05-20 words: 6381.0 sentences: 348.0 pages: flesch: 60.0 cache: ./cache/cord-102774-mtbo1tnq.txt txt: ./txt/cord-102774-mtbo1tnq.txt summary: In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system to recognize gestures is proposed. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement-cycles and encodes them into a feature cube. Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as input of a shallow CNN for gesture recognition to reduce the computational complexity. [16] projected the range-Doppler-measurement-cycles into range-time and Doppler-time to reduce the input dimension of the LSTM layer and achieved a good classification accuracy in real-time, the proposed algorithms were implemented on a personal computer with powerful computational capability. abstract: In this paper, a real-time signal processing framework based on a 60 GHz frequency-modulated continuous wave (FMCW) radar system to recognize gestures is proposed. In order to improve the robustness of the radar-based gesture recognition system, the proposed framework extracts a comprehensive hand profile, including range, Doppler, azimuth and elevation, over multiple measurement-cycles and encodes them into a feature cube.
Rather than feeding the range-Doppler spectrum sequence into a deep convolutional neural network (CNN) connected with recurrent neural networks, the proposed framework takes the aforementioned feature cube as input of a shallow CNN for gesture recognition to reduce the computational complexity. In addition, we develop a hand activity detection (HAD) algorithm to automate the detection of gestures in the real-time case. The proposed HAD can capture the time-stamp at which a gesture finishes and feeds the hand profile of all the relevant measurement-cycles before this time-stamp into the CNN with low latency. Since the proposed framework is able to detect and classify gestures at limited computational cost, it could be deployed in an edge-computing platform for real-time applications, whose performance is notably inferior to a state-of-the-art personal computer. The experimental results show that the proposed framework has the capability of classifying 12 gestures in real-time with a high F1-score. url: https://arxiv.org/pdf/2005.10145v1.pdf doi: 10.1109/jsen.2020.2994292 id: cord-202184-hh7hugqi author: Wang, Jun title: Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Network date: 2020-10-10 words: 5291.0 sentences: 319.0 pages: flesch: 43.0 cache: ./cache/cord-202184-hh7hugqi.txt txt: ./txt/cord-202184-hh7hugqi.txt summary: In this work, we propose three strategies to improve the capability of EfficientNet, including developing a cropping method called Random Center Cropping (RCC) to retain significant features on the center area of images, reducing the downsampling scale of EfficientNet to facilitate the small resolution images of RPCam datasets, and integrating Attention and Feature Fusion mechanisms with EfficientNet to obtain features containing rich semantic information.
This work has three main contributions: (1) To the best of our knowledge, this is the first study to explore the power of EfficientNet on MBCs classification, and elaborate experiments are conducted to compare the performance of EfficientNet with other state-of-the-art CNN models, which might offer inspiration for researchers interested in image-based diagnosis using DL; (2) We propose a novel data augmentation method, RCC, to facilitate the data enrichment of small-resolution datasets; (3) All four of our technological improvements boost the performance of the original EfficientNet. The best accuracy and AUC reach 97.96% and 99.68%, respectively, confirming the applicability of CNN-based methods for BC diagnosis. abstract: In recent years, advances in the development of whole-slide images have laid a foundation for the utilization of digital images in pathology. With the assistance of computer image analysis that automatically identifies tissue or cell types, they have greatly improved histopathologic interpretation and diagnosis accuracy. In this paper, a Convolutional Neural Network (CNN) has been adapted to predict and classify lymph node metastasis in breast cancer. Unlike traditional image cropping methods that are only suitable for large-resolution images, we propose a novel data augmentation method named Random Center Cropping (RCC) to facilitate small-resolution images. RCC enriches the datasets while retaining the image resolution and the center area of images. In addition, we reduce the downsampling scale of the network to further accommodate small-resolution images. Moreover, Attention and Feature Fusion (FF) mechanisms are employed to enrich the semantic information of images. Experiments demonstrate that our methods boost the performance of basic CNN architectures, and the best-performing method achieves an accuracy of 97.96% and an AUC of 99.68% on the RPCam datasets.
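The Random Center Cropping idea — a random crop window that always retains the center area of the image — can be sketched as below. The constraint used here (the window must cover the center pixel) is an assumption about the method's details; the paper's exact formulation may differ.

```python
import numpy as np

def random_center_crop(img, crop, rng):
    """Return a random square crop that is guaranteed to contain the
    image center, so the (often diagnostically relevant) central
    region survives augmentation. Constraint details are assumptions."""
    h, w = img.shape[:2]
    ch, cw = h // 2, w // 2
    # choose a top-left corner so that [top, top+crop) covers row ch
    top = rng.integers(max(0, ch - crop + 1), min(ch, h - crop) + 1)
    left = rng.integers(max(0, cw - crop + 1), min(cw, w - crop) + 1)
    return img[top:top + crop, left:left + crop]

rng = np.random.default_rng(0)
img = np.arange(96 * 96).reshape(96, 96)   # toy 96x96 "patch"
patch = random_center_crop(img, crop=64, rng=rng)
print(patch.shape)  # (64, 64)
```

Unlike resizing, this keeps the native resolution of the cropped region, matching the record's claim that RCC "retains the image resolution and the center area of images".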
url: https://arxiv.org/pdf/2010.05027v1.pdf doi: nan id: cord-103297-4stnx8dw author: Widrich, Michael title: Modern Hopfield Networks and Attention for Immune Repertoire Classification date: 2020-08-17 words: 14093.0 sentences: 926.0 pages: flesch: 57.0 cache: ./cache/cord-103297-4stnx8dw.txt txt: ./txt/cord-103297-4stnx8dw.txt summary: In this work, we present our novel method DeepRC that integrates transformer-like attention, or equivalently modern Hopfield networks, into deep learning architectures for massive MIL such as immune repertoire classification. DeepRC sets out to avoid the above-mentioned constraints of current methods by (a) applying transformer-like attention-pooling instead of max-pooling and learning a classifier on the repertoire rather than on the sequence-representation, (b) pooling learned representations rather than predictions, and (c) using less rigid feature extractors, such as 1D convolutions or LSTMs. In this work, we contribute the following: We demonstrate that continuous generalizations of binary modern Hopfield networks (Krotov & Hopfield, 2016; Demircigil et al., 2017) have an update rule that is known as the attention mechanism in the transformer. We evaluate the predictive performance of DeepRC and other machine learning approaches for the classification of immune repertoires in a large comparative study (Section "Experimental Results"), building on the exponential storage capacity of continuous-state modern Hopfield networks with transformer attention as the update rule. abstract: A central mechanism in machine learning is to identify, store, and recognize patterns. How to learn, access, and retrieve such patterns is crucial in Hopfield networks and the more recent transformer architectures. We show that the attention mechanism of transformer architectures is actually the update rule of modern Hopfield networks that can store exponentially many patterns.
We exploit this high storage capacity of modern Hopfield networks to solve a challenging multiple instance learning (MIL) problem in computational biology: immune repertoire classification. Accurate and interpretable machine learning methods solving this problem could pave the way towards new vaccines and therapies, which is currently a very relevant research topic intensified by the COVID-19 crisis. Immune repertoire classification based on the vast number of immunosequences of an individual is a MIL problem with an unprecedentedly massive number of instances, two orders of magnitude larger than currently considered problems, and with an extremely low witness rate. In this work, we present our novel method DeepRC that integrates transformer-like attention, or equivalently modern Hopfield networks, into deep learning architectures for massive MIL such as immune repertoire classification. We demonstrate that DeepRC outperforms all other methods with respect to predictive performance on large-scale experiments, including simulated and real-world virus infection data, and enables the extraction of sequence motifs that are connected to a given disease class. 
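The attention-pooling step that replaces max-pooling in this MIL setting can be illustrated with a minimal numpy sketch: each instance embedding in a bag is weighted by a softmax over its similarity to a query vector, and the weighted sum becomes the bag-level (repertoire-level) representation. The query here is random and the function name is an assumption; DeepRC learns these components end to end.

```python
import numpy as np

def attention_pool(instances, query):
    """Softmax-attention pooling over a bag of instance embeddings:
    returns a single bag representation plus the attention weights."""
    scores = instances @ query / np.sqrt(query.size)   # scaled dot-product
    scores -= scores.max()                             # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()    # softmax over bag
    return weights @ instances, weights

rng = np.random.default_rng(1)
bag = rng.normal(size=(1000, 16))   # e.g. 1000 sequence embeddings
query = rng.normal(size=16)
pooled, w = attention_pool(bag, query)
print(pooled.shape, round(w.sum(), 6))  # (16,) 1.0
```

Because every instance contributes with a soft weight rather than only the maximum surviving, this pooling is better suited to the extremely low witness rate described above, and the weights themselves support the motif extraction the abstract mentions.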
Source code and datasets: https://github.com/ml-jku/DeepRC url: https://doi.org/10.1101/2020.04.12.038158 doi: 10.1101/2020.04.12.038158 id: cord-034614-r429idtl author: Yasar, Huseyin title: A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks date: 2020-11-04 words: 7750.0 sentences: 385.0 pages: flesch: 60.0 cache: ./cache/cord-034614-r429idtl.txt txt: ./txt/cord-034614-r429idtl.txt summary: title: A new deep learning pipeline to detect Covid-19 on chest X-ray images using local binary pattern, dual tree complex wavelet transform and convolutional neural networks In this study, which aims at early diagnosis of Covid-19 disease using X-ray images, the deep-learning approach, a state-of-the-art artificial intelligence method, was used, and automatic classification of images was performed using convolutional neural networks (CNN). Within the scope of the study, the results were obtained using chest X-ray images directly in the training-test procedures and the sub-band images obtained by applying dual tree complex wavelet transform (DT-CWT) to the above-mentioned images. In the study, experiments were carried out for the use of images directly, using local binary pattern (LBP) as a pre-process and dual tree complex wavelet transform (DT-CWT) as a secondary operation, and the results of the automatic classification were calculated separately. abstract: In this study, which aims at early diagnosis of Covid-19 disease using X-ray images, the deep-learning approach, a state-of-the-art artificial intelligence method, was used, and automatic classification of images was performed using convolutional neural networks (CNN). 
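The local binary pattern (LBP) pre-processing mentioned in the Yasar pipeline can be illustrated with a minimal 8-neighbour variant; the study's exact LBP radius and sampling parameters are not given here, so this basic formulation is an assumption.

```python
import numpy as np

def local_binary_pattern(img):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code,
    one bit per neighbour whose value is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

img = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.uint8)
print(local_binary_pattern(img))  # [[120]]
```

Applied to a chest X-ray, this yields a texture map that can be fed to the CNN in place of (or alongside) the raw image, which is the role LBP plays in the pipeline described above.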
In the first training-test data set used in the study, there were 230 X-ray images, of which 150 were Covid-19 and 80 were non-Covid-19, while in the second training-test data set there were 476 X-ray images, of which 150 were Covid-19 and 326 were non-Covid-19. Thus, classification results have been provided for two data sets, containing predominantly Covid-19 images and predominantly non-Covid-19 images, respectively. In the study, a 23-layer CNN architecture and a 54-layer CNN architecture were developed. Within the scope of the study, the results were obtained using chest X-ray images directly in the training-test procedures and the sub-band images obtained by applying dual tree complex wavelet transform (DT-CWT) to the above-mentioned images. The same experiments were repeated using images obtained by applying local binary pattern (LBP) to the chest X-ray images. Within the scope of the study, four new result-generation pipeline algorithms were additionally put forward, so that the experimental results could be combined and the success of the study improved. In the experiments carried out in this study, the training sessions were carried out using the k-fold cross-validation method. Here the k value was chosen as 23 for the first and second training-test data sets. Considering the average highest results of the experiments performed within the scope of the study, the values of sensitivity, specificity, accuracy, F-1 score, and area under the receiver operating characteristic curve (AUC) for the first training-test data set were 0.9947, 0.9800, 0.9843, 0.9881 and 0.9990, respectively; for the second training-test data set, they were 0.9920, 0.9939, 0.9891, 0.9828 and 0.9991, respectively. Within the scope of the study, finally, all the images were combined and the training and testing processes were repeated for a total of 556 X-ray images comprising 150 Covid-19 images and 406 non-Covid-19 images, by applying 2-fold cross-validation.
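The k-fold cross-validation scheme above (k = 23 over the 230-image first data set, giving folds of 10 images) can be sketched with a plain index split; the helper name and interleaved fold assignment are illustrative assumptions, not the study's exact partitioning.

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k disjoint folds; each fold serves once
    as the test set while the remaining k-1 folds form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# 23-fold split over a toy 230-image set, as in the first data set above
splits = list(k_fold_indices(230, 23))
print(len(splits), len(splits[0][1]))  # 23 10
```

Each image is therefore tested exactly once, and the reported sensitivity/specificity figures are averages over the 23 held-out folds.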
In this context, the average highest values of sensitivity, specificity, accuracy, F-1 score, and AUC for this last training-test data set were found to be 0.9760, 1.0000, 0.9906, 0.9823 and 0.9997, respectively. url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7609830/ doi: 10.1007/s10489-020-02019-1 id: cord-002901-u4ybz8ds author: Yu, Chanki title: Acral melanoma detection using a convolutional neural network for dermoscopy images date: 2018-03-07 words: 3513.0 sentences: 180.0 pages: flesch: 52.0 cache: ./cache/cord-002901-u4ybz8ds.txt txt: ./txt/cord-002901-u4ybz8ds.txt summary: We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions. To perform the 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis comparing it with the dermatologist's and non-expert's evaluations. CONCLUSION: Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful to detect acral melanoma from dermoscopy images of the hands and feet. For group B, with training on group A images, the CNN also showed a higher diagnostic accuracy (80.23%) than that of the non-expert (62.71%) but was similar to that of the expert (81.64%). abstract: BACKGROUND/PURPOSE: Acral melanoma is the most common type of melanoma in Asians, and usually results in a poor prognosis due to late diagnosis. We applied a convolutional neural network to dermoscopy images of acral melanoma and benign nevi on the hands and feet and evaluated its usefulness for the early diagnosis of these conditions.
METHODS: A total of 724 dermoscopy images comprising acral melanoma (350 images from 81 patients) and benign nevi (374 images from 194 patients), all confirmed by histopathological examination, were analyzed in this study. To perform the 2-fold cross validation, we split them into two mutually exclusive subsets: half of the total image dataset was selected for training and the rest for testing, and we calculated the accuracy of diagnosis, comparing it with the dermatologist's and non-expert's evaluations. RESULTS: The accuracy (percentage of true positives and true negatives among all images) of the convolutional neural network was 83.51% and 80.23%, higher than the non-expert's evaluation (67.84%, 62.71%) and close to that of the expert (81.08%, 81.64%). Moreover, the convolutional neural network showed area-under-the-curve values of 0.80 and 0.84 and Youden's index values of 0.6795 and 0.6073, scores similar to those of the expert. CONCLUSION: Although further data analysis is necessary to improve their accuracy, convolutional neural networks would be helpful to detect acral melanoma from dermoscopy images of the hands and feet. url: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5841780/ doi: 10.1371/journal.pone.0193321 id: cord-121200-2qys8j4u author: Zogan, Hamad title: Depression Detection with Multi-Modalities Using a Hybrid Deep Learning Model on Social Media date: 2020-07-03 words: 10036.0 sentences: 521.0 pages: flesch: 51.0 cache: ./cache/cord-121200-2qys8j4u.txt txt: ./txt/cord-121200-2qys8j4u.txt summary: While many previous works have largely studied the problem on a small scale by assuming uni-modality of data, which may not give faithful results, we propose a novel scalable hybrid model that combines Bidirectional Gated Recurrent Units (BiGRUs) and Convolutional Neural Networks to detect depressed users on social media such as Twitter, based on multi-modal features.
To be specific, this work aims to develop a novel deep learning-based solution for improving depression detection by utilizing multi-modal features from the diverse behaviours of depressed users on social media. To this end, we propose a hybrid model comprising a Bidirectional Gated Recurrent Unit (BiGRU) and a Convolutional Neural Network (CNN) to boost the classification of depressed users using multi-modal features and word embedding features. The most closely related recent work to ours is [23], where the authors propose a CNN-based deep learning model to classify Twitter users based on depression using multi-modal features. abstract: Social networks enable people to interact with one another by sharing information, sending messages, making friends, and having discussions, which generates massive amounts of data every day, popularly called user-generated content. This data is present in various forms such as images, text, videos, and links, and reflects user behaviours including their mental states. It is challenging yet promising to automatically detect mental health problems from such data, which is short, sparse and sometimes poorly phrased. However, there are efforts to automatically learn patterns using computational models on such user-generated content. While many previous works have largely studied the problem on a small scale by assuming uni-modality of data, which may not give faithful results, we propose a novel scalable hybrid model that combines Bidirectional Gated Recurrent Units (BiGRUs) and Convolutional Neural Networks to detect depressed users on social media such as Twitter, based on multi-modal features. Specifically, we encode words in user posts using pre-trained word embeddings and BiGRUs to capture latent behavioural patterns, long-term dependencies, and correlation across the modalities, including semantic sequence features from the user timelines (posts). The CNN model then helps learn useful features.
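The BiGRU half of the hybrid model can be sketched in plain numpy: a GRU cell is run over the embedded post sequence forwards and backwards, and the two final states are concatenated into one user representation (the CNN stage and training are omitted). The weights here are random and biases are dropped; all names and sizes are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def gru_step(x, h, Wz, Wr, Wh):
    """One GRU step over the concatenated [x, h] vector (biases omitted)."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    xh = np.concatenate([x, h])
    z = sig(Wz @ xh)                                   # update gate
    r = sig(Wr @ xh)                                   # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h])) # candidate state
    return (1 - z) * h + z * h_tilde

def bigru_encode(seq, params, hidden=8):
    """Read the embedded sequence forwards and backwards, then
    concatenate the two final states into one representation."""
    def run(xs):
        h = np.zeros(hidden)
        for x in xs:
            h = gru_step(x, h, *params)
        return h
    return np.concatenate([run(seq), run(seq[::-1])])

rng = np.random.default_rng(0)
emb, hidden = 16, 8
params = [rng.normal(scale=0.1, size=(hidden, emb + hidden)) for _ in range(3)]
posts = rng.normal(size=(5, emb))   # 5 word embeddings from a user's posts
rep = bigru_encode(posts, params, hidden)
print(rep.shape)  # (16,)
```

In the full model this representation would be combined with the other modality features and passed to the CNN classifier; here it only demonstrates how the bidirectional read captures context from both ends of the timeline.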
Our experiments show that our model outperforms several popular and strong baseline methods, demonstrating the effectiveness of combining deep learning with multi-modal features. We also show that our model helps improve predictive performance when detecting depression in users who are posting messages publicly on social media. url: https://arxiv.org/pdf/2007.02847v1.pdf doi: nan