This page lists upcoming talks in our weekly seminar series on Computer Vision (CV) and Natural Language Processing (NLP).
If you would like to be added to the Computer Vision (CV) mailing list, please contact <alba.garcia(at)essex.ac.uk> or <e.yagis(at)essex.ac.uk>.
If you would like to be added to the Natural Language and Information Processing (NLIP) mailing list, please contact Jon Chamberlain <jchamb(at)essex.ac.uk>.
For past seminars see here.
24 March 2021 – CV – Zoom meeting
Title: Convolutional Autoencoder based Deep Learning Approach for Alzheimer’s Disease Diagnosis using Brain MRI by Ekin Yagis
Abstract: Rapid and accurate diagnosis of Alzheimer’s disease (AD) is critical for patient treatment, especially in the early stages of the disease. While computer-assisted diagnosis based on neuroimaging holds vast potential for helping clinicians detect disease sooner, there are still some technical hurdles to overcome. This study presents an end-to-end disease detection approach using convolutional autoencoders, integrating supervised prediction and unsupervised representation learning. The 2D neural network builds upon a pre-trained 2D convolutional autoencoder to capture latent representations in structural brain MRI scans. Experiments on the OASIS brain MRI dataset revealed that the model outperforms a number of traditional classifiers in terms of accuracy using a single slice.
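The pipeline described in the abstract (an encoder compressing an MRI slice into a latent representation, a decoder reconstructing the slice, and a classifier operating on the latent features) can be sketched in miniature. The code below is an untrained, illustrative NumPy toy, not the talk’s actual model: the kernel values, slice size, and the logistic classifier head are all stand-in assumptions.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation of a single-channel image with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling (halves each spatial dimension)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling (inverse of the pooling step)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
slice_2d = rng.random((32, 32))              # stand-in for one brain MRI slice
k_enc = rng.standard_normal((3, 3)) * 0.1    # encoder kernel (would be learned)
k_dec = rng.standard_normal((3, 3)) * 0.1    # decoder kernel (would be learned)

# Encoder: conv -> ReLU -> pool, yielding a compressed latent feature map.
latent = maxpool2(np.maximum(conv2d(slice_2d, k_enc), 0))   # shape (15, 15)

# Decoder: upsample -> pad -> conv, reconstructing the input resolution.
recon = conv2d(np.pad(upsample2(latent), 2), k_dec)         # shape (32, 32)

# Supervised head: a logistic classifier on the flattened latent features,
# giving a (meaningless, untrained) probability of an AD label.
w_clf = rng.standard_normal(latent.size) * 0.01
p_ad = 1.0 / (1.0 + np.exp(-latent.ravel() @ w_clf))
```

In the actual approach, the autoencoder would first be pre-trained on reconstruction (minimising the error between `recon` and the input slice), after which the latent representation feeds the supervised classifier; here both stages are shown untrained purely to make the data flow concrete.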
Bio: Ekin Yagis is a Ph.D. student in Computer Science and Electrical Engineering at the University of Essex. She majored in Electrical Engineering at Koc University and holds an M.Sc. degree from Sabanci University, Istanbul. She works as a research officer in CSEE under the supervision of Dr. Alba Garcia and Dr. Vahid Abolghasemi. Her research interests include medical image processing, machine learning, and computer vision. Her recent focus is the detection of neurodegenerative diseases such as Parkinson’s and Alzheimer’s using machine learning.
28 April 2021 – NLP – Zoom meeting
Title: Discussion of the paper “A Primer in BERTology: What We Know About How BERT Works”, led by Tasos Papastylianou.
Abstract: We will be discussing the Rogers et al. (2020) paper (download). From the paper: “Transformer-based models have pushed state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue and approaches to compression. We then outline directions for future research.”