Projects

This page contains a list of the currently active projects.

Artificial Intelligence for Diagnosis of Skin Cancer (2020)

AI is used to develop an app for capturing images and processing clinical data that can run on modern mobile phones.

AI system to analyse patient records and process high-volume documentation (2020-2023)

Using Natural Language Processing (NLP) to analyse patient record documents and Machine Learning (ML) to develop appropriate algorithms, an AI software system will be built to analyse patient records and process high-volume documentation such as A&E attendance letters. Intelligently automating these processes will increase Firza’s ability to provide efficient and highly accurate administrative services to GP practices and CCGs.
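
As a rough illustration only (the project's actual pipeline is not described here), document processing of this kind can be framed as supervised text classification; the labels and example letters below are invented placeholders, not Firza's real categories.

```python
# Hypothetical sketch: classify incoming letters by type with a bag-of-words model.
# Labels and texts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

letters = [
    "Patient attended A&E following a fall and was discharged the same day.",
    "Discharge summary: admitted for observation, no further action required.",
    "A&E attendance: chest pain, troponin negative, GP follow-up advised.",
    "Discharge summary after an elective procedure; wound check in one week.",
]
labels = ["ae_attendance", "discharge_summary", "ae_attendance", "discharge_summary"]

# TF-IDF features feed a linear classifier; a real system would add clinical
# named-entity recognition and far more training data.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(letters, labels)
print(classifier.predict(["Attended A&E overnight with abdominal pain."]))
```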

IGGI – Doctoral Training Centre on Intelligent Games and Game Intelligence (2014-2022)

Games with a purpose for NLP, user profiling, crowdsourcing.

Combining NLP and image analysis to accelerate and automate the collection of OSINT (2019-2022)

The utilisation of a novel combination of natural language processing (NLP) and image analysis to accelerate and automate the collection of Open Source Intelligence (OSINT), providing end users with advanced insight into individuals without invading their privacy.

Development of methods for automated text analyses using modern Deep Learning techniques (2019-2022)

This project, which is a joint collaboration between British Telecom (BT) and the University of Essex, explores novel methods to conduct multi-document automatic summarization. The main challenge lies in automatically summarising noisy, short, and domain-specific texts using various signals such as temporal patterns and named entity recognition.
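
As a rough illustration of the extractive end of this task (not the project's actual method), sentences drawn from several documents can be scored by TF-IDF weight and the top-ranked ones kept; the example texts below are invented.

```python
# Illustrative sketch: extractive multi-document summarisation by TF-IDF
# sentence scoring over a small invented corpus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def summarise(documents, num_sentences=2):
    # Naive sentence split; real pipelines must handle noisy, domain-specific text.
    sentences = [s.strip() for doc in documents for s in doc.split(".") if s.strip()]
    # Score each sentence by the total TF-IDF weight of its terms.
    weights = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(weights.sum(axis=1)).ravel()
    top = sorted(np.argsort(scores)[-num_sentences:])   # keep original sentence order
    return ". ".join(sentences[i] for i in top) + "."

docs = [
    "A network incident was reported on Monday. Engineers restored service within hours.",
    "Monday's incident affected broadband customers. A full report followed later that week.",
]
print(summarise(docs))
```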

Innovative SaaS based product to create video content and distribution strategies for marketing (2018-2021)

The creation of an innovative SaaS based product that helps content agencies and brands to create video content and distribution strategies for marketing, based on analysis of their target audience’s interests and needs, the existing content on the web and social channels, and their budget.

Disagreement in Language Interpretation (2016-2021)

ERC project at QMUL, with Jon Chamberlain and Richard Bartle as co-investigators based at Essex. Analysing disagreements in data collected using games.

ESRC Centre on Human Rights in the Era of Big Data (2015-2020)

Using text mining methods to identify human rights violations and humanitarian crises while protecting privacy.

Using 3D modelling for marine surveying of chalk reefs (2020)

Investigating human impact on UK chalk reefs using photogrammetry and 3D models.

ImageCLEFcaption (Since 2016)

The first step to automatic image captioning and scene understanding is identifying the presence and location of relevant concepts in a large corpus of medical images. Based on the visual image content, this subtask provides the building blocks for the scene understanding step by identifying the individual components from which captions are composed. The concepts can be further applied for context-based image and information retrieval purposes.
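
A minimal sketch of concept detection framed as multi-label image classification is given below, assuming a CNN backbone with one sigmoid output per concept; the vocabulary size and data are placeholders, not the task's actual baseline.

```python
# Hypothetical sketch: multi-label concept detection with a CNN backbone and
# one independent sigmoid output per medical concept.
import torch
import torch.nn as nn
from torchvision import models

NUM_CONCEPTS = 100                                   # placeholder vocabulary size

model = models.resnet18(weights=None)                # untrained backbone, no download
model.fc = nn.Linear(model.fc.in_features, NUM_CONCEPTS)
criterion = nn.BCEWithLogitsLoss()                   # one binary decision per concept

images = torch.randn(4, 3, 224, 224)                 # dummy batch of medical images
targets = torch.randint(0, 2, (4, NUM_CONCEPTS)).float()

logits = model(images)
loss = criterion(logits, targets)
present = torch.sigmoid(logits) > 0.5                # concepts predicted for each image
print(loss.item(), present.shape)
```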

ImageCLEFcoral (Since 2019)

The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the next generation of visualisation techniques that can be used in augmented and virtual reality headsets. It has also created a need to label such models, identifying objects such as people, buildings, vehicles, and terrain as areas of interest, a task that machine learning techniques can automate. However, the complexity of the images makes it impossible for human annotators to assess their contents on a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in automatically identifying areas of interest and labelling them appropriately for monitoring coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe will not only see the extinction of many marine species, but also create a humanitarian crisis on a global scale for the billions of people who rely on reef services. By monitoring the changes and composition of coral reefs we can help prioritise conservation efforts.
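
As a sketch of what automated benthic labelling could look like, per-pixel substrate classification with a small fully convolutional network is shown below; the class list and network are illustrative assumptions, not the benchmark's baseline.

```python
# Hypothetical sketch: per-pixel benthic substrate labelling with a tiny
# fully convolutional network. Classes and sizes are illustrative only.
import torch
import torch.nn as nn

SUBSTRATE_CLASSES = 4                                # e.g. hard coral, soft coral, algae, sand

segmenter = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, SUBSTRATE_CLASSES, kernel_size=1), # per-pixel class scores
)

image = torch.randn(1, 3, 256, 256)                  # dummy reef photograph
scores = segmenter(image)                            # shape (1, classes, 256, 256)
labels = scores.argmax(dim=1)                        # predicted substrate per pixel
print(labels.shape)
```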

Predicting Media Memorability (Since 2020)

Media platforms such as social networks, media advertisement, information retrieval and recommendation systems deal with exponentially growing data day after day. Enhancing the relevance of multimedia in our everyday lives requires new ways to organise and, in particular, to retrieve digital content. Like other metrics of video importance, such as aesthetics or interestingness, memorability can help choose between competing videos. This is even truer for the specific use case of creating commercials. Because the impact of multimedia content, whether images or videos, on human memory is unequal, the capability to predict the memorability of a given piece of content is of high importance for professionals in advertising. Beyond advertising, applications such as filmmaking, education, and content retrieval may also benefit from the task.
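
One common way to frame the task, sketched below on invented data, is regression from pre-extracted video descriptors to a memorability score, evaluated by rank correlation between predictions and annotations.

```python
# Illustrative sketch: memorability prediction as regression over
# pre-extracted video descriptors. Features and scores are synthetic.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))               # one 128-d descriptor per video
scores = rng.uniform(0.4, 1.0, size=500)             # annotated memorability scores

X_tr, X_te, y_tr, y_te = train_test_split(features, scores, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

# Predictions are typically compared to ground truth by Spearman rank correlation.
rho, _ = spearmanr(model.predict(X_te), y_te)
print(f"Spearman correlation: {rho:.3f}")
```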

Early prediction of Neurodegenerative Diseases using Deep Learning (2019-2023)

Alzheimer’s Disease (AD) and Parkinson’s Disease (PD) are the two most common neurodegenerative diseases; they are caused by structural changes in the brain and lead to deterioration of cognitive functions. Patients usually experience diagnostic symptoms at later stages, after irreversible neural damage has occurred. Early detection of such diseases is crucial for maximising patients’ quality of life and for starting treatments that decelerate the progress of the disease. Early detection may be possible via computer-assisted systems using neuroimaging data. Among these, deep learning on magnetic resonance imaging (MRI) has become a prominent tool due to its capability to extract high-level features through local connectivity, weight sharing, and spatial invariance. This project investigates the detection of AD by building various 2D and 3D convolutional models.
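
A minimal sketch of the kind of 3D convolutional classifier involved is shown below, assuming a single-channel MRI volume and a binary AD-versus-control output; the sizes are illustrative, not the project's actual architecture.

```python
# Hypothetical sketch: a small 3D CNN over an MRI volume for AD-vs-control
# classification. Input size and layer widths are illustrative only.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

volume = torch.randn(1, 1, 64, 64, 64)   # dummy single-channel MRI volume
logits = Small3DCNN()(volume)
print(logits.shape)                       # (1, 2): AD vs. control scores
```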