School of Electronic Engineering and Computer Science

CSC PhD Studentships in Electronic Engineering and Computer Science

About the Studentships 

The School of Electronic Engineering and Computer Science at Queen Mary University of London is inviting applications for several PhD Studentships in specific areas of Electronic Engineering and Computer Science, co-funded by the China Scholarship Council (CSC). CSC offers a monthly stipend to cover living expenses, while QMUL waives tuition fees and hosts the student. These scholarships are available only to Chinese candidates. For details on the available projects, please see below.

About the School of Electronic Engineering and Computer Science at Queen Mary 

The PhD Studentship will be based in the School of Electronic Engineering and Computer Science (EECS) at Queen Mary University of London. As a multidisciplinary School, we are well known for our pioneering research and pride ourselves on our world-class projects. We are 8th in the UK for computer science research (REF 2021) and 7th in the UK for engineering research (REF 2021). The School is a dynamic community of approximately 350 PhD students and 80 research assistants, whose research is organised around a number of research groups in several areas, including Antennas and Electromagnetics, Computing and Data Science, Communication Systems, Computer Vision, Cognitive Science, Digital Music, Games and AI, Multimedia and Vision, Networks, Risk and Information Management, Robotics and Theory. 

For further information about research in the School of Electronic Engineering and Computer Science, please visit: http://eecs.qmul.ac.uk/research/. 

Who can apply 

Queen Mary is on the lookout for the best and brightest students. A typical successful candidate:  

  • Should hold, or expect to obtain, an MSc in Electronic Engineering, Computer Science, or a closely related discipline 
  • A distinction or first-class level degree is highly desirable 

Eligibility criteria and details of the scheme 

https://www.qmul.ac.uk/scholarships/items/china-scholarship-council-scholarships.html 

How to apply 

Queen Mary is interested in developing the next generation of outstanding researchers and has decided to invest in specific research areas. For further information about potential PhD projects and supervisors, please see below. 

Applicants should work with their prospective supervisor and submit their application following the instructions at: http://eecs.qmul.ac.uk/phd/how-to-apply/  

The application should include the following: 

  • CV (max 2 pages)  
  • Cover letter (max 4,500 characters) stating clearly on the first page whether you are eligible for a scholarship.  
  • Research proposal (max 500 words) 
  • 2 References  
  • Certificate of English Language (for students whose first language is not English)  
  • Other Certificates  

Application Deadline 

The deadline for applications is 29 January 2025. 

For general enquiries contact Mrs. Melissa Yeo m.yeo@qmul.ac.uk (administrative enquiries) or Dr Arkaitz Zubiaga a.zubiaga@qmul.ac.uk (academic enquiries) with the subject “EECS-CSC 2025 PhD scholarships enquiry”. 

 

Supervisor: Dr Ahmed M. A. Sayed

In the rapidly evolving landscape of Artificial Intelligence (AI), developing sophisticated Generative AI and Large Language Models (LLMs) has become pivotal for various applications, ranging from natural language processing to creative content generation. However, training these models is computationally intensive, often requiring substantial time and resources and limiting scalability. This project will study and propose system and algorithmic optimizations to accelerate the training process for Generative AI and LLMs, addressing the challenges posed by the complexity of these models. This research focuses on exploring and implementing advanced parallel computing techniques, leveraging the power of distributed systems and specialized hardware accelerators. By optimizing algorithms, employing parallelization strategies, and harnessing the capabilities of GPUs, TPUs, or emerging AI-specific hardware, this project aims to reduce the training/fine-tuning time of Generative AI and LLMs significantly. Furthermore, the study delves into transfer learning and explores techniques to enhance model convergence and accuracy. By leveraging pre-trained models and developing novel knowledge transfer learning methodologies, the research intends to minimize the data and computational resources required for training, democratizing access to cutting-edge AI technologies. This project will also work on designing efficient architectures for Generative AI models and study network pruning or sparsity techniques to create lightweight yet effective models.

Supervisor: Akram Alomainy

Wireless capsule endoscopy (WCE) is an innovative medical technique used for diagnosing and treating gastrointestinal tract conditions. Conventional endoscopic procedures face challenges in adequately scanning the small intestine due to its anatomy and location. Currently, capsule endoscopy is primarily utilized as a diagnostic tool but has certain limitations. As a result, the development of future capsule prototypes seems unavoidable. Advancements in WCE technology aim to overcome these limitations. These advancements include three-dimensional reconstruction of high-resolution images, high-frame-rate imaging, complete spherical imaging, and capsule chromoendoscopy. By employing these techniques, unnecessary invasive examinations can be minimized as the endoscopic and microscopic features of small intestinal lesions can be clearly visualized. This offers improved diagnostic capabilities and potentially reduces the need for more invasive procedures. The primary objective of this project is to develop miniaturized antennas with high-data-rate capabilities for wireless capsule endoscopy, utilizing innovative techniques. Additionally, the project aims to perform a clinical evaluation of current wireless capsule endoscopy systems to create appropriate testing environments.

Primary supervisor: Dr Anna Xambo Sedo

This PhD research explores the potential of generative techniques in sound-based music, where sound itself—rather than traditional musical notes—serves as the core building block of composition. By utilising generative learning procedures, the study will develop systems capable of creating novel soundscapes and site-specific sound art experiences. It is particularly relevant for students with expertise in computing and music, as it combines advanced algorithmic design with artistic sound manipulation. Through the integration of neural networks and sound synthesis methods, this research will examine how machines can generate, transform, and structure sounds into cohesive musical works from a human-centred perspective. This approach contributes to various fields, including acoustic ecology, sound design, interactive music systems, and human-computer interaction.

Primary supervisor: Anthony Constantinou

Deep learning has achieved impressive success across various fields by effectively modelling complex patterns in large datasets. However, its lack of transparency and explainability poses a major challenge, as these models focus on learning associations rather than uncovering the cause-and-effect relationships needed to understand real-world phenomena.

In contrast, causal machine learning methods aim to identify and model causal structures, and these offer full transparency and explainability. However, they come with their own limitations. They require strong assumptions, struggle with scalability, and their applicability is often constrained to specific theoretical and practical conditions, limiting their broader effectiveness.

This project aims to develop new technology that combines deep learning with causal discovery and inference, working towards addressing the limitations of both approaches.

 

Primary supervisor: Arkaitz Zubiaga

The emergence and popularity of generative AI has led to the increasing generation of texts through the use of large language models (LLMs) such as ChatGPT or Bard. These automated texts can be generated for benevolent purposes, but also for malicious purposes, such as claiming credit for having written a text or diffusing misinformation at scale. This calls for methods that can automatically determine whether a text was generated by an LLM or by a human; the distinction is further complicated when a text generated by an LLM is subsequently edited by adversaries, for example paraphrased by a human or altered to remove watermarks. This project will look into developing automated natural language processing methods to enable the detection of the source of a text, whether it is human-generated, LLM-generated or a combination of both.

Primary Supervisor: Professor Arumugam Nallanathan

Autonomous driving holds great potential to reduce traffic accidents caused by human errors, such as speeding, intoxicated driving, and distractions, which account for a significant portion of the 1.19 million global road fatalities annually. Current autonomous vehicles rely on a combination of sensors like GPS, Lidar, radar, cameras, and ultrasonic sensors to navigate the environment. However, achieving optimal safety and efficiency requires not just individual vehicle sensing but also coordination between vehicles and infrastructure, facilitated by vehicle-to-everything (V2X) communication. V2X technology enables vehicles to cooperate with each other and roadside infrastructure, enhancing safety through cooperative sensing and coordinated driving.

As 6G wireless networks emerge, advances in millimetre-wave and massive multiple-input multiple-output (MIMO) technologies will provide the low-latency, high-bandwidth communications necessary for these systems. However, this introduces new challenges, particularly managing interference between the overlapping communication and radar sensing spectrum. Integrated sensing and communication (ISAC) is a promising 6G solution that combines radar sensing and communication in the same hardware using integrated signal processing techniques. ISAC's significance is underscored by its inclusion as an active work item in major telecommunication standardization bodies and initiatives, such as 3GPP, IEEE, ITU, the Next G Alliance, and IMT-2030, and within the 6G framework adopted by ITU-R in June 2023.

While ISAC has the potential to improve spectral efficiency and sensing accuracy, its main limitation is reduced range resolution due to the high bandwidth requirements for communication. Advances in signal processing, massive MIMO, and machine learning (ML) offer a way forward, enabling ISAC to achieve centimetre-level range accuracy.
 
The primary challenge for ISAC in autonomous driving is developing an integrated framework that effectively manages interference. This challenge can be approached from two perspectives:
1) Time Domain: radar sensing leverages time-of-flight (ToF) to determine horizontal distances. To improve ToF accuracy, novel modulation and coding techniques are essential.
2) Spatial Domain: massive MIMO systems at base stations can gather spatial-domain data through angle-of-arrival estimation, which helps estimate angular distances.

Building on these two domains, this project aims to develop Intelligent Autonomous Vehicles via Integrated Sensing and Communications techniques by advancing traditional signal processing and ML approaches to address these critical challenges. 
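For reference, the standard relations behind these two domains (textbook formulas, not results specific to this project) are:

\[
\hat{d} = \frac{c\,\tau}{2}, \qquad \Delta d = \frac{c}{2B},
\]

where \(\tau\) is the measured round-trip time-of-flight, \(c\) the speed of light, \(B\) the sensing bandwidth and \(\Delta d\) the achievable range resolution; and, for a uniform linear array with element spacing \(d_a\), a plane wave arriving from angle \(\theta\) (measured from broadside) produces an inter-element phase difference

\[
\phi = \frac{2\pi d_a}{\lambda}\sin\theta,
\]

from which the angle of arrival can be estimated as \(\hat{\theta} = \arcsin\!\big(\lambda\phi/(2\pi d_a)\big).\)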

Primary supervisor: Dr Athen Ma

Global Environmental Change (GEC) will have an adverse effect on biodiversity. Vulnerable species are likely to go extinct, triggering compensatory mechanisms among the surviving species. A common way to study species responses in ecosystems is to examine the interactions between species. The Lotka-Volterra model has been used to study the dynamics between consumer and resource species groups. However, species within a trophic group do not respond to an environmental stressor in the same way, and changes in species dynamics are the resultant effects of different types of interactions and processes within an ecosystem. Thus, modelling the dynamics of species as a collective trophic group response is likely to provide only a partial assessment.
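For reference, the generalised Lotka-Volterra dynamics referred to above can be written in standard textbook form (not a formulation specific to this project) as:

\[
\frac{dN_i}{dt} = N_i\Big(r_i + \sum_{j=1}^{S} a_{ij} N_j\Big), \qquad i = 1, \dots, S,
\]

where \(N_i\) is the abundance of species \(i\), \(r_i\) its intrinsic growth rate and \(a_{ij}\) the per-capita effect of species \(j\) on species \(i\); random matrix theory typically studies stability through the eigenvalue spectrum of the interaction matrix \(A = (a_{ij})\).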

The project aims to provide a better understanding of species dynamics within an ecosystem by using techniques from random matrix theory and dynamical models. Changes in species abundances, which reflect the overall effect of a stressor, will be examined, and their causality will be used to define clusters of species that exhibit similar responses. These clusters will be used to model the interdependency among species, which will greatly improve our understanding of the overall effects that alter their populations in the face of GEC.

Primary supervisor: Changjae Oh

Foundation vision models trained on internet-scale datasets now show strong performance on various computer vision tasks, such as segmentation, matching, point tracking and depth estimation. This project will investigate how advances in foundation vision models can be used to address problems in robot perception and manipulation, such as the large diversity in robots, physical environments, and robot manipulation tasks. Specifically, the goal is to investigate integrating foundation models to improve 3D perception from video data, learn robot manipulation policies, and make these learned policies generalizable across different robot arms. By the end of the project, the student will have obtained a solid background and skills in computer vision, machine learning, and robot manipulation.

Primary supervisor: Dr Charalampos Saitis

Music information retrieval tasks related to timbre (e.g., instrument identification, playing technique detection) have historically been under-researched, partly due to a lack of available (and annotated) data and a lack of community consensus around instrument and technique taxonomies. In the context of music similarity, which extends to the topic of timbre similarity, metric learning methods are commonly used to learn distances from human judgements. There is extensive work on using metric learning with hand-crafted features, but such representations can be limiting. Conversely, deep metric learning methods attempt to learn distances directly from data, promising a viable alternative. Despite some limited adoption of deep metric learning for specific music similarity tasks, related efforts to learn timbre similarity, or to automatically construct taxonomical structures for timbre, are currently lacking. This project will investigate, propose, and develop machine learning models, including curating a new sizable dataset, that can learn discriminative representations of timbre through supervised, semi-supervised, and self-supervised learning paradigms of similarity and categorisation. Such models will enable a wide range of applications for computational music understanding (e.g., foundation models for music) and generation/creativity (e.g., neural audio synthesis). Candidates should have experience in at least one of the following: music informatics, machine listening, metric learning.
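As an illustration of the kind of objective used in deep metric learning (a standard formulation offered only as a sketch, not the method this project is committed to), a triplet loss over a learned timbre embedding \(f(\cdot)\) trained from human similarity judgements can be written as:

\[
\mathcal{L}(a, p, n) = \max\!\big(0,\; \lVert f(a) - f(p) \rVert_2^2 - \lVert f(a) - f(n) \rVert_2^2 + m \big),
\]

where \(a\) is an anchor sound, \(p\) a sound judged similar to it, \(n\) a dissimilar sound and \(m > 0\) a margin.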

Primary Supervisor: Dr Dimitrios Kollias 

Secondary Supervisor: Prof. Ioannis Patras

This PhD research has two primary goals: i) to develop innovative and robust multimodal algorithms that leverage visual, audio, and textual data for analyzing and understanding human behavior in unconstrained, real-world environments (i.e., in-the-wild); and ii) to create novel, efficient and effective algorithms for synthesizing human behavior in-the-wild.

Modeling human behavior presents significant challenges due to its inherent complexity, influenced by factors like emotions, culture, and context. Additionally, existing datasets may be noisy or biased, leading to potential inaccuracies. Advanced deep learning models, while powerful, often function as black boxes, making their decision-making processes opaque. Moreover, models trained on specific datasets may struggle to generalize across diverse populations or environments.

This research aims to address these challenges by developing and integrating cutting-edge deep learning techniques, including LLMs, VLLMs, Transformers, CNNs, RNNs, GNNs, GANs, Diffusion models, and NeRFs. The study will further focus on two key applications: the development of digital humans and advancements in mental health technology.

Additionally, this research will involve close collaboration with industry partners (from UK and abroad) to ensure practical applicability and to facilitate the real-world deployment of the developed algorithms.

Primary supervisor: Emmanouil Benetos

The field of music information retrieval (MIR) has been growing for more than 20 years, with recent advances in deep learning having revolutionised the way machines can make sense of music data. At the same time, research in the field is still constrained by laborious tasks involving data preparation, feature extraction, model selection, architecture optimisation, hyperparameter optimisation, and transfer learning, to name but a few. Some of the model and experimental design choices made by MIR researchers also reflect their own biases.

Inspired by recent developments in machine learning and automation, this PhD project will investigate and develop automated machine learning methods which can be applied at any stage in the MIR pipeline so as to build music understanding models ready for deployment across a wide range of tasks. The project will also compare the automated decisions made at every step of the MIR pipeline with the manual model design choices made by researchers. The successful candidate will investigate, propose and develop novel deep learning methods for automating music understanding, resulting in models that can accelerate MIR research and contribute to the democratisation of AI.

Primary supervisor: Dr Evangelia Kyrimi

In an era where AI plays a pivotal role in healthcare, ensuring transparency and accountability is more pressing than ever. As data availability continues to expand, the complexity of AI systems often obscures their decision-making processes. This challenge is especially critical in healthcare, where issues of accountability and safety are paramount.

We invite applications for a PhD position focused on developing causal counterfactual explanations using causal Bayesian networks (CBNs) in AI-driven healthcare systems. This research aims to enhance decision-making processes in clinical settings by providing interpretable and actionable insights derived from causal inference methods. The candidate will engage in the following areas:

· Develop Algorithms: Innovate counterfactual explanation algorithms using CBNs to enable clinicians to explore critical "what-if" scenarios.

· Design User-Centric Outputs: Create tailored counterfactual explanations that resonate with clinical decision-makers.

· Establish Evaluation Metrics: Build robust frameworks to measure the accuracy, relevance, and interpretability of explanations.

· Address Ethical Implications: Investigate biases in CBN-generated outputs and propose strategies for enhancing fairness and transparency.

This research will not only advance the field of AI in healthcare but also empower clinicians with insights that can lead to improved patient outcomes. Join us in making a meaningful impact in this vital area!
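Purely as a toy illustration of the kind of "what-if" query a causal Bayesian network supports, the sketch below contrasts an observational conditional with an interventional do-query on a hypothetical three-node network. All variable names and probabilities are invented for illustration and are not the project's model; full counterfactual explanations would additionally require abduction over latent factors.

```python
# Toy CBN: Severity -> Treatment -> Recovery, Severity -> Recovery (binary variables).
# All conditional probability tables below are hypothetical.
from itertools import product

P_SEVERITY = {0: 0.7, 1: 0.3}                      # P(Severity = s)
P_TREAT_GIVEN_SEV = {0: 0.4, 1: 0.9}               # P(Treatment = 1 | Severity = s)
P_REC_GIVEN = {(0, 0): 0.7, (0, 1): 0.9,           # P(Recovery = 1 | Severity = s, Treatment = t)
               (1, 0): 0.2, (1, 1): 0.6}

def joint(s, t, r, do_treatment=None):
    """Probability of one full assignment; do(Treatment = x) removes the
    Severity -> Treatment edge and fixes Treatment deterministically."""
    p = P_SEVERITY[s]
    if do_treatment is None:
        pt1 = P_TREAT_GIVEN_SEV[s]
        p *= pt1 if t == 1 else 1.0 - pt1
    else:
        p *= 1.0 if t == do_treatment else 0.0
    pr1 = P_REC_GIVEN[(s, t)]
    return p * (pr1 if r == 1 else 1.0 - pr1)

def prob_recovery(do_treatment=None, given_treatment=None):
    """P(Recovery = 1 | ...) by brute-force enumeration over the tiny state space."""
    num = den = 0.0
    for s, t, r in product((0, 1), repeat=3):
        if given_treatment is not None and t != given_treatment:
            continue
        p = joint(s, t, r, do_treatment)
        den += p
        if r == 1:
            num += p
    return num / den

if __name__ == "__main__":
    print("Observational  P(Recovery=1 | Treatment=1)    :", round(prob_recovery(given_treatment=1), 3))
    print("Interventional P(Recovery=1 | do(Treatment=1)):", round(prob_recovery(do_treatment=1), 3))
    print("Interventional P(Recovery=1 | do(Treatment=0)):", round(prob_recovery(do_treatment=0), 3))
```

The observational and interventional quantities differ because Severity confounds Treatment and Recovery; comparing do-queries under alternative actions is the basic building block of the "what-if" explanations described above.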

Primary supervisor: Fatma Benkhelifa

Integrated sensing and communication (ISAC) is an emerging technology poised to transform sixth-generation (6G) networks by combining sensing and communication processes. This enables simultaneous information transmission and target sensing through echo signals, unlocking significant benefits such as enhanced spectral, energy and hardware efficiency. However, energy scarcity is a major challenge for ISAC. Simultaneous wireless information and power transfer (SWIPT) addresses this by using radio frequency (RF) signals to both transmit information and harvest energy.

This project aims to integrate sensing, communication and energy harvesting, unifying the use of ISAC and SWIPT technologies in a SWIPT-ISAC network. To achieve this, a hardware-compliant system model will be designed and the resource allocation problem will be formulated and optimized. We will leverage convex optimization theory and machine learning techniques to solve the formulated problem. Computer-based simulations and laboratory experiments will be used to assess the proposed algorithms and provide practical insights for real-world implementation. The project outcomes are expected to lead to research excellence in wireless-powered integrated sensing and communication systems and to have a substantial impact on remote monitoring without intrusive access.

The PhD student should have strong skills in wireless communication, signal processing and optimization algorithms; good knowledge of machine learning techniques is preferred.


Primary Supervisor: George Fazekas

The field of music representation learning aims to transform complex musical data into latent representations that are useful for tasks such as music classification, mood detection, music recommendation or generation. Despite recent advances in deep learning, many models rely purely on data-driven approaches and overlook domain-specific musical structures such as rhythm, melody and harmony.

This PhD project will investigate the integration of domain knowledge into music representation learning to enhance model interpretability and performance. By embedding music-theoretical knowledge, structural hierarchies or genre-specific knowledge, the research should improve learning efficiency and provide richer representations that are more explainable and interpretable. The research has the option to explore various techniques, including incorporating symbolic representations, developing new methodologies for better utilisation of inductive biases, or leveraging musical ontologies to bridge the gap between data-driven models and the structured knowledge inherent in music theory.

There is flexibility in the approach taken, but the candidate should identify and outline a specific method within music analysis, production or generation. Special attention should be devoted to Ethical AI: it is expected that the proposed approach will not only improve music representation but also allow for the reduction of data biases or improve the attribution of authorship to respect copyright.

Primary Supervisor: Joshua Reiss

Physical and signal-based models of sound generating phenomena are widely used in noise and vibration modelling, sound effects, and digital musical instruments. This project will explore machine learning from sample libraries for improving the models and their design process.

Not only can optimisation approaches be used to select parameter values such that the output of the model matches the samples; the accuracy of such an approach will also give us insight into the limitations of a model. It also provides the opportunity to explore the overall performance of different modelling approaches, and to find out whether a model can be generalised to cover a large number of sounds with a relatively small number of exposed parameters.

Existing models will be used, with parameter optimisation based on gradient descent. Performance will be compared against recent neural synthesis approaches that often provide high-quality synthesis but lack intuitive controls or a physical basis. The project will also seek to measure the extent to which entire sample libraries could be replaced by a small number of models with parameters set to match the samples in the library.
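As a minimal sketch of this idea, assuming a toy damped-sinusoid sound model with hypothetical parameters (not one of the project's actual models), gradient descent can fit the exposed parameters so that the model output matches a target sample:

```python
# Fit the amplitude and decay of a toy damped-sinusoid model to a target
# waveform by gradient descent on an L2 loss (all values are illustrative).
import torch

SR = 16000
t = torch.arange(0, 0.5, 1.0 / SR)                      # 0.5 s of samples

def damped_sine(amp, decay, freq_hz):
    """Toy 'physical model': exponentially decaying sinusoid."""
    return amp * torch.exp(-decay * t) * torch.sin(2 * torch.pi * freq_hz * t)

# Stand-in for a library sample: generated here with known parameters.
target = damped_sine(torch.tensor(0.8), torch.tensor(6.0), torch.tensor(440.0))

# Exposed parameters to optimise (frequency kept fixed to keep the loss well-behaved).
amp = torch.tensor(0.1, requires_grad=True)
decay = torch.tensor(1.0, requires_grad=True)
opt = torch.optim.Adam([amp, decay], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((damped_sine(amp, decay, torch.tensor(440.0)) - target) ** 2)
    loss.backward()
    opt.step()

print(f"fitted amp={amp.item():.3f}, decay={decay.item():.3f}, final loss={loss.item():.2e}")
```

The residual loss after fitting is one indication of how well the model family can represent the sample, which is the kind of insight into model limitations mentioned above.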

The project can be tailored to the skills of the researcher, and has the potential for high impact.

Primary supervisor: Kamyar Mehran

The marine and aviation sectors share parallel, yet distinct, future powertrain requirements compared to the automotive industry. This presents a unique opportunity to accelerate innovation in zero-emission common drivetrains without the burden of the features and design complexities necessary for automotive end-products.

Current leading-edge automotive drivetrains are exploring higher voltages (800V+) using polyphase or switched reluctance motors (SRMs) to address the high-power demands also found in marine and aviation applications. However, these sectors currently rely on complex inverter drives developed for automotive safety standards, including redundant features like Safe Torque Off and current limiting. This project proposes the design of a prototype high-voltage inverter demonstrator specifically tailored for marine and aviation drivetrain applications.

Without a significant shift towards modern, high-efficiency, high-power permanent magnet electric drives (PMEDs), the larger motors needed for future maritime vessels and light aircraft will be constrained by increased weight, reduced efficiency, and potentially even cable size limitations. Current technology limits PMEDs to below 1000kW. A corresponding inverter designed for a megawatt-capacity drivetrain, paired with a more efficient motor than traditional designs, is essential.

Primary supervisor: Lin Wang

The project aims to develop novel audio-visual signal processing and machine learning algorithms that help improve machine intelligence and autonomy in an unknown environment, and to understand human behaviours interacting with robots. The project will investigate the application of AI algorithms for audio-visual scene analysis in real-life environments. One example is to employ multimodal sensors e.g. microphones and cameras, for analysing various sources and events present in the acoustic environment. Tasks to be considered include audio-visual source separation, localization/tracking, audio-visual event detection/recognition, audio-visual scene understanding.

Primary supervisor: Professor Mark Sandler

Since ~2016 most research in Digital Music and Digital Audio has adopted Deep Learning techniques. These have brought performance improvements in applications like Music Source Separation, Automatic Music Transcription and so on. This is good, but on the downside, the models get larger, they consume increasingly large amounts of power for training and inference, require more data and become less understandable and explainable. These issues underpin the research in this PhD.

A fundamental building block in DL is Matrix (or Linear) Algebra. Through training, each layer's weight matrix is progressively modified to reduce the training error. By examining these matrices during training, DL models can be compactly engineered to learn faster and more efficiently.

Research will start by exploring the learning dynamics of established Music Source Separation models. Using this knowledge, we can intelligently prune the models, using Low Rank approximations of weight matrices. We will explore what happens when Low Rank is imposed as a training constraint. Is the model better trained? Is it easier and cheaper to train? Next, the work shifts either to other Neural Audio applications, or to applying Mechanistic Interpretability, which reveals the hidden, innermost structures that emerge in trained Neural Networks.
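As a minimal sketch of the low-rank idea (with a hypothetical layer shape and target rank, and no claim that this is the pruning scheme the project will adopt), a single weight matrix can be replaced by a truncated-SVD approximation:

```python
# Low-rank approximation of one layer's weight matrix via truncated SVD.
import torch

W = torch.randn(512, 1024)          # stand-in for a trained layer's weight matrix
r = 64                              # hypothetical target rank

U, S, Vh = torch.linalg.svd(W, full_matrices=False)
W_lowrank = U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]

# Relative reconstruction error, and the parameter saving obtained by storing
# the two factors (U[:, :r] * S[:r]) and Vh[:r, :] instead of W itself.
rel_err = torch.linalg.norm(W - W_lowrank) / torch.linalg.norm(W)
params_full = W.numel()
params_factored = U[:, :r].numel() + Vh[:r, :].numel()
print(f"rank {r}: relative error {rel_err.item():.3f}, "
      f"{params_factored} vs {params_full} parameters ({params_factored / params_full:.1%})")
```

Trained weight matrices often have faster-decaying singular values than the random matrix used here, which is what can make such truncation, or a low-rank constraint imposed during training, attractive in practice.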

Primary supervisor: Massimo Poesio

How people interpret natural language varies dramatically from person to person. Language can be ambiguous and subjective (what's funny or offensive for one person may not be for another). This variation is a fundamental challenge for Artificial Intelligence (AI), both from a technological perspective (how should AI models learn to interpret language expressions when humans themselves disagree with each other? And how should we evaluate their success?) and from an application point of view (e.g., what should a social media company do with a post that is offensive according to some, but not according to others?). Even Large Language Models (LLMs) such as GPT-4, while impressive in many ways, do not handle variation well. They are typically overcommitted to a particular interpretation, subject to bias, and unaware of alternatives. The proposed project will tackle a question fundamental to all types of variation: how to use datasets with inherent uncertainty and/or disagreements to evaluate NLP models.

Primary supervisor: Dr. Miles Hansard

Secondary supervisor: Dr. Changjae Oh 

View synthesis algorithms, based on collections of input images, have advanced greatly since the introduction of Neural Radiance Fields (NeRFs), Gaussian splatting methods, and 3D point-map representations. These approaches can be augmented by structure from motion, lidar, or other sensor information. It has also been shown that depth maps and visible surface models can be recovered, in addition to the predicted novel views. Large-scale outdoor scene representations, as obtained by these methods, have potential applications in engineering, architecture, archaeology, and earth sciences. These applications typically involve metric quantities such as distance and surface area, in addition to the structural and material properties of the scene. It follows that there may be physical constraints on the representation, in addition to those imposed by the original images. This research project proposes to incorporate application-driven geometric and photometric constraints into recent view synthesis methods. The project will be supervised by Dr. Miles Hansard and Dr. Changjae Oh, in the QMUL School of Electronic Engineering and Computer Science.

Primary supervisor: Mona Jaber

Active travel is a promising way to achieve low-carbon traveling to make modern life more environmentally friendly. However, complex traffic conditions and imperfect infrastructure in urban areas make it problematic to provide a safe and comfortable environment for emerging active transport modes such as shared bikes and e-scooters. Therefore, improving the safety and convenience of active modes of urban transportation has become critical to encouraging people to adopt such travel methods. Fortunately, the emergence of artificial intelligence (AI) technology can help us analyse complex and highly dynamic traffic scenarios. With the help of information collected by connected Internet-of-Things sensors, AI agents can explore technical insights to more accurately solve problems such as identifying effective interventions to promote safe active travel in the presence of driven and driver-less vehicles. In this project, you will develop cutting-edge AI technologies and apply them for the safety of active transport, discovering the health impacts of active travel, and the quality assessment of facilities supporting active transport. The output of this research project will have the opportunity to be published in internationally renowned journals, including but not limited to IEEE Transactions on Intelligent Transportation Systems and IEEE Internet of Things Journal.

Primary supervisor: Nikos Tzevelekos

Model learning seeks to learn a model of a system using information from various sources such as samples, specifications and testing. In the formal setting of finite-state automata, algorithms like L* can generate precise models by using testing routines presented as oracles. Such algorithms have found application, e.g., in learning models for hardware components, but struggle to scale to more elaborate systems such as software components. This project aims to: develop novel automata learning techniques going beyond finite-state automata; study the use of large language models alongside formal learning techniques for automata learning; and apply the devised algorithms to learning models of software components, in particular higher-order programs.

The project will be at the intersection of automata learning, large language models and software verification. While all strong applications will be considered, experience in any of these themes would be a plus.

Primary supervisor: Pasquale Malacaria

This PhD project aims to improve decision-making processes by interfacing large language models (LLMs) with mathematical optimization techniques. The research will focus on developing novel approaches to enhance both individual and organizational cybersecurity decision-making capabilities.

Research Objectives

- Develop a framework for integrating LLMs with mathematical optimization in cybersecurity contexts

- Create models using machine learning to infer parameter values for mathematical optimization in security scenarios

- Design methods for explaining model outcomes to improve interpretability and trust in AI-assisted security decisions

- Evaluate the effectiveness of the developed approaches in real-world cybersecurity applications

Candidate Requirements:

- A Master's degree (or equivalent) in Computer Science, Mathematics, or a related field

- Strong background in mathematics, particularly in mathematical optimization

- Solid understanding of cybersecurity principles, especially network and web security

- Proficiency in machine learning techniques and their applications

- Excellent programming skills (e.g., Python, R)

- Background in Logic and strong analytical and problem-solving abilities

- Excellent written and verbal communication skills in English

Primary supervisor: Paulo Rauber

Reinforcement learning has attracted significant interest because many problems in healthcare, robotics, manufacturing, logistics, finance, and advertising can be naturally formulated as problems of maximizing a measure of success (in this context, called cumulative reward) through a sequence of decisions informed by data. The combination of reinforcement learning with artificial neural networks has led to the best computer agents that (learn to) play games such as Chess, Go, Dota 2, and StarCraft II. Even in these outstanding success cases, the corresponding reinforcement learning algorithms are remarkably sample inefficient: they require an enormous amount of trial-and-error to produce good results. In contrast with games and simulations, real-world applications are heavily constrained by the cost of trial-and-error and the available data. Consequently, sample inefficiency limits the applicability of reinforcement learning. This inefficiency is fundamentally linked with the trade-off between exploring in order to learn about potentially better sources of rewards and exploiting well-known sources of rewards. This project aims to develop scalable methods for reinforcement learning that address this issue.
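As a toy illustration of the exploration/exploitation trade-off (a hand-rolled multi-armed bandit with invented reward probabilities, not a method proposed by this project), an epsilon-greedy agent spends a small fraction of its decisions exploring and otherwise exploits its current estimates:

```python
# Epsilon-greedy agent on a 3-armed Bernoulli bandit (illustrative values only).
import random

TRUE_REWARD_PROBS = [0.2, 0.5, 0.8]     # unknown to the agent
EPSILON = 0.1                           # fraction of steps spent exploring
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]                # running mean reward per arm
total_reward = 0.0

for step in range(10_000):
    if random.random() < EPSILON:                        # explore: try a random arm
        arm = random.randrange(len(values))
    else:                                                # exploit: best current estimate
        arm = max(range(len(values)), key=lambda a: values[a])
    reward = 1.0 if random.random() < TRUE_REWARD_PROBS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean update
    total_reward += reward

print("estimated arm values:", [round(v, 2) for v in values])
print("average reward:", round(total_reward / 10_000, 3))
```

Every exploratory pull has an opportunity cost, which is exactly the sample-efficiency tension the project targets at a much larger scale.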

Primary supervisor: Qianni Zhang

3D vessel registration is a necessary step to understand the implications of blood flow on coronary atherosclerosis. Furthermore, 3D vessel models allow for full visualisation of vessel geometry and stenosis (narrowing) and can assist in stent placement. To assess coronary atherosclerosis, it is often necessary to employ multiple intravascular imaging techniques, such as intravascular ultrasound (IVUS) and Quantitative Coronary Angiography (QCA). IVUS frames provide an accurate contour of the vessel wall, while QCA contains information regarding the 3D orientation of the vessel in space. The integration of these diverse imaging modalities into a 3D vessel model allows fluid simulation to be implemented to assist the assessment. However, IVUS is often unavailable to many patients as it is expensive and invasive, while vessel models built from QCA alone lack detail.

With the growing capability of AI in healthcare, this PhD project will focus on solving the crucial problem of multimodal registration of vessel imaging and prediction of a 3D vessel model with accurate features, followed by potential modelling and assessment of plaque composition. A dataset comprising comprehensive multimodal vessel imaging data has been collected at Barts Hospital. Model training and evaluation will be effectively supported by cardiovascular experts.

Primary supervisor: Dr Riccardo Degl’Innocenti

Second supervisor: Dr SaeJune Park

Terahertz ultrafast spectroscopy is a powerful investigation tool: an all-optical technique currently used to retrieve fundamental properties, e.g. complex dielectric constant, conductivity, mobility and thickness, in a plethora of materials. These include new 2D materials, e.g. graphene and Bi2Se3, thin films in the photovoltaics field, as well as all-solid-state batteries and pharmaceutical tablets. Furthermore, optical pump/THz probe spectroscopy permits the retrieval of the photoinduced absorption and conductivity change with sub-ps time resolution, hence providing a fundamental tool for the design and characterization of more energy-efficient materials and devices. The PhD project will target the experimental investigation and theoretical modelling of new materials for energy storage and modulation using state-of-the-art ultrafast time-domain spectroscopic systems.

Primary supervisor: Dr SaeJune Park

2nd supervisor: Dr Riccardo Degl’Innocenti

Terahertz (THz) waves have been intensively explored over the last couple of decades owing to their unique properties, e.g. label-free detection of target materials using their spectral fingerprints in the THz frequency range. However, examining target materials with THz waves becomes challenging when the volume of the target material is small compared to the THz wavelength, due to the small scattering cross-section between the THz waves and the samples. Developing novel platforms that enhance THz spectral fingerprints is therefore imperative to deliver rapid identification of extremely small amounts of target materials for various purposes. This PhD project will study THz devices and systems, such as metamaterials and waveguides, to enhance the THz fingerprints of target materials.

Primary supervisor: Dr Shady Gadoue

There has been fast-growing advancement in developing digital twin technology to create real-time models which mimic the behaviour of physical systems. The next generation of electric vehicles requires highly accurate, advanced online condition monitoring techniques to predict faults within the electric drive powertrain system. This project will investigate the application of interactive data-driven digital twin hardware-in-the-loop (HIL) integration for real-time diagnosis and health prediction of the electric powertrain, with a focus on machine learning algorithms. The proposed digital twin models will be developed and validated on an experimental electric drive powertrain platform. The project requires deep knowledge and background in the areas of power electronics, control systems, propulsion machines and machine learning algorithms.

Primary supervisor: Professor Simon Dixon

Music information retrieval (MIR) applies computing and engineering technologies to musical data to satisfy users' information needs. This topic involves the application of artificial intelligence technologies to the processing of music, either in audio or symbolic (score, MIDI) form. The application could be e.g. for software to enhance the listening experience, for music education, for musical practice or for the scientific study of music. Examples of topics of particular interest are automatic transcription of multi-instrumental music, providing feedback to music learners, incorporation of musical knowledge into data-driven deep learning approaches, and tracing the transmission of musical styles, ideas or influences across time or locations.

This topic description is intentionally very general; applicants are expected to choose their own specific project within this broad area of research, according to their interests and experience. The research proposal should define the scope of the project, its relationship to the state of the art, the data and methods that they plan to use, and the expected outputs and means of evaluation.

Primary supervisor: Sukhpal Singh Gill

Recent technological developments and paradigms, such as edge AI, AI-driven computing, and processing at the network edge, bring new opportunities for modern computing. Current resource management techniques, frameworks, and mechanisms cannot easily meet the new challenges presented by these emerging technologies and computing paradigms. Therefore, we need to develop new research approaches and revisit established models to tackle issues like scalability, elasticity, reliability, latency, and sustainability. Future research aims to optimise the performance of computing systems through the use of AI at the edge and design new computing services that facilitate the deployment of applications using Edge AI, thereby promoting sustainable, cost-effective, and eco-friendly practices. This studentship will focus on addressing significant challenges, including cost-effective computing system solutions, energy-efficient and resilient system architectures, standards, and applications for an eco-friendly environment. It will also analyze the emerging challenges of modern computing systems and develop innovative solutions using modern technologies, such as AI, Edge, 6G and quantum computing, to design real-world and commercial applications. This studentship will explore the intersection of modern computing and ML/AI for IoT applications. The scope of the project is quite broad. We encourage applicants to suggest their own interests and refine the research direction accordingly.

Primary supervisor: Thomas Roelleke

AI potentially allows for addressing a long-standing problem of probability theory: "God does not throw dice" (Einstein). Fuzzy theory, quantum theory, and other theories and concepts have already suggested that purely probabilistic reasoning is problematic. This is well known in information retrieval (IR): ranking algorithms such as TF-IDF (vector-based similarity) and BM25 (a mix of TF-IDF and binomial probabilities) outperform purely probabilistic approaches. Interestingly, machine learning (ML) builds upon probabilistic models. Experts, researchers and practitioners struggle to understand and agree on the foundations and the WHY of ranking/predictive algorithms. How can advanced knowledge about the "atoms" of a new mathematics be combined with the state of the art and transferred to applied mathematics?
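For reference, the two ranking functions mentioned above have the following standard forms (textbook definitions, not contributions of this project):

\[
w_{t,d} = \mathrm{tf}(t,d)\cdot \log\frac{N}{\mathrm{df}(t)} \qquad \text{(TF-IDF term weight)},
\]

\[
\mathrm{BM25}(d,q) = \sum_{t \in q} \mathrm{idf}(t)\,
\frac{\mathrm{tf}(t,d)\,(k_1+1)}{\mathrm{tf}(t,d) + k_1\!\left(1 - b + b\,\frac{|d|}{\mathrm{avgdl}}\right)},
\]

where \(\mathrm{tf}(t,d)\) is the frequency of term \(t\) in document \(d\), \(N\) the number of documents, \(\mathrm{df}(t)\) the number of documents containing \(t\), \(\mathrm{idf}(t)\) an inverse-document-frequency weight (e.g. \(\log(N/\mathrm{df}(t))\)), \(|d|\) the document length, \(\mathrm{avgdl}\) the average document length, and \(k_1 \approx 1.2\)–\(2\) and \(b \approx 0.75\) are free parameters.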

This PhD project will be at the forefront of IR- and AI-inspired mathematics establishing new functions for ranking scores, relevancy, burstiness, harmony assumptions and informativeness.

PhD candidates must be euphoric about mathematics. They must have good knowledge about probability distributions, regression, logarithm and exponential functions. They must be excited about contributing to foundation research, and interested in current trends. They should be competent in applying mathematics (for this project, investigative IR, including recommendation and classification). Experience with semi-structured data, algorithms (TF-IDF, BM25, regression, sumexp/softmax), and language modelling and models will be beneficial.

Primary supervisor: Dr William Marsh

Join EECS's Decision Support Laboratory, part of QMUL's Digital Environment Research Institute, to develop practically useful decision-support systems for medical applications. Working in a multi-disciplinary team, the proposed project looks at safe recovery from knee injuries. Unless managed well, these can lead to early arthritis, a lifelong condition. Guidelines exist to prevent this, but they are often not followed due to a lack of clinician time or resources. The project aims to automate parts of this supervision so that more patients can recover with less risk of early arthritis. The guidance starts as a patient completes acute treatment but is still in recovery. Using a mix of probabilistic reasoning and deep learning to combine different types of data and knowledge, we will develop a prediction model to assess and guide a patient's recovery. The data sources will include clinical findings (following the injury and its treatment), patient reports and activity monitoring as part of the Barts Biomedical Research Centre. We also expect to extract knowledge from the extensive literature. The candidate should have good programming, ML and statistical skills and will learn how to combine these to solve a practical problem.

Primary supervisor: Xiaodong Chen

Pattern reconfigurable antennas (PRAs) are beneficial to wireless communications and wireless power transfer. There are various techniques to achieve PRAs, such as employing switching diodes on the radiator or the feeding network of the antenna. However, adopting diode switches not only complicates the antenna structure by introducing DC bias circuits, but also increases the nonlinearity and loss of the antenna, leading to low radiation efficiency. To overcome these drawbacks, we have proposed a novel pattern reconfiguration technique that controls the phase of the feeding signals from a digital circuit. The proposed PRA consists of multiple feeding ports, and its beam can be steered by adjusting the phases at the feed ports. A PRA array based on the proposed antenna element will then be designed and developed to demonstrate the performance enhancement. The proposed antenna and its array will be designed in simulation and tested experimentally.
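For context, the standard array-factor relation (general antenna-array theory, not the specific design proposed here) shows why adjusting feed-port phases steers the beam:

\[
AF(\theta) = \sum_{n=0}^{N-1} e^{\,jn\left(kd\sin\theta + \beta\right)}, \qquad k = \frac{2\pi}{\lambda},
\]

where \(N\) is the number of elements (feed ports), \(d\) the element spacing, \(\beta\) the progressive phase shift applied between adjacent ports and \(\theta\) the angle from broadside; the main beam points where \(kd\sin\theta_0 + \beta = 0\), i.e. \(\theta_0 = \arcsin\!\big(-\beta/(kd)\big)\), so changing \(\beta\) in the digital feeding circuit steers the pattern.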

Primary supervisor: Yang Hao

This research aims to develop innovative antenna and electromagnetic technology for the non-invasive diagnosis of cardiovascular diseases, with a key focus on identifying new biomarkers that indicate early stages of cardiovascular conditions. By utilising electromagnetic signals, these antennas will detect subtle changes in blood flow, heart tissue, and other biomarkers crucial for early diagnosis. The advanced design of these antennas will offer high sensitivity and real-time cardiac monitoring, allowing for the early detection of issues such as arterial blockages and heart failure. Integrating this technology into wearable or handheld devices will provide continuous health monitoring for individuals, offering a powerful tool for both clinicians and patients to track heart health and detect problems before they become severe. Furthermore, the research aims to improve access to cardiovascular diagnostics by creating a cost-effective, non-invasive alternative to traditional methods such as angiography and echocardiography, particularly useful for remote or underserved populations. The project will also incorporate AI-driven signal processing to enhance the accuracy of diagnostics, making it possible to analyse complex patterns in the data and ensure reliable, real-time insights into cardiovascular health for proactive care and management.

Supervisor: Ying He

Health Information Systems face various challenges that can impact their resilience and ability to function optimally. Decision-makers, including senior managers and board members, often need to assess the potential impacts of disruptions, balancing the direct costs of preventive measures (e.g., system upgrades, staff training) with indirect consequences such as operational delays or reputational damage. This is compounded by the uncertainties resulting from the changing threat landscape and business context.

Calculating the return on investment (ROI) for these initiatives is difficult, as they aim to prevent potential disruptions rather than directly generate revenue. This research aims to explore a novel framework to assist healthcare organizations in making data-driven, strategic decisions for enhancing system resilience. The proposed approach should be able to leverage real-time data from diverse sources, incorporating it into decision-making processes within organisations to provide user-specific, updated analyses on the ROI of system resilience measures. The framework should also be able to process multidisciplinary data including user inputs to help prioritize investments based on their potential to mitigate operational disruptions and support business continuity. By improving the transparency and quality of decision-making at the strategic level, this research will contribute to the ongoing stability and robustness of health information systems.

Primary supervisor: Yongxin Yang

The project bridges generative AI and healthcare, aiming to leverage multimodal models to enhance our understanding of biomedical systems and improve medical decision-making. While recent advances in generative AI have demonstrated impressive capabilities in processing and creating text and images, applying these models to healthcare research presents unique challenges. Currently, we lack multimodal AI systems capable of interpreting the language of genes and cells, i.e., raw biomedical data. This project seeks to develop novel approaches that integrate multimodal models with raw biomedical data analysis to create more effective tools for healthcare applications. Key objectives include: (i) developing multimodal models that comprehend raw biomedical data, (ii) addressing issues of hallucination and misinformation in healthcare settings, and (iii) optimizing memory usage of multimodal models for edge devices such as AR glasses.

First supervisor: Dr Ziquan Liu

Second supervisor: Professor Shaogang Gong

Foundation models, including large language models (LLMs) and large multimodal models (LMMs), have been a key research topic in machine learning due to their strong capacity. However, large-scale deployment of foundation models has not yet been achieved, especially in safety-critical domains such as healthcare, because their risk is not trivial to measure: the output is open-ended and can be long-form, meaning that traditional risk quantification is not applicable.

This project will investigate the challenge of quantifying the uncertainty or risk of foundation models, including both monolithic and modular models, especially in the open-ended generation scenario. After identifying the key challenges, we will propose rigorous uncertainty quantification methods that can be used to measure the quality of foundation models' generations, including factuality and informativeness. Uncertainty calibration will then be performed as a fine-tuning process using the quantified uncertainty. We will evaluate the calibrated foundation models in several application domains, such as medical imaging understanding and clinical text summarization.
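For context, a standard calibration measure used when outputs are closed-form (e.g. classification confidences) is the expected calibration error; part of the challenge described above is generalising this kind of measure to open-ended, long-form generations:

\[
\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n}\,\big|\,\mathrm{acc}(B_m) - \mathrm{conf}(B_m)\,\big|,
\]

where the \(n\) predictions are grouped into \(M\) confidence bins \(B_m\), \(\mathrm{acc}(B_m)\) is the empirical accuracy within bin \(m\) and \(\mathrm{conf}(B_m)\) is the mean predicted confidence in that bin.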
