School of Electronic Engineering and Computer Science

Dr Michael Shekelyan

Lecturer in Computer Science (T&R)

Email: m.shekelyan@qmul.ac.uk
Room Number: Peter Landin Building, 4th Floor
Website: https://shekelyan.science
Office Hours: Email me to arrange an appointment.

Profile

I am a Lecturer (Assistant Professor) in Computer Science. I am part of the theory group and aim to develop practical algorithms & data structures that can be used in a wide range of applications, as well as theoretical foundations that promise substantial practical impact. I participate in many research communities interested in learning from data. Most of my work has been presented at the go-to data management conferences (VLDB, ACM SIGMOD/PODS, IEEE ICDE, EDBT), and I also regularly serve as a reviewer for leading machine learning and data mining conferences (NeurIPS, ICML, ICLR, AISTATS, ACM KDD) as well as many journals. My main goal is to advance technology towards reliably and responsibly capturing crucial (factual) insights from data, through reactive interfaces for data exploration (better algorithms, data structures & data approximations) and privacy-preserving analysis & machine learning (better theory, hypothesis tests & randomised algorithms).

Open Position (PhD Studentship)

Privacy-Preserving Algorithms: Unlocking Data Sharing for Medical Sciences & Machine Learning

  • PhD studentship: covers a 3-year stipend (currently £20,662 per year in 2023/2024) and fees 
  • Supervisor: Dr Michael Shekelyan
  • Location: London, UK
  • Applications: open to UK home students (see TGC 5.2 Student eligibility) until 31st January 2024
  • Description:

Statistical analysis & machine learning approaches can only be as good as the underlying data, and harder tasks call for larger collections of accurate data. Collectively, health organisations, companies and service providers collect massive amounts of data from patients, employees and users, but on an individual level a single hospital, company or service provider only accesses its own smaller subset. Data sharing partnerships are challenging due to distrust, varying quality standards, legal obligations and ethical considerations aimed at protecting the privacy of the people whose confidential information is collected.

Naive anonymisation techniques may remove names, dates and identifiers, but overlook quasi-identifiers, i.e., rare combinations of facts that point to a particular person once contextual knowledge or additional information is taken into account. More sophisticated frameworks such as differential privacy & federated learning employ randomised algorithms that satisfy formal privacy guarantees, together with privacy-enhancing technologies that safely aggregate data in a decentralised fashion using cryptographic techniques.

The studentship can take many directions, such as mathematical foundations for privacy-preserving algorithms, intuitively explainable formal privacy guarantees, federated learning systems satisfying legal requirements, and other practical or theoretical solutions towards privacy-preserving data sharing. Contact m.shekelyan@qmul.ac.uk for further information or questions about potential directions.
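To make the idea of a randomised algorithm with a formal privacy guarantee concrete, here is a minimal sketch of the classic Laplace mechanism for differential privacy. It is illustrative only, not a description of the project's methods; the function name and parameters are chosen for this example.

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise calibrated for epsilon-differential privacy.

    sensitivity: the most the query answer can change when one person's
    record is added or removed (1 for a simple count query).
    """
    scale = sensitivity / epsilon
    # Inverse-CDF sampling from Laplace(0, scale):
    # with u uniform in (-0.5, 0.5), noise = -scale * sgn(u) * ln(1 - 2|u|)
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: a hospital releases a noisy patient count instead of the exact one,
# so the presence or absence of any single patient is statistically masked.
noisy_count = laplace_mechanism(true_value=128, sensitivity=1, epsilon=1.0)
```

Smaller epsilon values add more noise and give a stronger privacy guarantee; over many repeated queries, the noise averages out, which is why privacy budgets must be tracked.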
