Robustness Analysis Researcher

Location Singapore
Discipline Information & Communications Technology
Job Reference BBBH135559_1705978724
Salary Negotiable
Consultant Email
EA License No. 02C3423

Job Responsibilities:

  • Identify potential risks and vulnerabilities of (LLM) features and how those may differ across populations and model input types.
  • Evaluate potential risks and vulnerabilities by red teaming (i.e., trying to elicit harmful outputs), as well as collecting data and running experiments.
  • Assist other team members and testers in offensive techniques and approaches to scale AI red teaming.
  • Work with stakeholders to mitigate risks and perform testing to ensure progress.
  • Research new and emerging threats to inform the organization including prompt injection, improve red teaming efficacy and accuracy, and stay relevant.


  • Bachelor Degree in Computer Science, Machine Learning, Statistics, or related field.
  • Minimum 2 years of relevant experience in identifying vulnerabilities, anomaly detection, or red teaming.
  • Understanding of machine learning principles, especially in the context of LLMs.
  • Expertise on bias, discrimination, or other safety issues in Artificial Intelligence (AI) and Machine Learning (ML) systems.
  • Excellent communication skills with the ability to work effectively across internal and external organizations and virtual teams.