Psychologist

10a Labs

Washington, DC | Posted 6/19/2025 | Full Time

At the forefront of applied AI safety, our team works to ensure that advanced AI systems protect and empower people, not harm them. We believe that safeguarding mental health and human agency is just as essential as preventing technical failure. If you're driven by real-world impact and want to help build the psychological backbone of AI safety, this role offers a chance to shape how the next generation of AI interacts with society.

About the role

You will support an AI lab safety team for an initial six-month engagement. Drawing on your background in forensic psychology, where practitioners routinely assess risk of violence, self-harm, or addiction, you will design and validate methods to detect early signs of clinical risk in conversational AI logs. You will partner with analysts, data scientists, and engineers to set behavioral red lines for persuasive or deceptive model behaviors, and you will translate digital health ethics and emerging AI safety standards into effective detection systems and actionable internal policies.

In this role you will

  • Design clinical risk-detection pipelines: Create workflows that flag language patterns associated with self-harm, suicidal ideation, or substance use escalation in AI conversations.

  • Stand up crisis-response triage flows: Help define escalation protocols that guide at-risk users toward professional help while ensuring user safety and privacy.

  • Investigate human-AI manipulation dynamics: Analyze how sycophancy, strategic deception, or persuasive tone shifts emerge in large language models, and develop evaluation methods to detect and prevent them.

  • Quantify well-being outcomes of model changes: Design and validate psychometric tools to measure emotional and psychological effects of model updates, new modalities, or prolonged AI interaction.

  • Provide ethical, regulatory, and policy guidance: Translate digital health ethics, privacy standards, and AI governance frameworks into practical internal policies and disclosures that prioritize user care without blocking innovation.

You might thrive in this role if you

  • Bring 5+ years of forensic or clinical risk experience, particularly in evaluating threats of violence, self-harm, or addiction.

  • Have hands-on fluency with NLP or speech analytics and can work directly with engineers to implement psychological risk-detection models.

  • Enjoy analyzing and interpreting AI-human behavior, especially with regard to manipulation, misalignment, and user vulnerability.

  • Are skilled at balancing innovation and responsibility, and can help organizations meet high ethical and compliance standards in fast-moving domains.

  • Think like a mixed-methods researcher, combining experimental design with qualitative case review to assess user well-being.

  • Communicate clearly and cross-functionally, translating complex psychological insights into product, policy, and engineering actions.

This is a rare opportunity to help shape how AI platforms understand, detect, and mitigate risks to user mental health and safety at scale.

JOB LOCATION:
Washington, DC 20009
