Associate Research Scientist - Normativity Lab
Listed on 2025-12-01
Research/Development
Data Scientist, Research Scientist
Overview
Professor Gillian Hadfield is seeking a highly qualified Associate Research Scientist to join her team at the Normativity Lab in Baltimore, MD, or Washington, DC, to investigate the foundations of human normativity and how these insights can inform the development of AI systems aligned with human values. The ideal candidate will have a track record in computational modeling that explores the dynamics of AI systems and the development of autonomous AI agents, experience with machine learning, reinforcement learning, and generative AI, and a background in interdisciplinary research.
This is a full-time one-year position, with the possibility of extension.
How can we ensure AI systems and agents align with human values and norms? How can they maintain and enhance the complex cooperative economic, political, and social systems humans have built? What will it take to ensure that the AI transformation puts us on the path to improved human well-being and flourishing, and not catastrophe? Existing approaches to alignment, such as RLHF, constitutional AI, and social choice methods, focus on eliciting human preferences, aggregating them across multiple, pluralistic values if necessary, and fine-tuning models to satisfy those preferences.
In the Normativity Lab we believe these approaches are likely to prove too limited to address the alignment challenge, and that answering these questions will require studying the foundations of human normativity and human normative systems.
We bridge computational modeling, specifically multi-agent reinforcement learning and generative agent simulations, with economic, political, and cultural evolutionary theory to explore the dynamics of normative systems and to investigate how to build AI systems and agents with the normative infrastructure and normative competence to do as humans have learned to do: create stable rule-based groups that can adapt to change while ensuring group well-being.
Responsibilities
- Project Ownership: Take ownership of research projects, working independently and collaboratively with a diverse team of experts.
- Model Development: Develop and refine computational models to simulate and analyze normative behaviors in various contexts.
- Data Collection and Analysis: Design and implement empirical studies, including data collection, statistical analysis, and interpretation of results.
- Publication: Prepare and submit manuscripts for publication in high-impact academic journals and present findings at conferences and workshops.
- Collaborate with Team Members: Work closely with lab members and external collaborators to foster a productive research environment and contribute to interdisciplinary projects.
- Mentor Students and Postdocs: Provide guidance and mentorship to postdocs, graduate students, and undergraduate students involved in related research projects.
- Contribute to Lab Activities: Participate in lab meetings, potentially take a leadership role in coordinating lab activities, contribute to grant writing, and engage in outreach activities to promote the lab's research initiatives.
Qualifications
- Strong technical understanding of issues in the fields of AI safety and decision-making. Familiarity with the ethical implications of AI technologies and their alignment with human values and societal norms.
- Expertise in large language models, multi-agent reinforcement learning, or economic modeling and game theory.
- Knowledge of theories related to human normativity, including welfare economics, political theory, moral philosophy, cultural evolutionary theory, social norms, or legal systems is a strong asset; an interest in learning about these domains is essential.
- Proficiency in designing and implementing computational models to simulate normative behaviors and group dynamics.
- Strong capability in both qualitative and quantitative research methods, including statistical analysis and survey design.
- Experience conducting interdisciplinary research, including integrating perspectives from economics, law, and social sciences to analyze complex systems.
- Demonstrated ability to independently lead and deliver on research projects.
- Track record of formulating and implementing research methodologies and procedures.
- Track record of making independent decisions regarding data quality, analysis, and interpretation.
- Track record of managing timelines and deliverables.
- Advanced Degree: A PhD in a relevant field such as computer science, economics, political science, or cultural evolution.
- Programming Skills: Proficiency in programming languages relevant to computational modeling (e.g., Python, R, or similar).
- Data Analysis Tools: Experience with statistical software and data visualization tools (e.g., SPSS, Stata, Tableau).
- AI Experience: Knowledge of AI frameworks and algorithms, particularly those related to decision-making and ethical AI.
- Machine Learning Experience: Knowledge of ML techniques, including reinforcement…