
Postdoctoral Fellow - Normativity Lab

Job in Baltimore, Anne Arundel County, Maryland, 21276, USA
Listing for: Johns Hopkins University
Full Time position
Listed on 2025-12-06
Job specializations:
  • Research/Development
    Data Scientist, Research Scientist
Salary/Wage Range: USD 60,000 – 90,000 per year
Job Description & How to Apply Below

Department/Program:
Department of Computer Science

Professor Gillian Hadfield is seeking highly qualified postdoctoral scholars to join her team at the Normativity Lab at Johns Hopkins University in 2026. The Normativity Lab examines the foundations of human normativity and leverages those insights to inform the development of AI systems that align with human values. The ideal candidate will have a track record of experiments with multi-agent systems using reinforcement learning and/or generative agents; interest in, and ideally experience with, training models to interpret and align with norms; and a background in interdisciplinary research at the intersection of AI and cognitive science, microeconomic theory, cultural evolution theory, or moral, normative, or legal reasoning.

Candidates with strong credentials and motivation are encouraged to apply even if they lack some of these qualifications. These are full-time positions with appointments of 12–18 months in duration and the possibility of further extension. Lab members may work in either Baltimore, MD or Washington, D.C.

About the Normativity Lab

How can we ensure that AI systems and agents align with human values and norms, and that they maintain and enhance the complex cooperative economic, political, and social systems humans have built? What will it take to ensure that the AI transformation puts us on a path to improved human well-being and flourishing, rather than catastrophe? Existing approaches to alignment, such as RLHF, constitutional AI, and social choice methods, focus on eliciting human preferences, aggregating them across multiple pluralistic values where necessary, and fine-tuning models to satisfy those preferences.

In the Normativity Lab we believe these approaches are likely to prove too limited to address the alignment challenge, and that answering the alignment questions will require studying the foundations of human normativity and human normative systems. We bridge computational modeling (specifically multi-agent reinforcement learning and generative agent simulations) with economic, political, and cultural evolutionary theory to explore the dynamics of normative systems and to investigate how to build AI systems and agents with the normative infrastructure and normative competence to do what humans have learned to do: create stable rule-based groups that can adapt to change while ensuring group well-being.

Specific Duties and Responsibilities
  • Project Ownership:
    Take ownership of research projects, working independently and collaboratively with a diverse team of experts.
  • Model Development:
    Develop and refine multi-agent systems to simulate and analyze normative behaviors in various contexts.
  • Data Collection and Analysis:
    Design and implement empirical studies, including data collection, statistical analysis, and interpretation of results.
  • Publication:
    Prepare and submit manuscripts for publication in high-impact academic journals and present findings at conferences and workshops.
  • Collaborate with Team Members:
    Work closely with lab members and external collaborators to foster a productive research environment and contribute to interdisciplinary projects.
  • Mentor Students:
    Provide guidance and mentorship to graduate and undergraduate students involved in related research projects.
  • Contribute to Lab Activities:
    Participate in lab meetings, potentially take a leadership role in coordinating lab activities, contribute to grant writing, and engage in outreach activities to promote the lab's research initiatives.
Qualifications

Special Knowledge, Skills, and Abilities

  • Strong technical understanding of issues in the fields of AI safety and governance. Familiarity with the ethical implications of AI technologies and their alignment with human values and societal norms.
  • Expertise in large language models, multi-agent reinforcement learning, or economic modeling and game theory.
  • Knowledge of theories related to human normativity, including welfare economics, political theory, moral philosophy, cultural evolutionary theory, social norms, or legal systems is a strong asset; an interest in learning about these domains is essential.
  • Proficiency in designing and implementing computational…