×
Register Here to Apply for Jobs or Post Jobs. X

AI Security Engineer

Job in Dubai, Dubai, UAE/Dubai
Listing for: SentraAI
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    AI Engineer, Cybersecurity
Salary/Wage Range or Industry Benchmark: 200000 - 300000 AED Yearly AED 200000.00 300000.00 YEAR
Job Description & How to Apply Below

Dubai, United Arab Emirates | Posted on 01/26/2026

As an AI Security Engineer at Sentra

AI, you will operate at the intersection of
AI architecture, application security, and offensive security
, helping enterprise organisations design, deploy, and operate AI systems that are secure by design and defensible in production.

You will work closely with AI engineers, platform teams, and security stakeholders to embed
runtime guardrails, security observability, and continuous AI red-teaming
into real production systems. This role is accountable for translating AI threat models into concrete engineering controls, and for ensuring AI systems remain secure, auditable, and resilient as they evolve.

This is a hands‑on role for practitioners who understand that AI security is an
operational discipline
, not a policy exercise.

About SentraAI

Sentra

AI is a specialist enterprise AI firm, focused on helping large, regulated organisations move AI and data platforms from experimentation into production safely and sustainably.

We work inside enterprise run‑states, where governance, operational risk, change control, and long‑term ownership are integral to delivery. Our teams are trusted to design and deliver systems, platforms, and operating models that can be run, audited, and evolved, not just launched.

We prioritise engineering discipline, architectural clarity, and delivery quality over speed theatre or hype.

Requirements AI Threat Modelling and Security Architecture
  • Guide application and platform teams on
    threat modelling for AI and LLM‑based systems across the full lifecycle
  • Develop and maintain AI‑specific threat models aligned to recognised standards and regulatory expectations
  • Translate threat models into
    explicit architectural controls, security requirements, and acceptance criteria
  • Advise on secure AI design patterns, including least‑privilege, isolation, and human‑in‑the‑loop safeguards
Secure Implementation and Runtime Enforcement
  • Work closely with AI and ML engineers to ensure
    secure implementation of AI guardrails within application codebases
  • Ensure robust
    input sanitisation, validation, and prompt hardening for text, document, and multimodal inputs
  • Ensure
    output validation, redaction, and data exfiltration prevention mechanisms are correctly implemented
  • Evaluate, test, and support deployment of
    LLM security frameworks and detection mechanisms
  • Ensure security‑relevant telemetry and logs are captured in line with regulatory and audit requirements
AI Security Observability and SOC Integration
  • Define and publish
    AI‑specific security indicators for operational monitoring and alerting
  • Enable real‑time visibility into AI security signals such as anomalous behaviour, prompt abuse, or tool misuse
  • Support downstream security operations and incident response teams with actionable AI security context
AI Red Teaming and Offensive Security Integration
  • Embed
    automated AI security testing into CI/CD pipelines, including prompt fuzzing and regression testing
  • Support and guide offensive security teams on
    LLM‑specific attack scenarios
  • Operationalise AI red‑teaming tools and custom adversarial test cases
  • Ensure findings feed back into guardrail tuning, detection logic, and adaptive defence mechanisms
Required Qualifications Core Experience
  • Strong background in application development, security engineering, or platform engineering
  • Practical experience working with
    AI‑enabled applications
    , LLMs, or ML pipelines
  • Solid grounding in
    application security concepts and secure software design
  • Hands‑on experience implementing or integrating
    AI guardrails, sanitisation, and runtime security controls
AI and Security Capability
  • Practical understanding of AI and LLM threat vectors such as prompt injection, data poisoning, tool abuse, and agent escalation
  • Experience collaborating closely with AI engineers, platform teams, and offensive security practitioners
  • Ability to translate security intent into concrete, testable engineering controls
Advantageous but Not Mandatory
  • Experience with AI red‑teaming tools or adversarial testing frameworks
  • Familiarity with secure CI/CD and Dev Sec Ops  practices
  • Experience operating in regulated or highly governed enterprise environments
  • Exposure to SOC…
To View & Apply for jobs on this site that accept applications from your location or country, tap the button below to make a Search.
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
 
 
 
Search for further Jobs Here:
(Try combinations for better Results! Or enter less keywords for broader Results)
Location
Increase/decrease your Search Radius (miles)

Job Posting Language
Employment Category
Education (minimum level)
Filters
Education Level
Experience Level (years)
Posted in last:
Salary