
Senior Data Engineer

Remote / Online - Candidates ideally in
Houston, Harris County, Texas, 77246, USA
Listing for: Elios
Remote/Work from Home position
Listed on 2025-12-19
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Big Data
Job Description

Senior Data Engineer (100% Remote)

  • Fully Remote: Work from anywhere with flexible hours that fit your lifestyle.
  • Award-Winning Culture: Be part of a company recognized for exceptional employee satisfaction, inclusivity, and professional development.
  • Competitive Compensation: Generous salary, performance bonuses, and comprehensive benefits package.
  • Professional Growth: Access to mentorship programs, certifications, and opportunities to advance your career.
  • Cutting-Edge Tech: Work with state-of-the-art tools and technologies on impactful, high-visibility projects.
Key Responsibilities
  • Design and implement scalable, efficient data pipelines using Azure Data Lake, Databricks, Snowflake, and Synapse Analytics.
  • Develop and optimize workflows with Apache Spark and Scala for batch and streaming data processing.
  • Build, maintain, and enhance robust ETL/ELT pipelines tailored to big data applications within Azure ecosystems.
  • Manage and optimize data storage solutions like Azure Data Lake Storage, Snowflake, and Synapse Analytics to ensure peak performance and cost-efficiency.
  • Partner with data scientists, analysts, and business teams to ensure the reliability and availability of data platforms.
  • Monitor and fine-tune the performance of data platforms in production environments.
  • Enforce best practices for data governance, security, and compliance in Azure-based data frameworks.
  • Stay ahead of the curve by researching and integrating new big data technologies to enhance scalability and performance.
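To give a concrete flavor of the batch-processing work described above, here is a minimal sketch of a Spark/Scala ETL job that reads raw files from Azure Data Lake Storage and writes curated Delta output. The storage account, container, paths, and column names are hypothetical placeholders, not details from this posting.

```scala
// Minimal batch ETL sketch: read raw CSV from ADLS Gen2,
// apply a simple cleanup transformation, write out as Delta.
// All paths and column names below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersBatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-batch-etl")
      .getOrCreate()

    // Hypothetical ADLS Gen2 location (abfss://<container>@<account>...)
    val raw = spark.read
      .option("header", "true")
      .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")

    // Example transformation: cast types and drop malformed rows
    val cleaned = raw
      .withColumn("amount", col("amount").cast("double"))
      .filter(col("amount").isNotNull)

    cleaned.write
      .format("delta")
      .mode("overwrite")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")

    spark.stop()
  }
}
```

In practice a job like this would run on Databricks or Synapse Spark pools, with credentials supplied through the cluster configuration rather than hard-coded paths.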
Requirements
Essential Skills and Experience
  • A minimum of 5 years of experience in data engineering or related fields.
  • At least 3 years of hands-on expertise in Apache Spark using Scala.
  • Advanced knowledge of Azure data services, including Azure Data Lake, Azure Databricks, Azure Synapse Analytics, and Azure Data Factory.
  • Proficiency with Snowflake, including schema design, performance optimization, and integration with cloud platforms.
  • Solid expertise in cloud-based big data solutions, particularly within the Azure ecosystem.
  • Strong knowledge of data modeling, ETL/ELT pipelines, and database concepts.
  • Experience with streaming platforms such as Spark Streaming, Kafka, or Event Hubs.
  • Familiarity with data lake and data warehouse architecture (e.g., Delta Lake, Snowflake, Synapse).
  • Proficiency in DevOps practices, including CI/CD pipelines for data engineering workflows.
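The streaming experience listed above might look like the following Structured Streaming sketch, which reads from Kafka (Azure Event Hubs also exposes a Kafka-compatible endpoint) and appends to a Delta table. Broker addresses, topic names, and paths are hypothetical.

```scala
// Structured Streaming sketch: consume a Kafka topic and
// append the raw events to a Delta table with checkpointing.
// Broker, topic, and paths below are hypothetical.
import org.apache.spark.sql.SparkSession

object EventsStreamingJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("events-streaming")
      .getOrCreate()

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "events")                    // hypothetical topic
      .load()
      .selectExpr("CAST(value AS STRING) AS json")

    val query = events.writeStream
      .format("delta")
      .option("checkpointLocation", "/checkpoints/events") // required for exactly-once
      .outputMode("append")
      .start("/delta/events")

    query.awaitTermination()
  }
}
```

The checkpoint location is what lets Spark recover the stream's offsets after a restart, which is the usual mechanism for fault tolerance in these pipelines.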
Preferred Skills
  • Knowledge of Python for data engineering tasks.
  • Experience with Azure Machine Learning or other machine learning platforms integrated with data workflows.