Middle Data Engineer

Middle

Data Engineering

AWS

Azure

SQL

2+ years of data engineering experience with ETL/ELT; strong Python and SQL skills; data warehousing and data lake expertise; Databricks, ADF, and Snowflake; cloud platforms; basic Terraform; and client-facing communication.

Role Summary

Take the next step in your career by joining a team where you will not just build pipelines, but design scalable data platforms. We are looking for an experienced Data Engineer who combines strong technical expertise in Python and SQL with a mindset for optimization. You will work directly with clients to build modern Data Warehouses and Lakehouses using industry-standard tools like Databricks and Azure Data Factory.

The Mission

To build and optimize the data backbone of our clients' businesses. Your mission goes beyond moving data; you will ensure that pipelines are resilient, scalable, and economically efficient (FinOps). You will bridge the gap between infrastructure and data, taking ownership of technical delivery and helping shape the architecture of modern data platforms.

The Tech Stack

  • Core Languages: Python or Scala (Proficient/OOP), SQL (Advanced/Analytical).

  • Compute & Storage: Databricks, Snowflake, BigQuery, Synapse, Redshift.

  • Orchestration: Azure Data Factory (ADF), Airflow, Dagster.

  • Transformation: dbt (Core/Cloud), Spark/PySpark (Optimization focus).

  • Infrastructure: Terraform (Implementation), Docker, Kubernetes (K8s).

Your Skills

  • Experience: 2+ years of hands-on data engineering experience (ETL/ELT).

  • Data Architecture: Strong understanding of Data Warehousing concepts (Star/Snowflake Schema) and Data Lake principles (Medallion Architecture); a sketch of a typical Medallion step follows this list.

  • Key Platforms: Proven experience with Databricks (cluster management, notebooks), Azure Data Factory (pipelines, data flows), and Snowflake.

  • Engineering Core: Proficiency in Python (OOP, functional programming) and strong SQL skills for complex transformations.

  • Cloud Fluency: Practical experience with at least one major cloud (AWS, GCP, Azure) and its data services (e.g., Kinesis/Lambda, Dataflow/BigQuery, Synapse).

  • Infrastructure Awareness: Basic understanding of Infrastructure as Code (Terraform) and the ability to deploy and modify infrastructure resources independently.

  • Communication: Ability to communicate technical decisions directly to clients and the team.
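
To give a concrete flavor of the day-to-day work behind the Medallion bullet above, here is a minimal PySpark sketch of a bronze-to-silver cleaning step. Everything in it is illustrative: the lake paths, table layout, and column names (order_id, order_ts, amount) are hypothetical, not taken from a real project.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

    # Bronze layer: raw ingested events, stored as-is (path is hypothetical).
    bronze = spark.read.json("s3://example-lake/bronze/orders/")

    # Silver layer: deduplicated, validated, typed records ready for modeling.
    silver = (
        bronze
        .dropDuplicates(["order_id"])                         # drop replayed events
        .filter(F.col("order_id").isNotNull())                # drop malformed rows
        .withColumn("order_ts", F.to_timestamp("order_ts"))   # enforce timestamp type
        .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
    )

    # Writing Delta assumes a Delta-enabled Spark session (e.g. on Databricks).
    silver.write.format("delta").mode("overwrite").save("s3://example-lake/silver/orders/")

Raw data in, trustworthy tables out: that is the shape of the work.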

Your Responsibilities

  • Data Platform Development: Design and implement pipelines to ingest, clean, and transform data into Data Lakes and Data Warehouses.

  • Performance & FinOps: Actively monitor and optimize pipelines for performance and cost; know the difference between a working query and an efficient one (see the sketch after this list).

  • Deployment & Configuration: Configure pipeline deployments and manage environment settings using IaC tools (Terraform) and CI/CD.

  • Quality Assurance: Implement data quality checks (e.g., Great Expectations, dbt tests) and ensure data integrity.

  • Mentorship: Communicate with stakeholders to clarify requirements, and provide technical guidance and code reviews to junior engineers.
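
On the "working vs. efficient" point, here is a small PySpark sketch of the contrast we mean. Table paths and column names are hypothetical, and the exact runtime behavior depends on optimizer settings and data sizes; treat it as an illustration, not a benchmark.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("finops-sketch").getOrCreate()

    orders = spark.read.format("delta").load("s3://example-lake/silver/orders/")       # large fact table
    countries = spark.read.format("delta").load("s3://example-lake/silver/countries/") # small dimension

    # Works: join everything, then filter. Depending on the optimizer,
    # this can shuffle far more data than the result actually needs.
    works = orders.join(countries, "country_code").filter(F.col("region") == "EMEA")

    # Efficient: prune columns and rows first, and broadcast the small
    # dimension so the large fact table is not shuffled for the join.
    efficient = (
        orders.select("order_id", "country_code", "amount")
        .join(
            F.broadcast(countries.filter(F.col("region") == "EMEA")),
            "country_code",
        )
    )

Fewer bytes shuffled means shorter runtimes and, on pay-per-use platforms like Databricks or Snowflake, directly lower cost; that habit is what we mean by FinOps.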

Nice to Have

  • Advanced IaC: Experience setting up complex modules in Terraform or Pulumi.

  • Lakehouse Formats: Hands-on experience with Delta Lake or Apache Iceberg features (Time Travel, Schema Evolution); see the time-travel sketch after this list.

  • NoSQL: Experience modeling data for NoSQL databases (MongoDB, DynamoDB).

  • Certifications: Associate-level cloud certifications (e.g., Azure Data Engineer Associate, Databricks Certified Data Engineer).
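
For candidates less familiar with the Lakehouse features mentioned above: time travel lets you query a table as it existed at an earlier version or point in time. A minimal sketch using Delta Lake's documented versionAsOf / timestampAsOf read options follows; the table path is hypothetical, and a Delta-enabled Spark session (e.g. Databricks) is assumed.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-time-travel").getOrCreate()

    path = "s3://example-lake/silver/orders/"  # hypothetical table path

    # Current state of the table.
    current = spark.read.format("delta").load(path)

    # Read the table as it was at an earlier version or timestamp.
    as_of_v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    as_of_jan = spark.read.format("delta").option("timestampAsOf", "2024-01-01").load(path)

    # Quick audit: how many rows have been added since the first version?
    print(current.count() - as_of_v0.count())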

What we offer

  • Long-term career stability with a competitive salary paid in USD.

  • A structured path for steady career development.

  • Development supported by dedicated mentors and a variety of programs focused on expertise and innovation.

  • Private medical insurance provided after successful completion of the probationary period.

  • A well-equipped, cozy office that supports comfort and productivity.

  • Welcoming atmosphere and a friendly corporate culture.

If you feel this opportunity resonates with you, apply now — we’re looking forward to getting to know you!
