Middle Data Engineer - Data Platforms

Middle

Data Engineering

AWS

Azure

SQL

Work on cloud-based solutions (AWS, GCP, Azure), contributing to data integration, transformation, and delivery processes across diverse client projects.

About the Role

We are looking for a Data Engineer to join our team and take part in building and maintaining data pipelines using modern technologies. You will work on cloud-based solutions (AWS, GCP, Azure), contributing to data integration, transformation, and delivery processes across diverse client projects. The role involves applying established best practices to ensure performance, reliability, and scalability of data workflows.

Your Responsibilities

  • Design, build, and maintain robust ETL/ELT pipelines using tools such as Databricks, Apache Spark, dbt, and Snowflake.

  • Ingest and process data from APIs, message queues, relational databases, and files into a cloud-based data platform.

  • Schedule and orchestrate workflows using Apache Airflow or cloud-native alternatives (see the DAG sketch after this list).

  • Enable querying and analytics on large-scale data using data warehouses and SQL engines (e.g., Snowflake, BigQuery, Redshift).

  • Operate across major cloud platforms (AWS, GCP, Azure) and leverage their native data engineering services.
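To give a flavor of the day-to-day work, here is a minimal sketch of an Airflow DAG in the TaskFlow style: a daily extract-transform-load chain. The DAG id, task logic, and record shapes are hypothetical illustrations, not project specifics.

```python
# A minimal, illustrative Airflow DAG: extract, transform, and load on a
# daily schedule. All names here are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_ingestion_pipeline():
    @task
    def extract() -> list[dict]:
        # In a real pipeline this would call a source API, read a queue,
        # or query a relational database.
        return [{"id": 1, "value": 42}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Apply a simple, deterministic transformation.
        return [{**r, "value_doubled": r["value"] * 2} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # A real task would write to S3, Snowflake, BigQuery, etc.
        print(f"Loading {len(records)} records")

    load(transform(extract()))


example_ingestion_pipeline()
```

In practice, tasks like these would pass references (object-storage keys, table names) between steps rather than the records themselves.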

Your Skills

  • 2+ years of experience in data engineering.

  • Proficient in Python; knowledge of Scala is a plus.

  • Experience with Databricks, Snowflake, or other modern data platforms (a short Spark example follows this list).

  • Strong command of SQL and familiarity with relational databases (PostgreSQL, MySQL, SQL Server).

  • Exposure to one or more cloud platforms, ideally with services such as:

    • AWS: Glue, Athena, Lambda, DMS, ECS, EMR, Kinesis, S3, RDS

    • GCP: Dataflow, BigQuery, Cloud Functions, Datastream, Pub/Sub, Dataproc, Dataprep

    • Azure: Data Factory, Synapse Analytics, Azure Functions, Data Explorer, Event Hubs, Data Wrangler
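For a sense of the Spark side of the stack, the sketch below reads raw JSON from object storage, cleans it, and writes partitioned Parquet. The bucket paths and column names are assumptions for illustration only.

```python
# A minimal, illustrative PySpark job: read raw JSON from object storage,
# apply a transformation, and write partitioned Parquet. Paths and column
# names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw events; the bucket and prefix are assumptions for illustration.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Clean and enrich: drop malformed rows, derive a date partition column.
cleaned = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write partitioned Parquet, ready for a warehouse or lakehouse layer.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```

The same pattern applies whether the job runs on Databricks, EMR, or Dataproc; only the cluster configuration and storage URIs change.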

Nice to Have

  • Experience with Delta Lake, Apache Iceberg, or other lakehouse formats.

  • Familiarity with NoSQL databases (e.g., MongoDB).

  • Knowledge of CI/CD for data pipelines, version control, and testing frameworks (a minimal test sketch follows this list).

  • Comfort working with containerized or serverless environments (e.g., ECS, Cloud Run, AKS, Lambda).

  • Hands-on experience with dbt Cloud/Core and integration into data workflows.
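On the CI/CD point above, here is a minimal sketch of how pipeline logic can be unit-tested; the transformation function is hypothetical and stands in for real logic factored out of a DAG or Spark job.

```python
# A minimal sketch of unit-testing pipeline logic so it can run in CI/CD.
# The transformation function below is hypothetical.
def deduplicate_by_id(records: list[dict]) -> list[dict]:
    """Keep the last record seen for each id."""
    latest: dict[int, dict] = {}
    for record in records:
        latest[record["id"]] = record
    return list(latest.values())


def test_deduplicate_keeps_last_record_per_id():
    records = [
        {"id": 1, "value": "old"},
        {"id": 2, "value": "only"},
        {"id": 1, "value": "new"},
    ]
    assert deduplicate_by_id(records) == [
        {"id": 1, "value": "new"},
        {"id": 2, "value": "only"},
    ]
```

Tests like this run with pytest in a CI pipeline on every commit, before any DAG or dbt model is deployed.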

What we offer

  • Long-term stability, competitive compensation, and a fast onboarding process.

  • Supportive conditions for steady career development.

  • Professional growth supported by dedicated mentors and a variety of programs focused on expertise and innovation.

  • A well-equipped, cozy office that supports comfort and productivity across all project stages.

  • Welcoming atmosphere and a friendly corporate culture.

If you feel this opportunity resonates with you, apply now — we’re looking forward to getting to know you!
