Senior Data Engineer
About the Role
We are seeking a Senior Data Engineer to lead the design and implementation of scalable data pipelines using Databricks, Snowflake, cloud-native services, and distributed data processing frameworks. You will work across various cloud platforms (AWS, GCP, Azure), contributing to architecture decisions and the adoption of lakehouse patterns. The role involves technical leadership on client projects, ensuring adherence to best practices in data modeling, pipeline orchestration, security, and performance optimization.
Your Responsibilities
Architect and maintain end-to-end data pipelines (batch and streaming) using tools such as Databricks, Apache Spark, Snowflake, and dbt (illustrated in the sketch after this list).
Implement lakehouse architectures using technologies like Delta Lake, Apache Iceberg, and Parquet on cloud storage.
Design and manage ingestion workflows from APIs, event streams, databases, and cloud storage using native services.
Optimize data solutions for performance, reliability, cost, and scalability across different cloud environments.
Lead and mentor mid-level and junior engineers, and collaborate with cross-functional teams across engineering, product, and analytics.
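To give you a concrete feel for this work, here is a minimal, purely illustrative PySpark batch job that lands raw events in a Delta table. It assumes a Spark runtime with Delta Lake available; the storage path, column names, and table name (s3://example-landing-zone/..., analytics.orders) are hypothetical placeholders, not a description of any client stack.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

# Ingest raw JSON files landed in cloud object storage (path is illustrative).
raw = spark.read.json("s3://example-landing-zone/orders/")

# Light cleanup: type the event timestamp, derive a partition column,
# and drop duplicate order records.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Append to a date-partitioned Delta table, the lakehouse storage layer.
(orders.write
       .format("delta")
       .mode("append")
       .partitionBy("order_date")
       .saveAsTable("analytics.orders"))

In practice the same job might be orchestrated as a Databricks Job or modeled in dbt; the sketch only shows the core read-transform-write shape.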
Your Skills
4+ years of hands-on experience in data engineering, with a strong background in cloud-based pipelines and distributed processing.
Advanced Python skills; Scala proficiency is a plus.
Deep experience with either Databricks or Snowflake:
For Databricks: familiarity with Jobs, Workspaces, Notebooks, Unity Catalog, and Spark internals.
For Snowflake: hands-on with data modeling, warehouse optimization, Streams & Tasks, Time Travel, Cloning, and RBAC.
Strong knowledge of SQL, dimensional modeling, OLTP/OLAP, and SCDs (see the SCD Type 2 sketch after this list).
Deep experience with cloud platforms and their data services:
AWS: Glue, Athena, Lambda, DMS, ECS, EMR, Kinesis, S3, RDS
GCP: Dataflow, BigQuery, Cloud Functions, Datastream, Pub/Sub, Dataproc, Dataprep
Azure: Data Factory, Synapse Analytics, Azure Functions, Data Explorer, Event Hubs, Data Wrangler
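To make the SCD expectation concrete, here is a hedged sketch of a Slowly Changing Dimension Type 2 update on a Delta table, split into two steps: expire current rows whose tracked attribute changed, then append the new versions. All names in it (analytics.dim_customer, staging.customer_updates, customer_id, email) are hypothetical examples, not real schemas.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.customer_updates")   # today's changed records
dim = DeltaTable.forName(spark, "analytics.dim_customer")

# Step 1: expire current dimension rows whose tracked attribute changed.
(dim.alias("d")
    .merge(updates.alias("u"),
           "d.customer_id = u.customer_id AND d.is_current = true")
    .whenMatchedUpdate(
        condition="d.email <> u.email",
        set={"is_current": "false", "valid_to": F.current_date()})
    .execute())

# Step 2: append the incoming rows as the new current versions.
# (For brevity this appends every staged row; a production job would first
# filter to records that are genuinely new or changed.)
new_rows = (updates
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True)))
new_rows.write.format("delta").mode("append").saveAsTable("analytics.dim_customer")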
Will Be a Plus
Experience with streaming architectures and real-time data processing (see the streaming sketch after this list).
Familiarity with NoSQL systems (MongoDB, DynamoDB, Cosmos DB).
Knowledge of CI/CD for data pipelines, version control, testing frameworks.
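On the streaming side, here is a minimal Structured Streaming sketch, again purely illustrative: it assumes a Kafka source (with the spark-sql-kafka package on the classpath) and a Delta sink, and the broker address, topic, checkpoint path, and table name are all placeholders.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Read the raw event stream from Kafka and parse the JSON payload.
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*"))

# Continuously append parsed events to a Delta table; the checkpoint lets
# the stream recover with exactly-once guarantees after a restart.
query = (events.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://example-checkpoints/clickstream/")
    .outputMode("append")
    .toTable("analytics.clickstream_events"))
query.awaitTermination()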
What We Offer
Long-term stability, competitive compensation, and a fast onboarding process.
An environment built for steady career development.
Development supported by dedicated mentors and a variety of programs focused on expertise and innovation.
A well-equipped, cozy office that supports comfort and productivity at every stage of a project.
Welcoming atmosphere and a friendly corporate culture.
If you feel this opportunity resonates with you, apply now — we’re looking forward to getting to know you!