Junior Data Engineer
Junior
Data Engineering
1+ year of Python-focused development experience, familiarity with data processing frameworks (PySpark/Polars/Pandas), solid SQL and relational DB knowledge, basic cloud understanding (AWS/GCP/Azure), Git experience, and B2+ English.
Role Summary
Start your journey in Data Engineering by joining a dynamic team that builds diverse data solutions for global clients. We are looking for a proactive engineer with a software engineering mindset who is eager to master different cloud platforms and modern data stacks. You will work alongside experienced mentors to turn raw data into valuable business assets using Python and distributed processing frameworks.
The Mission
Your mission is to make data reliable, accessible, and ready for impact. You will build robust pipelines and efficient data flows that solve diverse business challenges, whether for operational integration, machine learning, or decision-making, and help clients unlock the full potential of their data through clean code and scalable technical solutions.
The Tech Stack
Core Languages: Python or Scala (focus on clean scripting), SQL (solid basics).
Processing Frameworks: PySpark, Polars, Pandas (a short example follows this list).
Clouds (exposure to): AWS, GCP, or Azure (understanding of storage, compute, IAM).
Orchestration: Airflow, Dagster, or similar (basic concepts).
Modern Tools (You will learn): dbt, Docker, CI/CD pipelines, Databricks/Snowflake.
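To give a rough taste of the day-to-day work with this stack, here is a minimal, purely illustrative sketch of a typical transformation, written with Polars; the dataset and column names are invented for the example, and the group_by spelling assumes a recent Polars release.

# Illustrative only: aggregate revenue per country with Polars.
# The data and column names are invented for this sketch.
import polars as pl

orders = pl.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "amount": [120.0, 80.0, 200.0, 50.0, 75.0],
    "country": ["DE", "DE", "US", "US", "DE"],
})

# Sum order amounts per country and sort by total spend.
revenue = (
    orders
    .group_by("country")
    .agg(pl.col("amount").sum().alias("total_revenue"))
    .sort("total_revenue", descending=True)
)
print(revenue)  # DE: 275.0, US: 250.0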
Your Skills
Experience: 1+ years of proven experience in software development with a strong focus on Python (writing clean, modular code).
Data Processing: Hands-on experience or strong familiarity with modern data processing frameworks such as PySpark, Polars, or Pandas.
Database Knowledge: Solid understanding of SQL fundamentals (joins, aggregations, window functions) and familiarity with relational databases (PostgreSQL, MySQL, SQL Server); a brief illustration follows this list.
Cloud Awareness: Conceptual understanding of at least one major cloud platform (AWS, GCP, Azure).
Tooling: Experience with version control systems (Git) is a must.
Language: Upper-Intermediate (B2) English or higher.
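To make the SQL level described above concrete (purely as an illustration, not a test question), here is a small self-contained Python sketch that runs a join, an aggregation, and a window function against an in-memory SQLite database; the schema and data are made up for the example, and window functions require SQLite 3.25 or newer.

# Illustrative sketch of the SQL fundamentals listed above: a join, a
# GROUP BY aggregation, and a RANK() window function over the result.
# The schema and data are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 150.0);
""")

# Total spend per customer, ranked from highest to lowest.
rows = conn.execute("""
    SELECT c.name,
           SUM(o.amount) AS total_spend,
           RANK() OVER (ORDER BY SUM(o.amount) DESC) AS spend_rank
    FROM customers AS c
    JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY spend_rank
""").fetchall()
print(rows)  # [('Ada', 200.0, 1), ('Grace', 150.0, 2)]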
Your Responsibilities
Build Pipelines: Develop, test, and maintain data pipelines to extract, transform, and load (ETL) data from various sources (APIs, databases, files); a minimal sketch follows this list.
Code Quality: Write high-quality, maintainable Python code and participate in code reviews.
Collaborate & Learn: Participate in technical discussions to understand tool selection and architectural decisions.
Support: Assist the team in troubleshooting data issues and monitoring pipeline performance.
Continuous Improvement: Actively learn new tools (such as dbt, Snowflake, Terraform, or cloud services) and share findings with the team.
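As an illustration of the kind of pipeline described under Build Pipelines, here is a minimal ETL sketch in Python using Pandas and SQLite; the file name, column names, and target table are hypothetical and stand in for whatever a real project would use.

# A minimal, illustrative ETL sketch: extract from a CSV file, transform
# with Pandas, load into SQLite. All names here are hypothetical.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read raw events from a CSV file (could equally be an API or DB).
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop incomplete rows and normalise the timestamp column.
    df = df.dropna(subset=["user_id", "event_time"])
    df["event_time"] = pd.to_datetime(df["event_time"], utc=True)
    return df

def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    # Load: append the cleaned rows into a warehouse-style table.
    df.to_sql("events_clean", conn, if_exists="append", index=False)

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")
    load(transform(extract("events.csv")), conn)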
Nice to Have
Streaming Data: Familiarity with basic concepts of streaming (Kafka, Kinesis).
NoSQL: Basic exposure to non-relational databases (MongoDB, DynamoDB).
Advanced Formats: Knowledge of modern data formats such as Parquet and Avro.
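For a quick sense of the columnar formats mentioned above, Parquet round-trips easily from Pandas (assuming the pyarrow package is installed); the data below is invented for the example.

# Illustrative only: write and read a columnar Parquet file with Pandas.
# Requires pyarrow (or fastparquet) to be installed.
import pandas as pd

df = pd.DataFrame({"sensor": ["a", "b"], "reading": [0.91, 0.47]})
df.to_parquet("readings.parquet")           # columnar, compressed on disk
print(pd.read_parquet("readings.parquet"))  # round-trips with schema intact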
What we offer
Long-term stability, competitive compensation, and a fast onboarding process.
Conditions for steady career development.
Development supported by dedicated mentors and a variety of programs focused on expertise and innovation.
A well-equipped, cozy office that supports comfort and productivity at every project stage.
Welcoming atmosphere and a friendly corporate culture.
If you feel this opportunity resonates with you, apply now — we’re looking forward to getting to know you!