Data Specialist

October 14, 2024

  • Full Time
  • Gothenburg
  • Considering only candidates based in Sweden 🇸🇪 or elsewhere in Europe 🇪🇺 who are willing to relocate.

As a Data Specialist with extensive experience in Databricks, you will take on key responsibilities in the design, development, and maintenance of our data environments. Your technical expertise will directly contribute to building scalable, robust data pipelines and to improving our data platform’s performance, availability, and scalability. You will be instrumental in optimizing data workflows, ensuring data quality, and integrating data from different sources.

Responsibilities

  • Design, develop, and maintain scalable data pipelines using Databricks to support data ingestion, transformation, and loading (ETL/ELT) processes.
  • Collaborate with and mentor other product teams and engineers to build and maintain the data platform that integrates data from multiple sources.
  • Optimize data processing workflows and ensure they adhere to architectural principles and meet performance and security requirements.
  • Implement and enforce data quality checks, monitoring, and alerting systems to ensure the integrity and reliability of the data.
  • Leverage Databricks features such as Delta Lake and Databricks Workflows to enhance data pipeline performance and reliability.
  • Work with cloud infrastructure teams to ensure the platform’s performance, availability, and scalability within the cloud environment.

Required Skills

  • Experience as a Data Engineer or in a similar role, with hands-on experience in Databricks.
  • Knowledge of infrastructure as code (IaC), CI/CD pipelines, version control (Git), and DevOps practices in data engineering.
  • Proficiency in Databricks and familiarity with key features like Delta Lake, Databricks Jobs, and Databricks SQL.
  • Excellent Python skills for data processing and pipeline development (experience with SQL is beneficial).
  • Strong understanding of distributed data processing technologies such as Apache Spark.
  • Experience working with big data technologies and tools, including Spark, Kafka, Hadoop, or similar.
  • Experience of the medallion architecture in Databricks.
  • Experience working with test automation and data quality.

Desired Skills

  • Experience of Azure infrastructure (especially ADF, Blob Storage & Data Lake).
  • Experience of agile practices and of working in a product organization.
  • Understanding and experience of data mesh concepts.
  • Experience of using Terraform and/or Bicep.
  • Experience of working with streaming ETL.
  • Understanding of or experience with concepts like SLOs, GitHub, and control planes.