Posted on: 27 January

Databricks Lead Data Engineer

Company

Merico

Merico is a San Francisco-based B2B software development company specializing in contribution-analysis tools for developers, built with Python and AI techniques.

Remote Hiring Policy:

Merico is a fully remote company that hires developers from various regions worldwide, fostering a collaborative environment across time zones.

Job Type

Full-time

Allowed Applicant Locations

United States

Salary

$200,000 to $260,000 per year

Apply Here

Job Description

Databricks Lead Data Engineer – K2 Insurance Services – Remote

Founded in 2011, K2 Insurance Services, LLC is a results-driven managing general agency offering specialty insurance programs through retail and wholesale channels. With over 40 active programs and 20,000+ distribution partners, K2 provides innovative, customized solutions across niche markets.

K2 Insurance Services, LLC seeks a full-time Databricks Lead Data Engineer to spearhead our data engineering initiatives, focusing on designing, building, and optimizing scalable data solutions using Databricks. As a lead, you will mentor a team of data engineers, collaborate with cross-functional stakeholders, and define best practices to unlock the full potential of our data.

K2 Insurance Services offers the opportunity to join an established company in growth mode. Our compensation program includes competitive pay; a bonus plan; medical, dental, and vision insurance; paid time off in your year of hire; and a 401(k) with employer match.

Salary Range: $200,000 to $260,000 USD per year

Key Responsibilities:

  • Leadership & Strategy:

      • Lead and mentor a team of data engineers in implementing Databricks-based solutions.

      • Define and drive the data engineering strategy, ensuring alignment with business goals.

      • Serve as IT Product Owner for the organizational data warehouse project.

  • Databricks Expertise:

      • Design and develop scalable data pipelines and ETL processes using Databricks.

      • Optimize and tune Spark jobs for performance and efficiency.

      • Develop and enforce best practices for Databricks cluster management and data security.

  • Data Architecture:

      • Build and maintain robust data models to support analytics and reporting needs.

      • Integrate Databricks with various data sources (cloud storage, databases, APIs).

      • Implement Delta Lake for reliable, scalable, and performant data lakes (see the sketch after this list).

  • Collaboration:

      • Work closely with data scientists, analysts, and stakeholders to deliver actionable insights.

      • Act as the technical liaison between the data engineering team and other stakeholders.

  • Innovation & Optimization:

      • Stay updated on the latest Databricks and Azure features to drive continuous improvements.

      • Automate repetitive processes and streamline data workflows.
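
Purely as an illustration of the kind of Databricks work described above, here is a minimal PySpark sketch of an ETL step that lands raw records in a Delta Lake table. It assumes a Databricks runtime (or a local Spark session with delta-spark configured); the storage path and table names are hypothetical.

    # Minimal sketch: ingest raw JSON from cloud storage, lightly clean it,
    # and append it to a Delta Lake table. Paths and names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

    # Read raw records from cloud storage (illustrative ADLS path).
    raw = spark.read.format("json").load(
        "abfss://raw@examplestorage.dfs.core.windows.net/policies/"
    )

    # Normalize column names and stamp each row with its load time.
    cleaned = raw.select(
        [F.col(c).alias(c.lower()) for c in raw.columns]
    ).withColumn("ingested_at", F.current_timestamp())

    # Append to a Delta Lake table; a production pipeline would add
    # MERGE for upserts, OPTIMIZE, and data-quality checks.
    cleaned.write.format("delta").mode("append").saveAsTable("bronze.policies")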

Qualifications:

  • Proven experience (5+ years) in data engineering, with 2+ years as a team lead or equivalent role.
  • Expertise in Databricks, including Spark, Delta Lake, and MLflow.
  • Proficiency in Python, SQL, and at least one cloud platform (Azure, AWS, or GCP).
  • Hands-on experience with data lakes, data warehouses, and big data technologies.
  • Strong understanding of CI/CD pipelines and infrastructure as code (IaC).
  • Excellent problem-solving and communication skills.
  • Prior insurance industry experience is a strong plus.

Preferred Skills:

  • Familiarity with Databricks Unity Catalog and Lakehouse architecture.
  • Experience with Airflow or other workflow orchestration tools.
  • Knowledge of machine learning pipelines and integrations.
  • Certifications in Databricks, Azure, AWS, or GCP.
  • Previous experience in an IT product owner role.

Apply Here