Data Engineer - Copperworks

SDI LaFarga COPPERWORKS

Job Location: New Haven, IN, USA

Posted on: 2025-08-17T07:28:17Z

Job Description:
Division: SDI LaFarga Copperworks

Overview

The Data Engineer at Copperworks will own and evolve our modern data stack, moving data from diverse sources into trusted, analytics-ready models that power decision-making across the business. You'll design and operate pipelines in Dagster, model with dbt, shape semantic models in Power BI/Fabric, and optimize and manage our Azure stack (Azure SQL Database, Azure Data Lake, Azure Data Factory, etc.). This role will eventually involve migrating from our hybrid infrastructure to a more cloud-centric setup leveraging more of the Fabric platform.

Responsibilities

Duties and responsibilities include, but are not limited to:
  • Design, build, and maintain ELT pipelines in Dagster/Python with robust scheduling, observability, and alerting (see the sketch after this list).
  • Develop modular, tested data models in dbt (sources, staging, marts), including incremental strategies and documentation.
  • Implement performant transformations using T-SQL and DuckDB (or Spark SQL equivalents) for analytics at scale.
  • Ingest and orchestrate data flows with Azure Data Factory and Azure Data Lake; manage datasets and cost/performance.
  • Build and maintain Power BI semantic models (star schemas, relationships, calculation groups/measures), optimizing for refresh and query performance.
  • Leverage Microsoft Fabric for end-to-end analytics workflows, governance, and distribution.
  • Manage integrations with external APIs/applications such as our Process AI platform and Salesforce CRM.
  • Administer and optimize Azure SQL/on-prem SQL Server objects (views, stored procedures, triggers, indexes), ensuring data quality and reliability.
  • Manage code in Linux/Bash (WSL Ubuntu) environments for our on-premise data server.
  • Partner with end users and business stakeholders to gather requirements, perform testing and QA, and ensure a smooth handoff of deliverables.
  • Monitor, troubleshoot, and continuously improve pipeline reliability, cost, and performance.
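
For orientation, here is a minimal sketch of the scheduled Dagster asset pattern referenced in the first responsibility above. The asset name, placeholder extract, and cron schedule are invented for illustration; this is not Copperworks' actual pipeline code.

```python
# Hypothetical example: a scheduled, observable Dagster asset.
from dagster import (
    AssetExecutionContext,
    Definitions,
    ScheduleDefinition,
    asset,
    define_asset_job,
)

@asset
def staged_orders(context: AssetExecutionContext) -> None:
    """Land raw order rows into staging (placeholder logic)."""
    rows = [{"order_id": 1, "amount": 125.0}]  # stand-in for a real extract
    context.log.info(f"Staged {len(rows)} rows")  # surfaces in Dagster's UI

daily_refresh = define_asset_job("daily_refresh", selection="staged_orders")

defs = Definitions(
    assets=[staged_orders],
    schedules=[
        # Run each morning at 06:00; failed runs are visible and alertable.
        ScheduleDefinition(job=daily_refresh, cron_schedule="0 6 * * *"),
    ],
)
```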
Qualifications

Required
  • 3-5 years of professional data engineering experience in a production environment.
  • Hands-on experience with orchestration tools (Dagster preferred; Airflow/Prefect acceptable).
  • Proficiency with a modeling framework like dbt or sqlmesh (tests, snapshots, macros).
  • Intermediate Python (data access, transformations, packaging/venv, type-safe code, unit tests).
  • SQL expertise (advanced T-SQL): window functions, performance tuning, query plans, indexing strategies.
  • Experience with Spark SQL or similar query engines; strong comfort with DuckDB (or willingness to ramp quickly); a short example follows this list.
  • Azure: Data Lake (ADLS Gen2) and Data Factory for ingestion/orchestration.
  • Working knowledge of Microsoft Fabric and Power BI semantic modeling (dimensional design, DAX measures).
  • Linux/Bash skills; ability to work in WSL Ubuntu.
  • API/application integrations experience (REST/JSON, OAuth2/keys, OData).
  • Version control with Git and collaborative workflows (PRs, code reviews).
  • Strong communication, documentation, and stakeholder partnership skills.
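
To make the SQL and DuckDB expectations above concrete, the following self-contained example (DuckDB's Python API, in-memory) computes a running total with a window function. The shipments table and its columns are made up for this sketch.

```python
# Illustrative only: a window function (running total) in DuckDB.
import duckdb

con = duckdb.connect()  # in-memory database
con.execute("CREATE TABLE shipments (customer TEXT, ship_date DATE, tons DOUBLE)")
con.execute(
    "INSERT INTO shipments VALUES "
    "('acme', DATE '2025-01-01', 10.0), ('acme', DATE '2025-01-02', 12.5)"
)

rows = con.execute(
    """
    SELECT customer, ship_date,
           SUM(tons) OVER (PARTITION BY customer ORDER BY ship_date) AS running_tons
    FROM shipments
    ORDER BY customer, ship_date
    """
).fetchall()
print(rows)  # running_tons: 10.0, then 22.5
```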
Preferred
  • Experience with Dynamics 365 Finance & Operations (D365 F&O) data models and integration patterns.
  • Data warehousing best practices (star schemas, SCDs, incremental strategies, CDC); a minimal incremental-load sketch follows this list.
  • Power BI performance tuning (aggregations, incremental refresh, understanding of different storage modes).
  • Azure access control (IAM) and application management in Azure.
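
As a sketch of the incremental-load strategies mentioned above, the snippet below applies a simple high-water-mark pattern in Python/DuckDB. The source_orders and fact_orders tables are placeholders; a production pipeline would also handle merges, deduplication, and late-arriving data.

```python
# Hypothetical high-water-mark incremental load.
import duckdb

con = duckdb.connect()  # in-memory for the sketch
con.execute("CREATE TABLE source_orders (order_id INT, updated_at TIMESTAMP)")
con.execute("CREATE TABLE fact_orders (order_id INT, updated_at TIMESTAMP)")
con.execute(
    "INSERT INTO source_orders VALUES "
    "(1, TIMESTAMP '2025-08-01 00:00:00'), (2, TIMESTAMP '2025-08-15 00:00:00')"
)

# 1. Find the current high-water mark in the target table.
(mark,) = con.execute(
    "SELECT COALESCE(MAX(updated_at), TIMESTAMP '1900-01-01') FROM fact_orders"
).fetchone()

# 2. Append only source rows newer than the mark.
con.execute(
    "INSERT INTO fact_orders SELECT * FROM source_orders WHERE updated_at > ?",
    [mark],
)
print(con.execute("SELECT count(*) FROM fact_orders").fetchone())  # (2,)
```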
Steel Dynamics, Inc., and all affiliated entities are equal opportunity employers.
Apply Now!
