Job Location: New York, NY, USA
About Masterworks
Masterworks is a fintech platform that allows anyone to invest in SEC-qualified shares of multi-million dollar paintings by names like Banksy, Basquiat, and Picasso. In just three short years, we have built a portfolio of nearly $800 million in world-class artworks, introducing over 800,000 individuals to the $1.7 trillion art market.
Masterworks has been covered by major media publications such as The New York Times, CNBC, The Wall Street Journal, and the Financial Times, and was recently recognized as one of the Top 50 Startups in the US by LinkedIn.
In 2021, Masterworks achieved unicorn status, raising $110M in its Series A fundraising round at a valuation exceeding $1 billion.
Our 200+ employees are based out of our offices at 1 World Trade Center in the Financial District of New York City. With an entirely in-office team, there are endless opportunities for collaboration, innovation, and learning.
Why Masterworks?
If that sounds like you, we'd love to hear from you!
Position Overview
We're looking for a skilled and proactive Data Engineer with 2–5 years of experience to help build and maintain robust, high-performance data pipelines and infrastructure. This role is ideal for someone who thrives in a fast-paced environment, has deep technical knowledge of Redshift, and can diagnose and improve SQL query performance. You'll work closely with investment advisors, engineering, product, and analytics teams to ensure the reliability, efficiency, and scalability of our data systems.
Key Responsibilities
Design, build, and maintain scalable ETL pipelines using Luigi and Apache Airflow (see the orchestration sketch after this list)
Monitor and optimize performance of Redshift clusters (see the diagnostic sketch after this list), particularly:
Diagnosing high CPU usage
Identifying slow or resource-intensive queries
Refactoring SQL for performance improvements
Proactively build data quality alerts and notification systems to ensure pipeline health and catch missing/incomplete data early
Work closely with analysts and stakeholders to ensure the data is accurate, available, and accessible
Respond promptly to production issues (within 5 minutes during core working hours)
Lead or assist in potential migration projects (e.g., Redshift to Snowflake or other tools), including planning, testing, and execution
Collaborate on data modeling and schema design to support analytics and application needs
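To give a concrete feel for the orchestration and data-quality side of the role, here is a minimal sketch of the kind of pipeline involved, assuming Airflow 2.4+ with the TaskFlow API; the DAG, task bodies, and row-count check are illustrative placeholders rather than our production code:

from datetime import datetime, timedelta

from airflow.decorators import dag, task


@dag(
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
)
def example_etl_with_quality_alert():
    @task
    def extract_and_load() -> int:
        # Placeholder for the real extract/transform/load step; returns the
        # number of rows it wrote (0 here, so the gate below trips by design).
        return 0

    @task
    def check_row_count(rows_loaded: int) -> None:
        # Data quality gate: fail the run if the load produced no rows, so
        # missing or incomplete data is caught early.
        if rows_loaded == 0:
            raise ValueError("No rows loaded -- upstream data may be missing")

    check_row_count(extract_and_load())


example_etl_with_quality_alert()

In practice, a gate like this would notify the team through the alerting tools listed below rather than simply failing the task.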
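Likewise, Redshift performance work typically starts with the cluster's own system tables. Below is a rough diagnostic sketch, assuming psycopg2 and a placeholder connection string (everything here is illustrative, not a prescribed approach), that surfaces the slowest recent queries as candidates for refactoring:

import psycopg2

SLOW_QUERY_SQL = """
    SELECT query,
           TRIM(querytxt)                        AS sql_text,
           DATEDIFF(seconds, starttime, endtime) AS duration_s
    FROM stl_query
    WHERE starttime > DATEADD(hour, -24, GETDATE())
    ORDER BY duration_s DESC
    LIMIT 10;
"""


def report_slow_queries(dsn: str) -> None:
    # Print query id, runtime, and truncated SQL for the ten longest-running
    # queries of the last 24 hours -- a starting point for CPU and slow-query triage.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(SLOW_QUERY_SQL)
        for query_id, sql_text, duration_s in cur.fetchall():
            print(f"query {query_id}: {duration_s}s -> {sql_text[:80]}")


if __name__ == "__main__":
    report_slow_queries("host=example-cluster dbname=analytics user=etl")  # placeholder DSN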
Requirements
2–5 years of hands-on data engineering experience in a production environment
Strong experience with Amazon Redshift, including query optimization and system diagnostics
Proficiency with ETL orchestration tools such as Luigi and Apache Airflow
Expert-level SQL skills; ability to analyze and optimize long-running queries
Proven ability to troubleshoot high CPU or slow query issues on Redshift
Familiarity with data alerting and monitoring tools (e.g., CloudWatch, Datadog, custom alert systems)
Strong communication skills and a collaborative mindset
High responsiveness during working hours; ability to support production data pipelines and address urgent issues quickly
Experience with cloud platforms (AWS preferred)
Bonus: Experience leading or supporting data platform migrations
Preferred Qualifications
Experience with data warehousing concepts and modern data stacks
Familiarity with data pipeline logging, testing, and observability best practices
Experience with Snowflake or other modern data platforms
Interest in or experience with financial data or investment platforms
Additional Requirements:
Benefits: