THE ROLE
As a Data Architect within the Core Data Services team, you will be responsible for the overall solution architecture of our Enterprise Data platform.
We are seeking an experienced professional with a track record of building large-scale systems at the enterprise level.
You will be responsible for designing, implementing, and optimizing end-to-end data solutions that empower our organization to harness the full potential of data.
You will collaborate closely with cross-functional teams, including data engineers, data scientists, and business stakeholders, to drive innovation and ensure the seamless integration of data and analytics within our ecosystem.
Within this role, you will find many opportunities to effect meaningful change, from technological enhancements and streamlined processes to the development and mentorship of talented individuals.
Your contributions will have a significant impact on our business and technology team, reaffirming our commitment to excellence.
This role is about driving transformation – defining how data is modeled, governed, and delivered in a global, high-volume environment.
If you’re excited to influence architecture decisions from day one, thrive in ambiguity, and want to leave a lasting mark on how data is done at scale, this role is for you.
WHAT THIS ROLE WILL DO:
Architecture & Strategy
Design and evolve our Databricks Lakehouse architecture with a focus on scalability, cost efficiency, and reliability in response to changing requirements and to address various data needs across the business.
Define and implement a tiered data ecosystem with maturity layers to transform data systematically, ensuring each layer serves a specific purpose, from raw data ingestion to refined, analytics-ready data.
Outline the vision and requirements for the Data Lakehouse, and lead its development by aligning technical architecture with business needs and long-term vision.
Continuously evaluate and introduce new patterns, tools, and practices to make our data platform more scalable, resilient, and reliable, then work across our team to put your ideas into action.
Modeling, Standards & Governance
Develop conceptual, logical, and physical data models that serve as the backbone of integrated, reusable, and scalable data, supporting both current and future data strategies.
Establish and enforce standards for metadata, lineage, and data quality across the ecosystem.
Define patterns and playbooks that guide engineering teams in building consistent, future-proof pipelines.
Enhance data discoverability by working closely with data analysts and business stakeholders to make data easily accessible and understandable.
Develop and enforce data engineering, security, and data quality standards through automation.
Develop and implement strategies for seamlessly integrating data from diverse sources across different systems and platforms, including real-time and batch processing.
Delivery & Enablement
Collaborate with engineers, analysts, and business teams to design solutions that are accessible, usable, and trustworthy.
Mentor and provide architectural guidance to engineering staff.
Lead workshops and training to raise data literacy and promote best practices.
WHAT THIS PERSON WILL BRING:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
10+ years of tech industry experience, including 5+ years architecting and implementing big data strategies
5+ years of expertise with cloud-based data platforms (Databricks, AWS, Snowflake, Spark)
Deep knowledge of data modeling using Erwin or similar tooling (3NF, dimensional/star schema), medallion architectures, ETL processes, data integration, and Data Warehousing concepts
Experience with Big Data technologies such as Databricks, Spark, Hadoop, and NoSQL databases
Experience in architecting data pipelines and solutions for streaming and batch integrations using tools/frameworks like dbt, Talend, Azure Data Factory, Spark, Spark Streaming, etc.
Experience with Confluent Kafka, real-time data processing, and API platforms.
Strong understanding of data governance, lineage, and compliance.
Experience optimizing for cost and performance at scale (e.g., caching, partitioning strategies)
Excellent communication and interpersonal skills for effective collaboration with technical and non-technical teams, and the ability to translate architecture into business value
BENEFITS & PERKS
Our motto is ‘Taking Care of Our Own’ through 6 pillars of benefits:
HEALTH: Medical, vision, dental and mental health benefits for you and your family, with access to a health care concierge, and Flexible or Health Savings Accounts (FSA or HSA)
YOURSELF: Free concert tickets, generous paid time off including paid holidays, sick time, and personal days
WEALTH: 401(k) program with company match, stock reimbursement program
FAMILY: New parent programs including caregiver leave and baby bonuses, plus fertility, adoption, foster, or surrogacy support
CAREER: Career and skill development programs with School of Live, tuition reimbursement, and student loan repayment
OTHERS: Volunteer time off, crowdfunding match