Staff Data Engineer
PrimaryBid
The team
PrimaryBid is on a mission to give everyone fair access to the Capital Markets. We’ve created the technology to make sure public markets are inclusive, transparent and fair, as they were always meant to be.
PrimaryBid’s Data team includes Data Engineering, Data Analytics, Data Science, Machine Learning and AI. We are a relatively new function within PrimaryBid, focused on delivering value from PrimaryBid’s unique and proprietary data assets. We have spent the last 12 months building a best-in-class data stack, and putting data at the heart of decision-making at PrimaryBid; we are looking for senior team members to join us as we broaden our data offerings and expand internationally!
The role
Working at PrimaryBid offers the perfect environment for innovating, driving impact, and taking true ownership of the direction of travel for our data platform.
Your role will be to architect, build, and maintain core data infrastructure to power PrimaryBid’s internal and external data products. Leveraging a modern, cloud-first data stack, underpinned by GCP, you will work hand in hand with Data Analysts, Data Scientists, ML/AI experts, and business stakeholders, to turn raw data sets into complex, resilient data assets.
Reporting to the Director of Data Engineering, you will be a technical lead for data within the business, and a close partner to our SRE, DevOps, Infra, and broader engineering teams.
You will be given the space to experiment, learn, and grow your skill set, working with new and best-in-class technologies, to keep you, and PrimaryBid, at the forefront of what’s possible in the world of data.
We are looking for individuals who are collaborative, impact-focused, and self-starting. We want you to be as excited as we are about making the public markets more accessible and fair for everyone.
Key responsibilities
- In partnership with the Director of Data Engineering and other Data Engineers, you will have direct ownership of PrimaryBid’s global data architecture and infrastructure.
- You will help scope and build innovative data solutions that align directly with the strategic priorities of the business.
- You will ensure that data products are resilient, have appropriate redundancy and support, and that pipeline failures are observable and resolved appropriately.
- You will actively contribute to the quarterly, monthly, and sprint-level planning of the data team’s overall deliverables, including in areas outside of Data Engineering.
Requirements
- The ability to build and maintain complex and scalable data pipelines:
- Expert SQL skills, including advanced use of window functions; a solid understanding of relational/dimensional modelling and ETL concepts.
- Deep experience with Python; the ability to construct robust, maintainable, reusable code.
- Advanced dbt skills: model definitions, snapshots, Python models, macros, checkpointing, and of course tests.
- Proficiency with Airflow (or similar) for orchestration, and development using operators, hooks, and APIs.
- Experience working with a cloud-first data platform.
- Experience with CI/CD pipelines.
- History of partnering closely with Infrastructure, DevOps, and broader engineering teams to complete cross-functional projects.
- A strong interest in shaping team and platform strategy, and in being a thought leader within the Data team and the broader business.
Desirables
- GCP experience: BigQuery, Cloud Composer, Pub/Sub, Cloud Storage, Cloud Run, etc.
- Experience with automated data ingestion vendors, preferably Fivetran.
- Experience with real-time data pipelines, leveraging technologies such as Pub/Sub, Apache Kafka, or similar.
- Knowledge or use of observability platforms (e.g. Monte Carlo, Sifflet).
- Hands-on experience with Docker and Kubernetes.
- Understanding of Looker and LookML; experience working with Data Analysts and Data Scientists as key stakeholders.
- Experience working with infrastructure-as-code, e.g. Terraform.
Required knowledge/qualifications/memberships and ongoing training requirement
- We encourage our team to stay fully up to date with the latest developments in Data Engineering, especially with the technologies that we use; this includes attending conferences and workshops in person and online throughout the year, and actively participating in meetings with our suppliers to understand their product roadmaps.
A day in the life of a Staff Data Engineer
While no two days are the same, you’ll likely spend ~50% of your time actively building and maintaining functionality in our data stack, ~20% collaborating with other engineers, data practitioners, and business stakeholders, ~20% reviewing, refining, and delegating, and ~10% learning, developing, and staying up to date with all things data and PrimaryBid!
Interview Process
- Meet & Greet (Virtual) - If your profile looks good, then we want to meet you! Let’s have an informal chat to get to know each other better. This chat is as much an opportunity for us to get to know you, as it is for you to understand more about the role and the plans for the business.
- Technical Assessment (Take home) - We invite you to complete our bespoke technical task. This will give you a chance to showcase your skills and present them back to our team.
- Team interview - Once you’ve been shortlisted, we will invite you for a technical interview with one or two senior members of the Data team. This is an opportunity to meet the people you’ll be working with and ask any technical questions you may have.
- Final Interview - If all goes well with the technical interview, we invite you to meet some members of the leadership team for a final chat.