Junior Data Engineer
at Drug Intelligence
About the Role
Build and maintain data pipelines and infrastructure
About Drug Intelligence
Mission-driven, data-focused small business
Full Description
We're a fast-growing, mission-driven small business that relies on data to inform decisions across every aspect of our operations. While our data stack is lean, we're at a pivotal stage: we are rebuilding it with modern tooling.
Important: to be considered for this role you must apply on LinkedIn and fill out a 2-step assessment. Link to assessment here: https://assessment.predictiveindex.com/bo/5PWK/Data_Engineer_2026
The assessment will take 20 minutes (for both parts). You will receive a link to complete part 2 after completing part 1 (you will also receive the results of your test).
The Role
We're looking for a generalist Junior Data Engineer to take ownership of our data workflows and infrastructure. This role is ideal for someone who enjoys building simple but effective systems, cleaning up messy processes, and helping teams make better decisions through better data. One of your key responsibilities will be to dig into our current state, understand how things are done today, and lead the way in making them more reliable, scalable, and maintainable.
Most of the team is non-technical, so strong communication skills, autonomy, and the ability to explain technical topics to non-technical stakeholders are essential.
We are looking for someone who is AI-forward: tell us how you use tools like Cursor and Claude Code to work more efficiently than you used to.
This is a hybrid role based in Toronto, with some required in-person collaboration each week. Face-to-face time helps us work better across functions and ensures alignment between business and data needs.
What You'll Do
- Support Data Pipelines
- Manage lightweight ETL/ELT processes that process pharmaceutical market research and treatment data.
- Standardize and Clean Data Workflows
- Help identify bottlenecks and inefficiencies in current data processes and work with the team to implement solutions.
- Contribute to Our Data Stack
- Help maintain and improve a simple cloud-based data infrastructure (e.g., Azure, AWS, or GCP) using SQL, Python, and lightweight tools.
- Enable Reporting & Analysis
- Help teams access clean, reliable data for client deliverables and strategic healthcare insights by connecting pipelines to dashboards and BI tools.
- Support Cross-Functional Teams
- Collaborate with teams across the business (ops, product, finance, marketing) to understand data needs and contribute to practical solutions.
- Prototype and Experiment
- Explore emerging tools, including AI-powered platforms, and prototype lightweight solutions to improve efficiency and unlock new capabilities.
- Learn and Apply Best Practices
- Help implement best practices around version control, documentation, data quality checks, and process standardization.
Who You Are
- A pragmatic builder – you like to ship simple, working solutions
- Curious and hands-on – you enjoy tinkering, prototyping, and exploring how new tools (including AI) can help you work smarter
- Solid foundation in SQL and Python, with eagerness to learn more
- You have experience using AI-native tooling for building software (e.g. Claude Code, Codex, Cursor)
- Exposure to cloud-based data platforms, especially Azure
- Comfortable with ambiguity and learning with team support; resourceful, and you know when to ask questions
- Process-oriented – you enjoy cleaning up messy workflows
- Able to communicate clearly with technical and non-technical teammates
- Excited to contribute proactively and grow into taking initiative in a small company environment
Nice-to-Haves
- Experience with BI tools (Power BI, Tableau, Looker, etc.)
- Exposure to ETL tools like Azure Data Factory, dbt, or similar
- Some knowledge of data governance or security best practices
- Any experience with version control (Git) or collaborative development
- Previous exposure to data pipeline concepts or workflows
What You'll Learn
- How to design scalable, production-grade data pipelines
- Data quality frameworks and testing strategies
- Cross-functional communication and requirements gathering
Why Join Us
- High ownership and autonomy
- Direct impact on company-wide decisions
- Flexibility and room to grow with the company
- A collaborative, low-ego team that values doing things well (and simply)
- Mentorship and support from experienced data engineers