About the role
We are looking for a versatile senior engineer who blends software engineering, data engineering, and modern AI/ML development skills. You’ll build and operate the critical systems underpinning our AI/ML/data infrastructure, focusing on practical, scalable solutions.
Your core tasks involve building data infrastructure and integrating and deploying AI models, especially via APIs and LLMs. This requires hands-on work with cloud platforms, containerization, and infrastructure automation tools, as well as close collaboration with the Backend and DevSecOps teams.
Your impact will be direct: you’ll enable new AI-powered features and work across technical boundaries to bring data, software, and AI together using modern engineering practices.
This is a highly visible position with broad impact across the organization, and you’ll be expected to deliver results with minimal supervision.
Responsibilities
- Integrating and operationalizing AI/ML models (esp. via APIs and LLMs)
- Building & maintaining robust data pipelines (ETL/ELT)
- Developing and deploying scalable services & APIs, incl. containerization
- Managing cloud infrastructure using Infrastructure-as-Code (IaC)
- Managing diverse data storage solutions (warehouses, databases)
- Implementing reporting (dashboards as-code in Holistics), system monitoring, logging, and alerting
- Automating build, test, and deployment workflows (CI/CD)
- Troubleshooting issues across data, application, and infrastructure layers
- Collaborating closely across all engineering and product teams
What you’ll bring
We’re looking for someone with:
- Experience deploying or integrating AI/ML models/services (e.g., using APIs, understanding MLOps fundamentals)
- Strong programming skills (preferably Python), with demonstrated experience developing and deploying scalable backend services and APIs
- Proven experience building, deploying, and maintaining data pipelines (ETL/ELT)
- Hands-on experience with at least one major cloud provider (AWS, GCP, Azure)
- Experience with containerization (Docker) and orchestration (Kubernetes)
- Familiarity with Infrastructure as Code principles and tools (e.g., Terraform)
- Solid understanding of various database technologies (SQL, NoSQL)
- Experience implementing and using CI/CD practices and tools
- Strong stakeholder management and communication skills, with the ability to coordinate cross-functional requests and explain complex ideas in plain English
- A proactive, collaborative approach, taking initiative and delivering results with minimal supervision
- Eagerness to learn new technologies and adapt to changing requirements in a fast-moving AI environment
We actively seek diversity and encourage applications from everyone. If you’re interested in this role but your experience doesn’t perfectly match the description, please still apply. Studies show that women and underrepresented groups can be hesitant to apply unless they meet every requirement. If this role excites you and you’re confident you can do the job, we encourage you to apply!
Bonus skills
While these skills aren’t required, we’d be especially interested to hear from you if you have:
- Familiarity with the APIs of major AI service providers (e.g., OpenAI, Anthropic, Google Vertex AI, AWS SageMaker endpoints, Hugging Face Inference Endpoints)
- Understanding of concepts related to deploying and managing Large Language Model (LLM) based applications (e.g., prompt engineering basics, context management)
- Experience with or strong understanding of vector databases and embedding generation/retrieval (e.g., Pinecone, Weaviate, Chroma, FAISS)
- Familiarity with basic MLOps concepts surrounding model deployment, versioning, and monitoring (even without building the models)
- Experience deploying and integrating AI/ML services from the major cloud platforms (AWS, GCP, or Azure)