
Senior AI Engineer (Evals/Observability Concentration)

Risepoint
🇺🇸 Remote · $180K–$240K/yr · 1w ago

Summary

Senior AI Engineer role focused on AI evaluations and observability systems at Risepoint. Responsible for developing and maintaining critical infrastructure for monitoring and assessing AI model performance.

Key Responsibilities: Design and implement evaluation frameworks and observability tools for AI systems. Develop monitoring solutions, testing pipelines, and diagnostic systems to track AI model behavior and performance metrics (a rough sketch of this kind of telemetry follows this summary).
Skills & Tools: Expertise in AI/ML systems, observability platforms, and evaluation methodologies. Strong software engineering fundamentals with proficiency in Python, data analysis, and distributed systems.
Qualifications: 5+ years of software engineering experience with demonstrated expertise in AI/ML systems or observability infrastructure. Bachelor's degree in Computer Science, related field, or equivalent professional experience.
Location: US - Remote
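
To make the monitoring responsibility above concrete, here is a minimal, dependency-free sketch of per-call telemetry capture. It is illustrative only and not Risepoint's actual stack: the injected `llm` callable, its assumed `(text, token_count)` return shape, and the `prompt_version` field are all hypothetical.

```python
import time
import uuid
from dataclasses import dataclass


@dataclass
class CallRecord:
    """One logged LLM call: identifiers, cost/latency data, and an eval score."""
    trace_id: str
    prompt_version: str   # hypothetical version tag for the prompt template
    model: str
    latency_s: float
    total_tokens: int
    eval_score: float     # e.g., a rubric or judge score attached later


def timed_call(llm, prompt: str, prompt_version: str, model: str) -> tuple[str, CallRecord]:
    """Invoke an injected `llm` callable and capture basic telemetry."""
    start = time.perf_counter()
    output, total_tokens = llm(prompt)  # assumed to return (text, token_count)
    record = CallRecord(
        trace_id=str(uuid.uuid4()),
        prompt_version=prompt_version,
        model=model,
        latency_s=time.perf_counter() - start,
        total_tokens=total_tokens,
        eval_score=float("nan"),  # filled in once an eval runs
    )
    return output, record
```

In a real system a record like this would be shipped to whatever tracing backend is in use (the posting names Langfuse, LangSmith, and OpenTelemetry-based tracing as examples) rather than held in memory.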

Job Description

Risepoint is an education technology company that provides world-class support and trusted expertise to more than 100 universities and colleges. We primarily work with regional universities, helping them develop and grow their high-ROI, workforce-focused online degree programs in critical areas such as nursing, teaching, business, and public service. Risepoint is dedicated to increasing access to affordable education so that more students, especially working adults, can improve their careers and meet employer and community needs.

The Impact You Will Make

Risepoint is developing an AI-powered Student Journey Platform and is seeking a Senior AI Engineer with deep expertise in Retrieval-Augmented Generation (RAG), multi-agent architectures, and LLM evaluation frameworks. This role focuses on designing, implementing, and operationalizing AI systems with a strong emphasis on structured evaluation (including LLM-as-Judge), measurable quality, and production-grade reliability. The ideal candidate has experience integrating LLMs with enterprise data sources, building testable and observable AI workflows, and improving system performance through rigorous evaluation and iteration. This role contributes directly to a platform that is central to the organization's long-term strategy.

How You Will Bring Our Mission to Life

What You Will Do

- Build and maintain evaluation frameworks (LLM-as-Judge, rubric-based scoring, regression test suites) to measure output quality, reliability, and drift, including debugging production-level issues as they are detected (see the sketch after this list).
- Architect and implement multi-agent workflows with clear coordination, tool usage, and failure-handling patterns.
- Build structured observability into AI systems (tracing, prompt/version tracking, evaluation logging, cost and latency monitoring).
- Define and enforce quality gates for AI features using automated evals prior to production release.
- Optimize inference performance (latency, token usage, caching, batching, routing across models).
- Collaborate with product and engineering teams to translate business requirements into testable AI system designs.
- Contribute to code reviews, architectural discussions, and internal standards for AI development.
- Design and implement Retrieval-Augmented Generation (RAG) systems and Model Context Protocol (MCP) servers using structured and unstructured enterprise data.
- Develop and manage fine-tuning workflows (SFT, preference optimization, or related techniques), including dataset preparation, versioning, and validation.
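
As a rough illustration of the first responsibility, the sketch below shows an LLM-as-Judge rubric scorer paired with a simple regression gate. It is a minimal sketch, not the team's actual framework: the `judge` callable stands in for whatever model client is used, and the rubric, JSON response format, and 4.0 threshold are assumptions.

```python
import json
from typing import Callable

# Hypothetical rubric; real criteria would come from the team's eval spec.
RUBRIC = """Score the answer from 1 (poor) to 5 (excellent) on:
- groundedness: claims are supported by the provided sources
- helpfulness: the answer addresses the student's question
Return JSON: {"groundedness": int, "helpfulness": int}"""


def judge_score(judge: Callable[[str], str], question: str,
                answer: str, sources: str) -> dict:
    """Ask a judge LLM to grade one answer against the rubric."""
    prompt = (f"{RUBRIC}\n\nQuestion: {question}\n"
              f"Sources: {sources}\nAnswer: {answer}")
    return json.loads(judge(prompt))  # assumes the judge returns valid JSON


def regression_gate(scores: list[dict], min_avg: float = 4.0) -> bool:
    """Fail a release if average groundedness drops below the bar."""
    avg = sum(s["groundedness"] for s in scores) / len(scores)
    return avg >= min_avg
```

Running `regression_gate` over a fixed test set on every release is one way the "automated evals prior to production release" requirement below could be wired into CI/CD.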
What Success Looks Like

- RAG pipelines return grounded, source-attributed responses with minimal hallucination.
- Evals are automated, reproducible, and integrated into CI/CD or release workflows.
- Multi-agent workflows are observable, testable, and maintainable as complexity increases.

How Impact Will Be Measured

- AI systems demonstrate measurable improvements in quality using defined evaluation benchmarks.
- Fine-tuned models and/or programmatic solutions show validated performance gains over baseline foundation models.
- AI systems meet defined SLAs for latency, reliability, and cost.

What You'll Bring to the Team

Experience That Matters Most

- 3–5 years of full-stack engineering experience with strong fundamentals in object-oriented programming, applicable design patterns, and AI-focused system design.
- Professional experience in Python, C#, Java, or a similar language used in production systems.
- Experience with LLM evaluation and observability tooling (e.g., Langfuse, LangSmith, OpenTelemetry-based tracing, custom evaluation harnesses).
- Experience implementing guardrails, policy enforcement, and safety layers in AI-driven systems, leveraging LLM-as-Judge for validation and continuous improvement.

Experience That's Great to Have

- Familiarity with performance optimization techniques for LLM-based systems (latency, caching, routing, batching).
- Experience building production-grade RAG systems (retrieval pipelines, chunking strategies, embeddings, reranking, context construction).
- Experience contributing to internal AI standards, reusable frameworks, or platform-level tooling.
- Experience deploying AI systems in cloud environments (AWS, Azure, GCP).
- Experience with Databricks (model serving endpoints, MLflow).

Risepoint is an equal-opportunity employer and supports a diverse and inclusive workforce.

Reliable. Empowered. Adaptable. Customer-centric. Heart. These are some of the words that describe Risepoint employees. We have spent nearly 20 years helping universities grow by expanding access to affordable, life-changing education for working adults. As an education technology company that provides trusted partnership and expertise to more than 125 universities and colleges, we primarily work with regional universities, helping them create online programs in critical areas such as nursing, teaching, business, and public service. We are dedicated to increasing access to affordable education so that more students can improve their careers and their communities.