Guardrails for
AI Tutors
Stop AI from giving away answers. Keep responses on-syllabus. Prove it works.
AI Tutors Are Powerful. But Unguarded.
Without proper guardrails, AI tutoring systems create more problems than they solve.
Answer Leakage
Students prompt-hack their way to direct answers, bypassing the learning process entirely.
Hallucinations
AI generates plausible-sounding but incorrect information that misleads students and undermines trust.
No Audit Trail
When regulators or administrators ask how AI is being used, you have no data to show them.
Define the Rules. We Enforce Them.
useRubric gives you fine-grained control over what your AI tutor can and cannot do.
Socratic Mode
Force AI to ask guiding questions instead of giving direct answers. Students learn by thinking, not copying.
On-Syllabus Only
Constrain responses to your approved curriculum. No off-topic tangents, no hallucinated content.
Compliance Reports
Automatically generate audit trails and analytics. Show administrators and regulators exactly how AI is being used.
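Conceptually, enforcement like this comes down to checking every candidate response against the active rules before it reaches the student. Here is a toy sketch of that idea in plain TypeScript; it is illustrative only, not the actual use-rubric implementation, and the names (`Rubric`, `checkResponse`) are invented for this example:

```typescript
// Toy guardrail check: block direct answers and off-syllabus content.
// Illustrative sketch only — not the real use-rubric internals.
interface Rubric {
  blockDirectAnswers: boolean;
  allowedTopics: string[];
}

interface Verdict {
  allowed: boolean;
  reason?: string;
}

function checkResponse(response: string, topic: string, rubric: Rubric): Verdict {
  // Off-syllabus: the topic isn't in the approved curriculum.
  if (!rubric.allowedTopics.includes(topic)) {
    return { allowed: false, reason: "off-syllabus" };
  }
  // Crude direct-answer heuristic: "the answer is ..." phrasing.
  if (rubric.blockDirectAnswers && /\bthe answer is\b/i.test(response)) {
    return { allowed: false, reason: "direct-answer" };
  }
  return { allowed: true };
}

const demo: Rubric = { blockDirectAnswers: true, allowedTopics: ["algebra"] };

checkResponse("The answer is x = 4.", "algebra", demo); // blocked: direct answer
checkResponse("What happens if you subtract 3 from both sides?", "algebra", demo); // allowed
```

A production system would use classifier models rather than regex heuristics, but the shape is the same: every response gets a verdict, and every verdict can be logged for the audit trail.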
Integrate in Minutes, Not Months.
Three steps to guardrailed AI tutoring.
Connect Your AI
Drop in our SDK with a few lines of code. Works with OpenAI, Anthropic, and any LLM provider.
Set Your Rubric
Define rules for your curriculum: what topics are allowed, what response styles to enforce, what to block.
Monitor & Prove
Every interaction is logged, scored, and auditable. Generate compliance reports in one click.
// Configure your AI tutor guardrails
import { createRubric } from "use-rubric";

const rubric = createRubric({
  // Ask guiding questions instead of giving answers
  mode: "socratic",
  // Constrain responses to the approved curriculum
  syllabus: "./curriculum/algebra-101.json",
  rules: {
    blockDirectAnswers: true,
    maxHintLevel: 3,
    requireCitations: true,
  },
  compliance: {
    logAll: true,
    reportSchedule: "weekly",
  },
});

export default rubric;

Be the First to Try useRubric
Join our early access program and help shape the future of safe AI tutoring.