Intended purpose
Claire is designed to help educators:
- Work through student submissions more efficiently
- Maintain consistency across assessments by grounding feedback in rubric criteria
- Draft well-written feedback that reflects your notes and approved suggestions
Capabilities
Claire can:
- Analyze rubrics and assessment instructions to understand grading criteria
- Assess student submissions against rubrics, highlighting strengths and areas for improvement
- Summarize your notes and approved AI suggestions into well-written feedback
- Align feedback with rubric dimensions to support grading consistency
Limitations
Be aware of the following constraints before relying on Claire’s output:
- Suggestions may be incomplete, incorrect, or not context-aware
- Performance varies by subject domain and rubric quality; complex or ambiguous rubrics produce less reliable suggestions
- Claire does not access external systems beyond the content you provide and your configured integrations
Claire tracks suggestion approval rates as its primary accuracy metric: the percentage of AI suggestions you approve versus ignore or reject. This helps Claire improve over time to better match your grading judgment.
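As a rough illustration, the approval-rate metric described above amounts to a simple percentage. The sketch below is hypothetical (the function and `status` field names are not Claire's actual API):

```python
def approval_rate(suggestions):
    """Percentage of AI suggestions the educator approved.

    Each suggestion is a dict with a hypothetical 'status' field:
    'approved', 'rejected', or 'ignored'.
    """
    if not suggestions:
        return 0.0
    approved = sum(1 for s in suggestions if s["status"] == "approved")
    return 100.0 * approved / len(suggestions)

# Example: 3 of 4 suggestions approved -> 75.0
rate = approval_rate([
    {"status": "approved"},
    {"status": "approved"},
    {"status": "ignored"},
    {"status": "approved"},
])
```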
Text discrepancies
When you upload a PDF file, Claire uses large language models (LLMs) to process student submissions, enabling support for a wide range of assessment types, formats, and languages. LLMs can occasionally cause minor text interferences, where the model slightly alters, adds, or omits text from the original submission. To preserve submissions in their original form, Claire runs a separate, independent validation workflow that detects and flags these interferences automatically.

This is an early release feature. The quality of text discrepancy flags may vary depending on the submission’s structure, language, and format.
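Conceptually, detecting interferences resembles a whitespace-insensitive diff between the original text and the model-processed text. The following is an illustrative sketch only, not Claire's actual validation workflow:

```python
import difflib

def find_discrepancies(original: str, processed: str):
    """Return changes where the processed text diverges from the
    original, ignoring whitespace-only (tolerable) differences.

    Illustrative only; Claire's real validation workflow is a
    separate, independent system.
    """
    # Tokenize on whitespace so tolerable interferences
    # (e.g. an extra space) are not flagged.
    orig_words = original.split()
    proc_words = processed.split()
    matcher = difflib.SequenceMatcher(a=orig_words, b=proc_words)
    flagged = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            flagged.append({
                "type": op,  # 'replace', 'delete', or 'insert'
                "original": " ".join(orig_words[i1:i2]),
                "processed": " ".join(proc_words[j1:j2]),
            })
    return flagged
```

For example, an extra space produces no flags, while an altered word is returned for review.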
Types of text interferences
Not every interference affects your grading. Claire distinguishes between two categories:
- Tolerable interferences do not change the meaning or semantics of the text, for example removing an extra space in a subheading. These are minor formatting differences and are not flagged.
- Unwanted interferences occur when the model adds, alters, or omits text in a way that no longer accurately represents the original submission, for example duplicating a heading as text above an image. These are flagged as Discrepancies for your review.
Reviewing discrepancies
Check for the Discrepancies badge
Look for the Discrepancies badge in the top navigation bar. It appears only when unwanted discrepancies are detected in the current submission.
Open the Discrepancies panel
Click the badge to open the Discrepancies panel. The panel lists all flagged interferences with the affected text highlighted.
Human oversight
You are responsible for reviewing all AI output before it reaches students. Follow these steps for every submission:
- Review all AI suggestions critically in both Rubric and Reader views
- Edit, accept, or reject each suggestion based on your professional judgment
- Before publishing feedback, verify that you have:
  - Completed a thorough review of the submission
  - Performed completeness checks across all rubric criteria
  - Critically reviewed score recommendations and their explanations
  - Confirmed that the feedback report is accurate and relevant to the student’s work
Safe use
Follow these guidelines to use Claire responsibly.

Do:
- Verify all facts and rubric applications before approving suggestions
- Document overrides when you make material changes to AI suggestions
- Stay alert to automation bias; actively evaluate each suggestion rather than approving in bulk

Don't:
- Publish feedback without a thorough human review
- Upload sensitive data beyond what is contractually permitted
- Rely solely on AI output without incorporating your own expertise and judgment
Contact
Privacy and rights
For privacy questions or to exercise your data rights: privacy@clairelabs.ai.
Security incidents
To report a security or system incident: security@clairelabs.ai.

