Claire is a unified platform with an AI-assisted grading and feedback copilot. It helps you review student submissions faster, maintain grading consistency, and efficiently draft feedback grounded in your rubrics. Claire supports your professional judgment; it does not make final grading decisions.

Intended purpose

Claire is designed to help educators:
  • Work through student submissions more efficiently
  • Maintain consistency across assessments by grounding feedback in rubric criteria
  • Draft well-written feedback that reflects your notes and approved suggestions
The final grade, the published feedback, and every grading decision remain yours.

Capabilities

Claire can:
  • Analyze rubrics and assessment instructions to understand grading criteria
  • Assess student submissions against rubrics, highlighting strengths and areas for improvement
  • Summarize your notes and approved AI suggestions into well-written feedback
  • Align feedback with rubric dimensions to support grading consistency

Limitations

Be aware of the following constraints before relying on Claire’s output:
  • Suggestions may be incomplete, incorrect, or not context-aware
  • Performance varies by subject domain and rubric quality — complex or ambiguous rubrics produce less reliable suggestions
  • Claire does not access external systems beyond the content you provide and your configured integrations
Claire tracks suggestion approval rates as its primary accuracy metric — measuring the percentage of AI suggestions you approve versus ignore or reject. This helps Claire improve over time to better match your grading judgment.
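The approval-rate metric described above is simple to state precisely. As an illustration (the field names and statuses here are hypothetical, not Claire's actual data model), it amounts to:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    # Hypothetical status values: "approved", "rejected", or "ignored"
    status: str

def approval_rate(suggestions: list) -> float:
    """Percentage of AI suggestions approved vs. ignored or rejected."""
    if not suggestions:
        return 0.0
    approved = sum(1 for s in suggestions if s.status == "approved")
    return 100.0 * approved / len(suggestions)

batch = [Suggestion("approved"), Suggestion("approved"),
         Suggestion("ignored"), Suggestion("rejected")]
print(approval_rate(batch))  # 50.0
```

A batch where two of four suggestions were approved yields a 50% approval rate; ignored and rejected suggestions both count against the metric.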

Text discrepancies

When you upload a PDF file, Claire uses large language models (LLMs) to process student submissions — enabling support for a wide range of assessment types, formats, and languages. LLMs can occasionally cause minor text interferences where the model slightly alters, adds, or omits text from the original submission. To preserve submissions in their original form, Claire runs a separate, independent validation workflow that detects and flags these interferences automatically.
This is an early release feature. The quality of text discrepancy flags may vary depending on the submission’s structure, language, and format.
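Claire's validation workflow is internal, but the general idea of detecting interferences can be illustrated with a word-level diff between the original text and the LLM-processed text. This is a minimal sketch of the concept, not Claire's implementation:

```python
import difflib

def find_interferences(original: str, processed: str) -> list:
    """Return spans where the processed text deviates from the original.

    Illustrative only: compares word sequences and reports every
    non-matching region as a potential interference.
    """
    orig_words = original.split()
    proc_words = processed.split()
    matcher = difflib.SequenceMatcher(None, orig_words, proc_words)
    issues = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # "replace", "delete", or "insert"
            issues.append({
                "op": op,
                "original": " ".join(orig_words[i1:i2]),
                "processed": " ".join(proc_words[j1:j2]),
            })
    return issues

orig = "The mitochondria is the powerhouse of the cell"
proc = "The mitochondria is the powerhouse of a cell"
print(find_interferences(orig, proc))
# [{'op': 'replace', 'original': 'the', 'processed': 'a'}]
```

Here the model silently replaced "the" with "a"; a diff like this surfaces exactly where the processed text no longer matches what the student wrote.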

Types of text interferences

Not every interference affects your grading. Claire distinguishes between two categories:
  • Tolerable interferences do not change the meaning or semantics of the text (for example, removing an extra space in a subheading). These are minor formatting differences and are not flagged.
  • Unwanted interferences occur when the model adds, alters, or omits text in a way that no longer accurately represents the original submission (for example, duplicating a heading as text above an image). These are flagged as Discrepancies for your review.
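One way to picture the distinction: a difference that disappears after whitespace normalization is tolerable, while anything else is unwanted. This heuristic is a simplified sketch for illustration, not Claire's actual classification logic:

```python
import re

def classify_interference(original: str, processed: str) -> str:
    """Classify a text difference as 'tolerable' or 'unwanted'.

    Hypothetical heuristic: differences that vanish after collapsing
    whitespace do not change meaning; anything else might.
    """
    def normalize(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip()

    if normalize(original) == normalize(processed):
        return "tolerable"
    return "unwanted"

print(classify_interference("Section  1", "Section 1"))      # tolerable
print(classify_interference("Results", "Results\nResults"))  # unwanted
```

An extra space removed from a subheading normalizes away and is tolerable; a duplicated heading does not, so it would be flagged as a discrepancy.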

Reviewing discrepancies

1. Check for the Discrepancies badge

Look for the Discrepancies badge in the top navigation bar. It appears only when unwanted discrepancies are detected in the current submission. If you don’t see the badge, no interferences were flagged for that submission.
2. Open the Discrepancies panel

Click the badge to open the Discrepancies panel. The panel lists all flagged interferences with the affected text highlighted.
3. Review each flagged issue

Go through each discrepancy and compare the processed text with the original submission. Use this to confirm that your grading is based on what the student actually wrote.

Human oversight

You are responsible for reviewing all AI output before it reaches students. Follow these steps for every submission:
  1. Review all AI suggestions critically in both Rubric and Reader views
  2. Edit, accept, or reject each suggestion based on your professional judgment
  3. Before publishing feedback, verify that you have:
    • Completed a thorough review of the submission
    • Performed completeness checks across all rubric criteria
    • Critically reviewed score recommendations and their explanations
    • Confirmed that the feedback report is accurate and relevant to the student’s work
Data requirements: To generate suggestions, Claire requires the student instructions, the grading rubric, and the student submission.
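Before requesting suggestions, it can help to confirm all three inputs are present. A minimal sketch of such a check (the field names here are illustrative, not Claire's actual schema):

```python
# Hypothetical field names for the three required inputs
REQUIRED_INPUTS = ("student_instructions", "grading_rubric", "student_submission")

def missing_inputs(payload: dict) -> list:
    """Return names of required inputs absent or empty in a grading request."""
    return [field for field in REQUIRED_INPUTS if not payload.get(field)]

request = {
    "student_instructions": "Write a 500-word essay on ...",
    "grading_rubric": "Thesis (5 pts), Evidence (5 pts), ...",
}
print(missing_inputs(request))  # ['student_submission']
```

If any of the three inputs is missing, suggestions cannot be generated for that submission.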
Over-reliance on AI output — such as automatically accepting all feedback drafts without applying human nuance — is a known risk. If your rubric’s score criteria are not precisely defined, Claire may struggle to distinguish between grade boundaries, requiring careful review of grade alignment suggestions.

Safe use

Follow these guidelines to use Claire responsibly. Do:
  • Verify all facts and rubric applications before approving suggestions
  • Document overrides when you make material changes to AI suggestions
  • Stay alert to automation bias — actively evaluate each suggestion rather than approving in bulk
Don’t:
  • Publish feedback without a thorough human review
  • Upload sensitive data beyond what is contractually permitted
  • Rely solely on AI output without incorporating your own expertise and judgment

Contact

Privacy and rights

For privacy questions or to exercise your data rights: privacy@clairelabs.ai.

Security incidents

To report a security or system incident: security@clairelabs.ai.