Automated Checks
Automated quality checks run during content generation to catch issues before human review. The general pattern is: generate → auto-check → accept or retry.
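That generate → auto-check → accept-or-retry pattern can be sketched as a small loop. This is an illustrative sketch only, not the real pipeline API; `generate` and `check` are placeholder callables:

```python
# Minimal sketch of the generate → auto-check → accept-or-retry pattern.
# `generate` and `check` are placeholders, not the real pipeline functions.

def generate_with_check(generate, check, max_attempts=2):
    """Run a generator, validate its output, and retry on failure."""
    result = None
    for attempt in range(max_attempts):
        result = generate(attempt)
        if check(result):
            return result  # passed the automated check
    return result  # all attempts failed: fall back to the last one
```

Each concrete check below (text editing, image scoring) is a specialization of this loop.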
Text: three-pass editing
Text-based sections (format = Text, TextWithImage, TextWithDiagram, etc.) go
through a post-generation editing step (flow_section_content_edit) that performs
three passes:
- Content analysis — checks alignment with the topic, section type, and learning objectives
- Fact-checking — validates claims, statistics, and references against the knowledge depository
- Instructional editing — applies language rules, HTML structure, and CEFR B1 readability level
The output is a JSON object with edited_content (corrected HTML), fact_checking_results,
and list_of_changes documenting what was modified and why.
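A consumer of this step only needs the three fields named above. As a hedged sketch, a caller might validate the returned JSON like this; the field names come from this doc, but the helper itself is hypothetical:

```python
import json

# Hypothetical validation of the JSON object returned by the
# flow_section_content_edit step. Field names are from the docs;
# the helper itself is illustrative, not the real pipeline code.

REQUIRED_FIELDS = ("edited_content", "fact_checking_results", "list_of_changes")

def parse_edit_result(raw: str) -> dict:
    """Parse the edit-step output and check the expected fields exist."""
    result = json.loads(raw)
    missing = [field for field in REQUIRED_FIELDS if field not in result]
    if missing:
        raise ValueError(f"edit result missing fields: {missing}")
    return result
```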
Non-text sections (video, assessments, interactive artifacts) skip this editing step.
See docs/section_content/section_content.md
for the full section content pipeline.
Images: quality scoring
Each generated image goes through a quality check that assigns it a numeric score:
- Score 7+ → accept the image
- Score < 7 → generate an improved prompt and retry
The check evaluates visual quality, brand compliance, and relevance to the section content. If the retry also fails, the best-scoring result is used.
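The accept/retry logic above can be sketched as follows. The threshold of 7 and the single retry come from this doc; `generate_image`, `score_image`, and `improve_prompt` are placeholder names, not the real pipeline functions:

```python
# Sketch of the image accept/retry logic: accept at score 7+, otherwise
# retry once with an improved prompt, and keep the best-scoring result
# if both attempts fall short. All callables here are placeholders.

ACCEPT_THRESHOLD = 7

def generate_acceptable_image(prompt, generate_image, score_image, improve_prompt):
    best_image, best_score = None, float("-inf")
    for _ in range(2):  # initial attempt + one retry
        image = generate_image(prompt)
        score = score_image(image)
        if score >= ACCEPT_THRESHOLD:
            return image  # accepted outright
        if score > best_score:
            best_image, best_score = image, score
        prompt = improve_prompt(prompt, score)  # retry with an improved prompt
    return best_image  # both attempts below threshold: use the best one
```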
See docs/image/images.md for the image pipeline details.
Related
- Human review and editing — next step after automated checks
- Section content — text editing runs as part of the section content pipeline
- Images — image quality scoring pipeline details
- prompts/section_content/flow_section_content_edit.yaml — prompt for the text editing step