Model evaluation metrics for text extraction problems.
Attributes

au_prc (float)
    Output only. The area under the precision-recall curve metric.

confidence_metrics_entries (Sequence[.text_extraction.TextExtractionEvaluationMetrics.ConfidenceMetricsEntry])
    Output only. Metrics measured at various confidence thresholds. The precision-recall curve can be derived from them.
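As a minimal sketch of how the two attributes relate, the following assumes each ConfidenceMetricsEntry exposes confidence_threshold, precision, and recall fields (as in the AutoML v1beta1 proto); the stub dataclass here stands in for the real message type so the example is self-contained. Sorting the per-threshold points by recall yields the precision-recall curve, and a trapezoidal sum over that curve approximates au_prc.

```python
from dataclasses import dataclass
from typing import List, Sequence, Tuple


@dataclass
class ConfidenceMetricsEntry:
    # Stand-in for TextExtractionEvaluationMetrics.ConfidenceMetricsEntry;
    # field names assume the AutoML v1beta1 proto layout.
    confidence_threshold: float
    precision: float
    recall: float


def precision_recall_curve(
    entries: Sequence[ConfidenceMetricsEntry],
) -> List[Tuple[float, float]]:
    """Derive (recall, precision) points from confidence-thresholded metrics.

    Each entry holds precision/recall measured at one confidence threshold;
    sorting by recall gives the points of the precision-recall curve.
    """
    return sorted((e.recall, e.precision) for e in entries)


def au_prc_estimate(curve: List[Tuple[float, float]]) -> float:
    """Trapezoidal approximation of the area under the PR curve.

    The service reports the exact value in au_prc; this only illustrates
    how au_prc relates to confidence_metrics_entries.
    """
    area = 0.0
    for (r0, p0), (r1, p1) in zip(curve, curve[1:]):
        area += (r1 - r0) * (p0 + p1) / 2.0
    return area


# Example with three hypothetical thresholds from an evaluation response.
entries = [
    ConfidenceMetricsEntry(0.9, precision=0.95, recall=0.40),
    ConfidenceMetricsEntry(0.5, precision=0.85, recall=0.70),
    ConfidenceMetricsEntry(0.1, precision=0.60, recall=0.90),
]
curve = precision_recall_curve(entries)
for recall, precision in curve:
    print(f"recall={recall:.2f}  precision={precision:.2f}")
print(f"approximate AUPRC: {au_prc_estimate(curve):.3f}")
```

Note that with only a handful of thresholds the trapezoidal estimate is coarse; the service-reported au_prc should be preferred for any comparison between models.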
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-07-02 UTC."],[],[]]