Class Rouge (1.95.1)
Rouge(
    *,
    rouge_type: typing.Literal[
        "rouge1",
        "rouge2",
        "rouge3",
        "rouge4",
        "rouge5",
        "rouge6",
        "rouge7",
        "rouge8",
        "rouge9",
        "rougeL",
        "rougeLsum",
    ],
    use_stemmer: bool = False,
    split_summaries: bool = False
)
The ROUGE metric.
Calculates the recall of n-grams in the prediction as compared to the
reference and returns a score between 0 and 1. Supported rouge types are
rouge1 through rouge9, rougeL, and rougeLsum.
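To illustrate what ROUGE-N recall measures, here is a minimal, self-contained sketch (not the SDK's implementation, which applies additional preprocessing such as optional stemming): the fraction of the reference's n-grams that also appear in the prediction, with repeated n-grams counted up to their reference frequency.

```python
from collections import Counter


def rouge_n_recall(prediction: str, reference: str, n: int = 1) -> float:
    """Toy ROUGE-N recall: share of reference n-grams found in the prediction."""

    def ngrams(text: str, n: int) -> list[tuple[str, ...]]:
        tokens = text.lower().split()
        return [tuple(tokens[i : i + n]) for i in range(len(tokens) - n + 1)]

    ref_counts = Counter(ngrams(reference, n))
    pred_counts = Counter(ngrams(prediction, n))
    # Clipped overlap: each reference n-gram is matched at most as many
    # times as it occurs in the prediction.
    overlap = sum(min(count, pred_counts[gram]) for gram, count in ref_counts.items())
    total = sum(ref_counts.values())
    return overlap / total if total else 0.0


# The prediction covers 3 of the 6 reference unigrams ("the" x2, "cat",
# "sat", "on", "mat"), giving a recall of 0.5.
score = rouge_n_recall("the cat sat", "the cat sat on the mat", n=1)
```

The rougeL and rougeLsum variants instead score the longest common subsequence between prediction and reference, so they reward in-order word overlap without requiring contiguous n-gram matches.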
Methods
Rouge
Rouge(
    *,
    rouge_type: typing.Literal[
        "rouge1",
        "rouge2",
        "rouge3",
        "rouge4",
        "rouge5",
        "rouge6",
        "rouge7",
        "rouge8",
        "rouge9",
        "rougeL",
        "rougeLsum",
    ],
    use_stemmer: bool = False,
    split_summaries: bool = False
)
Initializes the ROUGE metric.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License , and code samples are licensed under the Apache 2.0 License . For details, see the Google Developers Site Policies . Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-28 UTC.