public static final class QualityMetrics.Builder extends GeneratedMessageV3.Builder<QualityMetrics.Builder> implements QualityMetricsOrBuilder

Describes the metrics produced by the evaluation.

Protobuf type google.cloud.discoveryengine.v1beta.QualityMetrics
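A minimal usage sketch, assuming only the builder methods documented on this page plus the standard newBuilder()/getDefaultInstance() factories that protoc generates for every message; the fields of the nested TopkMetrics message are not documented here, so it is left empty purely for illustration.

```java
import com.google.cloud.discoveryengine.v1beta.QualityMetrics;

public class QualityMetricsBuilderSketch {
  public static void main(String[] args) {
    // TopkMetrics fields are not listed on this page, so the nested message
    // is left at its defaults; a real evaluation result would populate it.
    QualityMetrics.TopkMetrics recall = QualityMetrics.TopkMetrics.newBuilder().build();

    // Assemble a QualityMetrics message with the builder methods listed below.
    QualityMetrics metrics =
        QualityMetrics.newBuilder()
            .setDocRecall(recall)
            .setDocPrecision(QualityMetrics.TopkMetrics.getDefaultInstance())
            .build();

    // The has*/get* accessors on the built message mirror the ones documented on this builder.
    System.out.println(metrics.hasDocRecall());
  }
}
```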
Inheritance
Object > AbstractMessageLite.Builder<MessageType,BuilderType> > AbstractMessage.Builder<BuilderType> > GeneratedMessageV3.Builder > QualityMetrics.Builder

Implements
QualityMetricsOrBuilder

Static Methods
getDescriptor()
public static final Descriptors.Descriptor getDescriptor()

Returns
| Type | Description |
|---|---|
| Descriptor | |
Methods
addRepeatedField(Descriptors.FieldDescriptor field, Object value)
public QualityMetrics.Builder addRepeatedField(Descriptors.FieldDescriptor field, Object value)

Parameters
| Name | Description |
|---|---|
| field | FieldDescriptor |
| value | Object |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

build()
public QualityMetrics build()

Returns
| Type | Description |
|---|---|
| QualityMetrics | |

buildPartial()
public QualityMetrics buildPartial()

Returns
| Type | Description |
|---|---|
| QualityMetrics | |

clear()
public QualityMetrics.Builder clear()

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
clearDocNdcg()
public QualityMetrics.Builder clearDocNdcg()

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
clearDocPrecision()
public QualityMetrics.Builder clearDocPrecision()

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
clearDocRecall()
public QualityMetrics.Builder clearDocRecall()

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
clearField(Descriptors.FieldDescriptor field)
public QualityMetrics.Builder clearField(Descriptors.FieldDescriptor field)

Parameter
| Name | Description |
|---|---|
| field | FieldDescriptor |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

clearOneof(Descriptors.OneofDescriptor oneof)
public QualityMetrics.Builder clearOneof(Descriptors.OneofDescriptor oneof)

Parameter
| Name | Description |
|---|---|
| oneof | OneofDescriptor |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
clearPageNdcg()
public QualityMetrics.Builder clearPageNdcg()

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
clearPageRecall()
public QualityMetrics.Builder clearPageRecall()

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
clone()
public QualityMetrics.Builder clone()

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

getDefaultInstanceForType()
public QualityMetrics getDefaultInstanceForType()

Returns
| Type | Description |
|---|---|
| QualityMetrics | |

getDescriptorForType()
public Descriptors.Descriptor getDescriptorForType()

Returns
| Type | Description |
|---|---|
| Descriptor | |
getDocNdcg()
public QualityMetrics.TopkMetrics getDocNdcg()

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics | The docNdcg. |
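The NDCG@3 numbers in the example above can be reproduced with a few lines of plain Java; nothing below depends on the client library, and the class and variable names are illustrative only.

```java
public class NdcgSketch {
  public static void main(String[] args) {
    // Relevance of the retrieved ranking [D3, D1, D2] and the ideal ranking [D1, D2, D3].
    int[] retrieved = {0, 1, 1};
    int[] ideal = {1, 1, 0};

    double dcg = dcgAtK(retrieved);   // ~1.13
    double idealDcg = dcgAtK(ideal);  // ~1.63
    double ndcg = dcg / idealDcg;     // ~0.693

    System.out.printf("DCG@3=%.2f, ideal DCG@3=%.2f, NDCG@3=%.3f%n", dcg, idealDcg, ndcg);
  }

  // DCG@k = sum over positions p (1-based) of rel_p / log2(p + 1).
  static double dcgAtK(int[] relevance) {
    double dcg = 0.0;
    for (int i = 0; i < relevance.length; i++) {
      dcg += relevance[i] / (Math.log(i + 2) / Math.log(2));
    }
    return dcg;
  }
}
```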
getDocNdcgBuilder()
public QualityMetrics.TopkMetrics.Builder getDocNdcgBuilder()

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics.Builder | |

getDocNdcgOrBuilder()
public QualityMetrics.TopkMetricsOrBuilder getDocNdcgOrBuilder()

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetricsOrBuilder | |
getDocPrecision()
public QualityMetrics.TopkMetrics getDocPrecision()

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics | The docPrecision. |

getDocPrecisionBuilder()
public QualityMetrics.TopkMetrics.Builder getDocPrecisionBuilder()

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics.Builder | |

getDocPrecisionOrBuilder()
public QualityMetrics.TopkMetricsOrBuilder getDocPrecisionOrBuilder()

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetricsOrBuilder | |
getDocRecall()
public QualityMetrics.TopkMetrics getDocRecall()

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics | The docRecall. |

getDocRecallBuilder()
public QualityMetrics.TopkMetrics.Builder getDocRecallBuilder()

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics.Builder | |

getDocRecallOrBuilder()
public QualityMetrics.TopkMetricsOrBuilder getDocRecallOrBuilder()

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetricsOrBuilder | |
getPageNdcg()
public QualityMetrics.TopkMetrics getPageNdcg()

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics | The pageNdcg. |

getPageNdcgBuilder()
public QualityMetrics.TopkMetrics.Builder getPageNdcgBuilder()

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics.Builder | |

getPageNdcgOrBuilder()
public QualityMetrics.TopkMetricsOrBuilder getPageNdcgOrBuilder()

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetricsOrBuilder | |
getPageRecall()
public QualityMetrics.TopkMetrics getPageRecall()

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics | The pageRecall. |

getPageRecallBuilder()
public QualityMetrics.TopkMetrics.Builder getPageRecallBuilder()

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetrics.Builder | |

getPageRecallOrBuilder()
public QualityMetrics.TopkMetricsOrBuilder getPageRecallOrBuilder()

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Returns
| Type | Description |
|---|---|
| QualityMetrics.TopkMetricsOrBuilder | |
hasDocNdcg()
public boolean hasDocNdcg()

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Returns
| Type | Description |
|---|---|
| boolean | Whether the docNdcg field is set. |
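A brief sketch of the read-side pattern for these singular message fields: check the has*() accessor before calling get*(), since get*() returns the default TopkMetrics instance when the field is unset. The class and method names below are illustrative; in practice the QualityMetrics value would come from an evaluation result.

```java
import com.google.cloud.discoveryengine.v1beta.QualityMetrics;

public class ReadDocNdcgSketch {
  // Illustrative helper; 'metrics' is assumed to be a populated QualityMetrics message.
  static void printDocNdcg(QualityMetrics metrics) {
    if (metrics.hasDocNdcg()) {
      // The returned TopkMetrics holds the NDCG values at the evaluated top-k cutoffs.
      QualityMetrics.TopkMetrics docNdcg = metrics.getDocNdcg();
      System.out.println(docNdcg);
    } else {
      System.out.println("doc_ndcg is not set");
    }
  }
}
```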
hasDocPrecision()
public boolean hasDocPrecision()

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Returns
| Type | Description |
|---|---|
| boolean | Whether the docPrecision field is set. |

hasDocRecall()
public boolean hasDocRecall()

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Returns
| Type | Description |
|---|---|
| boolean | Whether the docRecall field is set. |
hasPageNdcg()
public boolean hasPageNdcg()

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Returns
| Type | Description |
|---|---|
| boolean | Whether the pageNdcg field is set. |

hasPageRecall()
public boolean hasPageRecall()

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Returns
| Type | Description |
|---|---|
| boolean | Whether the pageRecall field is set. |
internalGetFieldAccessorTable()
protected GeneratedMessageV3.FieldAccessorTable internalGetFieldAccessorTable()

Returns
| Type | Description |
|---|---|
| FieldAccessorTable | |

isInitialized()
public final boolean isInitialized()

Returns
| Type | Description |
|---|---|
| boolean | |
mergeDocNdcg(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder mergeDocNdcg(QualityMetrics.TopkMetrics value)

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
mergeDocPrecision(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder mergeDocPrecision(QualityMetrics.TopkMetrics value)

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

mergeDocRecall(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder mergeDocRecall(QualityMetrics.TopkMetrics value)

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
mergeFrom(QualityMetrics other)
public QualityMetrics.Builder mergeFrom(QualityMetrics other)

Parameter
| Name | Description |
|---|---|
| other | QualityMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)
public QualityMetrics.Builder mergeFrom(CodedInputStream input, ExtensionRegistryLite extensionRegistry)

Parameters
| Name | Description |
|---|---|
| input | CodedInputStream |
| extensionRegistry | ExtensionRegistryLite |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

Exceptions
| Type | Description |
|---|---|
| IOException | |
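A hedged sketch of using the mergeFrom(CodedInputStream, ExtensionRegistryLite) overload to populate a builder from serialized bytes; the byte[] source is assumed, and in most applications the generated QualityMetrics.parseFrom(...) convenience methods would be the simpler choice.

```java
import com.google.cloud.discoveryengine.v1beta.QualityMetrics;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.ExtensionRegistryLite;
import java.io.IOException;

public class MergeFromStreamSketch {
  // 'serialized' is assumed to hold a wire-format QualityMetrics message.
  static QualityMetrics parse(byte[] serialized) throws IOException {
    QualityMetrics.Builder builder = QualityMetrics.newBuilder();
    // Merge the stream into the builder; IOException matches the exception documented above.
    builder.mergeFrom(
        CodedInputStream.newInstance(serialized),
        ExtensionRegistryLite.getEmptyRegistry());
    return builder.build();
  }
}
```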
mergeFrom(Message other)
public QualityMetrics.Builder mergeFrom(Message other)

Parameter
| Name | Description |
|---|---|
| other | Message |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

mergePageNdcg(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder mergePageNdcg(QualityMetrics.TopkMetrics value)

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
mergePageRecall(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder mergePageRecall(QualityMetrics.TopkMetrics value)

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

mergeUnknownFields(UnknownFieldSet unknownFields)
public final QualityMetrics.Builder mergeUnknownFields(UnknownFieldSet unknownFields)

Parameter
| Name | Description |
|---|---|
| unknownFields | UnknownFieldSet |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
setDocNdcg(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder setDocNdcg(QualityMetrics.TopkMetrics value)

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

setDocNdcg(QualityMetrics.TopkMetrics.Builder builderForValue)
public QualityMetrics.Builder setDocNdcg(QualityMetrics.TopkMetrics.Builder builderForValue)

Normalized discounted cumulative gain (NDCG) per document, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved documents (D1, D2, D3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [D3 (0), D1 (1), D2 (1)] Ideal: [D1 (1), D2 (1), D3 (0)]
Calculate NDCG@3 for each SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_ndcg = 3;

Parameter
| Name | Description |
|---|---|
| builderForValue | QualityMetrics.TopkMetrics.Builder |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
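The two setter shapes above differ only in whether the caller passes an already-built TopkMetrics or a TopkMetrics.Builder; a brief sketch follows, with the nested builders left empty because TopkMetrics fields are not listed on this page and the class name is illustrative.

```java
import com.google.cloud.discoveryengine.v1beta.QualityMetrics;

public class SetterOverloadSketch {
  static QualityMetrics.Builder sketch() {
    // Overload 1: pass an already-built TopkMetrics message.
    QualityMetrics.TopkMetrics built = QualityMetrics.TopkMetrics.newBuilder().build();

    // Overload 2: pass a nested builder; the setter builds it when storing the field.
    QualityMetrics.TopkMetrics.Builder nested = QualityMetrics.TopkMetrics.newBuilder();

    return QualityMetrics.newBuilder()
        .setDocNdcg(built)     // setDocNdcg(QualityMetrics.TopkMetrics value)
        .setPageNdcg(nested);  // setPageNdcg(QualityMetrics.TopkMetrics.Builder builderForValue)
  }
}
```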
setDocPrecision(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder setDocPrecision(QualityMetrics.TopkMetrics value)

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

setDocPrecision(QualityMetrics.TopkMetrics.Builder builderForValue)
public QualityMetrics.Builder setDocPrecision(QualityMetrics.TopkMetrics.Builder builderForValue)

Precision per document, at various top-k cutoff levels.
Precision is the fraction of retrieved documents that are relevant.
Example (top-5):
- For a single SampleQuery, if 4 out of 5 retrieved documents in the top-5 are relevant, precision@5 = 4/5 = 0.8

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_precision = 2;

Parameter
| Name | Description |
|---|---|
| builderForValue | QualityMetrics.TopkMetrics.Builder |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

setDocRecall(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder setDocRecall(QualityMetrics.TopkMetrics value)

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

setDocRecall(QualityMetrics.TopkMetrics.Builder builderForValue)
public QualityMetrics.Builder setDocRecall(QualityMetrics.TopkMetrics.Builder builderForValue)

Recall per document, at various top-k cutoff levels.
Recall is the fraction of relevant documents retrieved out of all relevant documents.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant documents are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics doc_recall = 1;

Parameter
| Name | Description |
|---|---|
| builderForValue | QualityMetrics.TopkMetrics.Builder |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
setField(Descriptors.FieldDescriptor field, Object value)
public QualityMetrics.Builder setField(Descriptors.FieldDescriptor field, Object value)

Parameters
| Name | Description |
|---|---|
| field | FieldDescriptor |
| value | Object |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
setPageNdcg(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder setPageNdcg(QualityMetrics.TopkMetrics value)

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

setPageNdcg(QualityMetrics.TopkMetrics.Builder builderForValue)
public QualityMetrics.Builder setPageNdcg(QualityMetrics.TopkMetrics.Builder builderForValue)

Normalized discounted cumulative gain (NDCG) per page, at various top-k cutoff levels.
NDCG measures the ranking quality, giving higher relevance to top results.
Example (top-3): Suppose a SampleQuery with three retrieved pages (P1, P2, P3) and binary relevance judgements (1 for relevant, 0 for not relevant):
Retrieved: [P3 (0), P1 (1), P2 (1)] Ideal: [P1 (1), P2 (1), P3 (0)]
Calculate NDCG@3 for the SampleQuery:
- DCG@3: 0/log2(1+1) + 1/log2(2+1) + 1/log2(3+1) = 1.13
- Ideal DCG@3: 1/log2(1+1) + 1/log2(2+1) + 0/log2(3+1) = 1.63
- NDCG@3: 1.13/1.63 = 0.693

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_ndcg = 5;

Parameter
| Name | Description |
|---|---|
| builderForValue | QualityMetrics.TopkMetrics.Builder |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
setPageRecall(QualityMetrics.TopkMetrics value)
public QualityMetrics.Builder setPageRecall(QualityMetrics.TopkMetrics value)

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Parameter
| Name | Description |
|---|---|
| value | QualityMetrics.TopkMetrics |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

setPageRecall(QualityMetrics.TopkMetrics.Builder builderForValue)
public QualityMetrics.Builder setPageRecall(QualityMetrics.TopkMetrics.Builder builderForValue)

Recall per page, at various top-k cutoff levels.
Recall is the fraction of relevant pages retrieved out of all relevant pages.
Example (top-5):
- For a single SampleQuery, if 3 out of 5 relevant pages are retrieved in the top-5, recall@5 = 3/5 = 0.6

.google.cloud.discoveryengine.v1beta.QualityMetrics.TopkMetrics page_recall = 4;

Parameter
| Name | Description |
|---|---|
| builderForValue | QualityMetrics.TopkMetrics.Builder |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |
setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)
public QualityMetrics.Builder setRepeatedField(Descriptors.FieldDescriptor field, int index, Object value)

Parameters
| Name | Description |
|---|---|
| field | FieldDescriptor |
| index | int |
| value | Object |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |

setUnknownFields(UnknownFieldSet unknownFields)
public final QualityMetrics.Builder setUnknownFields(UnknownFieldSet unknownFields)

Parameter
| Name | Description |
|---|---|
| unknownFields | UnknownFieldSet |

Returns
| Type | Description |
|---|---|
| QualityMetrics.Builder | |