public sealed class ExplanationParameters : IMessage<ExplanationParameters>, IEquatable<ExplanationParameters>, IDeepCloneable<ExplanationParameters>, IBufferMessage, IMessage
Reference documentation and code samples for the Vertex AI v1beta1 API class ExplanationParameters.
Parameters to configure explaining the Model's predictions.
public IntegratedGradientsAttribution IntegratedGradientsAttribution { get; set; }
An attribution method that computes Aumann-Shapley values taking
advantage of the model's fully differentiable structure. Refer to this
paper for more details: https://arxiv.org/abs/1703.01365
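As a sketch of how this method is selected (assuming the Google.Cloud.AIPlatform.V1Beta1 package; the step count shown is an illustrative value, not a recommendation):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// Select Integrated Gradients as the attribution method.
// StepCount controls the path-integral approximation: more steps give a
// tighter approximation at a higher compute cost.
var parameters = new ExplanationParameters
{
    IntegratedGradientsAttribution = new IntegratedGradientsAttribution
    {
        StepCount = 50 // illustrative value
    }
};
```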
public ListValue OutputIndices { get; set; }
If populated, only returns attributions that have
[output_index][google.cloud.aiplatform.v1beta1.Attribution.output_index]
contained in output_indices. It must be an ndarray of integers with the
same shape as the output it's explaining.
If not populated, returns attributions for
[top_k][google.cloud.aiplatform.v1beta1.ExplanationParameters.top_k]
indices of outputs. If neither top_k nor output_indices is populated,
returns the argmax index of the outputs.
Only applicable to Models that predict multiple outputs (e.g., multi-class
Models that predict multiple classes).
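For instance, to restrict attributions to specific output indices, the indices are supplied as a protobuf ListValue (a sketch assuming the Google.Cloud.AIPlatform.V1Beta1 package; the indices and path count are illustrative):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;
using Google.Protobuf.WellKnownTypes;

// Request attributions only for outputs 0 and 2 (e.g. two classes of
// interest in a multi-class model). OutputIndices is a protobuf
// ListValue, so each index is wrapped in a Value.
var parameters = new ExplanationParameters
{
    SampledShapleyAttribution = new SampledShapleyAttribution { PathCount = 10 },
    OutputIndices = new ListValue
    {
        Values = { Value.ForNumber(0), Value.ForNumber(2) }
    }
};
```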
public SampledShapleyAttribution SampledShapleyAttribution { get; set; }
An attribution method that approximates Shapley values for features that
contribute to the label being predicted. A sampling strategy is used to
approximate the value rather than considering all subsets of features.
Refer to this paper for more details: https://arxiv.org/abs/1306.4265.
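A minimal sketch of configuring this method (assuming the Google.Cloud.AIPlatform.V1Beta1 package; the path count is an illustrative value):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// Select Sampled Shapley as the attribution method. PathCount is the
// number of feature permutations sampled; more paths reduce the
// approximation error at a higher compute cost.
var parameters = new ExplanationParameters
{
    SampledShapleyAttribution = new SampledShapleyAttribution
    {
        PathCount = 10 // illustrative value
    }
};
```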
public int TopK { get; set; }
If populated, returns attributions for the top K indices of outputs
(defaults to 1). Only applies to Models that predict more than one output
(e.g., multi-class Models). When set to -1, returns explanations for all
outputs.
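For example, to explain the three highest-scoring outputs rather than only the argmax (a sketch assuming the Google.Cloud.AIPlatform.V1Beta1 package; the values shown are illustrative):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// Return attributions for the 3 highest-scoring outputs instead of only
// the argmax. Setting TopK = -1 would return explanations for all outputs.
var parameters = new ExplanationParameters
{
    SampledShapleyAttribution = new SampledShapleyAttribution { PathCount = 10 },
    TopK = 3
};
```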
public XraiAttribution XraiAttribution { get; set; }
An attribution method that redistributes Integrated Gradients
attribution to segmented regions, taking advantage of the model's fully
differentiable structure. Refer to this paper for
more details: https://arxiv.org/abs/1906.02825
XRAI currently performs better on natural images, like a picture of a
house or an animal. If the images are taken in artificial environments,
like a lab or manufacturing line, or from diagnostic equipment, like
x-rays or quality-control cameras, use Integrated Gradients instead.
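The three attribution methods are members of a protobuf oneof, so assigning one clears any previously assigned method, and MethodCase reports which is active. A sketch (assuming the Google.Cloud.AIPlatform.V1Beta1 package; the step count is an illustrative value):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// The attribution methods form a oneof: assigning one clears the others.
var parameters = new ExplanationParameters
{
    XraiAttribution = new XraiAttribution { StepCount = 50 } // illustrative value
};
// MethodCase now reports XraiAttribution. Assigning
// IntegratedGradientsAttribution here would switch the case and clear
// the XRAI configuration.
```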
Version: 1.0.0-beta47 (latest)

Inheritance: object > ExplanationParameters

Implements: IMessage<ExplanationParameters>, IEquatable<ExplanationParameters>, IDeepCloneable<ExplanationParameters>, IBufferMessage, IMessage

Inherited members: object.GetHashCode(), object.GetType(), object.ToString()

Namespace: Google.Cloud.AIPlatform.V1Beta1

Assembly: Google.Cloud.AIPlatform.V1Beta1.dll

Constructors:

public ExplanationParameters()

public ExplanationParameters(ExplanationParameters other)

Additional properties:

public Examples Examples { get; set; }

Example-based explanations that return the nearest neighbors from the provided dataset.

public ExplanationParameters.MethodOneofCase MethodCase { get; }

Last updated 2025-08-14 UTC.