public sealed class AutoMlImageSegmentationInputs : IMessage<AutoMlImageSegmentationInputs>, IEquatable<AutoMlImageSegmentationInputs>, IDeepCloneable<AutoMlImageSegmentationInputs>, IBufferMessage, IMessage
BaseModelId
The ID of the base model. If it is specified, the new model will be trained based on the base model. Otherwise, the new model will be trained from scratch. The base model must be in the same Project and Location as the new Model to train, and must have the same modelType.
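A minimal sketch of setting this field, assuming the schema classes live in the Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition namespace and the nested ModelType enum exposes a CloudHighAccuracy1 member; the model ID shown is only a placeholder:

```csharp
using Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition;

// Fine-tune from an existing model instead of training from scratch.
// "1234567890123456789" is a placeholder ID; the base model must be in the same
// Project and Location as the new Model and have the same modelType.
var inputs = new AutoMlImageSegmentationInputs
{
    BaseModelId = "1234567890123456789",
    ModelType = AutoMlImageSegmentationInputs.Types.ModelType.CloudHighAccuracy1,
};
```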
BudgetMilliNodeHours
The training budget for creating this model, expressed in milli node hours, i.e., a value of 1,000 in this field means one node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvements, training will stop without using the full budget and metadata.successfulStopReason will be model-converged.
Note that node_hours = actual_wall_clock_hours * number_of_nodes_involved, or equivalently actual_wall_clock_hours = train_budget_milli_node_hours / (number_of_nodes_involved * 1000).
For modelType cloud-high-accuracy-1 (the default), the budget must be between 20,000 and 2,000,000 milli node hours, inclusive. The default value is 192,000, which represents one day of wall-clock time (1,000 milli node hours * 24 hours * 8 nodes).
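As a worked example of the formula above (not taken from the reference itself), a 48-hour wall-clock run on an assumed 8-node pool corresponds to 8 * 48 * 1,000 = 384,000 milli node hours. A short sketch, continuing the example above:

```csharp
// Choose a budget from a desired wall-clock time, assuming an 8-node pool.
const long Nodes = 8;
const long DesiredWallClockHours = 48;

var segmentationInputs = new AutoMlImageSegmentationInputs
{
    // 8 nodes * 48 hours * 1,000 = 384,000 milli node hours,
    // within the allowed range [20,000, 2,000,000] for cloud-high-accuracy-1.
    BudgetMilliNodeHours = Nodes * DesiredWallClockHours * 1000,
};
```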
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-20 UTC."],[[["\u003cp\u003eThis webpage details the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class within the \u003ccode\u003eGoogle.Cloud.AIPlatform.V1\u003c/code\u003e namespace, which is used for configuring inputs for AutoML image segmentation training jobs.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class implements several interfaces, including \u003ccode\u003eIMessage\u003c/code\u003e, \u003ccode\u003eIEquatable\u003c/code\u003e, \u003ccode\u003eIDeepCloneable\u003c/code\u003e, and \u003ccode\u003eIBufferMessage\u003c/code\u003e, enabling functionalities like deep cloning and efficient message handling.\u003c/p\u003e\n"],["\u003cp\u003eThe page lists multiple versions of this class, ranging from version 1.0.0 to 3.22.0, with the latest version being 3.22.0, indicating a history of updates and improvements to the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eKey properties of the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class include \u003ccode\u003eBaseModelId\u003c/code\u003e for specifying a base model for training and \u003ccode\u003eBudgetMilliNodeHours\u003c/code\u003e to define the training budget, with default and suggested values provided for training.\u003c/p\u003e\n"],["\u003cp\u003eThe class also has a \u003ccode\u003eModelType\u003c/code\u003e property to identify the type of image segmentation model, and it includes both a default constructor and one that takes another \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e object for copying.\u003c/p\u003e\n"]]],[],null,[]]