public sealed class AutoMlImageSegmentationInputs : IMessage<AutoMlImageSegmentationInputs>, IEquatable<AutoMlImageSegmentationInputs>, IDeepCloneable<AutoMlImageSegmentationInputs>, IBufferMessage, IMessage
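Because this is a generated protobuf message, it picks up deep cloning and value equality from the interfaces listed above. A minimal sketch of that surface follows; the namespace is the one this reference belongs to in the Google.Cloud.AIPlatform.V1 package, and is stated here as an assumption.

using Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition;

// Create the inputs message, take an independent deep copy, and compare by value.
var inputs = new AutoMlImageSegmentationInputs();
AutoMlImageSegmentationInputs copy = inputs.Clone();   // IDeepCloneable<T>
bool equal = inputs.Equals(copy);                      // IEquatable<T>, true here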
The ID of the base model. If it is specified, the new model will be
trained based on the base model. Otherwise, the new model will be
trained from scratch. The base model must be in the same
Project and Location as the new Model to train, and have the same
modelType.
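For illustration only, a sketch of continuing training from an existing base model; the property name BaseModelId mirrors the field described above, and the ID value is a placeholder rather than a real model.

// Train the new model on top of an existing model that lives in the same
// Project and Location and has the same modelType. The ID is a placeholder.
var inputs = new AutoMlImageSegmentationInputs
{
    BaseModelId = "YOUR_BASE_MODEL_ID"
};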
The training budget for creating this model, expressed in milli node
hours, i.e. a value of 1,000 in this field means 1 node hour. The actual
metadata.costMilliNodeHours will be equal to or less than this value.
If further model training ceases to provide any improvement, training
stops without using the full budget, and metadata.successfulStopReason
is set to model-converged.
Note that node_hour = actual_hour * number_of_nodes_involved, or equivalently
actual_wall_clock_hours = train_budget_milli_node_hours /
(number_of_nodes_involved * 1000).
For modelType cloud-high-accuracy-1 (the default), the budget must be between
20,000 and 2,000,000 milli node hours, inclusive. The default value is
192,000, which represents one day in wall time
(1,000 milli node hours * 24 hours * 8 nodes).
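As a sanity check on the formula above, a milli-node-hour budget can be converted back to wall-clock hours. The sketch below assumes the 8-node default configuration mentioned above and that the property is named BudgetMilliNodeHours, mirroring the schema field.

// actual_wall_clock_hours = budget_milli_node_hours / (number_of_nodes * 1000)
long budgetMilliNodeHours = 192_000; // default budget for cloud-high-accuracy-1
int nodesInvolved = 8;               // nodes used by the default model type
double wallClockHours = budgetMilliNodeHours / (nodesInvolved * 1000.0); // 24.0

var inputs = new AutoMlImageSegmentationInputs
{
    BudgetMilliNodeHours = budgetMilliNodeHours
};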
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-20 UTC."],[[["\u003cp\u003eThis webpage provides documentation for the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class within the Google Cloud AI Platform, detailing its use in defining inputs for AutoML image segmentation training jobs.\u003c/p\u003e\n"],["\u003cp\u003eThe content lists various versions of the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class, ranging from version 1.0.0 to the latest version, 3.22.0, allowing users to navigate and select specific versions of the documentation.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class implements interfaces such as \u003ccode\u003eIMessage\u003c/code\u003e, \u003ccode\u003eIEquatable\u003c/code\u003e, \u003ccode\u003eIDeepCloneable\u003c/code\u003e, and \u003ccode\u003eIBufferMessage\u003c/code\u003e, enabling specific functionalities like message handling, comparison, deep cloning, and buffer management.\u003c/p\u003e\n"],["\u003cp\u003eThe class has two constructors: a default one and another that takes an \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e instance for copying.\u003c/p\u003e\n"],["\u003cp\u003eKey properties of the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class include \u003ccode\u003eBaseModelId\u003c/code\u003e (for training a model based on an existing one), \u003ccode\u003eBudgetMilliNodeHours\u003c/code\u003e (specifying the training budget), and \u003ccode\u003eModelType\u003c/code\u003e (defining the type of model to be trained), each with specific implications for model training and resource allocation.\u003c/p\u003e\n"]]],[],null,[]]