public sealed class AutoMlImageSegmentationInputs : IMessage<AutoMlImageSegmentationInputs>, IEquatable<AutoMlImageSegmentationInputs>, IDeepCloneable<AutoMlImageSegmentationInputs>, IBufferMessage, IMessage
BaseModelId
The ID of the base model. If specified, the new model is trained on top of the base model; otherwise the new model is trained from scratch. The base model must be in the same Project and Location as the new Model being trained, and must have the same modelType.
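A minimal sketch of setting this property, assuming the Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition namespace for this class and using a placeholder model ID:

using Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition;

// Train the new segmentation model on top of an existing base model.
var inputs = new AutoMlImageSegmentationInputs
{
    // ID of a previously trained model in the same Project and Location,
    // with the same modelType as the new model (placeholder value).
    BaseModelId = "1234567890123456789",
};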
BudgetMilliNodeHours
The training budget for creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual metadata.costMilliNodeHours will be equal to or less than this value. If further model training ceases to provide any improvement, training stops without using the full budget and metadata.successfulStopReason will be model-converged.
Note: node_hour = actual_hour * number_of_nodes_involved, or equivalently
actual_wall_clock_hours = train_budget_milli_node_hours / (number_of_nodes_involved * 1000).
For modelType cloud-high-accuracy-1 (default), the budget must be between 20,000 and 2,000,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time (1,000 milli * 24 hours * 8 nodes).
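As a concrete check of the formula above, the default budget works out to one day of wall time on 8 nodes. The snippet below restates that arithmetic; the node count is taken from the parenthetical example, not from the API, and the namespace is assumed as above:

using Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition;

var inputs = new AutoMlImageSegmentationInputs
{
    BudgetMilliNodeHours = 192_000, // default budget for cloud-high-accuracy-1
};

// actual_wall_clock_hours = train_budget_milli_node_hours / (number_of_nodes_involved * 1000)
const int nodesInvolved = 8; // node count from the default-budget example above
double wallClockHours = inputs.BudgetMilliNodeHours / (nodesInvolved * 1000.0); // = 24 hours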
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-20 UTC."],[[["\u003cp\u003eThis webpage provides documentation for the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class within the Google Cloud AI Platform, detailing its usage in defining inputs for automated image segmentation training jobs.\u003c/p\u003e\n"],["\u003cp\u003eThe content lists various versions of the Google Cloud AI Platform .NET library, ranging from version 1.0.0 to the latest version 3.22.0, each linking to its respective documentation for the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class inherits from \u003ccode\u003eObject\u003c/code\u003e and implements multiple interfaces, including \u003ccode\u003eIMessage\u003c/code\u003e, \u003ccode\u003eIEquatable\u003c/code\u003e, \u003ccode\u003eIDeepCloneable\u003c/code\u003e, and \u003ccode\u003eIBufferMessage\u003c/code\u003e, providing functionalities for message handling, comparison, deep cloning, and buffer manipulation.\u003c/p\u003e\n"],["\u003cp\u003eThe class includes properties like \u003ccode\u003eBaseModelId\u003c/code\u003e, \u003ccode\u003eBudgetMilliNodeHours\u003c/code\u003e, and \u003ccode\u003eModelType\u003c/code\u003e, which are used to configure the training process, including specifying a base model for training, defining the training budget in milli node hours, and setting the model type.\u003c/p\u003e\n"],["\u003cp\u003eThe documentation details the constructors available for the \u003ccode\u003eAutoMlImageSegmentationInputs\u003c/code\u003e class, including a default constructor and one that copies from an existing instance of the class.\u003c/p\u003e\n"]]],[],null,[]]