public sealed class AutoMlImageObjectDetectionInputs : IMessage<AutoMlImageObjectDetectionInputs>, IEquatable<AutoMlImageObjectDetectionInputs>, IDeepCloneable<AutoMlImageObjectDetectionInputs>, IBufferMessage, IMessage
BudgetMilliNodeHours: the training budget for creating this model, expressed in milli node
hours, i.e. a value of 1,000 in this field means 1 node hour. The actual
metadata.costMilliNodeHours will be equal to or less than this value.
If further model training ceases to provide any improvement, training
stops without using the full budget, and metadata.successfulStopReason
will be model-converged.
Note that node_hour = actual_hour * number_of_nodes_involved.
For modelType cloud (the default), the budget must be between 20,000
and 900,000 milli node hours, inclusive. The default value is 216,000,
which represents one day in wall time when 9 nodes are used.
For the model types mobile-tf-low-latency-1, mobile-tf-versatile-1, and
mobile-tf-high-accuracy-1, the training budget must be between 1,000 and
100,000 milli node hours, inclusive. The default value is 24,000, which
represents one day in wall time on the single node that is used.
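As a hedged illustration of the node-hour arithmetic above (the variable names are
illustrative only, not part of the API):

// Sketch of the budget arithmetic described above:
// node_hour = actual_hour * number_of_nodes_involved, and 1,000 milli node hours = 1 node hour.
int wallClockHours = 24;  // one day in wall time
int nodesInvolved = 9;    // the cloud model type trains on 9 nodes
long budgetMilliNodeHours = wallClockHours * nodesInvolved * 1000L;
// budgetMilliNodeHours == 216,000, the documented default for modelType cloud.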
DisableEarlyStopping: when true, the entire training budget is used and the early
stopping feature is disabled. When false, early stopping is enabled, which means that
AutoML Image Object Detection might stop training before the entire training
budget has been used.
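A minimal sketch of populating these properties follows, assuming the class generated in
the Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition namespace and its nested
Types.ModelType enum; verify the exact names against the library version you are using.

using Google.Cloud.AIPlatform.V1.Schema.TrainingJob.Definition;

// Configure a mobile model with the documented default mobile budget
// (24,000 milli node hours) and force the full budget to be spent by
// disabling early stopping.
var inputs = new AutoMlImageObjectDetectionInputs
{
    ModelType = AutoMlImageObjectDetectionInputs.Types.ModelType.MobileTfVersatile1,
    BudgetMilliNodeHours = 24_000,
    DisableEarlyStopping = true,
};

These inputs are typically serialized into a training pipeline's training task inputs;
see the TrainingPipeline documentation for the version you are targeting.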
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-20 UTC."],[[["\u003cp\u003eThis webpage provides documentation for the \u003ccode\u003eAutoMlImageObjectDetectionInputs\u003c/code\u003e class within the \u003ccode\u003eGoogle.Cloud.AIPlatform.V1\u003c/code\u003e namespace, specifically detailing its role in defining inputs for AutoML image object detection training jobs.\u003c/p\u003e\n"],["\u003cp\u003eThe content lists various versions of the \u003ccode\u003eAutoMlImageObjectDetectionInputs\u003c/code\u003e class, ranging from the latest version 3.22.0 down to version 1.0.0, including links to the documentation for each respective version, allowing users to consult the appropriate documentation for their needs.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eAutoMlImageObjectDetectionInputs\u003c/code\u003e class allows users to specify training parameters such as \u003ccode\u003eBudgetMilliNodeHours\u003c/code\u003e, which sets the training budget, and \u003ccode\u003eDisableEarlyStopping\u003c/code\u003e, which dictates whether or not to use the entire budget or terminate early, as well as \u003ccode\u003eModelType\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eThe class implements several interfaces, including \u003ccode\u003eIMessage\u003c/code\u003e, \u003ccode\u003eIEquatable\u003c/code\u003e, \u003ccode\u003eIDeepCloneable\u003c/code\u003e, and \u003ccode\u003eIBufferMessage\u003c/code\u003e, indicating its capabilities for serialization, comparison, cloning, and buffer operations.\u003c/p\u003e\n"],["\u003cp\u003eThe documentation lists the constructors for this class, as well as the inherited members of the \u003ccode\u003eobject\u003c/code\u003e class, and the properties for the \u003ccode\u003eAutoMlImageObjectDetectionInputs\u003c/code\u003e class.\u003c/p\u003e\n"]]],[],null,[]]