Google Cloud Vision AI V1 Client - Class VertexCustomConfig (0.1.0)

Reference documentation and code samples for the Google Cloud Vision AI V1 Client class VertexCustomConfig.

Message describing VertexCustomConfig.

Generated from protobuf message google.cloud.visionai.v1.VertexCustomConfig

Namespace

Google \ Cloud \ VisionAI \ V1

Methods

__construct

Constructor.

Parameters
Name Description
data array

Optional. Data for populating the Message object.

↳ max_prediction_fps int

The maximum prediction frames per second. This attribute controls how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no maximum prediction FPS limit; the operator sends prediction requests at the input FPS.

↳ dedicated_resources DedicatedResources

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

↳ post_processing_cloud_function string

If not empty, the prediction result will be sent to the specified cloud function for post processing.

  • The cloud function will receive AppPlatformCloudFunctionRequest where the annotations field will be the json format of proto PredictResponse.
  • The cloud function should return AppPlatformCloudFunctionResponse with PredictResponse stored in the annotations field.
  • To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.

↳ attach_application_metadata bool

If true, the prediction request received by the custom model will also contain metadata with the following schema:

'appPlatformMetadata': {
  'ingestionTime': DOUBLE; (UNIX timestamp)
  'application': STRING;
  'instanceId': STRING;
  'node': STRING;
  'processor': STRING;
}

↳ dynamic_config_input_topic string

Optional. By setting the dynamic_config_input_topic, the processor will subscribe to the given topic; only Pub/Sub topics are supported for now. Example channel: //pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic. The message schema should be:

message Message {
  // The ID of the stream that is associated with the application instance.
  string stream_id = 1;
  // The target FPS. By default, the custom processor will not send any data to the Vertex Prediction container. Note that once the dynamic_config_input_topic is set, max_prediction_fps will not take effect and is superseded by the fps set inside the topic.
  int32 fps = 2;
}
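As a hedged sketch, the data array described above maps directly onto the constructor. This fragment assumes the google/cloud-visionai package is installed; the topic path and all values are placeholders, not real resources:

```php
<?php
// Sketch: populating a VertexCustomConfig from a data array. Field names
// come from the parameter list above; the values are illustrative only.
use Google\Cloud\VisionAI\V1\VertexCustomConfig;

$config = new VertexCustomConfig([
    'max_prediction_fps' => 5,          // cap requests at 5 FPS (0 = no limit)
    'attach_application_metadata' => true,
    'dynamic_config_input_topic' =>
        '//pubsub.googleapis.com/projects/my-project/topics/my-topic',
]);
```

Fields omitted from the array keep their protobuf defaults (0, false, or empty string).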

getMaxPredictionFps

The maximum prediction frames per second. This attribute controls how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no maximum prediction FPS limit; the operator sends prediction requests at the input FPS.

Returns
Type Description
int

setMaxPredictionFps

The maximum prediction frames per second. This attribute controls how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no maximum prediction FPS limit; the operator sends prediction requests at the input FPS.

Parameter
Name Description
var int
Returns
Type Description
$this
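The FPS gating described above can be sketched as a small helper. The function name and logic are assumptions drawn from the field description (0 means no limit; otherwise the rate is capped), not part of the library:

```php
<?php
// Sketch of the max_prediction_fps semantics: 0 means no limit (requests
// go out at the input FPS); any other value caps the request rate.
// effectivePredictionFps is a hypothetical helper, not a library method.
function effectivePredictionFps(float $inputFps, int $maxPredictionFps): float
{
    if ($maxPredictionFps === 0) {
        return $inputFps; // no limit: send at the input FPS
    }
    return min($inputFps, (float) $maxPredictionFps);
}
```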

getDedicatedResources

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

Returns
Type Description
DedicatedResources|null

hasDedicatedResources

clearDedicatedResources

setDedicatedResources

A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.

Parameter
Name Description
var DedicatedResources
Returns
Type Description
$this

getPostProcessingCloudFunction

If not empty, the prediction result will be sent to the specified cloud function for post processing.

  • The cloud function will receive AppPlatformCloudFunctionRequest where the annotations field will be the json format of proto PredictResponse.
  • The cloud function should return AppPlatformCloudFunctionResponse with PredictResponse stored in the annotations field.
  • To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.
Returns
Type Description
string
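The request/response contract above can be illustrated with a minimal handler. The function name and the plain-array shapes are assumptions for illustration; the real messages are AppPlatformCloudFunctionRequest and AppPlatformCloudFunctionResponse protos:

```php
<?php
// Hypothetical post-processing handler, with the proto messages shown as
// plain arrays. Returning an empty payload drops the prediction output.
function postProcess(array $request): array
{
    // The annotations field carries the JSON form of PredictResponse.
    $annotations = $request['annotations'] ?? [];
    if (empty($annotations)) {
        return []; // cleared payload: the prediction output is dropped
    }
    // Pass the (possibly rewritten) PredictResponse back in annotations.
    return ['annotations' => $annotations];
}
```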

setPostProcessingCloudFunction

If not empty, the prediction result will be sent to the specified cloud function for post processing.

  • The cloud function will receive AppPlatformCloudFunctionRequest where the annotations field will be the json format of proto PredictResponse.
  • The cloud function should return AppPlatformCloudFunctionResponse with PredictResponse stored in the annotations field.
  • To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.
Parameter
Name Description
var string
Returns
Type Description
$this

getAttachApplicationMetadata

If true, the prediction request received by the custom model will also contain metadata with the following schema:

'appPlatformMetadata': {
  'ingestionTime': DOUBLE; (UNIX timestamp)
  'application': STRING;
  'instanceId': STRING;
  'node': STRING;
  'processor': STRING;
}

Returns
Type Description
bool

setAttachApplicationMetadata

If true, the prediction request received by the custom model will also contain metadata with the following schema:

'appPlatformMetadata': {
  'ingestionTime': DOUBLE; (UNIX timestamp)
  'application': STRING;
  'instanceId': STRING;
  'node': STRING;
  'processor': STRING;
}

Parameter
Name Description
var bool
Returns
Type Description
$this
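For reference, the attached metadata has the shape sketched below as a plain PHP array; every value is a placeholder, not real output:

```php
<?php
// Illustrative shape of the metadata attached when
// attach_application_metadata is true; all values are placeholders.
$appPlatformMetadata = [
    'ingestionTime' => 1700000000.0,   // UNIX timestamp (DOUBLE)
    'application'   => 'my-application',
    'instanceId'    => 'my-instance-id',
    'node'          => 'my-node',
    'processor'     => 'my-processor',
];
```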

getDynamicConfigInputTopic

Optional. By setting the dynamic_config_input_topic, the processor will subscribe to the given topic; only Pub/Sub topics are supported for now. Example channel: //pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic. The message schema should be:

message Message {
  // The ID of the stream that is associated with the application instance.
  string stream_id = 1;
  // The target FPS. By default, the custom processor will not send any data to the Vertex Prediction container. Note that once the dynamic_config_input_topic is set, max_prediction_fps will not take effect and is superseded by the fps set inside the topic.
  int32 fps = 2;
}

Returns
Type Description
string

hasDynamicConfigInputTopic

clearDynamicConfigInputTopic

setDynamicConfigInputTopic

Optional. By setting the dynamic_config_input_topic, the processor will subscribe to the given topic; only Pub/Sub topics are supported for now. Example channel: //pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic. The message schema should be:

message Message {
  // The ID of the stream that is associated with the application instance.
  string stream_id = 1;
  // The target FPS. By default, the custom processor will not send any data to the Vertex Prediction container. Note that once the dynamic_config_input_topic is set, max_prediction_fps will not take effect and is superseded by the fps set inside the topic.
  int32 fps = 2;
}

Parameter
Name Description
var string
Returns
Type Description
$this