Reference documentation and code samples for the Google Cloud AI Platform V1 Client class HarmCategory.
Harm categories that will block the content.
Protobuf type google.cloud.aiplatform.v1.HarmCategory
Namespace
Google \ Cloud \ AIPlatform \ V1
Methods
static::name

Returns the name of the enum constant for a given value.

| Parameter | Description |
|---|---|
| value | mixed |
static::value

Returns the value of the enum constant for a given name.

| Parameter | Description |
|---|---|
| name | mixed |
Constants
HARM_CATEGORY_UNSPECIFIED
Value: 0
The harm category is unspecified.
Generated from protobuf enum HARM_CATEGORY_UNSPECIFIED = 0;
HARM_CATEGORY_HATE_SPEECH
Value: 1
The harm category is hate speech.
Generated from protobuf enum HARM_CATEGORY_HATE_SPEECH = 1;
HARM_CATEGORY_DANGEROUS_CONTENT
Value: 2
The harm category is dangerous content.
Generated from protobuf enum HARM_CATEGORY_DANGEROUS_CONTENT = 2;
HARM_CATEGORY_HARASSMENT
Value: 3
The harm category is harassment.
Generated from protobuf enum HARM_CATEGORY_HARASSMENT = 3;
HARM_CATEGORY_SEXUALLY_EXPLICIT
Value: 4
The harm category is sexually explicit content.
Generated from protobuf enum HARM_CATEGORY_SEXUALLY_EXPLICIT = 4;
HARM_CATEGORY_CIVIC_INTEGRITY
Value: 5
Deprecated: the election filter is no longer supported.
The harm category is civic integrity.
Generated from protobuf enum HARM_CATEGORY_CIVIC_INTEGRITY = 5 [deprecated = true];
HARM_CATEGORY_JAILBREAK
Value: 6
The harm category is for jailbreak prompts.
Generated from protobuf enum HARM_CATEGORY_JAILBREAK = 6;
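The constants and static methods above can be combined to convert between enum names and integer values. The following is a minimal sketch, assuming the `google/cloud-aiplatform` package has been installed via Composer (the autoload path is illustrative):

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\HarmCategory;

// Constants hold the integer values defined in the protobuf enum.
$category = HarmCategory::HARM_CATEGORY_HATE_SPEECH; // 1

// static::name converts an integer value to its constant name.
echo HarmCategory::name($category);                  // HARM_CATEGORY_HATE_SPEECH

// static::value converts a constant name back to its integer value.
echo HarmCategory::value('HARM_CATEGORY_DANGEROUS_CONTENT'); // 2
```

Passing an unrecognized value to `name()` (or an unrecognized name to `value()`) throws an `UnexpectedValueException`, so wrap these calls in a try/catch when handling untrusted input.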