Package com.google.cloud.vision.v1p1beta1 (3.78.0)
| GitHub Repository | RPC Documentation | REST Documentation |
This package is not the recommended entry point for this client library.
For new applications, we recommend using com.google.cloud.vision.v1 instead.
Prerelease Implications
This package is a prerelease version! Use with caution.
Prerelease versions are considered unstable: they may be shut down or introduce breaking changes when you upgrade. Use them only for testing, or when you specifically need their experimental features.
Client Classes
Client classes are the main entry point to using a package. Each client exposes several Java method variations for each of the API's RPC methods.
| Client | Description |
|---|---|
| com. | Service Description: Service that performs Google Cloud Vision API detection tasks over client images, such as face, landmark, logo, label, and text detection. The ImageAnnotator service returns detected entities from the images. |
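As a sketch of typical usage (a hedged example, not verbatim from this reference: the bucket path `gs://my-bucket/photo.jpg` is a placeholder, and ambient Google Cloud credentials plus the google-cloud-vision dependency are assumed), a client created with the no-arg factory can run label detection over a Cloud Storage image:

```java
import com.google.cloud.vision.v1p1beta1.AnnotateImageRequest;
import com.google.cloud.vision.v1p1beta1.AnnotateImageResponse;
import com.google.cloud.vision.v1p1beta1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1p1beta1.EntityAnnotation;
import com.google.cloud.vision.v1p1beta1.Feature;
import com.google.cloud.vision.v1p1beta1.Image;
import com.google.cloud.vision.v1p1beta1.ImageAnnotatorClient;
import com.google.cloud.vision.v1p1beta1.ImageSource;
import java.util.Collections;

public class LabelDetectionSketch {
  public static void main(String[] args) throws Exception {
    // try-with-resources closes the client's underlying transport channel.
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      // gs://my-bucket/photo.jpg is a placeholder; point it at a real object.
      Image image = Image.newBuilder()
          .setSource(ImageSource.newBuilder().setGcsImageUri("gs://my-bucket/photo.jpg"))
          .build();
      // One request = one image + the features to detect on it.
      AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
          .addFeatures(Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION))
          .setImage(image)
          .build();
      // Requests are always sent as a batch, even for a single image.
      BatchAnnotateImagesResponse batch =
          client.batchAnnotateImages(Collections.singletonList(request));
      for (AnnotateImageResponse response : batch.getResponsesList()) {
        for (EntityAnnotation label : response.getLabelAnnotationsList()) {
          System.out.printf("%s (%.2f)%n", label.getDescription(), label.getScore());
        }
      }
    }
  }
}
```

Running this requires a project with the Vision API enabled and application default credentials; without them, `create()` or the RPC will fail.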
Settings Classes
Settings classes can be used to configure credentials, endpoints, and retry settings for a Client.
| Settings | Description |
|---|---|
| com. | Settings class to configure an instance of ImageAnnotatorClient. The default instance has everything set to sensible defaults: |
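A minimal sketch of overriding a default, assuming the standard generated-client settings pattern (the endpoint string shown is the service's public default and serves only as an illustration):

```java
import com.google.cloud.vision.v1p1beta1.ImageAnnotatorClient;
import com.google.cloud.vision.v1p1beta1.ImageAnnotatorSettings;

public class SettingsSketch {
  public static void main(String[] args) throws Exception {
    // Start from the defaults and override only what you need.
    ImageAnnotatorSettings settings =
        ImageAnnotatorSettings.newBuilder()
            .setEndpoint("vision.googleapis.com:443")
            .build();
    // Pass the settings to the client factory instead of using create().
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create(settings)) {
      // Use the client as usual; it now honors the configured endpoint.
    }
  }
}
```

The same builder also exposes credential and retry configuration through the standard `setCredentialsProvider` and per-RPC settings methods.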
Classes
| Class | Description |
|---|---|
| com. | Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features. |
| com. | Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features. |
| com. | Response to an image annotation request. |
| com. | Response to an image annotation request. |
| com. | Multiple image annotation requests are batched into a single service call. |
| com. | Multiple image annotation requests are batched into a single service call. |
| com. | Response to a batch image annotation request. |
| com. | Response to a batch image annotation request. |
| com. | Logical element on the page. |
| com. | Logical element on the page. |
| com. | A bounding polygon for the detected image annotation. |
| com. | A bounding polygon for the detected image annotation. |
| com. | Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. |
| com. | Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the image. |
| com. | Single crop hint that is used to generate a new crop when serving an image. |
| com. | Single crop hint that is used to generate a new crop when serving an image. |
| com. | Set of crop hints that are used to generate new crops when serving images. |
| com. | Set of crop hints that are used to generate new crops when serving images. |
| com. | Parameters for crop hints annotation request. |
| com. | Parameters for crop hints annotation request. |
| com. | Set of dominant colors and their corresponding scores. |
| com. | Set of dominant colors and their corresponding scores. |
| com. | Set of detected entity features. |
| com. | Set of detected entity features. |
| com. | A face annotation object contains the results of face detection. |
| com. | A face annotation object contains the results of face detection. |
| com. | A face-specific landmark (for example, a face feature). |
| com. | A face-specific landmark (for example, a face feature). |
| com. | Users describe the type of Google Cloud Vision API tasks to perform over images by using Features. Each Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API |
| com. | Users describe the type of Google Cloud Vision API tasks to perform over images by using Features. Each Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API |
| com. | |
| com. | Client image to perform Google Cloud Vision API tasks over. |
| com. | Client image to perform Google Cloud Vision API tasks over. |
| com. | Service that performs Google Cloud Vision API detection tasks over client images, such as face, landmark, logo, label, and text detection. The ImageAnnotator service returns detected entities from the images. |
| com. | Base class for the server implementation of the service ImageAnnotator. Service that performs Google Cloud Vision API detection tasks over client |
| com. | |
| com. | Builder for ImageAnnotatorSettings. |
| com. | Image context and/or feature-specific parameters. |
| com. | Image context and/or feature-specific parameters. |
| com. | Stores image properties, such as dominant colors. |
| com. | Stores image properties, such as dominant colors. |
| com. | External image source (Google Cloud Storage image location). |
| com. | External image source (Google Cloud Storage image location). |
| com. | Rectangle determined by min and max LatLng pairs. |
| com. | Rectangle determined by min and max LatLng pairs. |
| com. | Detected entity location information. |
| com. | Detected entity location information. |
| com. | Detected page from OCR. |
| com. | Detected page from OCR. |
| com. | Structural unit of text representing a number of words in certain order. |
| com. | Structural unit of text representing a number of words in certain order. |
| com. | A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image. |
| com. | A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and y coordinates. The position coordinates are in the same scale as the original image. |
| com. | A Property consists of a user-supplied name/value pair. |
| com. | A Property consists of a user-supplied name/value pair. |
| com. | Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence). |
| com. | Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence). |
| com. | A single symbol representation. |
| com. | A single symbol representation. |
| com. | TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol |
| com. | TextAnnotation contains a structured representation of OCR extracted text. The hierarchy of an OCR extracted text structure is like this: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol |
| com. | Detected start or end of a structural component. |
| com. | Detected start or end of a structural component. |
| com. | Detected language for a structural component. |
| com. | Detected language for a structural component. |
| com. | Additional information detected on the structural component. |
| com. | Additional information detected on the structural component. |
| com. | |
| com. | Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features. |
| com. | Parameters for text detections. This is used to control TEXT_DETECTION and DOCUMENT_TEXT_DETECTION features. |
| com. | A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image. |
| com. | A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image. |
| com. | Relevant information for the image from the Internet. |
| com. | Relevant information for the image from the Internet. |
| com. | Entity deduced from similar images on the Internet. |
| com. | Entity deduced from similar images on the Internet. |
| com. | Metadata for online images. |
| com. | Metadata for online images. |
| com. | Label to provide extra metadata for the web detection. |
| com. | Label to provide extra metadata for the web detection. |
| com. | Metadata for web pages. |
| com. | Metadata for web pages. |
| com. | Parameters for web detection request. |
| com. | Parameters for web detection request. |
| com. | |
| com. | A word representation. |
| com. | A word representation. |
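The OCR hierarchy described above (TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol) can be walked with the generated message getters. A hedged sketch, assuming `response` is an AnnotateImageResponse produced by a DOCUMENT_TEXT_DETECTION request:

```java
import com.google.cloud.vision.v1p1beta1.AnnotateImageResponse;
import com.google.cloud.vision.v1p1beta1.Block;
import com.google.cloud.vision.v1p1beta1.Page;
import com.google.cloud.vision.v1p1beta1.Paragraph;
import com.google.cloud.vision.v1p1beta1.Symbol;
import com.google.cloud.vision.v1p1beta1.TextAnnotation;
import com.google.cloud.vision.v1p1beta1.Word;

public class OcrWalkSketch {
  // Reassembles paragraph text by descending the OCR hierarchy.
  // `response` is assumed to come from a DOCUMENT_TEXT_DETECTION request.
  static void printParagraphs(AnnotateImageResponse response) {
    TextAnnotation text = response.getFullTextAnnotation();
    for (Page page : text.getPagesList()) {
      for (Block block : page.getBlocksList()) {
        for (Paragraph paragraph : block.getParagraphsList()) {
          StringBuilder sb = new StringBuilder();
          for (Word word : paragraph.getWordsList()) {
            // A Word's Symbols carry the individual characters.
            for (Symbol symbol : word.getSymbolsList()) {
              sb.append(symbol.getText());
            }
            sb.append(' ');
          }
          System.out.println(sb.toString().trim());
        }
      }
    }
  }
}
```

Each level also carries its own bounding box and detected-language metadata, so the same traversal works for layout-aware processing.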
Interfaces
Enums
| Enum | Description |
|---|---|
| com. | Type of a block (text, image etc) as identified by OCR. |
| com. | Face landmark (feature) type. Left and right are defined from the vantage of the viewer of the image without considering mirror projections typical of photos. So, LEFT_EYE, |
| com. | Type of image feature. |
| com. | A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades. |
| com. | Enum to denote the type of break found. New line, space etc. |