Reference documentation and code samples for the Cloud Dataproc V1 API class Google::Cloud::Dataproc::V1::OrderedJob.
A job executed by the workflow.
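For orientation, a minimal sketch of building an OrderedJob in Ruby (the step id, jar URI, and arguments are illustrative placeholders, not values from this reference):

    require "google/cloud/dataproc/v1"

    # An OrderedJob pairs a unique step_id with exactly one job-type field
    # (hadoop_job, spark_job, pyspark_job, and so on).
    job = Google::Cloud::Dataproc::V1::OrderedJob.new(
      step_id: "teragen",
      hadoop_job: Google::Cloud::Dataproc::V1::HadoopJob.new(
        main_jar_file_uri: "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
        args: ["teragen", "1000", "hdfs:///gen/"]
      )
    )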
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#flink_job
def flink_job() -> ::Google::Cloud::Dataproc::V1::FlinkJob
Returns
- (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.
Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#flink_job=
def flink_job=(value) -> ::Google::Cloud::Dataproc::V1::FlinkJob
Parameter
- value (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.
Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::FlinkJob) — Optional. Job is a Flink job.
Note: The following fields are mutually exclusive: flink_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
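The clearing behavior described above is observable on a plain message; a small sketch (the driver class names are placeholders):

    require "google/cloud/dataproc/v1"

    job = Google::Cloud::Dataproc::V1::OrderedJob.new(step_id: "step-1")
    job.hadoop_job = Google::Cloud::Dataproc::V1::HadoopJob.new(main_class: "org.example.HadoopMain")
    job.flink_job  = Google::Cloud::Dataproc::V1::FlinkJob.new(main_class: "org.example.FlinkMain")

    # Assigning flink_job cleared hadoop_job, since both belong to the
    # mutually exclusive set listed above.
    job.hadoop_job # => nil
    job.flink_job  # => the FlinkJob assigned above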
#hadoop_job
def hadoop_job() -> ::Google::Cloud::Dataproc::V1::HadoopJob
Returns
- (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.
Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#hadoop_job=
def hadoop_job=(value) -> ::Google::Cloud::Dataproc::V1::HadoopJob
Parameter
- value (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.
Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::HadoopJob) — Optional. Job is a Hadoop job.
Note: The following fields are mutually exclusive: hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#hive_job
def hive_job() -> ::Google::Cloud::Dataproc::V1::HiveJob
Returns
- (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.
Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#hive_job=
def hive_job=(value) -> ::Google::Cloud::Dataproc::V1::HiveJob
Parameter
- value (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.
Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::HiveJob) — Optional. Job is a Hive job.
Note: The following fields are mutually exclusive: hive_job, hadoop_job, spark_job, pyspark_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#labels
def labels() -> ::Google::Protobuf::Map{::String => ::String}
Returns
- (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job.
Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}
Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
No more than 32 labels can be associated with a given job.
#labels=
def labels=(value) -> ::Google::Protobuf::Map{::String => ::String}
Parameter
- value (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job.
Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}
Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
No more than 32 labels can be associated with a given job.
Returns
- (::Google::Protobuf::Map{::String => ::String}) — Optional. The labels to associate with this job.
Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}][\p{Ll}\p{Lo}\p{N}_-]{0,62}
Label values must be between 1 and 63 characters long, and must conform to the following regular expression: [\p{Ll}\p{Lo}\p{N}_-]{0,63}
No more than 32 labels can be associated with a given job.
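A sketch of attaching labels that satisfy the constraints above (the key and value strings are illustrative):

    require "google/cloud/dataproc/v1"

    job = Google::Cloud::Dataproc::V1::OrderedJob.new(step_id: "etl-step")

    # Keys must begin with a lowercase letter; keys and values may contain
    # only lowercase letters, digits, underscores, and hyphens.
    job.labels["env"]  = "prod"
    job.labels["team"] = "data-platform"

    job.labels.length # => 2 (at most 32 labels per job)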
#pig_job
def pig_job() -> ::Google::Cloud::Dataproc::V1::PigJob
Returns
- (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.
Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#pig_job=
def pig_job=(value) -> ::Google::Cloud::Dataproc::V1::PigJob
Parameter
- value (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.
Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::PigJob) — Optional. Job is a Pig job.
Note: The following fields are mutually exclusive: pig_job, hadoop_job, spark_job, pyspark_job, hive_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#prerequisite_step_ids
def prerequisite_step_ids() -> ::Array<::String>
Returns
- (::Array<::String>) — Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
#prerequisite_step_ids=
def prerequisite_step_ids=(value) -> ::Array<::String>
Parameter
- value (::Array<::String>) — Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
Returns
- (::Array<::String>) — Optional. The list of prerequisite job step_ids. If not specified, the job will start at the beginning of the workflow.
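Prerequisites turn a template's job list into a DAG. A minimal two-step sketch (the step ids, query URIs, and job types are illustrative):

    require "google/cloud/dataproc/v1"

    prepare = Google::Cloud::Dataproc::V1::OrderedJob.new(
      step_id: "prepare-data",
      pig_job: Google::Cloud::Dataproc::V1::PigJob.new(query_file_uri: "gs://my-bucket/prepare.pig")
    )

    analyze = Google::Cloud::Dataproc::V1::OrderedJob.new(
      step_id: "analyze-data",
      spark_sql_job: Google::Cloud::Dataproc::V1::SparkSqlJob.new(query_file_uri: "gs://my-bucket/analyze.sql"),
      prerequisite_step_ids: ["prepare-data"] # runs only after "prepare-data" completes
    )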
#presto_job
def presto_job() -> ::Google::Cloud::Dataproc::V1::PrestoJob
Returns
- (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.
Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#presto_job=
def presto_job=(value) -> ::Google::Cloud::Dataproc::V1::PrestoJob
Parameter
- value (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.
Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::PrestoJob) — Optional. Job is a Presto job.
Note: The following fields are mutually exclusive: presto_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#pyspark_job
def pyspark_job() -> ::Google::Cloud::Dataproc::V1::PySparkJob
Returns
- (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.
Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#pyspark_job=
def pyspark_job=(value) -> ::Google::Cloud::Dataproc::V1::PySparkJob
Parameter
- value (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.
Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::PySparkJob) — Optional. Job is a PySpark job.
Note: The following fields are mutually exclusive: pyspark_job, hadoop_job, spark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
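A sketch of the PySpark variant (the file URIs and Spark property are placeholders):

    require "google/cloud/dataproc/v1"

    job = Google::Cloud::Dataproc::V1::OrderedJob.new(step_id: "wordcount")
    job.pyspark_job = Google::Cloud::Dataproc::V1::PySparkJob.new(
      main_python_file_uri: "gs://my-bucket/jobs/wordcount.py",
      args: ["gs://my-bucket/input/", "gs://my-bucket/output/"],
      properties: { "spark.executor.memory" => "4g" }
    )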
#scheduling
def scheduling() -> ::Google::Cloud::Dataproc::V1::JobScheduling
Returns
- (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.
#scheduling=
def scheduling=(value) -> ::Google::Cloud::Dataproc::V1::JobScheduling
Parameter
- value (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.
Returns
- (::Google::Cloud::Dataproc::V1::JobScheduling) — Optional. Job scheduling configuration.
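JobScheduling controls automatic driver retries. A minimal sketch (the retry counts are illustrative; see the JobScheduling reference for exact semantics):

    require "google/cloud/dataproc/v1"

    job = Google::Cloud::Dataproc::V1::OrderedJob.new(step_id: "flaky-step")
    job.scheduling = Google::Cloud::Dataproc::V1::JobScheduling.new(
      max_failures_per_hour: 3,  # restart the driver at most 3 times per hour
      max_failures_total: 10     # and at most 10 times overall
    )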
#spark_job
def spark_job() -> ::Google::Cloud::Dataproc::V1::SparkJob
Returns
- (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.
Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_job=
def spark_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkJob
Parameter
- value (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.
Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::SparkJob) — Optional. Job is a Spark job.
Note: The following fields are mutually exclusive: spark_job, hadoop_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_r_job
def spark_r_job() -> ::Google::Cloud::Dataproc::V1::SparkRJob
Returns
- (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.
Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_r_job=
def spark_r_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkRJob
Parameter
- value (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.
Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::SparkRJob) — Optional. Job is a SparkR job.
Note: The following fields are mutually exclusive: spark_r_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_sql_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_sql_job
def spark_sql_job() -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
Returns
- (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.
Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#spark_sql_job=
def spark_sql_job=(value) -> ::Google::Cloud::Dataproc::V1::SparkSqlJob
Parameter
- value (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.
Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::SparkSqlJob) — Optional. Job is a SparkSql job.
Note: The following fields are mutually exclusive: spark_sql_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, presto_job, trino_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#step_id
def step_id() -> ::String
Returns
- (::String) — Required. The step id. The id must be unique among all jobs within the template.
The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.
The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
#step_id=
def step_id=(value) -> ::String
Parameter
- value (::String) — Required. The step id. The id must be unique among all jobs within the template.
The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.
The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
Returns
- (::String) — Required. The step id. The id must be unique among all jobs within the template.
The step id is used as a prefix for the job id, as the job's goog-dataproc-workflow-step-id label, and in the prerequisiteStepIds field of other steps.
The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). It cannot begin or end with an underscore or hyphen, and must consist of between 3 and 50 characters.
#trino_job
def trino_job() -> ::Google::Cloud::Dataproc::V1::TrinoJob
Returns
- (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.
Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
#trino_job=
def trino_job=(value) -> ::Google::Cloud::Dataproc::V1::TrinoJob
Parameter
- value (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.
Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
Returns
- (::Google::Cloud::Dataproc::V1::TrinoJob) — Optional. Job is a Trino job.
Note: The following fields are mutually exclusive: trino_job, hadoop_job, spark_job, pyspark_job, hive_job, pig_job, spark_r_job, spark_sql_job, presto_job, flink_job. If a field in that set is populated, all other fields in the set will automatically be cleared.
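Putting it together: OrderedJob messages populate the jobs list of a WorkflowTemplate, which can then be run through the workflow template service. A hedged end-to-end sketch (the project, region, cluster label, and Spark example jar are placeholders, and error handling is omitted):

    require "google/cloud/dataproc"

    client = Google::Cloud::Dataproc.workflow_template_service

    template = Google::Cloud::Dataproc::V1::WorkflowTemplate.new(
      placement: Google::Cloud::Dataproc::V1::WorkflowTemplatePlacement.new(
        cluster_selector: Google::Cloud::Dataproc::V1::ClusterSelector.new(
          cluster_labels: { "goog-dataproc-cluster-name" => "my-cluster" }
        )
      ),
      jobs: [
        Google::Cloud::Dataproc::V1::OrderedJob.new(
          step_id: "spark-pi",
          spark_job: Google::Cloud::Dataproc::V1::SparkJob.new(
            main_class: "org.apache.spark.examples.SparkPi",
            jar_file_uris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
            args: ["1000"]
          )
        )
      ]
    )

    # Instantiate the template inline (without saving it) and block until
    # the workflow finishes.
    operation = client.instantiate_inline_workflow_template(
      parent: "projects/my-project/regions/us-central1",
      template: template
    )
    operation.wait_until_done!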