Description and usage of PutBigQueryBatch processor:

Batch loads the content of flow files into a Google BigQuery table.

Tags:

google, google cloud, bq, bigquery

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The list also indicates any default values, whether a property supports the NiFi Expression Language, and whether a property is considered "sensitive", meaning that its value will be encrypted. Before entering a value in a sensitive property, ensure that the nifi.properties file has an entry for the property nifi.sensitive.props.key.

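For example, the entry in conf/nifi.properties could look like the following (the key value is purely illustrative; use your own secret):

    nifi.sensitive.props.key=someLongRandomValue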

Name: GCP Credentials Provider Service
Controller Service API: GCPCredentialsService
Implementation: GCPCredentialsControllerService
Description: The Controller Service used to obtain Google Cloud Platform credentials.

Name: Project ID
Description: Google Cloud Project ID
Supports Expression Language: true (will be evaluated using variable registry only)


Name: Number of retries
Default Value: 6
Description: How many retry attempts should be made before routing to the failure relationship.

Name: Proxy host
Description: IP or hostname of the proxy to be used. You might need to set the following properties in bootstrap for HTTPS proxy usage (see the example below): -Djdk.http.auth.tunneling.disabledSchemes= -Djdk.http.auth.proxying.disabledSchemes=
Supports Expression Language: true (will be evaluated using variable registry only)

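For reference, these JVM arguments go in conf/bootstrap.conf. A minimal sketch, assuming java.arg.20 and java.arg.21 are the next free indexes in your file (the indexes are illustrative):

    java.arg.20=-Djdk.http.auth.tunneling.disabledSchemes=
    java.arg.21=-Djdk.http.auth.proxying.disabledSchemes=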


Name: Proxy port
Description: Proxy port number
Supports Expression Language: true (will be evaluated using variable registry only)


Name: HTTP Proxy Username
Description: HTTP Proxy Username
Supports Expression Language: true (will be evaluated using variable registry only)


Name: HTTP Proxy Password
Description: HTTP Proxy Password
Sensitive Property: true
Supports Expression Language: true (will be evaluated using variable registry only)


Name: Proxy Configuration Service
Controller Service API: ProxyConfigurationService
Implementation: StandardProxyConfigurationService
Description: Specifies the Proxy Configuration Controller Service to proxy network requests. If set, it supersedes proxy settings configured per component. Supported proxies: HTTP + AuthN.

Name: Dataset
Default Value: ${bq.dataset}
Description: BigQuery dataset name (Note - The dataset must exist in GCP)
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)


Name: Table Name
Default Value: ${bq.table.name}
Description: BigQuery table name
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

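Because both defaults are Expression Language references, an upstream processor such as UpdateAttribute can choose the target dataset and table per flow file. The attribute names follow the defaults above; the values are hypothetical:

    bq.dataset = my_dataset
    bq.table.name = daily_events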


Name: Table Schema
Description: BigQuery schema in JSON format
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

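As an illustration, the schema can be provided as a JSON array of field definitions, following the layout used by BigQuery's bq command-line tool (the field names below are hypothetical; check the exact format expected by your NiFi version):

    [
      {"name": "id", "type": "INTEGER", "mode": "REQUIRED"},
      {"name": "name", "type": "STRING", "mode": "NULLABLE"}
    ]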


Name: Read Timeout
Default Value: 5 minutes
Description: Load job timeout.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)


Name: Load file type
Description: Data type of the file to be loaded. Possible values: AVRO, NEWLINE_DELIMITED_JSON, CSV.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)

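For instance, when Load file type is NEWLINE_DELIMITED_JSON, the flow file content is expected to contain one JSON object per line. The records below are hypothetical and match the two-column schema sketched under Table Schema:

    {"id": 1, "name": "alpha"}
    {"id": 2, "name": "beta"}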


Name: Create Disposition
Default Value: CREATE_IF_NEEDED
Allowable Values: CREATE_IF_NEEDED, CREATE_NEVER
Description: Sets whether the job is allowed to create new tables.

Name: Write Disposition
Default Value: WRITE_EMPTY
Allowable Values: WRITE_EMPTY, WRITE_APPEND, WRITE_TRUNCATE
Description: Sets the action that should occur if the destination table already exists.

Name: Max Bad Records
Default Value: 0
Description: Sets the maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. By default, no bad records are ignored.

Name: Ignore Unknown Values
Default Value: false
Allowable Values: true, false
Description: Sets whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. By default, unknown values are not allowed.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)


Name: CSV Input - Allow Jagged Rows
Default Value: false
Allowable Values: true, false
Description: Sets whether BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. By default, rows with missing trailing columns are considered bad records.

Name: CSV Input - Allow Quoted New Lines
Default Value: false
Allowable Values: true, false
Description: Sets whether BigQuery should allow quoted data sections that contain newline characters in a CSV file. By default, quoted newlines are not allowed.

Name: CSV Input - Character Set
Default Value: UTF-8
Allowable Values: UTF-8, ISO-8859-1
Description: Sets the character encoding of the data.

Name: CSV Input - Field Delimiter
Default Value: ,
Description: Sets the separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)


Name: CSV Input - Quote
Default Value: "
Description: Sets the value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the Allow Quoted New Lines property to true.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)


Name: CSV Input - Skip Leading Rows
Default Value: 0
Description: Sets the number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.
Supports Expression Language: true (will be evaluated using flow file attributes and variable registry)


Relationships:

success: FlowFiles are routed to this relationship after a successful Google BigQuery operation.
failure: FlowFiles are routed to this relationship if the Google BigQuery operation fails.

Reads Attributes:

None specified.

Writes Attributes:

bq.dataset: BigQuery dataset name (Note - The dataset must exist in GCP)
bq.table.name: BigQuery table name
bq.table.schema: BigQuery schema in JSON format
bq.load.type: Data type of the file to be loaded. Possible values: AVRO, NEWLINE_DELIMITED_JSON, CSV.
bq.load.ignore_unknown: Sets whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. By default, unknown values are not allowed.
bq.load.create_disposition: Sets whether the job is allowed to create new tables.
bq.load.write_disposition: Sets the action that should occur if the destination table already exists.
bq.load.max_bad_records: Sets the maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. By default, no bad records are ignored.
bq.job.stat.creation_time: Time the load job was created
bq.job.stat.end_time: Time the load job ended
bq.job.stat.start_time: Time the load job started
bq.job.link: API link to the load job
bq.error.message: Load job error message
bq.error.reason: Load job error reason
bq.error.location: Load job error location
bq.records.count: Number of records successfully inserted

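These attributes can drive downstream routing with the Expression Language. As a sketch, a RouteOnAttribute processor with a user-defined property (the name "loaded" is hypothetical) could separate loads that inserted records from empty ones:

    loaded = ${bq.records.count:gt(0)}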

State management:

This component does not store state.

Restricted:

This component is not restricted.

Input requirement:

This component requires an incoming relationship.

System Resource Considerations:

None specified.