Description:

Retrieves a listing of objects from an S3 bucket. For each object that is listed, creates a FlowFile that represents the object so that it can be fetched in conjunction with FetchS3Object. This Processor is designed to run on Primary Node only in a cluster. If the primary node changes, the new Primary Node will pick up where the previous node left off without duplicating all of the data.

Tags:

Amazon, S3, AWS, list

Properties:

In the list below, the names of required properties appear in bold. Any other properties (not in bold) are considered optional. The list also indicates any default values, whether a property supports the Expression Language, and whether a property is considered “sensitive”, meaning that its value will be encrypted. Before entering a value in a sensitive property, ensure that the nifi.properties file has an entry for the property nifi.sensitive.props.key.


Bucket
    No Description Provided.
    Supports Expression Language: true
Region
    Default Value: us-west-2
    Allowable Values:
      • us-gov-west-1
      • us-east-1
      • us-west-1
      • us-west-2
      • eu-west-1
      • eu-central-1
      • ap-southeast-1
      • ap-southeast-2
      • ap-northeast-1
      • ap-northeast-2
      • sa-east-1
      • cn-north-1
    No Description Provided.
Access Key
    No Description Provided.
    Sensitive Property: true
    Supports Expression Language: true
Secret Key
    No Description Provided.
    Sensitive Property: true
    Supports Expression Language: true
Credentials File
    Path to a file containing AWS access key and secret key in properties file format.
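For reference, a credentials file in this format is a plain Java properties file. The key names below follow the AWS SDK's PropertiesCredentials convention (an assumption about the expected format), and the values are AWS's documentation placeholders, not real credentials:

```properties
# AWS credentials in properties file format
accessKey=AKIAIOSFODNN7EXAMPLE
secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```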
AWS Credentials Provider service
    Controller Service API: AWSCredentialsProviderService
    Implementation: AWSCredentialsProviderControllerService
    The Controller Service that is used to obtain an AWS credentials provider.
Communications Timeout
    Default Value: 30 secs
    No Description Provided.
SSL Context Service
    Controller Service API: SSLContextService
    Implementation: StandardSSLContextService
    Specifies an optional SSL Context Service that, if provided, will be used to create connections.
Endpoint Override URL
    Endpoint URL to use instead of the AWS default, including scheme, host, port, and path. The AWS libraries select an endpoint URL based on the AWS region, but this property overrides the selected endpoint URL, allowing use with other S3-compatible endpoints.
Signer Override
    Default Value: Default Signature
    Allowable Values:
      • Default Signature
      • Signature v4
      • Signature v2
    The AWS libraries use the default signer, but this property allows you to specify a custom signer to support older S3-compatible services.
Proxy Host
    Proxy host name or IP
    Supports Expression Language: true
Proxy Host Port
    Proxy host port
    Supports Expression Language: true
Delimiter
    The string used to delimit directories within the bucket. Please consult the AWS documentation for the correct use of this field.
Prefix
    The prefix used to filter the object list. In most cases, it should end with a forward slash ('/').
    Supports Expression Language: true


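To make the Prefix and Delimiter interaction concrete, the sketch below simulates S3's listing semantics in plain Python. This is illustrative only (not NiFi or AWS code): keys matching the prefix are returned directly, while keys whose remainder contains the delimiter are rolled up into "directory"-style common prefixes.

```python
def list_keys(keys, prefix="", delimiter=""):
    """Simulate S3 list semantics for Prefix and Delimiter (illustration only).

    Returns (objects, common_prefixes): keys that match the prefix and whose
    remainder contains no delimiter, plus the rolled-up prefixes for keys
    whose remainder does contain the delimiter.
    """
    objects, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            # Everything past the first delimiter rolls up into one common prefix.
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return objects, sorted(common)

keys = [
    "logs/2016/01/app.log",
    "logs/2016/02/app.log",
    "logs/readme.txt",
    "data/x.csv",
]
objs, prefixes = list_keys(keys, prefix="logs/", delimiter="/")
print(objs)      # → ['logs/readme.txt']
print(prefixes)  # → ['logs/2016/']
```

With no delimiter set, every key under the prefix is listed individually, which is why a trailing '/' on the Prefix usually gives the intended "directory" behavior.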
Use Versions
    Default Value: false
    Allowable Values:
      • true
      • false
    Specifies whether to use S3 versions, if applicable. If false, only the latest version of each object will be returned.

Relationships:

success: FlowFiles are routed to the success relationship
Reads Attributes:
None specified.

Writes Attributes:

s3.bucket: The name of the S3 bucket
filename: The name of the file
s3.etag: The ETag that can be used to see if the file has changed
s3.isLatest: A boolean indicating if this is the latest version of the object
s3.lastModified: The last modified time in milliseconds since epoch in UTC time
s3.length: The size of the object in bytes
s3.storeClass: The storage class of the object
s3.version: The version of the object, if applicable
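As an illustration of the attribute mapping above, the sketch below builds the attribute map from one entry of an S3 listing. The entry field names (Key, ETag, IsLatest, and so on) mimic the shape of an S3 API listing response and are assumptions for illustration; this is not NiFi's internal code.

```python
def to_flowfile_attributes(bucket, entry):
    """Map one S3 listing entry to the attributes documented above.

    `entry` mimics an S3 object-version summary (assumed field names):
    Key, ETag, IsLatest, LastModified (ms since epoch), Size,
    StorageClass, VersionId.
    """
    return {
        "s3.bucket": bucket,
        "filename": entry["Key"],
        "s3.etag": entry["ETag"],
        "s3.isLatest": str(entry["IsLatest"]).lower(),
        "s3.lastModified": str(entry["LastModified"]),
        "s3.length": str(entry["Size"]),
        "s3.storeClass": entry["StorageClass"],
        # Version is only present when Use Versions applies.
        "s3.version": entry.get("VersionId", ""),
    }

attrs = to_flowfile_attributes("my-bucket", {
    "Key": "logs/app.log",
    "ETag": "d41d8cd98f00b204e9800998ecf8427e",
    "IsLatest": True,
    "LastModified": 1456531200000,
    "Size": 1024,
    "StorageClass": "STANDARD",
    "VersionId": "v1",
})
print(attrs["filename"])  # → logs/app.log
```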
State management:

CLUSTER: After performing a listing of keys, the timestamp of the newest key is stored, along with the keys that share that same timestamp. This allows the Processor, the next time it runs, to list only keys that have been added or modified after that timestamp. State is stored across the cluster so that this Processor can be run on Primary Node only and, if a new Primary Node is selected, the new node can pick up where the previous node left off without duplicating the data.
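The state strategy described above can be sketched in plain Python. This is a simplified illustration of the idea (newest timestamp plus the keys that tie it), not NiFi's actual implementation:

```python
def list_new_objects(listing, state):
    """Return entries added/modified since the stored state, plus new state.

    listing: iterable of (key, last_modified_ms) pairs from a full listing.
    state:   {"timestamp": int, "keys": set} persisted from the previous run
             (empty dict on the first run).
    """
    ts = state.get("timestamp", 0)
    seen = set(state.get("keys", ()))
    # Keep entries strictly newer than the stored timestamp, plus any
    # unseen entries that tie it exactly.
    new = [(k, t) for k, t in listing if t > ts or (t == ts and k not in seen)]
    if new:
        ts = max(t for _, t in new)
        # Remember every key at the newest timestamp so a tie on the next
        # run is not re-emitted.
        seen = {k for k, t in listing if t == ts}
    return new, {"timestamp": ts, "keys": seen}

# First run: everything is new; state records the newest timestamp.
new1, st1 = list_new_objects([("a.txt", 100), ("b.txt", 200)], {})
# Second run: only the added key at the same timestamp and the newer key.
new2, st2 = list_new_objects(
    [("a.txt", 100), ("b.txt", 200), ("c.txt", 200), ("d.txt", 300)], st1)
print(sorted(k for k, _ in new2))  # → ['c.txt', 'd.txt']
```

Storing the tying keys alongside the timestamp is what lets a new Primary Node resume without either duplicating objects or skipping ones that arrived with an identical last-modified time.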