Amazon Kinesis Firehose captures and loads streaming data into storage and business intelligence (BI) tools to enable near-real-time analytics in the Amazon Web Services (AWS) cloud. Kinesis Firehose manages the underlying compute, storage and networking resources, along with their configuration, and scales automatically to meet data throughput requirements.
Amazon Kinesis Firehose delivers data to Amazon Simple Storage Service (S3) buckets, Amazon Redshift and Amazon Elasticsearch Service. Kinesis Firehose can batch, compress and encrypt data to increase security and reduce the amount of storage space needed. During transport, the service synchronizes data across three facilities in an AWS region for redundancy.
To use Amazon Kinesis Firehose, the developer must create a delivery stream through either the Firehose console or an API. That delivery stream shuttles data from the source to a specified destination. Data can be added to the delivery stream by using the Java-based Amazon Kinesis Agent or API calls.
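As a sketch of the API route, the snippet below (Python with boto3 assumed; the stream name my-delivery-stream and the events are placeholders) prepares records for Firehose's PutRecordBatch call, which accepts at most 500 records per request:

```python
import json

# PutRecordBatch accepts at most 500 records per call, so larger
# record sets must be split into chunks before sending.
MAX_BATCH_RECORDS = 500

def to_firehose_records(events):
    """Serialize events as newline-delimited JSON Firehose records."""
    return [{"Data": (json.dumps(e) + "\n").encode("utf-8")} for e in events]

def chunk(records, size=MAX_BATCH_RECORDS):
    """Split records into batches no larger than `size`."""
    return [records[i:i + size] for i in range(0, len(records), size)]

# Hypothetical usage against a real stream (requires AWS credentials):
# import boto3
# firehose = boto3.client("firehose")
# for batch in chunk(to_firehose_records(events)):
#     firehose.put_record_batch(
#         DeliveryStreamName="my-delivery-stream",  # placeholder name
#         Records=batch,
#     )
```

The newline appended to each record is a common convention so that downstream consumers can split the concatenated objects Firehose writes into a single S3 object.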
The frequency of data delivery to S3 is governed by the S3 buffer size and buffer interval, which the developer configures when creating the delivery stream. As of this writing, buffer sizes range from 1 MB to 128 MB, and buffer intervals range from one to 15 minutes. Delivery to Amazon Elasticsearch Service is likewise subject to buffer size and interval delays, whereas Redshift receives data as quickly as the COPY command executes.
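Those buffering settings are supplied when the delivery stream is created. The sketch below (Python with boto3 assumed; the stream name, bucket ARN and role ARN are placeholders) validates the S3 BufferingHints against the 1-128 MB and one-to-15-minute ranges described above:

```python
def s3_buffering_hints(size_in_mb, interval_in_seconds):
    """Build a BufferingHints dict, enforcing the documented ranges:
    1-128 MB for buffer size, 60-900 seconds (1-15 min) for interval."""
    if not 1 <= size_in_mb <= 128:
        raise ValueError("buffer size must be between 1 and 128 MB")
    if not 60 <= interval_in_seconds <= 900:
        raise ValueError("buffer interval must be between 60 and 900 seconds")
    return {"SizeInMBs": size_in_mb, "IntervalInSeconds": interval_in_seconds}

# Hypothetical creation call (requires AWS credentials; ARNs are placeholders):
# import boto3
# boto3.client("firehose").create_delivery_stream(
#     DeliveryStreamName="my-delivery-stream",
#     S3DestinationConfiguration={
#         "RoleARN": "arn:aws:iam::123456789012:role/firehose-role",
#         "BucketARN": "arn:aws:s3:::my-bucket",
#         "BufferingHints": s3_buffering_hints(5, 300),
#         "CompressionFormat": "GZIP",  # optional compression before delivery
#     },
# )
```

Firehose flushes to S3 whenever either threshold is reached first, so a small buffer size with a long interval favors low latency under heavy traffic, while the interval caps staleness under light traffic.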
Amazon Kinesis Firehose has no upfront costs; an AWS user is billed for the resources used and the volume of data the service ingests. By default, an AWS account can have up to 20 delivery streams per region, and each stream can ingest 2,000 transactions per second, 5,000 records per second and 5 MB per second.