AWS Data Pipeline
AWS Data Pipeline is a web service that helps you reliably process and move data between different
AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS
Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and
efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS,
Amazon DynamoDB, and Amazon EMR.
AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant,
repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing
inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure
notification system. AWS Data Pipeline also allows you to move and process data that was previously
locked up in on-premises data silos.
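A pipeline is defined as a set of objects: a schedule, activities to run, and the compute resources that run them. As a rough sketch of that structure, the snippet below builds the `pipelineObjects` list that boto3's `put_pipeline_definition` call accepts, for a single scheduled `ShellCommandActivity` on a transient EC2 instance. The object names (`DailySchedule`, `RunCommand`, `Ec2Instance`) and the field values are illustrative assumptions, not values from this document.

```python
def shell_activity_pipeline(command, period="1 day"):
    """Return pipelineObjects for one scheduled ShellCommandActivity.

    Each object is an id/name plus a list of key/stringValue fields;
    refValue fields point at other objects in the same definition.
    """
    return [
        # Default object: pipeline-wide settings; the schedule is a reference.
        {"id": "Default", "name": "Default", "fields": [
            {"key": "scheduleType", "stringValue": "cron"},
            {"key": "failureAndRerunMode", "stringValue": "CASCADE"},
            {"key": "schedule", "refValue": "DailySchedule"},
        ]},
        # Schedule: run once per `period`, starting at pipeline activation.
        {"id": "DailySchedule", "name": "DailySchedule", "fields": [
            {"key": "type", "stringValue": "Schedule"},
            {"key": "period", "stringValue": period},
            {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
        ]},
        # The activity itself, run on a transient EC2 resource.
        {"id": "RunCommand", "name": "RunCommand", "fields": [
            {"key": "type", "stringValue": "ShellCommandActivity"},
            {"key": "command", "stringValue": command},
            {"key": "runsOn", "refValue": "Ec2Instance"},
        ]},
        # EC2 resource that Data Pipeline provisions and terminates for us.
        {"id": "Ec2Instance", "name": "Ec2Instance", "fields": [
            {"key": "type", "stringValue": "Ec2Resource"},
            {"key": "terminateAfter", "stringValue": "30 Minutes"},
        ]},
    ]
```

Submitting and activating the definition would then look roughly like this (requires AWS credentials, so it is shown but not run):

```python
# import boto3
# dp = boto3.client("datapipeline", region_name="us-east-1")
# pid = dp.create_pipeline(name="daily-copy", uniqueId="daily-copy-1")["pipelineId"]
# dp.put_pipeline_definition(pipelineId=pid,
#                            pipelineObjects=shell_activity_pipeline("echo hello"))
# dp.activate_pipeline(pipelineId=pid)
```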
Backlinks
- AWS overview
- Amazon Athena
- Amazon Elasticsearch Service
- Amazon EMR
- Amazon FinSpace
- Amazon Kinesis
- Amazon Kinesis Data Firehose
- Amazon Kinesis Data Analytics
- Amazon Kinesis Data Streams
- Amazon Kinesis Video Streams
- Amazon Redshift
- Amazon QuickSight
- AWS Data Exchange
- AWS Data Pipeline
- AWS Glue
- AWS Lake Formation
- Amazon Managed Streaming for Apache Kafka