Every S3 CLI Command You Actually Need
With Real Examples.

A practical, copy-paste reference for the AWS CLI S3 commands engineers use daily. Upload, sync, delete, presign, change storage classes — all with real paths and real flags.

By Saurav Sharma | 15 min read

The AWS docs list every S3 CLI flag ever created. This guide doesn't. It covers the commands you'll actually type on a weekly basis — uploading builds, syncing deployments, generating share links, cleaning up old data — and nothing else. Every example uses realistic bucket names and file paths you can adapt in seconds.

All commands assume AWS CLI v2. If you're still on v1, everything here still works — v2 just handles defaults better.

1. Quick Setup

Before touching S3, make sure the CLI is installed and configured.

Check your CLI version:

aws --version

You want aws-cli/2.x.x. If you see 1.x, upgrade before continuing.

Configure credentials:

aws configure

Prompts for Access Key, Secret Key, default region (us-east-1), and output format (json).

Configure a named profile (for multiple AWS accounts):

aws configure --profile staging

Then use --profile staging on any command.
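
For example, to list buckets in the staging account:

aws s3 ls --profile staging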

Verify access works:

aws sts get-caller-identity

Returns your account ID, user ARN, and user ID. If this errors, your credentials are wrong.
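
A successful response looks like this (the values here are illustrative):

{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/deploy-bot"
}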

2. Create & List Buckets

Create a bucket:

aws s3 mb s3://acme-app-assets-prod

Bucket names are globally unique. If it's taken, you'll get a BucketAlreadyExists error.

Create a bucket in a specific region:

aws s3 mb s3://acme-app-assets-prod --region eu-west-1

List all buckets:

aws s3 ls

List contents of a bucket:

aws s3 ls s3://acme-app-assets-prod/

List recursively (see every object, all prefixes):

aws s3 ls s3://acme-app-assets-prod/ --recursive --human-readable --summarize

--human-readable shows sizes as KB/MB/GB. --summarize adds total object count and size at the bottom.

Gotcha: The trailing slash matters for prefixes. aws s3 ls s3://my-bucket/img matches every key that starts with img — including img-archive/ and img.zip — while aws s3 ls s3://my-bucket/img/ lists only objects under the img/ prefix.
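
To see the difference (the img prefix here is just for illustration):

aws s3 ls s3://acme-app-assets-prod/img    # matches img/, img-archive/, img.zip
aws s3 ls s3://acme-app-assets-prod/img/   # lists only objects under img/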

3. Upload Files

Upload a single file:

aws s3 cp ./report-q4.pdf s3://acme-internal-docs/reports/

Copies report-q4.pdf into the reports/ prefix.

Upload an entire folder:

aws s3 cp ./build/ s3://acme-frontend-prod/ --recursive

Uploads every file inside ./build/ to the bucket root. Preserves folder structure.

Upload a folder, but exclude certain files:

aws s3 cp ./project/ s3://acme-deploy/v2.3/ --recursive \
  --exclude "*.log" \
  --exclude ".git/*" \
  --exclude "node_modules/*"

Upload only specific file types:

aws s3 cp ./assets/ s3://acme-cdn/images/ --recursive \
  --exclude "*" \
  --include "*.jpg" \
  --include "*.png" \
  --include "*.webp"

Order matters. --exclude "*" first drops everything, then --include adds back what you want.

Upload and set content type (useful for static sites):

aws s3 cp ./index.html s3://acme-frontend-prod/index.html \
  --content-type "text/html" \
  --cache-control "max-age=60"
Tip: The CLI auto-detects MIME types for most files. You only need --content-type when it guesses wrong (extensionless files, .woff2 fonts, etc.) or when you want to explicitly set --cache-control headers.
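
If an object already in S3 has the wrong content type, you can copy it onto itself and replace the metadata. A sketch against the example bucket above (note that REPLACE discards any other user metadata on the object, so re-specify anything you want to keep):

aws s3 cp s3://acme-frontend-prod/index.html s3://acme-frontend-prod/index.html \
  --content-type "text/html" \
  --metadata-directive REPLACE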

4. Download Files

Download a single file:

aws s3 cp s3://acme-internal-docs/reports/report-q4.pdf ./downloads/

Download an entire prefix (folder):

aws s3 cp s3://acme-data-lake/exports/2026-03/ ./local-exports/ --recursive

Download only CSVs from a prefix:

aws s3 cp s3://acme-data-lake/raw/ ./csv-only/ --recursive \
  --exclude "*" \
  --include "*.csv"

Download and pipe to stdout (useful for quick inspection):

aws s3 cp s3://acme-logs/app/2026-03-14.log -

The dash (-) sends file contents to stdout. Pipe it to grep, head, jq, whatever you need.
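
For example, to scan a log for errors without saving it locally:

aws s3 cp s3://acme-logs/app/2026-03-14.log - | grep ERROR | head -20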

5. Sync

sync is the smarter cousin of cp --recursive. It only transfers files that are new or changed (by size + last-modified time). Use it for deployments and backups.

Deploy a static site:

aws s3 sync ./dist/ s3://acme-frontend-prod/ --delete

--delete removes files from S3 that no longer exist locally. This keeps your bucket an exact mirror of your build output.

Backup a local folder to S3 (without deleting old backups):

aws s3 sync /var/data/postgres-backups/ s3://acme-db-backups/postgres/

Without --delete, S3 keeps files even if they're removed locally. Good for append-only backups.
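
One common pattern (the date prefix here is just a suggestion) is to key each backup run by date, so runs never overwrite each other:

aws s3 sync /var/data/postgres-backups/ s3://acme-db-backups/postgres/$(date +%F)/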

Sync from S3 to local (restore a backup):

aws s3 sync s3://acme-db-backups/postgres/ ./restore/

Sync but exclude build artifacts:

aws s3 sync ./repo/ s3://acme-source-archive/ \
  --exclude ".git/*" \
  --exclude "node_modules/*" \
  --exclude "*.pyc"

Preview what sync would do (dry run):

aws s3 sync ./dist/ s3://acme-frontend-prod/ --delete --dryrun
Always dry-run first when using --delete. It's easy to accidentally wipe objects in a bucket if your local directory is wrong. --dryrun shows every upload, download, and delete that would happen.

Sync between two S3 buckets:

aws s3 sync s3://acme-frontend-staging/ s3://acme-frontend-prod/

Works for cross-account too, as long as your IAM role has permissions on both buckets.
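
For buckets in different regions, spell out both sides (us-east-1 and eu-west-1 here are placeholders):

aws s3 sync s3://acme-frontend-staging/ s3://acme-frontend-prod/ \
  --source-region us-east-1 \
  --region eu-west-1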

6. Move & Rename

S3 doesn't have a native rename operation. mv copies to the new key and deletes the old one.

Rename a file in S3:

aws s3 mv s3://acme-uploads/temp/draft.pdf s3://acme-uploads/final/report-2026-q1.pdf

Move all files from one prefix to another:

aws s3 mv s3://acme-data-lake/incoming/ s3://acme-data-lake/processed/ --recursive

Move from local to S3 (deletes the local file after upload):

aws s3 mv ./export.csv s3://acme-data-lake/imports/2026-03/export.csv
Gotcha: mv within S3 is a server-side CopyObject + DeleteObject. For same-region copies the data never leaves S3's internal network, so it's fast and avoids data transfer charges. You're billed for a COPY request (priced the same as a PUT); DELETE requests are free, so the total cost is well below a GET + PUT round trip.

7. Delete

Delete a single file:

aws s3 rm s3://acme-uploads/temp/draft.pdf

Delete everything under a prefix:

aws s3 rm s3://acme-logs/app/2025/ --recursive

Deletes all objects whose key starts with app/2025/.

Delete all objects in a bucket:

aws s3 rm s3://acme-temp-bucket/ --recursive

Delete only log files older than a pattern:

aws s3 rm s3://acme-logs/ --recursive \
  --exclude "*" \
  --include "*.log"

Delete a bucket (must be empty first):

aws s3 rb s3://acme-temp-bucket

Force-delete a bucket (empties it, then deletes):

aws s3 rb s3://acme-temp-bucket --force
Warning: rb --force is irreversible. If the bucket has versioning enabled, it only deletes current versions — delete markers and old versions remain. For versioned buckets, you need to use the s3api commands to fully purge all versions first.
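
A minimal purge sketch for a versioned bucket, assuming it holds fewer than 1,000 versions (delete-objects accepts at most 1,000 keys per call, so bigger buckets need a loop or a lifecycle expiration rule):

# Delete all object versions
aws s3api delete-objects --bucket acme-temp-bucket \
  --delete "$(aws s3api list-object-versions --bucket acme-temp-bucket \
    --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' --output json)"

# Delete all delete markers
aws s3api delete-objects --bucket acme-temp-bucket \
  --delete "$(aws s3api list-object-versions --bucket acme-temp-bucket \
    --query '{Objects: DeleteMarkers[].{Key: Key, VersionId: VersionId}}' --output json)"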

8. Presigned URLs

Generate a temporary URL that lets anyone download a private S3 object without AWS credentials. Note that aws s3 presign only produces download (GET) URLs; presigned upload URLs have to be generated through the SDKs.

Generate a download link (default 1 hour):

aws s3 presign s3://acme-internal-docs/reports/report-q4.pdf

Generate a download link valid for 7 days:

aws s3 presign s3://acme-internal-docs/reports/report-q4.pdf --expires-in 604800

Value is in seconds. 604800 = 7 days (the max for IAM user credentials).
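
You can feed the URL straight to curl. Quote the command substitution, since presigned URLs contain & characters:

curl -o report-q4.pdf "$(aws s3 presign s3://acme-internal-docs/reports/report-q4.pdf --expires-in 3600)"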

Generate a presigned URL for a specific region:

aws s3 presign s3://acme-eu-assets/logo.png --region eu-west-1
Gotcha: If you generated your credentials with aws sts assume-role (temporary session credentials), the maximum presigned URL expiry is limited to the remaining lifetime of the session token — not 7 days. If your URLs expire early, this is usually why.

9. Permissions & ACLs

ACLs are legacy — AWS recommends bucket policies instead. But you'll still see --acl in the wild, especially for quick public access.

Upload a file and make it publicly readable:

aws s3 cp ./logo.png s3://acme-public-assets/logo.png --acl public-read
Important: As of April 2023, new S3 buckets have ACLs disabled by default (BucketOwnerEnforced). The --acl flag will fail unless you explicitly enable ACLs on the bucket. Use bucket policies for public access instead.
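
If you genuinely need ACLs on a newer bucket, switch its object ownership away from BucketOwnerEnforced first. A sketch using the BucketOwnerPreferred setting:

aws s3api put-bucket-ownership-controls --bucket acme-public-assets \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'

Even then, a public-read ACL only takes effect if BlockPublicAcls is also disabled on the bucket.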

Apply a bucket policy (the modern approach):

aws s3api put-bucket-policy --bucket acme-frontend-prod --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicRead",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::acme-frontend-prod/*"
  }]
}'

View current bucket policy:

aws s3api get-bucket-policy --bucket acme-frontend-prod --output text | jq .

Block all public access on a bucket (recommended for private buckets):

aws s3api put-public-access-block --bucket acme-internal-docs \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

10. Storage Classes

S3 charges differently based on storage class. Use the right one and your bill drops significantly.

Upload directly to Glacier (cheap archival):

aws s3 cp ./compliance-archive-2025.tar.gz s3://acme-archives/ \
  --storage-class GLACIER

Retrieval takes minutes to hours. Good for data you rarely (or never) access.

Upload to Intelligent-Tiering (auto-optimizes):

aws s3 cp ./dataset.parquet s3://acme-data-lake/parquet/ \
  --storage-class INTELLIGENT_TIERING

AWS automatically moves objects between frequent and infrequent tiers based on access patterns. No retrieval fees.

Move existing objects to a cheaper storage class:

aws s3 cp s3://acme-logs/2024/ s3://acme-logs/2024/ --recursive \
  --storage-class GLACIER_IR

Copies objects in-place with the new storage class. GLACIER_IR (Instant Retrieval) is great for logs you rarely read but need available immediately when you do.
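
To make this happen automatically instead of by hand, a lifecycle rule transitions objects on a schedule. A minimal sketch (the rule ID and the 30-day window are arbitrary choices):

aws s3api put-bucket-lifecycle-configuration --bucket acme-logs \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": "2024/"},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER_IR"}]
    }]
  }'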

Available storage classes:

CLI Value            Use Case
STANDARD             Default. Frequently accessed data.
INTELLIGENT_TIERING  Unknown access patterns. Auto-moves between tiers.
STANDARD_IA          Infrequent access. Lower storage cost, retrieval fee.
ONEZONE_IA           Same as IA but stored in a single AZ. Cheaper, less resilient.
GLACIER_IR           Archive with instant retrieval. Great for compliance.
GLACIER              S3 Glacier Flexible Retrieval. Minutes-to-hours retrieval.
DEEP_ARCHIVE         Cheapest. 12+ hour retrieval. Regulatory archives.
EXPRESS_ONEZONE      Single-digit ms latency. Single AZ. ML, analytics, performance-critical workloads.

11. Multipart & Large Files

The CLI automatically uses multipart upload for files above 8 MB (default threshold). You can tune this.

Configure multipart threshold and chunk size:

aws configure set default.s3.multipart_threshold 64MB
aws configure set default.s3.multipart_chunksize 16MB

Higher threshold = fewer parts for medium files. Larger chunk size = fewer API calls but more memory. 16MB chunks work well for most uploads.

Increase max concurrent requests (faster uploads on good bandwidth):

aws configure set default.s3.max_concurrent_requests 20

Default is 10. Bump to 20-50 if you're uploading lots of files on a fast connection.

View your current S3 config:

aws configure get default.s3.multipart_threshold

For streamed uploads, where the CLI can't see the size up front, pass --expected-size:

cat ./database-dump-50gb.sql.gz | aws s3 cp - s3://acme-db-backups/database-dump-50gb.sql.gz \
  --expected-size 53687091200

--expected-size (in bytes) tells the CLI how large the stream will be so it can pick part sizes that stay under the 10,000-part limit. Regular file uploads don't need it (the CLI reads the size from disk), but a large stream without it can fail with a "too many parts" error past ~80 GB at default settings.

Tip: S3 allows a maximum of 10,000 parts per upload. With the default 8MB part size, that caps uploads at roughly 80 GB. For larger uploads, increase multipart_chunksize (at 100MB chunks you can reach ~1 TB), or pass --expected-size on streamed uploads so the CLI can size parts correctly.

12. Useful Flags Cheat Sheet

Flags that work across cp, sync, mv, and rm:

Flag What It Does
--dryrun Shows what would happen without actually doing it. Use before every destructive operation.
--recursive Apply the command to all objects under a prefix. Required for folder operations.
--exclude "PATTERN" Skip files matching the pattern. Supports * and ? wildcards.
--include "PATTERN" Include files matching pattern. Evaluated after --exclude.
--quiet Suppresses all output. Useful in scripts and CI/CD.
--only-show-errors Shows nothing unless something fails. The sweet spot between verbose and silent.
--profile NAME Use a named profile from ~/.aws/credentials.
--region REGION Override the default region for this command.
--delete sync only. Removes destination files not in the source.
--storage-class CLASS Set the storage class for uploaded objects.
--content-type TYPE Override MIME type detection.
--cache-control VALUE Set Cache-Control header. Critical for static site deployments.
--sse AES256 Enable server-side encryption with S3-managed keys.

Putting it together — a real CI/CD deploy command:

# Step 1: Sync hashed assets with long cache (these filenames change on every build)
aws s3 sync ./dist/ s3://acme-frontend-prod/ \
  --delete \
  --exclude "index.html" \
  --exclude ".DS_Store" \
  --cache-control "max-age=31536000,immutable" \
  --only-show-errors \
  --profile production \
  --region us-east-1

# Step 2: Upload index.html with no-cache so browsers always fetch the latest
aws s3 cp ./dist/index.html s3://acme-frontend-prod/index.html \
  --cache-control "no-cache" \
  --content-type "text/html" \
  --profile production \
  --region us-east-1

# Step 3: Invalidate CloudFront cache
aws cloudfront create-invalidation \
  --distribution-id E1A2B3C4D5E6F \
  --paths "/*" \
  --profile production
Why split the sync from the index.html upload? Hashed assets (main.a1b2c3.js) get new filenames on each build, so a 1-year cache is safe. But index.html always has the same name; cache it for a year and users stay stuck on the old version until they hard-refresh.

Want to master the AWS CLI?

This guide covers S3. The full course covers EC2, IAM, Lambda, CloudFormation, and 35+ more services — with hands-on labs.

Get the full AWS CLI course — $9.99 →