MySQL HeatWave: Migrating Your Data to HeatWave Service


Migrating data to MySQL HeatWave is a critical step in leveraging the cloud's power for your database workloads. MySQL Shell dump utilities provide the recommended method for exporting data from source MySQL databases and importing them into HeatWave DB systems. These utilities offer HeatWave-specific compatibility checks, parallel processing, and automatic modifications to ensure smooth migrations.

This guide covers dump utilities, compatibility checks, and best practices for successful data migration to MySQL HeatWave.

Understanding Data Migration

Data migration to MySQL HeatWave involves moving data from a source MySQL database (on-premises, cloud, or other infrastructure) to a target MySQL HeatWave DB system in Oracle Cloud Infrastructure.

Migration Process:

  1. Export data from source using MySQL Shell dump utilities
  2. Store dump files in OCI Object Storage or locally
  3. Run compatibility checks for HeatWave
  4. Apply necessary modifications
  5. Import data into HeatWave DB system using util.loadDump()
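
The whole flow can be sketched as two MySQL Shell calls, one against each side. This is a minimal sketch assuming a dump to Object Storage; the bucket name and namespace are placeholders, not values from your environment:

// On the source: export everything with HeatWave compatibility checks enabled
util.dumpInstance("", {
  osBucketName: "myMigrationBucket",   // placeholder bucket
  osNamespace: "my-namespace",         // placeholder namespace
  ocimds: true,
  compatibility: ["force_innodb", "strip_definers"]
})

// On the target HeatWave DB system: load the same dump
util.loadDump("", {
  osBucketName: "myMigrationBucket",
  osNamespace: "my-namespace",
  threads: 16
})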

MySQL Shell Dump Utilities

MySQL Shell provides three dump utilities: util.dumpInstance(), util.dumpSchemas(), and util.dumpTables(). Each serves different migration scenarios.
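
Each utility's full option list is available from MySQL Shell's built-in help, which is worth consulting because the available options vary between Shell versions. For example, in the Shell's JavaScript mode:

util.help("dumpInstance")  // Prints the full option reference for the utility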

1. util.dumpInstance()

Exports all compatible schemas to an Object Storage bucket or to local files, including users, events, routines, and triggers by default

What It Exports:

  • All user schemas (excludes system schemas)
  • User accounts and privileges
  • Events, routines, and triggers
  • Complete database structure

What It Excludes:

  • System schemas: mysql, sys, performance_schema, information_schema, ndbinfo

Example Usage:

util.dumpInstance("/path/to/dump", {
  ocimds: true,
  threads: 8,
  compatibility: ["force_innodb", "strip_definers", "strip_tablespaces"]
})

To Object Storage:

util.dumpInstance("", {
  osBucketName: "myMigrationBucket",
  osNamespace: "my-namespace",
  ocimds: true,
  threads: 16,
  compatibility: ["force_innodb", "strip_definers", "strip_restricted_grants", 
                  "strip_tablespaces", "create_invisible_pks"]
})

2. util.dumpSchemas()

Exports the selected schemas, including all tables, views, triggers, routines, and events in those schemas

What It Exports:

  • All tables in specified schemas
  • Triggers associated with tables
  • Routines (stored procedures and functions)
  • Events
  • Views

Example Usage:

util.dumpSchemas(["sales", "inventory", "customers"], "/path/to/dump", {
  ocimds: true,
  threads: 4,
  compatibility: ["force_innodb", "strip_definers"]
})

Multiple Schemas to Object Storage:

util.dumpSchemas(
  ["tpch", "employees"], 
  "", 
  {
    osBucketName: "mysqlData",
    osNamespace: "abcdefghijkl",
    ocimds: true,
    threads: 16,
    compatibility: ["force_innodb", "strip_restricted_grants", 
                    "strip_definers", "strip_tablespaces", "create_invisible_pks"]
  }
)

3. util.dumpTables()

Exports selected tables or views from a single schema, including the triggers of the exported tables

What It Exports:

  • Specified tables or views
  • Triggers associated with those tables
  • Schema structure needed to recreate tables

Example Usage:

util.dumpTables("sales", ["orders", "order_items", "customers"], "/path/to/dump", {
  ocimds: true,
  threads: 2,
  compatibility: ["force_innodb"]
})
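
A useful property of table dumps is that util.loadDump() can place them into a different schema on the target via its schema option, which is valid only for dumps created by util.dumpTables(). The target schema name below is an assumed example:

util.loadDump("/path/to/dump", {
  schema: "sales_archive",  // hypothetical target schema (may need to exist, depending on Shell version)
  threads: 4
})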

Dump Utility Options

Dump utilities support parallel dumping with multiple threads, file compression, streaming to OCI Object Storage, and MySQL HeatWave compatibility checks and modifications

Essential Options

ocimds

Enables HeatWave Service compatibility checks. When set to true, the dump stops if incompatibilities are found that are not handled by the specified compatibility options.

ocimds: true

Critical: Dumps loaded into HeatWave must be created with ocimds: true.

threads

Number of parallel threads for dump operation. More threads = faster dumps but higher source load.

threads: 16  // Use 4-16 for most workloads

compatibility

Array of modifications to apply during export to ensure HeatWave compatibility.

compatibility: ["force_innodb", "strip_definers", "strip_tablespaces", 
                "create_invisible_pks"]

bytesPerChunk

Defines chunk size for large datasets. Default is 64MB.

bytesPerChunk: "128M"  // For large datasets

compression

Compression algorithm for dump files.

compression: "zstd"  // Options: zstd (default), gzip, none

maxRate

Maximum data read throughput during dump (in bytes per second).

maxRate: "50M"  // Limit to 50 MB/s to reduce source impact

Advanced Options

dryRun

Displays what would be dumped and the results of compatibility checks, without exporting any data

dryRun: true

Always run dryRun first to identify issues before actual export.

consistent

Ensures data consistency by using FLUSH TABLES WITH READ LOCK.

consistent: true  // Default

excludeSchemas / includeSchemas

Filter which schemas to dump.

excludeSchemas: ["test", "staging"]
// Or
includeSchemas: ["production", "analytics"]

excludeTables / includeTables

Filter tables within schemas (for dumpSchemas and dumpInstance). Tables are given as qualified names; wildcard patterns are not supported.

excludeTables: ["sales.temp_data", "logs.old_entries"]

defaultCharacterSet

Specify character set for dump files.

defaultCharacterSet: "utf8mb4"  // Recommended

Performing a Dry Run

Always perform a dry run before actual export to identify compatibility issues

Dry Run Command

util.dumpInstance("", {
  mode: "dryrun",
  ocimds: true
})

What Dry Run Does

  1. Connects to source database
  2. Analyzes schemas, tables, and objects
  3. Runs HeatWave compatibility checks
  4. Lists all objects to be dumped
  5. Reports compatibility issues
  6. Does not export any data

Sample Dry Run Output

Checking for compatibility with MySQL HeatWave Service

Compatibility issues with MySQL HeatWave Service 8.0.32
1) Schema 'sales'
  Table 'legacy_orders' uses MyISAM storage engine (issue: engine)
  Table 'customers' has no primary key (issue: no_pk)
  View 'active_orders' has DEFINER='admin'@'localhost' (issue: definer)

2) Schema 'inventory'
  Table 'items' uses MyISAM storage engine (issue: engine)

Compatibility issues found. Use 'compatibility' option to apply modifications.
Suggested options: ["force_innodb", "strip_definers", "create_invisible_pks"]
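
A sensible next step is to repeat the dry run with the suggested options applied; if they cover everything, the output should report no remaining issues:

// Dry run again with the planned fixes; expect no remaining compatibility issues
util.dumpInstance("", {
  dryRun: true,
  ocimds: true,
  compatibility: ["force_innodb", "strip_definers", "create_invisible_pks"]
})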

HeatWave Compatibility Checks

Setting the ocimds option to true enables compatibility checks that list all issues found when the instance is dumped

Common Compatibility Issues

1. Non-InnoDB Storage Engines

Issue: HeatWave only supports InnoDB
Solution: Use force_innodb compatibility option

2. Missing Primary Keys

Issue: High Availability DB systems require primary keys on all tables
Solution: Use create_invisible_pks or ignore_missing_pks

3. DEFINER Clauses

Issue: Creating objects with a custom DEFINER requires privileges that are restricted in the managed service
Solution: Use strip_definers

4. Tablespace Clauses

Issue: HeatWave has tablespace restrictions
Solution: Use strip_tablespaces

5. Restricted Grants

Issue: Some privileges not supported in HeatWave
Solution: Use strip_restricted_grants

6. External Authentication

Issue: Some authentication plugins not supported
Solution: Use skip_invalid_accounts

HeatWave Compatibility Modifications

Compatibility options automatically modify exported data to resolve compatibility issues

Available Compatibility Options

force_innodb

Modifies ENGINE clause of CREATE TABLE statements to specify InnoDB for tables using incompatible storage engines

What It Does:

  • Changes ENGINE=MyISAM to ENGINE=InnoDB
  • Changes other engines (MEMORY, CSV, etc.) to InnoDB
  • Leaves InnoDB tables unchanged

Example:

-- Before
CREATE TABLE orders (...) ENGINE=MyISAM;

-- After
CREATE TABLE orders (...) ENGINE=InnoDB;

strip_definers

Removes DEFINER=account clause from views, routines, events, and triggers, changing SQL SECURITY to INVOKER

What It Does:

  • Removes DEFINER clause
  • Changes SQL SECURITY from DEFINER to INVOKER
  • Objects are created with the default definer (the user loading the schema)

Example:

-- Before
CREATE DEFINER=`admin`@`localhost` VIEW active_orders AS ...

-- After
CREATE VIEW active_orders AS ...

strip_tablespaces

Removes TABLESPACE clause from CREATE TABLE statements so tables are created in default tablespaces

What It Does:

  • Removes TABLESPACE option from CREATE TABLE
  • Comments out DATA DIRECTORY, INDEX DIRECTORY, ENCRYPTION options

Example:

-- Before
CREATE TABLE orders (...) TABLESPACE=ts1;

-- After
CREATE TABLE orders (...);

create_invisible_pks

Adds invisible primary keys to tables without primary keys, required for High Availability DB systems

What It Does:

  • Adds invisible AUTO_INCREMENT column named my_row_id
  • Sets as primary key
  • No impact on applications (column is invisible)
  • Requires MySQL 8.0.24+ on target

Example:

-- Before
CREATE TABLE logs (
  timestamp DATETIME,
  message TEXT
);

-- After
CREATE TABLE logs (
  my_row_id BIGINT UNSIGNED AUTO_INCREMENT INVISIBLE PRIMARY KEY,
  timestamp DATETIME,
  message TEXT
);

Alternative: ignore_missing_pks ignores missing primary keys, but the resulting dump cannot be loaded into a High Availability DB system.
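
To see in advance which tables this affects, you can list tables without a primary key from information_schema before dumping. A minimal sketch, run from the Shell's JavaScript mode against the source:

// List base tables that have no PRIMARY KEY constraint (system schemas excluded)
var res = session.runSql(
  "SELECT t.table_schema, t.table_name " +
  "FROM information_schema.tables t " +
  "LEFT JOIN information_schema.table_constraints c " +
  "  ON c.table_schema = t.table_schema AND c.table_name = t.table_name " +
  "  AND c.constraint_type = 'PRIMARY KEY' " +
  "WHERE t.table_type = 'BASE TABLE' AND c.constraint_name IS NULL " +
  "  AND t.table_schema NOT IN " +
  "  ('mysql','sys','performance_schema','information_schema')");
var row;
while ((row = res.fetchOne())) {
  print(row[0] + "." + row[1] + "\n");
}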

strip_restricted_grants

Removes privileges that are restricted in MySQL HeatWave Service

Restricted Privileges:

  • SUPER
  • FILE
  • RELOAD
  • BINLOG_ADMIN
  • Others specific to managed service

skip_invalid_accounts

Skips user accounts with external authentication plugins not supported in HeatWave.

Also removes:

  • Accounts without passwords (except roles)
  • Accounts using unsupported plugins

Applying Compatibility Options

util.dumpInstance("", {
  osBucketName: "migrationBucket",
  ocimds: true,
  compatibility: [
    "force_innodb",
    "strip_definers",
    "strip_restricted_grants",
    "strip_tablespaces",
    "create_invisible_pks"
  ]
})

Step-by-Step Migration Process

Step 1: Prepare Source Database

Ensure MySQL Shell is installed and source is accessible:

mysqlsh --version  # Should be 8.0.21 or later

Step 2: Run Dry Run

Identify compatibility issues:

mysqlsh admin@source_server:3306 -- util dump-instance "" \
  --dryRun=true \
  --ocimds=true

Step 3: Analyze Dry Run Output

Review compatibility issues and note suggested options:

  • Storage engine issues → force_innodb
  • Missing PKs → create_invisible_pks
  • Definer clauses → strip_definers
  • Tablespace issues → strip_tablespaces

Step 4: Export Data with Modifications

To Local Files:

mysqlsh admin@source_server:3306 -- util dump-instance /backup/migration \
  --ocimds=true \
  --threads=8 \
  --bytesPerChunk=128M \
  --compatibility=force_innodb,strip_definers,strip_tablespaces,create_invisible_pks

To Object Storage:

mysqlsh admin@source_server:3306 -- util dump-instance "" \
  --osBucketName=migrationBucket \
  --osNamespace=my-namespace \
  --ocimds=true \
  --threads=16 \
  --compression=zstd \
  --compatibility=force_innodb,strip_definers,strip_restricted_grants,strip_tablespaces,create_invisible_pks

Step 5: Import Data into HeatWave

From Object Storage (during DB system creation):

  • Select "Import data from Object Storage"
  • Provide bucket details
  • HeatWave automatically imports during provisioning

To Existing DB System:

mysqlsh admin@heatwave_endpoint:3306 -- util load-dump /backup/migration \
  --threads=16 \
  --progressFile=/tmp/load-progress.json

From Object Storage to Existing System:

mysqlsh admin@heatwave_endpoint:3306 -- util load-dump "" \
  --osBucketName=migrationBucket \
  --osNamespace=my-namespace \
  --threads=16
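
The equivalent call from the Shell's JavaScript mode, with a progress file so an interrupted load can resume (the file path is an example):

util.loadDump("", {
  osBucketName: "migrationBucket",
  osNamespace: "my-namespace",
  threads: 16,
  progressFile: "/tmp/load-progress.json"  // enables resuming an interrupted load
})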

Best Practices

1. Always Use ocimds: true

Ensures HeatWave compatibility checks run during export. Dumps without this cannot be loaded into HeatWave.

2. Run Dry Run First

Identify compatibility issues before actual export to avoid failed migrations

3. Use Appropriate Thread Count

  • Small databases (< 10GB): 2-4 threads
  • Medium databases (10-100GB): 8-12 threads
  • Large databases (> 100GB): 16-32 threads

Balance: More threads = faster but higher source load.

4. Chunk Large Tables

For very large tables, use bytesPerChunk:

bytesPerChunk: "256M"  // For tables > 10GB

5. Use Compression

Reduces storage and transfer costs:

compression: "zstd"  // Best compression ratio and speed

6. Monitor Dump Progress

Dumps display progress including:

  • Total rows to dump
  • Rows dumped so far
  • Percentage complete
  • Throughput (rows/sec, MB/sec)

7. Test with Subset First

For large migrations, test with single schema:

util.dumpSchemas(["test_schema"], ...)

Validate before full migration.

8. Plan for Downtime

  • Read-only mode during export for consistency
  • Or use consistent: true for point-in-time snapshot
  • Plan maintenance window for production migrations
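
If you take the read-only approach, the source can be frozen for the duration of the export. A minimal sketch, assuming an administrative session on the source:

// Block application writes before starting the dump
session.runSql("SET GLOBAL super_read_only = ON");

// ... run util.dumpInstance() here ...

// Restore normal operation; disabling read_only also clears super_read_only
session.runSql("SET GLOBAL read_only = OFF");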

9. Validate After Import

-- Check row counts
SELECT COUNT(*) FROM imported_table;

-- Verify structure
SHOW CREATE TABLE imported_table;

-- Test queries
SELECT * FROM imported_table LIMIT 10;
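
Row counts can also be cross-checked programmatically from a single Shell session; the endpoints and table name below are assumptions for illustration:

// Compare a row count between source and target (classic protocol sessions)
var src = mysql.getClassicSession("admin@source_server:3306");
var tgt = mysql.getClassicSession("admin@heatwave_endpoint:3306");

var query = "SELECT COUNT(*) FROM sales.orders";
var srcCount = src.runSql(query).fetchOne()[0];
var tgtCount = tgt.runSql(query).fetchOne()[0];

print("source=" + srcCount + " target=" + tgtCount +
      (srcCount == tgtCount ? " OK" : " MISMATCH") + "\n");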

10. Keep Dump Files

Retain dump files until migration validated:

  • Local backups or Object Storage
  • Useful for troubleshooting
  • Allows re-import if needed

Troubleshooting Common Issues

Issue 1: Dump Fails with Compatibility Errors

Symptoms: Export aborts with list of incompatible objects

Solution:

  1. Run dry run to see all issues
  2. Apply suggested compatibility options
  3. For manual issues, fix source schema
  4. Re-run export

Issue 2: Out of Memory During Dump

Symptoms: Process killed, "out of memory" errors

Solution:

  • Reduce thread count
  • Increase system memory
  • Use bytesPerChunk to process smaller chunks
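
In practice this means re-running the export with reduced parallelism and smaller chunks, for example:

util.dumpInstance("/backup/migration", {
  ocimds: true,
  threads: 2,           // reduced thread count to lower memory pressure
  bytesPerChunk: "32M"  // smaller chunks than the 64M default
})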

Issue 3: Load Fails with "local_infile" Error

Symptoms: Import fails, local_infile error

Solution:

-- On HeatWave DB system
SET GLOBAL local_infile = ON;

(Default is ON in HeatWave, but verify)

Issue 4: Tables Missing After Import

Symptoms: Some tables not imported

Solution:

  • Check excluded tables in dump command
  • Verify tables exported in dump metadata
  • Check for errors in load-progress.json
  • Ensure compatibility options applied correctly
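
If only a few tables are affected, they can be re-loaded selectively instead of repeating the whole import; includeTables and ignoreExistingObjects are standard util.loadDump() options, and the table names below are placeholders:

util.loadDump("/backup/migration", {
  includeTables: ["sales.orders", "sales.order_items"],
  ignoreExistingObjects: true,  // skip objects that already loaded successfully
  threads: 8
})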

Migrating data to MySQL HeatWave using MySQL Shell dump utilities provides a reliable, efficient path to the cloud. By understanding the three dump utilities, running compatibility checks, and applying appropriate modifications, you can ensure smooth migrations with minimal issues.

Key takeaways:

  • Use util.dumpInstance() for complete database migrations
  • Use util.dumpSchemas() for selective schema migrations
  • Use util.dumpTables() for specific table exports
  • Always run dry run first to identify compatibility issues
  • Apply compatibility options to auto-fix common issues
  • Test with subset before full migration
  • Monitor and validate throughout the process

Whether migrating from on-premises, other clouds, or between HeatWave regions, MySQL Shell dump utilities provide the tools needed for successful data migration to MySQL HeatWave.

