We have a relatively simple data transport process that we run twice daily: we use Kettle to grab some data from Sybase and push it over to a MySQL database. I put this in place myself about a year and a half ago, and it's been trouble-free ever since.

We recently replaced the MySQL database with a VCS-based "high availability" cluster solution, still running MySQL. To maximize data safety, though, we need to configure MySQL to flush the binlog to disk (an external disk subsystem) frequently, to minimize loss in case of failover. We're still running the same data transport with Kettle, but it needs to write a few million rows, and it does this with a single row per INSERT statement. This *kills* performance, because the binlog has to be flushed for every statement (or every 2 or 3; we're still tuning for the fastest setting). Since the binlog flush happens at the statement level, I'm wondering whether there's a way to get Kettle to use "extended inserts", where multiple rows are handled by a single INSERT statement. Does Kettle support this?
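To be clear about what I mean by "extended inserts": instead of one statement per row, multiple rows share one INSERT, so the binlog is flushed once per batch instead of once per row. A minimal sketch of the idea in Python (this is just an illustration of the SQL shape, not Kettle code; the helper name is made up):

```python
def build_extended_insert(table, columns, rows):
    """Build one multi-row INSERT (a MySQL "extended insert") from a list of row tuples."""
    cols = ", ".join(columns)
    # Each row becomes one parenthesized value group; all groups share a single statement.
    values = ", ".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({cols}) VALUES {values};"

# Two rows, one statement -> one binlog flush instead of two.
sql = build_extended_insert("mytable", ["id", "name"], [(1, "a"), (2, "b")])
print(sql)
```

(In real code you'd use parameterized queries rather than `repr()`, of course; this is only to show the single-statement, multi-row shape I'm after.)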

Apologies if this has already been answered elsewhere or if I missed something simple in the docs.