I have had some issues with several steps that use Commit Size (Table Output, Synchronize After Merge, ...) when I change the number of copies to start (which helps performance a lot for me). We use Merge Rows (diff) a lot to get tables to a known good state.

My use case:
On the first run, many rows flow through the Synchronize After Merge step at a high rate. On subsequent runs, only a small number of rows flow through, sometimes at a very low rate (not counting the identical rows). If I have four copies of the step running, a row flagged for insert will flow to one copy while another row, say an update, flows to another copy. The insert sits in an open transaction until the Commit Size is reached, and in MySQL that holds locks on the table. The copy with the update waits for the insert copy's lock to clear, MySQL's lock wait timeout eventually fires on the update, and the transform fails.
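To make the failure mode concrete, here is a minimal JDBC sketch of the kind of conflict I mean. The table name, connection details, and the exact locking conditions are just for illustration; depending on indexes and isolation level the conflict may be at the row, gap, or table level, but the effect is the same: the uncommitted insert holds a lock and the concurrent update times out.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class LockConflictSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // illustrative connection only

        // Step copy 1: writes an INSERT but holds it, waiting to reach the Commit Size.
        Connection copy1 = DriverManager.getConnection(url, "user", "pass");
        copy1.setAutoCommit(false);
        try (Statement s = copy1.createStatement()) {
            s.executeUpdate("INSERT INTO target_table (id, val) VALUES (42, 'new')");
            // No commit yet, so the locks taken by the insert stay held.
        }

        // Step copy 2: tries to apply a conflicting UPDATE and blocks on copy 1's lock.
        Connection copy2 = DriverManager.getConnection(url, "user", "pass");
        copy2.setAutoCommit(false);
        try (Statement s = copy2.createStatement()) {
            // Blocks until innodb_lock_wait_timeout (50 s by default), then fails with
            // "Lock wait timeout exceeded; try restarting transaction".
            s.executeUpdate("UPDATE target_table SET val = 'changed' WHERE id = 42");
        } finally {
            copy2.rollback();
            copy1.rollback();
        }
    }
}
```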

I could bump up the timeouts in the MySQL database, but I think a better solution would be to add a parameter ("Timeout to Commit") to steps that have a Commit Size. For example, you could set the Commit Size to 1000, which helps performance when lots of rows are flowing through in a short amount of time. But if only a few rows are flowing through and Timeout to Commit is set to, say, 3 seconds, the step would go ahead and commit whatever rows are in the transaction, even if it is only one.
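To sketch what I mean by "Timeout to Commit": this is just a rough sketch of the idea, not the actual Table Output / Synchronize After Merge code, and names like commitSize, commitTimeoutMs and rowWritten() are made up for illustration.

```java
import java.sql.Connection;
import java.sql.SQLException;

public class TimedCommitSketch {
    private final Connection connection;
    private final int commitSize;        // existing "Commit size" setting, e.g. 1000
    private final long commitTimeoutMs;  // proposed "Timeout to Commit", e.g. 3000 ms

    private int pendingRows = 0;
    private long lastCommit = System.currentTimeMillis();

    public TimedCommitSketch(Connection connection, int commitSize, long commitTimeoutMs) {
        this.connection = connection;
        this.commitSize = commitSize;
        this.commitTimeoutMs = commitTimeoutMs;
    }

    /** Called after each row is written into the open transaction. */
    public void rowWritten() throws SQLException {
        pendingRows++;
        commitIfNeeded();
    }

    /** Could also be called periodically while the step is waiting for input,
     *  so a lone pending row does not sit there holding locks. */
    public void commitIfNeeded() throws SQLException {
        boolean sizeReached = pendingRows >= commitSize;
        boolean timedOut = pendingRows > 0
                && System.currentTimeMillis() - lastCommit >= commitTimeoutMs;
        if (sizeReached || timedOut) {
            connection.commit();   // releases the locks held by the transaction
            pendingRows = 0;
            lastCommit = System.currentTimeMillis();
        }
    }
}
```

With Commit Size 1000 and a 3 second timeout, a busy first run still commits in big batches, while a quiet incremental run commits any straggler rows within a few seconds so the locks get released quickly.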

My current workaround is to run a single copy of the step, which in my environment slows things down when lots of data flows through.

I apologize in advance if this has been discussed elsewhere.