Hi,

I'm trying to transfer a local Oracle table with 22 million rows to an Azure SQL database (a plain transfer, no transformations). On the "table output" step I can easily increase "Number of copies to start" (e.g. to 200) and it works fine. However, the input side now can't keep up with the output, i.e. it has become the bottleneck.

I also tried running the table input step with multiple copies (and I made sure data movement is set to "Round-Robin"), but I noticed that the data gets multiplied nonetheless, so that doesn't seem to be an option either.
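To illustrate what I mean (the table and column names below are placeholders I made up), each copy of the table input appears to execute the same full query, so every row comes through once per copy:

-- Hypothetical query in the Table Input step; with e.g. 3 copies it runs 3 times,
-- so 22M rows turn into 66M rows downstream.
SELECT id, txn_date, amount
FROM transactions;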

I then came across "table partitioning". Here's what I've tried (and it seems to "somehow" work): I set up a partition schema in the "View" tab and arbitrarily gave it 5 partitions. Then I attached a select step to the input step and applied the schema via right click -> partitions -> Remainder of division -> select the new partition schema -> select a field name. Finally, I attached a table output step and started it with multiple copies.
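As far as I understand it, "Remainder of division" just routes each row based on the remainder of the chosen field divided by the number of partitions, roughly like this (customer_id is only an example field):

-- Per-row routing as I understand it: remainder of the partitioning field
-- divided by the number of partitions (5 in my case).
SELECT id, customer_id, MOD(customer_id, 5) AS target_partition
FROM transactions;
-- Rows with the same remainder (0..4) should land in the same partition / step copy.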

It seems to work on smaller tables (that's all I've tested so far), but my approach feels a bit like "hit and miss" (or rather "voodoo"). Before I try it on the large table, could someone give me some feedback on whether I'm on the right track here? And do I need to include other steps? For example, I could partition the transactions table by year; would I need to sort the table by year first before partitioning?
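For the year idea, what I had in mind is something like one input query per year (txn_date is a made-up column name); I'm just not sure whether a sort step is needed anywhere in between:

-- Hypothetical year-based slice; one such query per year / per input copy.
SELECT id, txn_date, amount
FROM transactions
WHERE txn_date >= DATE '2022-01-01'
  AND txn_date <  DATE '2023-01-01';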

Thanks in advance