Is there a way to create a Kettle job/transformation that queries a Hive table, transforms the data, and then feeds the result directly into a Spark job?

As far as I can tell, the only way to do this is to run the transformation, write the output to a Hive table, and then have the Spark job re-read that data with a HiveQL query through a HiveContext (which costs a lot of extra disk I/O).
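For reference, the fallback I'm describing would look roughly like this on the Spark side (a sketch only; `etl_output` is a placeholder for whatever table the Kettle transformation writes, and this assumes a Spark 1.x-style HiveContext since that's what I'd be using):

```python
from pyspark import SparkContext
from pyspark.sql import HiveContext

# Spark job that re-reads the data Kettle just finished writing,
# paying the extra round trip through HDFS/Hive storage.
sc = SparkContext(appName="ReadKettleOutput")
sqlContext = HiveContext(sc)

# `etl_output` is a hypothetical table name for the transformation's output
df = sqlContext.sql("SELECT * FROM etl_output")

# ...continue the Spark-side processing on `df` from here
df.show()
```

What I'm hoping to avoid is exactly that intermediate write/read cycle, i.e. handing the transformed rows to Spark in memory instead of round-tripping them through a Hive table.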