Hitachi Vantara Pentaho Community Forums

Thread: Running 'job' under Spark Adaptive Execution Layer

  1. #1

    Default Running 'job' under Spark Adaptive Execution Layer

    Hello,

    I've started the pdi-daemon successfully on my machine under the HDP Sandbox. Now I want to run a 'Job' under Spark, but in Spoon I can't find a way to do that; the Spark engine seems to be available only for 'Transformations'. Is that true?

    Is there a way to run a 'Job' under Spark AEL? I read somewhere that I can 'submit' a job using 'spark-submit'. Can someone point me to some documentation?

    Thanks.

  2. #2
    Join Date
    Aug 2008
    Posts
    563

    Default

    Did you create a Run Configuration for Spark? See my blog post for more details.
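    For reference, here's roughly what the setup looks like on my install; the directory, script name and exact Spoon steps may differ by version (and your pdi-daemon setup under the HDP Sandbox may look different), so treat this as an illustrative sketch only:

        # Start the AEL daemon that the Spark run configuration will talk to
        # (on my install it lives under the PDI client directory)
        cd data-integration/adaptive-execution
        ./daemon.sh

        # Then, in Spoon: create a new Run Configuration, select the Spark engine,
        # point it at the host/port the AEL daemon is listening on,
        # and run the *transformation* (not the job) with that configuration selected.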
    Best regards,
    Diethard
    ===============
    Visit my Pentaho blog which offers some tutorials mainly on Kettle, Report Designer and Mondrian
    ===============

  3. #3
    Join Date
    Nov 2017
    Posts
    1

    Default

    Hi achitre. I'm not sure what you are trying to accomplish. It doesn't really make much sense to me to run a "Job" under Spark AEL, since Spark is used as the processing engine and all the "processing" work happens inside a transformation. I recommend checking out this answer in the PDI FAQ: What's the difference between transformations and jobs?

    If you want to use Spark Submit instead, keep in mind that it differs from using the pdi-daemon and Spark as the transformation engine; it's more like running a native "spark-submit", as you may know. I find this a good read on the subject: Using Spark With PDI
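    For anyone landing here later: since the Spark Submit job entry essentially hands off to a native spark-submit call against your own Spark application, the command below is the kind of thing that ends up running. The class name, jar path, and resource settings here are placeholders for illustration only; adjust them for your cluster:

        spark-submit \
          --master yarn \
          --deploy-mode cluster \
          --class com.example.MySparkApp \
          --executor-memory 2G \
          --num-executors 4 \
          /path/to/my-spark-app.jar /input/path /output/path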
