Hitachi Vantara Pentaho Community Forums

Thread: How many CPU cores does kettle use for execution?

  1. #1

    How many CPU cores does kettle use for execution?

    Hi Everyone,

    Currently in our project we are using Kettle 5.2.4.0 community version, installed on a Windows server. When running a few jobs (all in serial only) involving XML and XSL components, processing roughly 300K records, the CPU and memory utilization of the server reaches close to 100%. We are not sure why it's consuming such a huge amount of CPU and memory.

    Below is the hardware specification of our server:

    Server : Windows 2012
    CPU Cores : 16
    Server type: Standalone (no other application is installed on this server except Kettle)


    Can somebody tell us why, even with 16 CPU cores, the utilization is so high, or how we can find out whether the current ETL uses all the available CPU cores?

    Thanks in advance.

  2. #2


    Hi Prad90,

    PDI jobs are typically run within a single thread, so any job entries will use that same thread. Because of this, an individual job will not see any benefit from having multiple cores. If you are running multiple jobs concurrently, having more cores will help, as each of those concurrent jobs can make use of its own core.

    Where it gets a bit more involved is when you run a transformation, either independently or as part of a job. When this happens, each step (or "step copy" for complex transformations) runs in its own thread (e.g. 5 steps means 5 threads). Transformations will benefit from having more cores available, as the processing of the steps can be spread across the available CPU cores. When a transformation is called from a job, the job's thread will wait for the transformation to complete before moving on to the next job entry.
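    To make the threading model concrete, here is a minimal sketch against the Kettle Java API (the file name "example.ktr" is just a placeholder, and this is my own illustration of the 5.x org.pentaho.di classes, not code taken from your jobs). execute() starts one worker thread per step copy, while the calling thread, which would be the job's thread in your scenario, simply blocks until the transformation finishes:

        import org.pentaho.di.core.KettleEnvironment;
        import org.pentaho.di.trans.Trans;
        import org.pentaho.di.trans.TransMeta;

        public class TransformationThreadingSketch {
            public static void main(String[] args) throws Exception {
                // Initialise the Kettle environment (plugin registry, etc.)
                KettleEnvironment.init();

                // "example.ktr" is a placeholder transformation file
                TransMeta transMeta = new TransMeta("example.ktr");
                Trans trans = new Trans(transMeta);

                // execute() launches one thread per step copy; a transformation
                // with 5 steps (1 copy each) therefore runs on 5 worker threads.
                trans.execute(null);

                // The calling thread (the job entry's thread when run from a job)
                // waits here until all step threads have finished.
                trans.waitUntilFinished();

                System.out.println("Errors: " + trans.getErrors());
            }
        }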

    I can't speak to the performance of the XML/XSL job entries, but I would recommend reviewing the power settings within Windows AND the BIOS/UEFI settings, as the CPU may lower its clock speed for power/cooling reasons, which can impact the performance of your workload. To check this, you might run CPU-Z on your server to see what the real-time clock speed is, and see whether it closely matches the clock speed advertised by the CPU vendor.

    Hope that helps.

  3. #3


    Thanks for the response, Matthew. I will validate the CPU clock speed and let you know the findings here. Meanwhile, to answer on parallel job runs: most of our jobs are dependent on each other, so we are ruling that option out. With that being said, I am assuming that having that many cores is of no significance for our requirement. Let me know if I am wrong here.

    Also, I just wanted to get your expertise on whether calling more transformations (3 on average) in a single job will increase CPU usage, as this is the design scenario in most of our jobs.

  4. #4


    If you're only running one job at a time, that's correct: additional CPU cores will have no impact on the performance of the job.

    It's hard to give good guidance on how transformations perform on a system when we're talking in general terms. At the job level, I'd imagine you'd see a single CPU core peak at 100% usage while the other cores remain idle. When a transformation is running, there will likely be more activity across a number of cores, but whether it's a lot of CPU usage or a little really depends on the specific operations being done within the transformation.

    From what experience I've had with PDI and XML documents, your increased memory usage is likely due to the underlying technologies that are used to parse XML documents. In the past, I've seen instances where a 100MB XML document takes much more RAM than just 100MB, as Java will convert the text of the XML document into an XML object in order to perform actions on the XML data. This behavior isn't PDI- or Java-specific, so if this is the issue you're running into, there isn't necessarily a simple solution for this.
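    As a rough illustration of that expansion (plain JAXP DOM parsing with a placeholder file name, not anything PDI-specific), parsing this way builds the entire object tree on the heap before you can read a single element:

        import java.io.File;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class DomMemorySketch {
            public static void main(String[] args) throws Exception {
                DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
                // parse() reads the whole file and creates a node object for every
                // element, attribute and text chunk, so a 100MB file can easily
                // occupy several times that in heap before any processing starts.
                Document doc = builder.parse(new File("big-input.xml"));
                System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
            }
        }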

    Off-hand, you'd have to look into using additional step copies to spread XML/XSL processing across your available CPU cores, or spread the load out across multiple servers (Carte clusters or Hadoop). You could also look into the StAX PDI step, which avoids building the full XML document in memory, but it requires a completely different way of handling the resulting parsed data (a rough streaming sketch follows below).
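    For contrast, here's a minimal StAX-style read of the same placeholder file; it pulls one event at a time, so memory stays roughly flat regardless of document size. This is the general streaming pattern, not the internals of the PDI step:

        import java.io.FileInputStream;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class StaxStreamingSketch {
            public static void main(String[] args) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                XMLStreamReader reader = factory.createXMLStreamReader(new FileInputStream("big-input.xml"));
                long elements = 0;
                // Only the current event is held in memory; the document is never
                // assembled into a full object tree.
                while (reader.hasNext()) {
                    if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                        elements++;
                    }
                }
                reader.close();
                System.out.println("Element count: " + elements);
            }
        }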
