Hitachi Vantara Pentaho Community Forums

Thread: Unexpected error occurred while launching entry - Java heap space

  1. #1

Unexpected error occurred while launching entry - Java heap space

    Dear All,

I have transformations running in separate JVMs: the master job runs those transformations on different slave servers, each in its own JVM.
    After each slave transformation completes successfully, the master cannot receive the status of the slave transformation run.

    Why am I getting a heap problem when the master job is only launching the slaves and not doing anything memory-intensive? (Could it be logging or something like that?)

    ERROR 16-11 22:33:35,327 - AFJob_s_in - org.pentaho.di.core.exception.KettleException:
    Unexpected error occurred while launching entry [Aggregation00_s.0]
    Java heap space

    at org.pentaho.di.job.Job$
    Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(
    at java.lang.AbstractStringBuilder.expandCapacity(
    at java.lang.AbstractStringBuilder.append(
    at java.lang.StringBuffer.append(
    at org.pentaho.di.www.SlaveServerTransStatus.<init>(
    at org.pentaho.di.
    at org.pentaho.di.cluster.SlaveServer.getTransStatus(
    at org.pentaho.di.job.entries.trans.JobEntryTrans.execute(
    at org.pentaho.di.job.Job.execute(
    at org.pentaho.di.job.Job.access$000(
    at org.pentaho.di.job.Job$
    ... 1 more

    Attached image: master_slaves.jpg (24.5 KB)

  2. #2

  3. #3


    Hi Matt,

    Thank you for your time and help.
    Are those variables also applicable to PDI 3.2? If yes, I assume the defaults are not set there ...

    • KETTLE_MAX_LOGGING_REGISTRY_SIZE : Make sure to consider this parameter in fast-paced environments where a job never ends and the registry is consequently never cleaned automatically. The default of 1000 should be enough to provide accurate logging. If you have complex jobs you might want to increase this number.
    • KETTLE_MAX_JOB_ENTRIES_LOGGED : For never-ending jobs this makes a big difference. Please note that you can enable interval logging on the job entry log table. Make sure you keep enough entries in memory until the next time you write them out to the database table. The default is also set to a reasonably low 1000 entries.
    • KETTLE_MAX_JOB_TRACKER_SIZE: Again, this parameter makes a difference in never-ending jobs, as it allows another possible memory leak to be cleaned up automatically beyond a certain size. The job tracker keeps track of the results of job entries. In a never-ending job you rarely need more than the default, again 1000.
    • KETTLE_MAX_LOG_SIZE_IN_LINES: If you accidentally execute a transformation in, say, "Row level" logging mode, an enormous amount of very detailed logging will be produced. In the past, before version 4, this was a common cause of running out of memory and crashing your whole cluster. By setting this value to a fair maximum (the default is 5000 in 4.2) you will prevent this situation. You can also specify this parameter in your Carte XML configuration file with the <max_log_lines> parameter.
    • KETTLE_MAX_LOG_TIMEOUT_IN_MINUTES: If you prefer to let the records time out after a while, that is possible too. You can specify a maximum age with this parameter. The default maximum age is 1440 minutes (one day). You can also specify this parameter in your Carte XML configuration file with the <max_log_timeout_minutes> parameter.
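
    For reference, a sketch of how these limits could be set as environment variables before launching Carte or Kitchen (they can also go in kettle.properties). This assumes a Linux shell; the values shown are simply the defaults quoted above, not tuning recommendations:

    ```shell
    # Sketch: export the Kettle logging/tracker limits discussed above.
    # Values are the documented defaults; adjust for your workload.
    export KETTLE_MAX_LOGGING_REGISTRY_SIZE=1000
    export KETTLE_MAX_JOB_ENTRIES_LOGGED=1000
    export KETTLE_MAX_JOB_TRACKER_SIZE=1000
    export KETTLE_MAX_LOG_SIZE_IN_LINES=5000
    export KETTLE_MAX_LOG_TIMEOUT_IN_MINUTES=1440
    ```

    For a never-ending master job that polls many slaves, KETTLE_MAX_LOG_SIZE_IN_LINES and KETTLE_MAX_JOB_TRACKER_SIZE are the ones most likely to cap the kind of unbounded StringBuffer growth seen in the stack trace above.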


  4. #4


    No, they are applicable to 4.2 or higher.
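
    (Editor's note, hedged: since those limits do not exist in 3.2, the usual fallback there is simply to give the master JVM more heap. Newer PDI launch scripts honor the PENTAHO_DI_JAVA_OPTIONS environment variable; on 3.2 you may instead have to edit the hard-coded -Xmx flag in carte.sh / kitchen.sh directly. The heap sizes below are illustrative, not recommendations.)

    ```shell
    # Sketch: raise the JVM heap for the PDI master process.
    # Honored by the launch scripts of later PDI versions; on 3.2,
    # edit the -Xmx value inside carte.sh / kitchen.sh instead.
    export PENTAHO_DI_JAVA_OPTIONS="-Xms512m -Xmx2048m"
    ```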
