Hitachi Vantara Pentaho Community Forums

Thread: heap space

  1. #1
    Join Date
    Jul 2012
    Posts
    200

    Default heap space

    Hi,

    I have an issue with the number of Value Mapper steps: I am using 37 Value Mappers to decode the values from 37 columns, and so on...

    Is this a good approach, or how else can I solve this without JavaScript or the Value Mapper?

    When I run it with the Value Mappers, I get this error:

    Failed to execute runnable (java.lang.OutOfMemoryError: Java heap space)


    This happens even after increasing the size in Spoon.bat:

    if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xmx1024m" "-XX:MaxPermSize=512m"

    Can you please suggest what we can do about this issue?

  2. #2
    Join Date
    Dec 2009
    Posts
    609

    Default

    Hi,

    if the Value Mapper gets too large, you might try this approach:
    write the original value and the mapped value into a database table or a file (Excel or CSV),
    then read this additional source and perform a Stream Lookup, as in the sketch below...
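
    For illustration, the decode source could be a simple two-column file read by a Text file input or CSV input step (the column names and values here are made up):

    source_code;decoded_value
    A01;Active
    A02;Inactive
    A03;Pending

    In the Stream Lookup step you would then match the column to decode against source_code and retrieve decoded_value.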

    Maybe this consumes less memory...

    The other idea might be: increase Xmx even more... this will usually only work on a 64-bit machine with a 64-bit JRE and more RAM.
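
    For example, on a 64-bit JRE you could raise the limit in Spoon.bat like this (the 4096m value is only an illustration):

    if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xmx4096m" "-XX:MaxPermSize=512m"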


    Cheers,

    Tom

  3. #3
    Join Date
    Nov 2008
    Posts
    271

    Default

    Hi,

    another possible solution: try to distribute your Value Mappers between two transformations and pass the stream along via the result (Copy rows to result / Get rows from result), as sketched below. This solution can be more time-consuming, but it should require less heap space.
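
    For illustration, the job could be laid out like this (the 18/19 split and the names are just an example):

    Job: decode_job
      Transformation 1: input step --> Value Mappers 1-18 --> Copy rows to result
      Transformation 2: Get rows from result --> Value Mappers 19-37 --> output step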

    HTH
    Andrea Torre
    twitter: @andtorg

    join the community on ##pentaho - a freenode irc channel

  4. #4
    Join Date
    Jul 2012
    Posts
    200

    Default

    Actually, I have to decode more than 1000 values per field with the Value Mapper, for 37 fields or more,
    so for each field I have created one Value Mapper with 1000 mappings in it.

    Is this a good approach? Can you please suggest a good solution that is not too time-consuming?


    Quote Originally Posted by Ato View Post
    Hi,

    another possible solution: try to distribute your Value Mappers between two transformations and pass the stream along via the result (Copy rows to result / Get rows from result). This solution can be more time-consuming, but it should require less heap space.

    HTH

  5. #5
    Join Date
    Jul 2012
    Posts
    200

    Default

    Hi Tom,

    Thanks for your reply. My system is 32-bit.

    If I try to increase the settings in

    if "%PENTAHO_DI_JAVA_OPTIONS%"=="" set PENTAHO_DI_JAVA_OPTIONS="-Xmx1024m" "-XX:MaxPermSize=512m"

    from Xmx1024m to Xmx1500m and XX:MaxPermSize to 1024m, I cannot increase it: I get an error saying the size cannot be increased or that the heap size is too large.



  6. #6
    Join Date
    Nov 2008
    Posts
    271

    Default

    Quote Originally Posted by yvkumar View Post
    Actually, I have to decode more than 1000 values per field with the Value Mapper, for 37 fields or more,
    so for each field I have created one Value Mapper with 1000 mappings in it.

    Is this a good approach? Can you please suggest a good solution that is not too time-consuming?
    Are you using the same list of 1000-and-counting values for all 37 Value Mapper steps?
    Andrea Torre
    twitter: @andtorg

    join the community on ##pentaho - a freenode irc channel

  7. #7
    Join Date
    Jul 2012
    Posts
    200

    Default

    Thanks for your quick reply.

    Yes, I am using the same list for all of them.

    For the 37 fields I have created 37 mappers.
    Last edited by yvkumar; 07-27-2012 at 12:58 PM.

  8. #8
    Join Date
    Dec 2009
    Posts
    609

    Default

    Hi,

    that is a pretty large number of mapping values...
    I would recommend using a "mapping table" and switching to Database Lookup / Database Join steps, for example along the lines of the sketch below...
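
    For illustration, such a mapping table could look like this (the table and column names are made up):

    DECODE_MAP
    source_code | decoded_value
    ------------+--------------
    A01         | Active
    A02         | Inactive

    Each column to decode would then be resolved with a Database Lookup step keyed on source_code.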

    Cheers,

    Tom

  9. #9
    Join Date
    Nov 2008
    Posts
    271

    Default

    What about a single JavaScript step using an object containing the 1000 properties, i.e. the key/value pairs? A kind of associative array, in the end. Something like the sketch below.
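
    A minimal sketch of what the code inside a Modified Java Script Value step could look like (the input field code_1, the output field decoded_1, and the mapping entries are made up; the input fields are available as variables in the step, and decoded_1 would be declared as a new field in the step's output grid):

    var decodeMap = {
        "A01": "Active",
        "A02": "Inactive",
        "A03": "Pending"
    };

    var key = String(code_1);  // make sure the key is a JavaScript string
    // fall back to the original value if no mapping exists
    var decoded_1 = decodeMap[key] != null ? decodeMap[key] : code_1;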

    I attached a sample. Hope it is useful.


    PS: pdi 4.3.0-GA
    Attached Files
    Andrea Torre
    twitter: @andtorg

    join the community on ##pentaho - a freenode irc channel

  10. #10
    Join Date
    Jul 2012
    Posts
    200

    Default

    Hi,

    Thanks for giving the solution...

    Sorry, my question might have been unclear.

    In the attachment below I am including the source file, target file, and decode file.
    As discussed, the decode file values are the same for all 37 columns; the file I am attaching here is a sample, and I need output like this:
    Book1 --> contains the source fields
    Book2 --> contains the target field values
    Book3 --> contains the value decodes (for testing I took only 4 rows, but it actually contains 10000+ rows)

    I mapped this with Value Mappers, and I also tried the solution you gave me. I have to decode 10000+ rows for more than 30 columns (in this sample file I gave only 4 rows). When I added 37 Value Mappers with 10000+ rows each, the .ktr file size grew to more than 40 MB, and the file will not save at all on my 32-bit OS, even after increasing the Xmx size.
    I also think JavaScript performance will degrade if I add 10000+ rows.

    Following Tom's suggestion, I loaded the decode values into one table and looked them up from the file or database: for one column I get the correct output, but for a second column I cannot get the lookup to work.

    Can you please check these files?


    Anyway, thanks for giving me so many possibilities for how to do this in PDI...
    Attached Files
    Last edited by yvkumar; 07-28-2012 at 03:34 AM.
