Hitachi Vantara Pentaho Community Forums

Thread: A column not seen in spoon

  1. #1

    Default A column not seen in spoon

    Hello,

    As a new user of Pentaho, I'm trying to get simple things done in order to evaluate the capabilities of Pentaho for our purposes (namely getting statistics out of Clearquest and Requisite Pro databases for report and dashboard generation).

    Starting from the example from document "Getting started with Pentaho Data Integration", I tried to generate a database containing only:

    • the date at which the input file was parsed;
    • the number of lines with a "STATE" equal to "In Process";

    Seemed simple enough... Well, it was not that simple for me.

    I ended up with something like Transformation 1.ktr, and it doesn't work, giving me the following error message:
    2011/09/27 15:00:53 - Spoon - Logging goes to file:///D:/DOCUME~1/ebp5425/LOCALS~1/Temp/spoon_bb601042-e908-11e0-aff4-3d6469023f98.log
    2011/09/27 15:00:56 - class org.pentaho.agilebi.platform.JettyServer - WebServer.Log.CreateListener localhost:10000
    2011/09/27 15:00:57 - Spoon - Asking for repository
    2011/09/27 15:00:57 - RepositoriesMeta - Reading repositories XML file: D:\documents and Settings\ebp5425\.kettle\repositories.xml
    2011/09/27 15:01:13 - Spoon - Transformation opened.
    2011/09/27 15:01:13 - Spoon - Launching transformation [Transformation 1]...
    2011/09/27 15:01:13 - Spoon - Started the transformation execution.
    2011/09/27 15:01:13 - Transformation 1 - Dispatching started for transformation [Transformation 1]
    2011/09/27 15:01:13 - Transformation metadata - Natural sort of steps executed in 0 ms (13 time previous steps calculated)
    2011/09/27 15:01:13 - Table output 2.0 - Connected to database [Data_output_SQLite] (commit=1000)
    2011/09/27 15:01:13 - Table output.0 - Connected to database [Data_output_SQLite] (commit=1000)
    2011/09/27 15:01:13 - Get System Info.0 - Finished processing (I=0, O=0, R=1, W=1, U=0, E=0)
    2011/09/27 15:01:13 - Table output 2.0 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : Unexpected error
    2011/09/27 15:01:13 - Table output 2.0 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : org.pentaho.di.core.exception.KettleStepException:
    2011/09/27 15:01:13 - Table output 2.0 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : Field [In_Process_state_c] is required and couldn't be found!
    2011/09/27 15:01:13 - Table output 2.0 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) :
    2011/09/27 15:01:13 - Table output 2.0 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : at org.pentaho.di.trans.steps.tableoutput.TableOutput.processRow(TableOutput.java:95)
    2011/09/27 15:01:13 - Table output 2.0 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : at org.pentaho.di.trans.step.RunThread.run(RunThread.java:40)
    2011/09/27 15:01:13 - Table output 2.0 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : at java.lang.Thread.run(Unknown Source)
    2011/09/27 15:01:13 - Table output 2.0 - Finished processing (I=0, O=0, R=1, W=0, U=0, E=1)
    2011/09/27 15:01:13 - Transformation 1 - Transformation 1
    2011/09/27 15:01:13 - Transformation 1 - Transformation 1
    2011/09/27 15:01:14 - Spoon - The transformation has finished!!
    2011/09/27 15:01:14 - Transformation 1 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : Errors detected!
    2011/09/27 15:01:14 - Transformation 1 - ERROR (version 4.2.0-stable, build 15748 from 2011-09-08 13.11.42 by buildguy) : Errors detected!

    2011/09/27 15:05:42 - Spoon - Transformation opened.
    2011/09/27 15:05:42 - Spoon - Launching transformation [Recup CQ]...
    2011/09/27 15:05:42 - Spoon - Started the transformation execution.
    2011/09/27 15:05:42 - Transformation metadata - Natural sort of steps executed in 0 ms (8 time previous steps calculated)
    2011/09/27 15:05:42 - Spoon - The transformation has finished!!
    The "In_Process_state_c" column comes from the "Select values" step (it is a rename of "Lines read" coming from "Output steps metrics").

    Even if this is not the most efficient way to perform such a transformation with Spoon, I do not understand why it does not work.

    I use Spoon 4.2.0 stable (it was originally done with 4.2.0 rc1) on Windows XP SP3 with JRE 1.6.0_27.

    Any help would be appreciated.

    Have a nice day.

  2. #2
    Join Date
    Nov 2008
    Posts
    777

    Default

    In Spoon, press the "Verify this transformation" button. You have a few issues that need resolving. One of the main ones is that you can't just merge two streams together willy-nilly: the rows of each stream have to be identical in structure. It's easy to fix, though. At your Table output 2 step, for instance, just put the Get System Info step "inline" with the Select values step. That way, the collection_date_c field will be added to the row stream properly.
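    To make the layout rule concrete, here is a small plain-Python sketch (not PDI code; the field names are taken from this thread) of why a step that receives two differently-shaped streams rejects them, while running Get System Info "inline" appends its fields to the existing rows instead:

```python
# Each "stream" is modeled as carrying rows with an ordered field layout.
# A step fed by several hops requires every incoming layout to be identical.

def layouts_match(stream_a, stream_b):
    """Return True if both streams carry rows with identical field layouts."""
    return stream_a["layout"] == stream_b["layout"]

select_values = {"layout": [("In_Process_state_c", "Integer")]}
get_system_info = {"layout": [("Collection_date_c", "Date")]}

# Hooking both hops into one Table output step fails the layout check --
# the "Field [...] is required and couldn't be found!" situation above.
print(layouts_match(select_values, get_system_info))  # False

# Running Get System Info "inline" (after Select values) appends its fields
# to the existing rows, so one stream with both fields reaches the output:
inline = {"layout": select_values["layout"] + get_system_info["layout"]}
print(inline["layout"])
```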

    [Attached image: verify.jpg]
    pdi-ce-4.4.0-stable
    Java 1.7 (64 bit)
    MySQL 5.6 (64 bit)
    Windows 7 (64 bit)

  3. #3

    Default

    Thanks for your reply, but I do not understand what exactly you mean by "just put the Get System Info step "inline" with the Select values step"...

    I see the errors in "Verify this transformation"; I think you refer to this particular line for "Table output 2":
    The name of field number 1 is not the same as in the first row received: you're mixing rows with different layout. Field [In_Process_state_c Integer(10)] does not have the same name as field [Collection_date_c Date].
    But this seems to imply that I'm trying to insert values from "In_Process_state_c" column in column "Collection_date_c", which is not what I want.
    (and I understand that types of those columns are not identical, so they cannot be mapped to each other)

    To explain a little more, there are 3 columns in the final destination database:

    • "id" auto-generated by SQLite;
    • "In_Process_state" coming from step "Select values" (as "In_Process_state_c");
    • "Collection_date" coming from step "Get system Info" (as "Collection_date_c");

    In Table output 2 step, I mapped the columns as this:
    [Attached image: column_mapping.jpg]

    (BTW, I put different names on "source" and "target" columns in order to be sure which was which in the resulting mapping)

    I thought it was enough to tell Spoon where to get values, but it seems I did not understand a lot of things, yet.
    Last edited by Alain VALLETON; 09-27-2011 at 10:48 AM. Reason: further clarification of mapping

  4. #4
    Join Date
    Nov 2008
    Posts
    777

    Default

    By "inline" I mean this. The fields defined in the Get System Info step will be appended to each row output by the Select Values step.

    [Attached image: inline.jpg]

    Note that in Spoon you can right-click on a step and select "Show output fields" or "Show input fields". This tells you exactly what is being passed from step to step.

    Also, I never use Mapping so I won't be much help with that but if all you want to do is rename your fields, you can easily do that in the Select Values step.
    Last edited by darrell.nelson; 09-27-2011 at 04:21 PM.

  5. #5

    Default

    Thanks a lot!

    Now it works and I can proceed with other tests.

    And also thanks for the tips and hints.

    Have a nice day.

  6. #6

    Default

    Huh, it's me again...

    Of course when I tried the next step of my statistics table creation, I stumbled upon the same problem. Now, I know what the problem is, but I do not know how to fix it.

    In fact, I want to create a table of statistics (for example, number of rows in a certain status) from an existing data set (here, the Pentaho example data set, but really a Clearquest or Requisite Pro database).

    For that purpose, I want to filter some fields in the original data set in order to be able to count the number of rows returned by these filters. Then, I want to insert the count returned by each filter in a column dedicated to this filter. I also want to add a date to each row.

    When there is only one filter, it works as Darrell Nelson kindly explained in previous posts.

    When I try to add a second filter and count the rows for this filter, I do not know how to "connect the boxes", since the counts for the different filters are essentially mutually exclusive. Connecting the boxes "in parallel" to the output does not work, of course (same as the original problem), but I cannot find a way to tell Pentaho that these counts should be put in different columns of the same row:
    [Attached image: where_hop.jpg]

    Here are the columns in the table:
    [Attached image: columns.png]

    And here is the (non working) transformation:
    Transformation 1.ktr

    Maybe this is not the right way to create this kind of table...

    And, of course, the idea is to have more than two filters/counts.

    Any help appreciated.

  7. #7
    Join Date
    Apr 2008
    Posts
    1,771

    Default

    Hi Alain.
    A quick reply.

    I had the same problem and I solved it as follows:
    1. Rename the fields that contain the counts differently using a "Select values" step, for example: Count_In_Process, Count_Disputed.
    2. Add a constant value to both streams: insert an "Add constants" step between "Only Count Rows" and "Get System Info" and another one between "Only Count Rows 2" and "Get System Info".
    3. Insert a "Stream lookup" step and use the constant as your matching key.
    Your resulting file should have ID, Constant, Count_In_Process, Count_Disputed.

    Hope it's what you're looking for.

    Mick
    PS: I'm sure there are better ways...
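    For what it's worth, the constant-key join above can be sketched in plain Python (illustrative only; the field names follow this thread and the counts are made up): each counting branch yields a single row, and the shared constant gives the Stream lookup step a key on which the two one-row streams can be matched:

```python
# One row per counting branch; both carry the same constant "join_key".
main_row = {"join_key": 1, "Count_In_Process": 41}    # from the first count branch
lookup_row = {"join_key": 1, "Count_Disputed": 14}    # from the second count branch

def stream_lookup(main, lookup, key):
    """Merge the lookup row's fields into the main row when the keys match."""
    merged = dict(main)
    if lookup[key] == main[key]:
        merged.update(lookup)
    return merged

result = stream_lookup(main_row, lookup_row, "join_key")
print(result)  # {'join_key': 1, 'Count_In_Process': 41, 'Count_Disputed': 14}
```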

  8. #8

    Default

    Thanks for your help, but I do not seem to be able to make it work.

    I tried something like this (following your instructions):
    [Attached image: maybe_not_like_this.jpg]

    But when I try to configure the "Stream lookup" step, Spoon sends me a "stack overflow error" dialog box...

    If I insist (for example by trying to verify the transformation, or by getting lookup fields in the "Stream lookup" step dialog box), I get various sizes of such boxes, and in the end Spoon advises me to save my work under a new name and run for the nearest exit

    Here is a traceback of the error:
    oops_1.txt

    Here is also the transformation:
    Transformation 2.ktr

    Have a nice day,

  9. #9
    Join Date
    Apr 2008
    Posts
    1,771

    Default

    Hi Alain.
    Your transformation is exactly as I explained it :-)
    I think that your error is caused by other factors.
    How many records do you pass through this transformation?

    One thing that I did sometimes was to add a "Block this step until steps finish" just before the Stream lookup.
    In your transformation I would insert it between "Add Constants 2" and the Stream lookup.

    You could also try increasing the Java memory?

    Mick

  10. #10
    Join Date
    Nov 2008
    Posts
    777

    Default

    Again, you need to be careful about merging streams together. The Stream Lookup step is one of the fussy ones. In the options for that step, you have to select THE preceding step that the lookup will be performed with.

    However, it doesn't seem to me that you need to look up anything at all in this case. All you are doing is splitting the row stream (with the filters) and now you need to put the branches back together. Instead of the Stream Lookup step, a simple Dummy step would bring the two streams back together nicely. Just be sure that your two branches add exactly the same fields to the streams so that when you put them back together again they match. The "Verify this transformation" button should yell if they don't.

    Edit: Okay maybe I didn't understand your mission correctly the first time. What I think you should do is run your counters in series instead of in parallel. Count the "In Process" records in a filtered branch and set the in_process count to Zero in the main branch and merge them back together. Then count the "Disputed" rows in a filtered branch and set the disputed count to Zero in the main branch and merge them back together again. Make sense?

    Re-edit: I think we need to research what the Output Steps Metrics step does! I've never used it before, but I don't think it does what you want it to do. What do you want it to do, btw? What are you trying to output to Table 2?

    Re-re-edit: I think the Group By step on the STATUS field might be what you want. It would replace all those filters and branches. Check it out.
    Last edited by darrell.nelson; 09-28-2011 at 10:37 AM.

  11. #11

    Default

    I found what was causing the stack overflow: I had the "Stream lookup" step after the "Get System Info", and after moving it, it still took "Get System Info" as its input step... Oops.

    It still does not work, because the calculated numbers of rows inserted into the final database are wrong: 2823 instead of 41, and 2782 (which, btw, is 2823 - 41) instead of 14.

    Still investigating...

    Thank you for your help.

    P.S.
    I can add the transformation, if anyone has an idea: Transformation 3.ktr
    Last edited by Alain VALLETON; 09-28-2011 at 10:32 AM.

  12. #12
    Join Date
    Nov 2008
    Posts
    777

    Default

    What is it that you are trying to output to Table 2? Can you walk us through what you are trying to do with all the filters and branches and lookups?
    Last edited by darrell.nelson; 09-28-2011 at 10:57 AM.

  13. #13

    Default

    Quote Originally Posted by darrell.nelson View Post
    Again, you need to be careful about merging streams together. The Stream Lookup step is one of the fussy ones. In the options for that step, you have to select THE preceding step that the lookup will be performed with.
    Yep, I saw that

    Quote Originally Posted by darrell.nelson View Post
    However, it doesn't seem to me that you need to look up anything at all in this case. All you are doing is splitting the row stream (with the filters) and now you need to put the branches back together. Instead of the Stream Lookup step, a simple Dummy step would bring the two streams back together nicely. Just be sure that your two branches add exactly the same fields to the streams so that when you put them back together again they match. The "Verify this transformation" button should yell if they don't.

    Edit: Okay maybe I didn't understand your mission correctly the first time. What I think you should do is run your counters in series instead of in parallel. Count the "In Process" records in a filtered branch and set the in_process count to Zero in the main branch and merge them back together. Then count the "Disputed" rows in a filtered branch and set the disputed count to Zero in the main branch and merge them back together again. Make sense?
    Huh, I'll try...

    Quote Originally Posted by darrell.nelson View Post
    Re-edit: I think we need to research what the Output Steps Metrics step does! I've never used it before, but I don't think it does what you want it to do. What do you want it to do, btw?
    At one point it seemed to me it did the right thing, that is, count the rows that pass through it (among other things): as there are 41 rows with status == "In Process", the "lines read" value should return 41.

    And I had checked it by looking inside the table and it was 41.

    Now, it's acting as if the filter does not work anymore...

    Weird.

  14. #14
    Join Date
    Nov 2008
    Posts
    777

    Default

    I think the Group By step is what you want to use to get the metrics you are looking for. Immediately after your Table Output 1, try this:

    Sort (on the STATUS field)
    Group By (on the STATUS field) and create a counter field using the appropriate aggregation
    (you may have to flatten your rows here depending on what you are wanting output...)
    Get System Info (adds the date/time field)
    Table Output 2
    Last edited by darrell.nelson; 09-28-2011 at 11:20 AM.

  15. #15

    Default

    Quote Originally Posted by darrell.nelson View Post
    What is it that you are trying to output to Table 2? Can you walk us through what you are trying to do with all the filters and branches and lookups?
    Ok, each time I launch the transformation, I want it to insert into the final table:

    • the date;
    • the number of rows in the input (a file, here) whose status is "In Process";
    • the number of rows in the input (a file, here) whose status is "Disputed";

    So for that, I filter the rows with a "Filter rows" step with a rule like this:
    [Attached image: filter_in_process.png]

    I then use a "Output steps metrics" step to count (among other things) how many rows we got from previous step:
    [Attached image: count_rows_in_process.jpg]

    Then I use a "Select values" step to keep only the column ("lines read") I want from the "Output steps metrics" step. I rename it to "In_Process_state_c" to have a unique name:
    [Attached image: only_count_rows.png]

    That was one branch; now I do the same thing with the second branch (that is, for the rows that did NOT match the first filter).

    The hard part is getting all these column values into the same final table, and for that, Mick_data's solution works.

    Well, except that the inserted data are not correct (as if the filter did not work), which eludes me.

    All the columns names are remapped at the "Table output" step:
    [Attached image: output_cols_mapping.jpg]

    I hope this is clearer now

    Thanks again for your help.

  16. #16
    Join Date
    Nov 2008
    Posts
    777

    Lightbulb

    As you well know, flexibility is abundant in PDI. There are often several ways to accomplish the same goal. This sometimes brings both quick success and frustration.

    Having never used the Output Steps Metrics step, my concern in your application is the flow of the rows. If 41 rows go into the step, how many come out? Does it block until the final record is received? How else would it know there were 41 rows? Or does it pass all 41 rows and increment the counter for each successive row? So if it blocks until the last row comes through, what happens to your other Output Steps Metrics step? Does it also block? Which one will get the "last" row? If each of your metrics steps only outputs one row, what does the Stream Lookup step do? There would be only 1 main row and 1 lookup row. As you can see, I raise a lot of questions and don't have a lot of answers. The Output Steps Metrics step is new and the documentation isn't very clear to me.

    One other concept to be aware of is that in transformations all the steps run concurrently, each one in its own thread. Rows are constantly being passed from one step to the next with little FIFOs in between. Steps independently run flat-out fast until either their incoming FIFO is empty or their outgoing FIFO is full. Of course, certain steps by nature will have to block in order to complete their assigned task, e.g., sorting, grouping, etc. This may help you visualize what your transformation is doing, especially around the metrics steps and the Lookup step.

    To me, you are after row statistics. Thus I recommend using a step designed to collect and produce statistics. Enter the Group By step. If you sort the row stream and run it through the grouping step on the STATUS field, you would get two rows out of it:

    STATUS COUNT
    IN PROCESS 41
    DISPUTED 2000 (or so, I think you said)


    All that in two steps. No filters, no metrics steps, no constants, no lookups, and no timing or blocking issues.
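    The Sort + Group By pair described above is effectively a SQL GROUP BY aggregation. Here is a small sketch using an in-memory SQLite table (the table and column names follow this thread; the row counts are made up for illustration):

```python
import sqlite3

# Build a toy SALES_DATA table with 41 "In Process" rows and 14 "Disputed" rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SALES_DATA (STATUS TEXT)")
conn.executemany(
    "INSERT INTO SALES_DATA VALUES (?)",
    [("In Process",)] * 41 + [("Disputed",)] * 14,
)

# One aggregation replaces all the per-status filters and count branches:
for status, count in conn.execute(
    "SELECT STATUS, COUNT(*) FROM SALES_DATA GROUP BY STATUS ORDER BY STATUS"
):
    print(status, count)
# Disputed 14
# In Process 41
```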
    Last edited by darrell.nelson; 09-28-2011 at 03:29 PM.

  17. #17

    Default

    Quote Originally Posted by darrell.nelson View Post
    As you well know, flexibility is abundant in PDI. There are often several ways to accomplish the same goal. This sometimes brings both quick success and frustration.
    Yes: TIMTOWTDI, but I'm sure I master Perl much better

    Quote Originally Posted by darrell.nelson View Post
    Having never used the Output Steps Metrics step, my concern in your application is the flow of the rows. If 41 rows go into the step, how many come out? Does it block until the final record is received? How else would it know there were 41 rows? Or does it pass all 41 rows and increment the counter for each successive row? So if it blocks until the last row comes through, what happens to your other Output Steps Metrics step? Does it also block? Which one will get the "last" row? If each of your metrics steps only outputs one row, what does the Stream Lookup step do? There would be only 1 main row and 1 lookup row. As you can see, I raise a lot of questions and don't have a lot of answers. The Output Steps Metrics step is new and the documentation isn't very clear to me.

    One other concept to be aware of is that in transformations all the steps run concurrently, each one in its own thread. Rows are constantly being passed from one step to the next with little FIFOs in between. Steps independently run flat-out fast until either their incoming FIFO is empty or their outgoing FIFO is full. Of course, certain steps by nature will have to block in order to complete their assigned task, e.g., sorting, grouping, etc. This may help you visualize what your transformation is doing, especially around the metrics steps and the Lookup step.
    Well, yes, I guess you're right about the description of the flow.

    But now I see that this kind of tool (Spoon, or Talend Studio, or any other one (?)) may not be the right tool for what we need, given how complicated it is to accomplish as trivial a task as counting the number of rows with a column having a certain value ( select count(*) from SALES_DATA where STATUS="In Process"; )

    Please note that I'm in no way trying to bash Pentaho Data Integration (or any other such tool, btw); I'm just reflecting on the fact that PDI might not be the right tool for our needs.

    Quote Originally Posted by darrell.nelson View Post
    To me, you are after row statistics. Thus I recommend using a step designed to collect and produce statistics. Enter the Group By step. If you sort the row stream and run it through the grouping step on the STATUS field, you would get two rows out of it:

    STATUS COUNT
    IN PROCESS 41
    DISPUTED 2000 (or so, I think you said)
    Unfortunately, I cannot seem to make it work: when I try this, I get "1" as the count for each different status (there are six different values for the "STATUS" column in the table: "Cancelled", "Disputed", "In Process", "On Hold", "Resolved" and "Shipped"), which is obviously not the right answer.

    See screenshots below for more information.

    The whole transformation at a glance:
    [Attached image: drawing.jpg]

    How I configured the "Group by" step:
    [Attached image: group_by.jpg]

    The final table columns mapping:
    [Attached image: final_columns_mapping.jpg]

    And the result of transformation:
    [Attached image: execution_results.png]

    Finally the transformation itself:
    Transformation 4.ktr

    And my second problem is that I do not need six rows with three columns (the date, the status name and the count), but one row with seven columns (the date and one column per status, holding the number of rows that have each of these statuses in the input base).

    Quote Originally Posted by darrell.nelson View Post
    All that in two steps. No filters, no metrics steps, no constants, no lookups, and no timing or blocking issues.
    Yes, this seems simpler, hopefully

    Thanks again for your help (and also for your patience).

  18. #18
    Join Date
    Nov 2008
    Posts
    777

    Cool

    In the Sort Rows step you need to un-tick the "Only pass unique rows? (Verifies keys only)" option. Did you notice on the execution metrics tab that only 6 rows were written by the Sort Rows step? You need all the rows passed on to the aggregation (Group By) step. Also, in the Group By step you need to un-tick the "Include all rows?" option. You want just the aggregation rows written by that step, not every single input row.

    Besides watching the Executions Results Step Metrics tab, one thing you can do to "debug" most steps is to right-click and select "Preview". This may show you what's wrong in the intermediate steps if you don't get the end results you expect.

    Since I don't have your data files and database tables, I trimmed your problem area way down and saved it in the attached file. Notice that I used a Data Grid step to simulate the input data rows. Do a "Preview" all along the way and you will see how it works. Notice also that I added a Row Denormalizer step at the end in order to turn the six summary rows into one output row.
    [Attached image: Transformation 5.jpg]
    Attached Files
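    The Row Denormalizer idea can be sketched in plain Python (illustrative only; the column names and counts are made up): take the six (STATUS, count) summary rows produced by the Group By step and pivot them into one wide row, with one column per status plus the collection date:

```python
# Six summary rows, as the Sort + Group By steps would produce them.
summary_rows = [
    ("Cancelled", 5), ("Disputed", 14), ("In Process", 41),
    ("On Hold", 3), ("Resolved", 20), ("Shipped", 100),
]

wide_row = {"Collection_date": "2011-09-29"}  # would come from Get System Info
for status, count in summary_rows:
    # Denormalizer logic: key field = STATUS, value field = the count,
    # target column name derived from the key value.
    wide_row[status.replace(" ", "_") + "_count"] = count

print(wide_row)
```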
    Last edited by darrell.nelson; 09-29-2011 at 01:06 PM.

  19. #19
    Join Date
    Nov 2008
    Posts
    777

    Default

    Quote Originally Posted by Alain VALLETON View Post
    But now, I see that this kind of tool (Spoon or Talent Studio or any other one (?)) may not be the right tool for what we need, given how complicated it is to accomplish as trivial a task as counting the number of rows with a column having a certain value ( select count(*) from SALES_DATA where STATUS="In Process"; )
    With PDI there's a tremendous amount of power at your fingertips. You might expect that to harness all that power there will be a bit of a learning curve. I've been using it for two years (with unbelievable success) and I still learn something new every time I use it.

  20. #20

    Default

    Quote Originally Posted by darrell.nelson View Post
    In the Sort Rows step you need to un-tick the "Only pass unique rows? (Verifies keys only)" option. Did you notice on the execution metrics tab that only 6 rows were written by the Sort Rows step? You need all the rows passed on to the aggregation (Group By) step. Also, in the Group By step you need to un-tick the "Include all rows?" option. You want just the aggregation rows written by that step, not every single input row.

    Besides watching the Executions Results Step Metrics tab, one thing you can do to "debug" most steps is to right-click and select "Preview". This may show you what's wrong in the intermediate steps if you don't get the end results you expect.

    Since I don't have your data files and database tables, I trimmed your problem area way down and saved it in the attached file. Notice that I used a Data Grid step to simulate the input data rows. Do a "Preview" all along the way and you will see how it works. Notice also that I added a Row Denormalizer step at the end in order to turn the six summary rows into one output row.
    [Attached image: Transformation 5.jpg]
    Thanks a lot!

    Now it works, and better yet, I even think I understand the logic in this transformation

    Again thank you and have a nice day.

  21. #21

    Default

    Quote Originally Posted by darrell.nelson View Post
    With PDI there's a tremendous amount of power at your fingertips. You might expect that to harness all that power there will be a bit of a learning curve. I've been using it for two years (with unbelievable success) and I still learn something new every time I use it.
    Well, I see that there is a lot to "unlearn" in my way of thinking to get my mind around the PDI logic, but once you are on the right track, transformations might not be as complicated as I initially feared (after having started down a wrong track).

    Again a great "thank you" to all who helped!

    Have a nice day.


Copyright © 2005 - 2019 Hitachi Vantara Corporation. All Rights Reserved.