Hello Kettle and Pentaho fans!
Yes indeed, we’ve got another present for you in the form of a new Pentaho release: version 6.1.
This predictable, steady flow of releases has, in my opinion, driven the popularity of PDI/Kettle over the years, so it’s great that we manage to keep it up.
The image above shows the evolution of PDI download counts over the years on SourceForge alone.


There’s actually a ton of really nice stuff to be found in version 6.1, so for a more complete recap I’ll refer you to my friend and PDI product manager Jens and his blog.
However, there are a few favorite PDI topics I would like to highlight…
Dynamic ETL
Doing dynamic ETL has been on my mind for a long time. In fact, we started working on the idea in the summer of 2010 so we would have something to show at the Pentaho Community Meetup of that year in beautiful Cascais (Portugal). Back then I remember getting a lot of blank stares and uncomprehending grunts from the audience when I presented the idea. Over the last couple of years, however, dynamic ETL (or ETL metadata injection) has been a tremendous driver for solving the really complex cases out there in areas like Big Data, data ingestion and archiving, IoT and many more. For a short video explaining a few of the driving principles behind the concept, see here:

More comprehensive material on the topic can be found here.
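To make the idea a bit more concrete, here is a minimal sketch of the simplest form of dynamic ETL: driving a transformation template from Java code with runtime parameters, using the public Kettle API. The file name template.ktr and the parameter INPUT_FILE are hypothetical placeholders; real metadata injection goes a lot further by injecting step settings such as field mappings at runtime.

  import org.pentaho.di.core.KettleEnvironment;
  import org.pentaho.di.trans.Trans;
  import org.pentaho.di.trans.TransMeta;

  public class DynamicEtlSketch {
    public static void main(String[] args) throws Exception {
      // Initialize the Kettle environment (plugin registry, etc.)
      KettleEnvironment.init();

      // Load a template transformation; "template.ktr" is assumed to
      // declare a named parameter INPUT_FILE.
      TransMeta transMeta = new TransMeta("template.ktr");
      Trans trans = new Trans(transMeta);

      // Drive the template dynamically: the same .ktr can process a
      // different file on every run.
      trans.setParameterValue("INPUT_FILE", "/data/incoming/orders.csv");

      trans.execute(null);       // start all step threads
      trans.waitUntilFinished(); // block until the transformation completes

      if (trans.getErrors() > 0) {
        throw new IllegalStateException("Transformation finished with errors");
      }
    }
  }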
In any case, I’m really happy to see us keep up the continued investment in making metadata injection better and more widely supported. In version 6.1 we’re adding injection support for a bunch of new steps, including:

  • Stream Lookup (!!!)
  • S3 Input and Output
  • Metadata Injection (try to keep your sanity while you’re wrestling with this recursive puzzle)
  • Excel Output
  • XML Output
  • Value Mapper
  • Google Analytics

It’s really nice to see these improvements drive solutions across the Pentaho stack, helping out with the Streamlined Data Refinery, auto-modeling and much more. Jens has a tutorial with step-by-step instructions on his blog, so make sure to check it out!
Data Services
PDI data services are another one of those core technologies which frankly take time to mature and to be accepted by the larger Pentaho community. However, I strongly feel that these technologies set us apart from what anyone else in the DI/ETL market is doing. In this case, simply being able to run standard SQL against a Kettle transformation is a game changer (a quick JDBC sketch follows the list below). As you can tell, I’m very happy to see the following advances being piled on top of the improvements of the last couple of releases:

  • Extra Parameter Pushdown Optimization for Data Services – You can improve the performance of your Pentaho data service with the new Parameter Pushdown optimization technique. It helps whenever your transformation contains a step whose input can be constrained, such as a REST input step where a parameter in the URL limits the results returned by the web service.
  • Driver Download for Data Services in Pentaho Data Integration – When connecting to a Pentaho Data Service from a non-Pentaho tool, you previously needed to manually download a Pentaho Data Service driver and install it. Now in 6.1, you can use the Driver Details dialog in Pentaho Data Integration to download the driver.
  • Pentaho Data Service as a Build Model Source – You can use a Pentaho Data Service as the source in your Build Model job entry, which streamlines the generation of data models when you are working with virtual tables.
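
To show why I call running SQL on a transformation a game changer, here is a minimal sketch of querying a data service from plain Java over JDBC with the PDI thin driver (the one you can now fetch from the Driver Details dialog). The service name sales_service and the host, port and credentials are hypothetical placeholders; also note that a WHERE condition like the one below is exactly the kind of clause the new Parameter Pushdown optimization can map onto, say, a parameter in a REST URL.

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class DataServiceQuerySketch {
    public static void main(String[] args) throws Exception {
      // Register the PDI thin JDBC driver.
      Class.forName("org.pentaho.di.trans.dataservice.jdbc.ThinDriver");

      // Hypothetical DI server; adjust host, port and credentials.
      String url = "jdbc:pdi://localhost:9080/kettle?webappname=pentaho-di";

      try (Connection conn = DriverManager.getConnection(url, "admin", "password");
           Statement stmt = conn.createStatement();
           // Standard SQL against a virtual table backed by a transformation.
           ResultSet rs = stmt.executeQuery(
               "SELECT country, SUM(sales) AS total "
             + "FROM sales_service WHERE country = 'Belgium' "
             + "GROUP BY country")) {
        while (rs.next()) {
          System.out.println(rs.getString("country") + " : " + rs.getDouble("total"));
        }
      }
    }
  }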


Virtual Data Sets overview in 6.1

Other noteworthy PDI improvements
As always, the change list for even a point release like 6.1 is rather large, but I wanted to pick out two improvements that I really like:

  • JSON Input: we made the step a lot faster, and it can now handle large files (hundreds of MB) with 100% backward compatibility
  • The transformation and job execution dialogs have been cleaned up!

The new run dialog in 6.1

I hope you’re all as excited as I am to see these improvements release after release after release…
As usual, please keep giving us feedback on the forums or through our JIRA case tracking system. It helps us keep our software stable in an ever-changing ICT landscape.
Cheers,
Matt

