View Full Version : LMT routine in Weka

scottmac99

11-11-2008, 07:35 PM

As a relatively new user of the Weka software, I am finding it fascinating (the only mining technique I have used extensively in the past has been Ross Quinlan's See5).

With Weka, I have been using the LMT routine to predict membership of pre-specified segments (that I first derived using K means) and am obtaining around 70% accuracy in doing so, which I am happy with given the quality of the input data.

When I review the LMT output, it appears to say that the tree has one branch and one leaf, which makes sense given that there are just four LM functions to allocate cases into one of the four segments I have.

Is this normal? From my reading of the Witten/Frank book, I had thought that a full tree would have been produced, with LM functions for each leaf.

Scott

Hi Scott,

Nice to hear that folks are using LMT. One of the strengths of LMT is that it produces small, accurate trees (it uses the CART pruning mechanism). Since it sounds like you have just the root in your tree, LMT hasn't been able to find any non-linear structure to exploit. In your case, you could probably just use multinomial logistic regression (weka.classifiers.functions.Logistic) or simple logistic regression (weka.classifiers.functions.SimpleLogistic). LMT uses the latter. One nice feature of SimpleLogistic is that, due to the boosting process, it has built-in feature selection.

Cheers,

Mark.

scottmac99

11-12-2008, 07:12 PM

Thanks Mark! Interestingly, I have tried a standard multinomial logistic regression fit in SPSS but the results were nowhere near as good. Perhaps this is because Weka doesn't necessarily select all predictors for each logistic function? Perhaps it's because Weka does a better job at handling missing values (a common occurrence in market research, the field I work in)? Or because it's using boosting?

Of course, one advantage of the simple LMT solution I have is that it is very easy to put in a spreadsheet to allow people to be allocated to segments on the fly (e.g. when recruiting certain types of people for qualitative market research).

I just wanted to be sure that I wasn't doing anything dumb, so thank you very much for the prompt input :-)

Scott

Hi Scott,

It sounds like you have things in hand and that LMT is working out just fine. The ML researcher in me is curious though :-) You say that you didn't get as good a fit using SPSS's implementation of multinomial logistic regression. If you get the chance, try Weka's Logistic class - I'd be interested to know how it compares. Weka's implementation jointly optimizes the result across all classes (instead of using 1-against-rest or 1-against-1 to handle multi-class problems).

I suspect you are probably right that the built-in feature selection of SimpleLogistic (used by LMT), which can select different features for each class, is the reason for the better performance. The feature selection is a by-product of using Friedman et al.'s LogitBoost algorithm (with univariate linear regression as the base learner) to learn the logistic function.
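For intuition, here is a toy two-class sketch of LogitBoost with univariate linear regression as the base learner (not Weka's actual code, just an illustration of the mechanism): because each boosting round refits the working response on a single attribute, attributes that never get picked simply stay out of the final model.

```python
import math

def logitboost_select(X, y, iters=5):
    """Toy two-class LogitBoost (after Friedman, Hastie & Tibshirani)
    with weighted simple (one-attribute) linear regression as the base
    learner. Returns the list of attribute indices chosen per round,
    which is where the implicit feature selection comes from.
    y entries are 0 or 1."""
    n, d = len(X), len(X[0])
    F = [0.0] * n                      # additive model (half log-odds scale)
    picked = []
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-2.0 * f)) for f in F]
        w = [max(pi * (1.0 - pi), 1e-6) for pi in p]      # case weights
        z = [(yi - pi) / wi for yi, pi, wi in zip(y, p, w)]  # working response
        best = None                    # (weighted SSE, attr index, a, b)
        for j in range(d):
            xs = [row[j] for row in X]
            sw = sum(w)
            xbar = sum(wi * xi for wi, xi in zip(w, xs)) / sw
            zbar = sum(wi * zi for wi, zi in zip(w, z)) / sw
            sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, xs))
            if sxx < 1e-12:            # constant attribute: can't be picked
                continue
            b = sum(wi * (xi - xbar) * (zi - zbar)
                    for wi, xi, zi in zip(w, xs, z)) / sxx
            a = zbar - b * xbar
            sse = sum(wi * (zi - (a + b * xi)) ** 2
                      for wi, xi, zi in zip(w, xs, z))
            if best is None or sse < best[0]:
                best = (sse, j, a, b)
        _, j, a, b = best
        picked.append(j)
        F = [f + 0.5 * (a + b * row[j]) for f, row in zip(F, X)]
    return picked

# Attribute 0 separates the classes; attribute 1 is constant, so it can
# never be selected and never enters the model.
X = [[0.0, 5.0], [1.0, 5.0], [2.0, 5.0], [3.0, 5.0]]
y = [0, 0, 1, 1]
picked = logitboost_select(X, y)
```

SimpleLogistic works on the same principle (with cross-validated stopping), which is why unhelpful attributes drop out without any explicit attribute-selection step.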

scottmac99

12-09-2008, 12:38 AM

You mean Logistic instead of SimpleLogistic ? OK, I tried that and it gave a small improvement in the _overall_ precision of allocation of instances (a fraction of a percentage point better). It did that by performing slightly less well on the best predicted class and slightly better on the least well predicted class (4 classes altogether).

Logistic also seemed quite a bit faster than SimpleLogistic, and (of course) used _all_ predictors (that is, all 15 predictors - out of the original 36 approx - that I had filtered down to using attribute pre-selection).

One question re Logistic ... it outputs 3 functions for 4 classes (see below) - I am presuming that these relate to classes 2, 3 and 4 and that the function for the first is effectively set to zero ?

Logistic Regression with ridge parameter of 1.0E-8

Coefficients...

Variable      Coeff.
1         -0.1092   0.0823   0.3614
2         -0.3611  -0.0914   0.1501
3          0.3712  -0.1175   0.1117
4         -0.0536   0.0193   0.1764
5         -0.0056  -0.0622   0.135
6         -0.2136  -0.2928  -0.1929
7          0.1708  -0.1906  -0.0694
8         -0.1714  -0.2783  -0.1736
9          0.1951  -0.1401   0.0764
10         0.1584   0.1119   0.2896
11         0.0597   0.072    0.1995
12        -0.6417  -0.2747  -0.3583
13        -0.0291   0.057    0.2929
14         0.4227  -0.0379  -0.0942
15        -0.2309  -0.185   -0.1961
Intercept  1.7119   6.8928  -4.7189

One question re Logistic ... it outputs 3 functions for 4 classes (see below) - I am presuming that these relate to classes 2, 3 and 4 and that the function for the first is effectively set to zero ?

The functions output are for the first k-1 classes. See this wiki page for more info on how the probability estimates are computed:

http://wiki.pentaho.com/display/DATAMINING/Logistic
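To make the mapping concrete, here is a minimal Python sketch (assuming the standard multinomial formulation described on that page) of how the k-1 linear functions yield probabilities for all k classes, with the last class as the reference whose function is fixed at zero:

```python
import math

def logistic_probs(funcs, x):
    """Class probabilities from k-1 linear functions of a multinomial
    logistic model. funcs is a list of (coefficients, intercept) pairs,
    one per class 1..k-1; the k-th (reference) class has its function
    fixed at 0, so exp(0) = 1 appears in the denominator."""
    scores = [sum(c * xi for c, xi in zip(coeffs, x)) + intercept
              for coeffs, intercept in funcs]
    exps = [math.exp(s) for s in scores]
    denom = 1.0 + sum(exps)            # the 1.0 is the reference class
    return [e / denom for e in exps] + [1.0 / denom]

# Toy example: 2 predictors, 3 classes (so 2 functions are printed).
probs = logistic_probs([([0.5, -1.0], 0.1), ([-0.2, 0.3], 0.0)],
                       [1.0, 2.0])
```

Applied to the 4-class output above, the three printed functions give the probabilities for the first three classes, and the fourth class absorbs the remainder so the probabilities sum to one.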

P.S. The latest version of Weka (3.5.8) has improved the output of Logistic. Here is an example on the iris data:

Logistic Regression with ridge parameter of 1.0E-8

Coefficients...
                    Class
Variable      Iris-setosa  Iris-versicolor
===============================================
sepallength       21.8065           2.4652
sepalwidth         4.5648           6.6809
petallength      -26.3083          -9.4293
petalwidth       -43.887          -18.2859
Intercept          8.1743          42.637

scottmac99

12-15-2008, 07:08 PM

The functions output are for the first k-1 classes. See this wiki page for more info on how the probability estimates are computed:

http://wiki.pentaho.com/display/DATAMINING/Logistic


Ok that's all clear now, basically a standard MNL formulation. Thanks.