Experimenter: no results?



boemans
04-28-2008, 06:11 AM
Hello,

I was using the Experimenter to evaluate some algorithms on a dataset.

I just selected my dataset, set the parameters of the algorithms, and saved the output as a CSV file.
But in the Analyse tab, I can't see any results from my CSV file.

The only thing I can see is:


Available resultsets
(1) trees.J48 -C_0.05_-M_2 -2.17733168393644448E17
(2) functions.SimpleLogistic -I_0_-M_500_-H_50 7.3977106263047055E18
(3) rules.JRip -F_3_-N_0.0_-O_2_-S_1 -6.5893129968321475E18
(4) functions.SMO -C_1.0_-E_1.0_-G_0.01_-A_250007_-L_0.0010_-P_1.0E-12_-N_0_-V_-1_-W_1 -6.5858836363786916E18
(5) functions.MultilayerPerceptron -L_0.3_-M_0.2_-N_500_-V_0_-S_0_-E_20_-H_a -1.18324858260785264E17
(6) lazy.IBk -K_1_-W_0 -7.9020085943859118E18
(7) rules.ZeroR '' 4.8055541465867952E16

When I select everything for 'column', it gives this error:


Available resultsets
Instance has missing value in resultset key column 47!
churn2-weka.filters.supervised.instance.Resample-B0.1-S1-Z100.0-weka.filters.supervised.instance.Resample-B0.1-S1-Z100.0-weka.filters.supervised.instance.Resample-B0.1-S1-Z100.0,1,1,weka.classifiers.trees.J48,-C_0.05_-M_2,-2.17733168393644448E17,20080427.1827,4500,500,424,76,0,84.8,15.2,0,0.239513,0.223621,0.343314,79.964792,91.828001,326.554612,269.977677,56.576935,0.653109,0.539955,0.113154,51.855058,0.10371,7935.588523,0.980769,408,0.809524,68,0.190476,16,0.019231,8,0.857143,0.980769,0.914798,0.088,0.001,'Number_of_leaves:_30\nSize_of_the_tree:_45\n',45,30,30,?,?


Any idea what the problem is here?

Mark
04-28-2008, 05:58 PM
For a standard experiment involving x learners applied to y data sets, you should not need to adjust anything in the "Analyse" panel aside from the evaluation measure that you are interested in (i.e. "Comparison field"). The defaults for "Column" ("Scheme", "Scheme_options" and "Scheme_version_ID") are sufficient to identify unique result sets for comparison.

So, once you see:

Available resultsets
(1) trees.J48 -C_0.05_-M_2 -2.17733168393644448E17
(2) functions.SimpleLogistic -I_0_-M_500_-H_50 7.3977106263047055E18
(3) rules.JRip -F_3_-N_0.0_-O_2_-S_1 -6.5893129968321475E18
(4) functions.SMO -C_1.0_-E_1.0_-G_0.01_-A_250007_-L_0.0010_-P_1.0E-12_-N_0_-V_-1_-W_1 -6.5858836363786916E18
(5) functions.MultilayerPerceptron -L_0.3_-M_0.2_-N_500_-V_0_-S_0_-E_20_-H_a -1.18324858260785264E17
(6) lazy.IBk -K_1_-W_0 -7.9020085943859118E18
(7) rules.ZeroR '' 4.8055541465867952E16


All you have to do is press the "Perform test" button.
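As an aside, the grouping that the Analyse panel does by default can also be reproduced outside Weka if you ever want to sanity-check a results CSV by hand. A minimal Python sketch (the column names `Key_Scheme`, `Key_Scheme_options`, `Key_Scheme_version_ID` and `Percent_correct` follow the Experimenter's CSV output; the sample rows below are made up for illustration):

```python
import csv
import io
from collections import defaultdict

# Hypothetical miniature of an Experimenter CSV: the real file has many more
# columns, but these four are the ones the Analyse panel's defaults rely on.
SAMPLE = """\
Key_Scheme,Key_Scheme_options,Key_Scheme_version_ID,Percent_correct
weka.classifiers.trees.J48,-C 0.05 -M 2,-2.17733168393644448E17,84.8
weka.classifiers.trees.J48,-C 0.05 -M 2,-2.17733168393644448E17,86.2
weka.classifiers.rules.ZeroR,,4.8055541465867952E16,71.0
weka.classifiers.rules.ZeroR,,4.8055541465867952E16,70.4
"""

def mean_accuracy_per_scheme(csv_text):
    """Group rows by the (Scheme, Scheme_options, Scheme_version_ID) key --
    the same triple the Analyse panel uses -- and average Percent_correct."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["Key_Scheme"], row["Key_Scheme_options"],
               row["Key_Scheme_version_ID"])
        groups[key].append(float(row["Percent_correct"]))
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

for key, acc in mean_accuracy_per_scheme(SAMPLE).items():
    print(key[0], round(acc, 2))
```

Note this only averages; the Experimenter's "Perform test" additionally runs a corrected paired t-test, which this sketch does not attempt.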

Cheers,
Mark.

boemans
04-29-2008, 02:22 AM
Ok, thanks a lot. Didn't know life could be so easy :)

However, I have another issue:
Now I have 3 datasets and about 8 classifiers. Am I correct that Weka will measure the performance of all 8 classifiers on all 3 datasets? The thing is, I have a separate dataset for JRip/J48 evaluation, a separate one for SMO, and a separate one for all the rest. Is it not possible to specify that it should only run, e.g., JRip and J48 on one specific dataset?
And how do you specify which algorithm you want to compare against the rest if you have more than one dataset?

Thanks!

Mark
04-29-2008, 04:51 AM
It is not possible in a single experiment to specify that a certain learning algorithm should only be applied to a subset of the data sets. For this situation, you should configure multiple experiments. If you are using a database to store results, you can still have all the results from the separate experiments stored in the same results table.
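If you are writing results to CSV files rather than a database, the separate experiments can still be pooled afterwards by concatenating files that share the same header, so that one combined file can be loaded into the Analyse panel. A hedged sketch (`merge_result_files` is a made-up helper, not part of Weka; it assumes every input file was produced with the same result fields):

```python
import csv

def merge_result_files(paths, out_path):
    """Concatenate several Experimenter CSV result files that share the
    same header row into one file, keeping a single copy of the header."""
    header = None
    writer = None
    with open(out_path, "w", newline="") as out:
        for path in paths:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                this_header = next(reader)
                if header is None:
                    header = this_header
                    writer = csv.writer(out)
                    writer.writerow(header)
                elif this_header != header:
                    # Mixed column layouts can't be analysed as one table.
                    raise ValueError(f"{path} has a different column layout")
                writer.writerows(reader)
```

Since the Analyse panel keys resultsets on scheme, options and version ID, rows coming from different experiments still land in the right resultset after merging, as long as the column layout matches.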

Cheers,
Mark.