I was also thinking of moving to ThreadLocal; I will take over your modifications as soon as they
are checked in - thanks.

I did develop some new features in the meantime and added the possibility to attach a data source
listener to a star.

Because there are other multi-user-related problems with the loading of aggregates, I designed a new principle.

Now the agg cache is checked before a query is run. If any aggregation has changed due to changes in the database,
a new - thread-local - aggregation is made. The thread will fill this aggregation. Any other threads that were using
the global (non-thread-local) cache will no longer be interfered with (as they were in the past). After the thread has
finished, it will try to move the local cache to the global cache, provided no other threads are using the aggregation.

A time of creation is maintained, so only the latest changes are put into the global cache.
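
In rough Java, the idea is something like the sketch below. This is just a minimal illustration of the
principle; the class names, the checkOut/checkIn methods and the timestamp mechanics are placeholders,
not the actual Mondrian code:

    import java.util.HashMap;
    import java.util.Map;

    class Aggregation {
        final long creationTime;                  // when this copy was built
        Aggregation(long t) { creationTime = t; }
        boolean isStale() { return false; }       // would consult the data source listener
    }

    class AggregationCache {
        private final Map<Object, Aggregation> globalCache =
            new HashMap<Object, Aggregation>();
        private final ThreadLocal<Map<Object, Aggregation>> localCache =
            new ThreadLocal<Map<Object, Aggregation>>() {
                protected Map<Object, Aggregation> initialValue() {
                    return new HashMap<Object, Aggregation>();
                }
            };

        // Before the query runs: if the global aggregation is stale, give
        // this thread a fresh private copy to fill. Other threads keep
        // using the old global copy undisturbed.
        synchronized Aggregation checkOut(Object key) {
            Aggregation agg = globalCache.get(key);
            if (agg == null || agg.isStale()) {
                Aggregation fresh = new Aggregation(System.currentTimeMillis());
                localCache.get().put(key, fresh);
                return fresh;
            }
            return agg;
        }

        // After the query has finished: promote the local copy, but only if
        // it is newer than what is already in the global cache (and, in the
        // real scheme, only if no other thread is still using it).
        synchronized void checkIn(Object key) {
            Aggregation local = localCache.get().remove(key);
            Aggregation global = globalCache.get(key);
            if (local != null
                && (global == null || local.creationTime > global.creationTime)) {
                globalCache.put(key, local);
            }
        }
    }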

If the thread has caching turned off, the principle that is currently in place stays the same: the thread-local
cache is flushed after the query has finished.

Later on, if transactions are put into place (I will not do this in the upcoming patch), I think multi-user access
and data integrity will be much better.

The only problem - up to now - is the hierarchy cache, since it does not follow the execution of a query, but
I am thinking of applying the same principle as that of the agg cache.

Bart

________________________________

From: mondrian-bounces (AT) pentaho (DOT) org [mailto:mondrian-bounces (AT) pentaho (DOT) org] On Behalf Of Julian Hyde
Sent: Tuesday, 23 January 2007 11:57
To: 'Mondrian developer mailing list'
Subject: RE: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP


I think the problem is with how Mondrian evaluates members using multiple passes. When the measures come from a virtual cube, there are of course multiple real cubes, and each of those has a cell reader. But the code in RolapResult assumes there is only one cell reader.

Mondrian should check the cell readers for all applicable cubes, and only emit a result when all cell readers have been populated.
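
In outline, the check would be something like this - an illustrative fragment only, where CellReader is
a stand-in for the real interface and isDirty() is assumed to mean "was asked for a cell it did not
yet have":

    import java.util.List;

    interface CellReader { boolean isDirty(); }   // stand-in for the real interface

    class MultiCubeCheck {
        // Loop over the cell readers of every cube that contributes to the
        // virtual cube; only emit the result once none of them is still
        // missing cell values.
        static boolean allReadersPopulated(List<CellReader> cellReaders) {
            for (CellReader reader : cellReaders) {
                if (reader.isDirty()) {
                    return false;   // this cube needs another evaluation pass
                }
            }
            return true;            // all populated: safe to emit the result
        }
    }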

I haven't implemented the fix yet, but this cause seems very plausible to me.

I'm not exactly sure why this problem surfaced after Bart's change - maybe thread-local caches increased the chances of one cache being populated and another not - or why it appears on SMP machines.

By the way, in an effort to get this working, I removed Bart's RolapStarAggregationKey (a compound key of BitKey and thread id) and moved to a two-tier hashing scheme. The first tier is a ThreadLocal of maps, and the second tier is a map. Threads which want access to the global map just skip the first tier. Given the difficulty of obtaining a unique id for a thread, using a ThreadLocal seemed cleaner. So, even though this didn't fix the bug, I'm going to check it in.
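
Schematically, the two-tier lookup is something like the following. This is a sketch, not the checked-in
code; BitKey and Aggregation are declared here only as stand-ins:

    import java.util.HashMap;
    import java.util.Map;

    class TwoTierMap {
        interface BitKey {}            // stand-in for the real key type
        static class Aggregation {}    // stand-in for the real aggregation

        // Second tier: the global map, shared by all threads.
        private final Map<BitKey, Aggregation> globalMap =
            new HashMap<BitKey, Aggregation>();

        // First tier: one map per thread. ThreadLocal does the per-thread
        // keying, so no unique thread id is needed anywhere.
        private final ThreadLocal<Map<BitKey, Aggregation>> localMaps =
            new ThreadLocal<Map<BitKey, Aggregation>>() {
                protected Map<BitKey, Aggregation> initialValue() {
                    return new HashMap<BitKey, Aggregation>();
                }
            };

        Aggregation lookup(BitKey key, boolean global) {
            if (global) {
                // Threads that want the global map skip the first tier.
                synchronized (globalMap) {
                    return globalMap.get(key);
                }
            }
            Aggregation agg = localMaps.get().get(key);
            if (agg == null) {
                synchronized (globalMap) {
                    agg = globalMap.get(key);   // fall through to second tier
                }
            }
            return agg;
        }
    }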

Julian


________________________________

From: mondrian-bounces (AT) pentaho (DOT) org [mailto:mondrian-bounces (AT) pentaho (DOT) org] On Behalf Of michael bienstein
Sent: Monday, January 22, 2007 12:06 PM
To: Mondrian developer mailing list
Subject: Re : [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP


I've seen issues with the server-mode JIT before related to memory barriers and multiple threads. But that was with multiple threads, and it was on JDK 1.4 (the memory model changed in 1.5, I think). The issue is that the instructions in the Java code can be run out of the order in which you coded them. E.g. a=1; b=2; a=b; can be run as just a=2; b=2; because that's what it is equivalent to. The only way to force it to do what you really expected is to synchronize your accesses, because that prevents instruction re-ordering across the memory barrier.

This was an issue in Apache Struts at one point, because they used a custom Map implementation called "FastHashMap" which gets filled with values and is then flipped into immutable mode. The problem was that the get() method tested whether the map was already flipped without synchronizing, which looked safe because the flip flag was set only after the insertion code. But the JIT reversed the order, so the flip happened before the last insertions, leading to certain problems on high-end servers.
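
Boiled down, the pattern looks roughly like this - a made-up miniature, not the actual Struts
FastHashMap code:

    import java.util.HashMap;
    import java.util.Map;

    class FlippableMap {
        private final Map<String, String> map = new HashMap<String, String>();
        private boolean flipped = false;   // BROKEN: should be volatile, or
                                           // every access synchronized

        void fillAndFlip() {
            map.put("a", "1");             // (1) fill the map ...
            map.put("b", "2");
            flipped = true;                // (2) ... then raise the flag.
            // The JIT / memory model may make (2) visible to another
            // thread before (1) is.
        }

        String get(String key) {
            if (flipped) {                 // unsynchronized read
                return map.get(key);       // may see a half-filled map
            }
            synchronized (this) {
                return map.get(key);
            }
        }
    }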

All that's a moot point if we can't see how multiple threads are being used.

Michael


----- Original Message ----
From: John V. Sichi <jsichi (AT) gmail (DOT) com>