RE: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP

Julian Hyde
01-27-2007, 12:22 PM

First some review comments:


I may be wrong, but I still think that RolapStarAggregationKey is
not needed. If this object model were a database schema, it would be
described as poorly normalized (a primary key containing columns which
are not necessary to uniquely identify rows). Two RolapStarAggregationKeys
are equal even if their timestamps differ -- which strongly suggests
that the timestamp should be an attribute of the aggregation, not the key.
You may find that some maps need to be keyed on bitkey + timestamp, but most
need to be keyed on bitkey alone.

If RolapStarAggregationKey is still needed, please make it a static
inner class or move it to its own java file.

The javadoc for RolapStarAggregationKey still refers to threads, not
timestamps. If you don't update javadoc, don't be surprised if people get
the wrong idea!

The bitKey and timeStamp members should be final, and the
setTimeStamp and setBitKey methods should be removed (they're never called).
Rename timeStamp to timestamp, to match the capitalization of the Timestamp class.

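The design these review comments point toward could be sketched as follows. This is an illustrative sketch with hypothetical names, not the actual Mondrian source: the key carries only the bit key (final, no setters), and the timestamp lives on the aggregation itself.

```java
import java.util.BitSet;

// Sketch of the review suggestion: key on the bit key alone, and make the
// timestamp an attribute of the aggregation rather than part of the key.
// Class and member names are illustrative, not the actual Mondrian classes.
public class AggregationSketch {
    static final class AggregationKey {
        private final BitSet bitKey;   // final, no setter

        AggregationKey(BitSet bitKey) {
            this.bitKey = bitKey;
        }

        @Override
        public boolean equals(Object o) {
            // Equality depends on the bit key only -- no timestamp here.
            return o instanceof AggregationKey
                && bitKey.equals(((AggregationKey) o).bitKey);
        }

        @Override
        public int hashCode() {
            return bitKey.hashCode();
        }
    }

    static final class Aggregation {
        final AggregationKey key;
        final long timestamp;  // attribute of the aggregation, not of the key

        Aggregation(AggregationKey key, long timestamp) {
            this.key = key;
            this.timestamp = timestamp;
        }
    }
}
```

With this shape, maps that need bitkey-only lookup can use AggregationKey directly, and the few that need bitkey + timestamp can pair the key with the aggregation's timestamp field.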
You may have noticed that I'm not willing or able to get my head around your
proposed design for multi-user access to the cache and for dealing with
dynamic databases. It's a problem of timing. As you know, I have a lot of
changes for cache flushing written and almost ready to check in. It just
doesn't make sense for us to be working on overlapping areas at the same time.

My sole priorities for now are (a) get the cache flushing changes in and (b)
stabilize the code line for a release in January. I'm going to have to ask
you to wait until I've got that work checked in, and the release out, before
we do any more work in the area of multi-user access or dynamic databases.



From: mondrian-bounces (AT) pentaho (DOT) org [mailto:mondrian-bounces (AT) pentaho (DOT) org] On
Behalf Of Pappyn Bart
Sent: Thursday, January 25, 2007 11:44 PM
To: Mondrian developer mailing list
Subject: RE: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP


Change 8582 introduces ThreadLocal; I do not use the thread id any more, so I
guess your changes are redundant.
However, the change does more than the previous one.

As I already mentioned on the mailing list last week, I was also busy
implementing the data change listener plugin to support flushing
of the aggregate cache. That part is finished (however, I noticed I need to
make a few things public to give my plugin access to the necessary
data, something I will likely check in today) and is in change 8582.

RolapStarAggregationKey is not obsolete any more, since in my latest
implementation I must, besides the BitKey, also remember
the timestamp when the aggregation was first registered in the star.

This is because I have implemented the following:

* When a query starts, changes are checked (using the plugin). When changes
are detected, a thread-local aggregation is made, and the thread will fill
this one up. It will not change the global cache any more, because another
thread may depend on that data.

* After the query finishes, it will first clean up aggregates belonging to a
star that does not maintain any cache.

* Afterwards, it will try to push the thread-local cache to the global cache
in case the global aggregate is not used by any thread. If this is not the
case, it will be moved to the pending cache (which is not thread local).

* After each query, the pending cache is checked again to see if it can be
checked in to the global cache.

The RolapStarAggregationKey is needed because the timestamp is used to check
in only the latest version of the cache into the global cache.

This is the first step in an attempt to make mondrian A) work better in
multi-user environments and B) live alongside a dynamic database.

The only thing that remains to be done is the introduction of transactions
and better control of how JDBC connections are used.

The biggest problem up to now is the fact that the hierarchy cache is not in
sync with an MDX query, so an MDX query might depend on different data. In my
project, I don't see any errors, but this is largely due to how the database
is structured and filled.

I have run the JUnit tests at least 10 times on different machines. I also
ran with the JIT in 'server mode' on an SMP machine, because the behavior is
different. All works up to now.

Kind regards,


From: mondrian-bounces (AT) pentaho (DOT) org [mailto:mondrian-bounces (AT) pentaho (DOT) org] On
Behalf Of Julian Hyde
Sent: Thursday, January 25, 2007 22:00
To: mondrian (AT) pentaho (DOT) org
Subject: RE: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP


Your change 8582 clashes with the stuff I referred to in the last paragraph
of this message. The only reason I didn't check it in was because it wasn't
sufficient -- on its own -- to fix the test failure, and now I either have
to merge or discard my changes.

My change was to obsolete RolapStarAggregationKey and revert to BitKey to
identify aggregations. Any aggregations which belong to a particular thread
are in a collection specific to that thread. I am convinced that using
thread-id is unsound - in particular, things will tend to stay in the cache
after their thread has died, a problem which ThreadLocal neatly avoids.

I'd like you either to make that change -- or stop making changes in that
area and give me the go-ahead to make it. Which do you prefer?



From: Julian Hyde [mailto:julianhyde (AT) speakeasy (DOT) net]
Sent: Tuesday, January 23, 2007 2:57 AM
To: 'Mondrian developer mailing list'
Subject: RE: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP

I think the problem is with how mondrian evaluates members using multiple
passes. When the measures are coming from a virtual cube, of course there
are multiple real cubes, and each of those has a cell reader. But the code
in RolapResult assumes there is only one cell reader.

Mondrian should check the cell readers for all applicable cubes, and only
emit a result when all cell readers have been populated.
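The check described above might be sketched like this. The CellReader interface shown is an illustrative stand-in (the real one lives in Mondrian's rolap package); the point is that a virtual-cube result should only be emitted once every underlying cube's reader is populated:

```java
import java.util.List;

// Sketch of the fix described above: with a virtual cube there is one cell
// reader per underlying real cube, and a result should only be emitted once
// every one of them has been populated. The interface is illustrative.
public class VirtualCubeCheck {
    interface CellReader {
        boolean isDirty();  // true while the reader still has cells to load
    }

    /** Returns true only when all cell readers are fully populated. */
    static boolean allReadersPopulated(List<CellReader> readers) {
        for (CellReader reader : readers) {
            if (reader.isDirty()) {
                return false;  // at least one real cube still needs a pass
            }
        }
        return true;
    }
}
```

The bug described would correspond to checking only one reader in this loop and emitting a result while another cube's reader was still dirty.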

I haven't implemented the fix yet, but this cause seems very plausible to me.

I'm not exactly sure why this problem surfaced after Bart's change - maybe
thread-local caches increased the chances of one cache being populated and
another not - or why it appears on SMP machines.

By the way, in an effort to get this working, I removed Bart's
RolapStarAggregationKey (a compound key of BitKey and thread id) and moved
to a two-tier hashing scheme. The first tier is a ThreadLocal of maps, and
the second tier is a map. Threads which want access to the global map just
skip the first tier. Given the difficulties obtaining a unique id for a
thread, using a ThreadLocal seemed cleaner. So, even though this didn't fix
the bug, I'm going to check it in.
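The two-tier scheme described above could be sketched as follows. This is an illustrative reconstruction, not the checked-in Mondrian code: the first tier is a ThreadLocal map, the second a shared global map, and threads that want the global view simply skip the first tier.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two-tier hashing scheme described above. A ThreadLocal gives
// each thread its own first-tier map without needing a unique thread id, and
// the per-thread map is reclaimed when its thread dies.
public class TwoTierCache<K, V> {
    private final Map<K, V> globalMap = new HashMap<>();
    private final ThreadLocal<Map<K, V>> localMap =
        ThreadLocal.withInitial(HashMap::new);

    /** Looks in the thread's own map first, then falls back to the global map. */
    V get(K key) {
        V value = localMap.get().get(key);
        if (value != null) {
            return value;
        }
        synchronized (globalMap) {
            return globalMap.get(key);  // second tier, shared by all threads
        }
    }

    void putLocal(K key, V value) {
        localMap.get().put(key, value);  // visible to this thread only
    }

    void putGlobal(K key, V value) {
        synchronized (globalMap) {
            globalMap.put(key, value);   // visible to all threads
        }
    }
}
```

Compared with keying a single map on thread id, this arrangement avoids the stale-entry problem: entries in a dead thread's first tier become unreachable along with the thread, rather than lingering in a shared map.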



From: mondrian-bounces (AT) pentaho (DOT) org [mailto:mondrian-bounces (AT) pentaho (DOT) org] On
Behalf Of michael bienstein
Sent: Monday, January 22, 2007 12:06 PM
To: Mondrian developer mailing list
Subject: Re: [Mondrian] Re: VirtualCubeTest.testCalculatedMemberAcrossCubes failing on SMP

I've seen issues with server-mode JIT before related to memory barriers
and multiple threads. But that's multiple threads, and it was in JDK 1.4 (the
memory model changed in 1.5, I think). The issue is that the instructions in
the Java code can be run out of order to the way you've coded them. E.g.
a=1; b=2; a=b; can be run just a=2; b=2; because that's what it is
equivalent to. The only way to force it to do what you really expected is
to synchronize your accesses, because that prevents the instruction
re-ordering across the memory barrier. This was an issue in Apache Struts
at one point because they used a custom Map implementation called
"FastHashMap" which gets filled with values and then flipped to be in
immutable mode. The problem was that the get() method tested if it was
flipped already without synchronizing which looked safe because the flip
flag was set only after the insertion code. But the JIT reversed the order
and the flip was done before the last insertions leading to certain problems
on high-end servers.
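The hazard described can be boiled down to the following deliberately simplified example. This is not the actual Struts FastHashMap code, just an illustration of the same unsafe-publication pattern:

```java
import java.util.HashMap;
import java.util.Map;

// Deliberately simplified illustration of the unsafe-publication hazard
// described above; not the actual Struts FastHashMap code.
public class UnsafePublication {
    private final Map<String, String> map = new HashMap<>();
    private boolean flipped = false;  // NOT volatile: writes may be reordered

    void fillAndFlip() {
        map.put("key", "value");  // the JIT may reorder this write...
        flipped = true;           // ...relative to this flag write
    }

    String get(String key) {
        if (flipped) {
            // Unsynchronized fast path: another thread can observe
            // flipped == true before it observes the inserted entries.
            return map.get(key);
        }
        synchronized (this) {
            return map.get(key);
        }
    }
}
```

Single-threaded the code behaves as written; the reordering only becomes observable from a second thread, which is why the Struts bug surfaced mainly on high-end multiprocessor servers. Making the flag volatile, or synchronizing the fast path, restores the intended ordering.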

All that's a moot point if we can't see how multiple threads are being used.


----- Original Message ----
From: John V. Sichi <jsichi (AT) gmail (DOT) com>