Hitachi Vantara Pentaho Community Forums

Thread: Issue with Mondrian Cache

  1. #1
    Join Date
    Aug 2012
    Posts
    7

    Issue with Mondrian Cache

    Hi,

    I am using Mondrian 3.4.1. I have tried using Ehcache as an external segment cache with Mondrian, and adding and retrieving cached elements works fine at first.
    However, I have noticed that after some time caching stops working. I analysed Ehcache and, surprisingly, found that even though Ehcache still holds the data, Mondrian reloads the cache.

    Looking further, I found that SegmentCacheIndexRegistry maintains a map of indexes keyed by weak references, with soft references as values. Because this index map loses its state after some time, PeekCommand is unable to locate the SegmentHeaders, and Mondrian reloads the whole segment again.
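The effect described above can be illustrated with plain java.lang.ref (a minimal sketch; the real registry is Mondrian's SegmentCacheIndexRegistry, and the clear() call here only simulates what the garbage collector does to a weakly reachable key):

```java
import java.lang.ref.WeakReference;

public class WeakIndexDemo {
    public static void main(String[] args) {
        // An index entry keyed by a weak reference, as in SegmentCacheIndexRegistry.
        String header = new String("segment-header");
        WeakReference<String> key = new WeakReference<>(header);
        System.out.println("before: " + (key.get() != null)); // key still reachable

        // Simulate the key becoming weakly reachable and collected by GC:
        key.clear();

        // Now an index lookup (like PeekCommand's) can no longer find the header,
        // even though the external cache may still hold the segment data.
        System.out.println("after: " + (key.get() != null));
    }
}
```

In the real system the collection happens nondeterministically under memory pressure, which matches the "works at first, fails after some time" symptom.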

    I have also noticed that the contains method of my SegmentCache is never called by Mondrian.


    Could you kindly shed some light on the Mondrian behaviour described above?

    Thanks in Advance,
    Jeet

  2. #2
    Join Date
    Mar 2007
    Posts
    142

    Default

    You are right. This might be the cause. Please log a Jira case at http://jira.pentaho.com and link it to this forum post. Our project managers will look at it, prioritize and assign ASAP.
    Luc Boudreau
    aka. Luc le Magnifique
    aka. Monsieur Oui Oui

    Lead Engineer, Pentaho Corporation
    Web: http://devdonkey.blogspot.com
    Twitter: luclemagnifique
    IRC: Monsieur_Oui_Oui@freenode

  3. #3
    Join Date
    Aug 2012
    Posts
    7

    Default

    Yes, the reference map in SegmentCacheIndexRegistry needs to hold hard references.

    Still, the contains method [ boolean contains(SegmentHeader header) ] of my SegmentCache is never called. Is it no longer required?

    Mondrian decides the expiration of a cache segment based on its own index registry. Why doesn't it look up the external cache for this? What if GC clears the index registry, but the external cache still has the segment on disk?

  4. #4
    Join Date
    Aug 2012
    Posts
    7

    Default

    Any help, please?

  5. #5
    Join Date
    Aug 2012
    Posts
    7

    Default

    Please provide your valuable inputs.

    Thanks in Advance,
    Jeet

  6. #6
    Join Date
    Mar 2007
    Posts
    142

    Default

    Your issue was logged as http://jira.pentaho.com/browse/MONDRIAN-1209

    It was also fixed and will be part of the 4.8.0 suite release.
    Luc Boudreau
    aka. Luc le Magnifique
    aka. Monsieur Oui Oui

    Lead Engineer, Pentaho Corporation
    Web: http://devdonkey.blogspot.com
    Twitter: luclemagnifique
    IRC: Monsieur_Oui_Oui@freenode

  7. #7
    Join Date
    Dec 2011
    Posts
    10

    Default

    +1 for a #1209 and #1208 fix!

    Can we take it from any CI build now?

    Thanks,
    Stephane.

    PS: Unfortunately (for him), Luc can only say "oui oui"...

  8. #8
    Join Date
    Mar 2007
    Posts
    142

    Default

    It is on CI at http://ci.pentaho.com/view/Analysis/...uild/artifact/

    You can also grab it from Pentaho's Maven repo under pentaho:mondrian:TRUNK-SNAPSHOT.


    And yes, I can pronounce "oui oui" correctly.
    Luc Boudreau
    aka. Luc le Magnifique
    aka. Monsieur Oui Oui

    Lead Engineer, Pentaho Corporation
    Web: http://devdonkey.blogspot.com
    Twitter: luclemagnifique
    IRC: Monsieur_Oui_Oui@freenode

  9. #9

    Default

    I am on the 3.4.1 community edition. Does the Community edition provide the "Scalable caching in Mondrian" feature? In that article, Julian mentioned that the Community edition provides a default implementation that uses JVM memory. Are there any instructions on how to get this up and running?

    Thanks in advance!
    Chang

  10. #10
    Join Date
    Mar 2007
    Posts
    142

    Default

    Pentaho has an "enterprise ready" implementation that we support. There is also a community project called Community Distributed Cache ( http://cdc.webdetails.org/ )

    To implement your own solution, take a look at the class mondrian.spi.SegmentCache.
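For a rough idea of what implementing that SPI looks like, here is a minimal in-memory sketch. Note that SegmentHeader and SegmentBody below are simplified placeholders, not the real mondrian.spi types, and the actual SegmentCache interface has additional methods (listener registration, tearDown, etc.), so consult the real class before implementing:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Placeholder stand-ins for mondrian.spi.SegmentHeader / SegmentBody.
class SegmentHeader {
    final String id;
    SegmentHeader(String id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return o instanceof SegmentHeader && ((SegmentHeader) o).id.equals(id);
    }
    @Override public int hashCode() { return id.hashCode(); }
}

class SegmentBody {
    final Object data;
    SegmentBody(Object data) { this.data = data; }
}

// A minimal in-memory cache shaped like the core of the SegmentCache SPI.
public class SimpleSegmentCache {
    private final Map<SegmentHeader, SegmentBody> cache = new ConcurrentHashMap<>();

    public boolean put(SegmentHeader header, SegmentBody body) {
        cache.put(header, body);
        return true;
    }
    public SegmentBody get(SegmentHeader header) { return cache.get(header); }
    public boolean contains(SegmentHeader header) { return cache.containsKey(header); }
    public boolean remove(SegmentHeader header) { return cache.remove(header) != null; }
    public List<SegmentHeader> getSegmentHeaders() { return new ArrayList<>(cache.keySet()); }

    public static void main(String[] args) {
        SimpleSegmentCache c = new SimpleSegmentCache();
        c.put(new SegmentHeader("sales-2012-q3"), new SegmentBody("cell data"));
        // A second header with the same identity locates the same segment.
        System.out.println(c.contains(new SegmentHeader("sales-2012-q3")));
    }
}
```

The key design point is that headers must have value-based equals/hashCode so that two Mondrian instances describing the same segment produce the same cache key.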
    Luc Boudreau
    aka. Luc le Magnifique
    aka. Monsieur Oui Oui

    Lead Engineer, Pentaho Corporation
    Web: http://devdonkey.blogspot.com
    Twitter: luclemagnifique
    IRC: Monsieur_Oui_Oui@freenode

  11. #11
    Join Date
    Jul 2007
    Posts
    2,498

    Default

    Pedro Alves
    Meet us on ##pentaho, a FreeNode irc channel

  12. #12

    Default

    Hi Luc,


    Using Pentaho Server and the CTools installer, I was able to configure Pentaho to use the Community Distributed Cache (CDC). This is really cool.


    Now I am trying to do a similar setup for just Mondrian, without the Pentaho Server. I have the mondrian.war deployed in my JBoss app server. I edited mondrian.properties to set "mondrian.rolap.SegmentCache=pt.webdetails.cdc.mondrian.SegmentCacheHazelcast" and have the CDC and related jar files on the classpath. When JBoss starts and the Mondrian server is initializing, I get the following errors. I have log4j debug turned on for "pt.webdetails" and there are no logs from the CDC classes. Any idea what configuration/setup I am missing? Does the standalone mondrian.war support a custom SegmentCache SPI?


    =================================
    2012-10-19 22:31:31,757 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: mondrian.rolap.cache.MemorySegmentCache
    2012-10-19 22:31:31,757 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Starting cache instance: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    . . .
    2012-10-19 22:31:31,773 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    . . .
    2012-10-19 22:31:31,773 ERROR [org.jboss.ejb.plugins.LogInterceptor] Unexpected Error in method: public abstract java.lang.Object com.vitria.component.server.beans.Administration.invokeService(java.lang.String,java.lang.String,java.lang.Object[]) throws com.vitria.component.api.ComponentException,java.rmi.RemoteException
    java.lang.ExceptionInInitializerError
    at mondrian.olap.MondrianServer.forId(MondrianServer.java:77)
    at mondrian.olap.DriverManager.getConnection(DriverManager.java:98)
    at mondrian.olap.DriverManager.getConnection(DriverManager.java:68)
    . . .
    . . .
    Caused by: java.lang.NullPointerException
    at pt.webdetails.cdc.mondrian.SegmentCacheHazelcast.getCache(SegmentCacheHazelcast.java:30)
    at pt.webdetails.cdc.mondrian.SegmentCacheHazelcast.addListener(SegmentCacheHazelcast.java:71)
    at mondrian.rolap.agg.SegmentCacheManager.<init>(SegmentCacheManager.java:273)
    at mondrian.rolap.agg.AggregationManager.<init>(AggregationManager.java:58)
    at mondrian.server.MondrianServerImpl.<init>(MondrianServerImpl.java:172)
    at mondrian.server.MondrianServerRegistry.createWithRepository(MondrianServerRegistry.java:184)
    at mondrian.server.MondrianServerRegistry.<init>(MondrianServerRegistry.java:48)
    at mondrian.server.MondrianServerRegistry.<clinit>(MondrianServerRegistry.java:33)
    =================================


    Thanks,
    Chang

  13. #13
    Join Date
    Mar 2007
    Posts
    142

    Default

    I don't think CDC ever worked without the BA server. You should contact the makers of the plugin @ WebDetails.
    Luc Boudreau
    aka. Luc le Magnifique
    aka. Monsieur Oui Oui

    Lead Engineer, Pentaho Corporation
    Web: http://devdonkey.blogspot.com
    Twitter: luclemagnifique
    IRC: Monsieur_Oui_Oui@freenode

  14. #14

    Default

    Hi Luc,

    QUESTION: What is the cache segment identifier that Mondrian uses to know that the two Mondrian instances are pointing to the same segment? Does it use the "Segment Header"? If so, the two values (Instance 1 and Instance 2) look the same. If they are the same, why did Instance 2 not locate the cache segment populated by Instance 1, and instead add a new segment to the cache? Is my analysis correct, or am I misreading the logs?


    In my MDX query, the query time of Instance 1 (the slower machine) is approximately the same as that of Instance 2 (376 sec vs. 288 sec); if caching were working, the Instance 2 query should take 0.5 sec or less.


    Any help is very much appreciated.
    ==============================

    This is my configuration:

    Mondrian Instance 1 on Machine A pointing to CDC Instance 1

    Mondrian Instance 2 on Machine B pointing to CDC Instance 1
    CDC Instance 1 is on Machine C.

    I see some communication between the nodes. However, the 2 Mondrian instances do not seem to be sharing the same segment cache. Please see the log below.


    =========== LOG =============
    ==========================================
    CDC Instance 1 starting up:
    ==========================================


    Oct 30, 2012 7:58:04 PM com.hazelcast.impl.AddressPicker
    INFO: Picked Address[10.206.133.99]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
    Oct 30, 2012 7:58:05 PM com.hazelcast.system
    INFO: [10.206.133.99]:5701 [cdc] Hazelcast Community Edition 2.2 (20120803) starting at Address[10.206.133.99]:5701
    Oct 30, 2012 7:58:05 PM com.hazelcast.system
    INFO: [10.206.133.99]:5701 [cdc] Copyright (C) 2008-2012 Hazelcast.com
    Oct 30, 2012 7:58:05 PM com.hazelcast.impl.LifecycleServiceImpl
    INFO: [10.206.133.99]:5701 [cdc] Address[10.206.133.99]:5701 is STARTING
    Oct 30, 2012 7:58:09 PM com.hazelcast.impl.MulticastJoiner
    INFO: [10.206.133.99]:5701 [cdc]




    Members [1] {
    Member [10.206.133.99]:5701 this
    }


    Oct 30, 2012 7:58:09 PM com.hazelcast.impl.LifecycleServiceImpl
    INFO: [10.206.133.99]:5701 [cdc] Address[10.206.133.99]:5701 is STARTED
    namespace: cdaCache
    hazelcast[cdaCache] > Oct 30, 2012 8:02:11 PM com.hazelcast.nio.SocketAcceptor
    INFO: [10.206.133.99]:5701 [cdc] 5701 is accepting socket connection from /10.206.133.51:1896
    Oct 30, 2012 8:02:11 PM com.hazelcast.nio.ConnectionManager
    INFO: [10.206.133.99]:5701 [cdc] 5701 accepted socket connection from /10.206.133.51:1896
    Oct 30, 2012 8:02:17 PM com.hazelcast.cluster.ClusterManager
    INFO: [10.206.133.99]:5701 [cdc]


    Members [2] {
    Member [10.206.133.99]:5701 this
    Member [10.206.133.51]:5701
    }


    Oct 30, 2012 8:02:19 PM com.hazelcast.impl.PartitionManager
    INFO: [10.206.133.99]:5701 [cdc] Initializing cluster partition table first arrangement...


    ==========================================
    Mondrian Instance 1 (10.206.133.51) starting up:
    ==========================================
    2012-10-30 20:02:32,064 INFO [pt.webdetails.cdc.CdcServletContextListener] contextInitialized
    2012-10-30 20:02:32,079 DEBUG [pt.webdetails.cdc.HazelcastManager] CDC init for config C:\ws\yoda\export\home\jboss\server\vtba/deploy/mondrian.war/WEB-INF/hazelcast.xml
    2012-10-30 20:02:32,282 INFO [pt.webdetails.cdc.HazelcastManager] Launching Hazelcast with C:\ws\yoda\export\home\jboss\server\vtba/deploy/mondrian.war/WEB-INF/hazelcast.xml
    2012-10-30 20:02:32,282 INFO [pt.webdetails.cdc.HazelcastManager] starting hazelcast
    2012-10-30 20:02:40,876 INFO [pt.webdetails.cdc.HazelcastManager] hazelcast running
    2012-10-30 20:02:40,876 INFO [pt.webdetails.cdc.HazelcastManager] registering with mondrian
    2012-10-30 20:02:40,876 DEBUG [pt.webdetails.cdc.HazelcastManager] adding mondrian listener


    ==========================================
    Mondrian Instance 2 (10.206.133.76) starting up:
    ==========================================
    2012-10-30 20:10:35,229 INFO [pt.webdetails.cdc.CdcServletContextListener] contextInitialized
    2012-10-30 20:10:35,244 DEBUG [pt.webdetails.cdc.HazelcastManager] CDC init for config D:\ws\yoda\export\home\jboss\server\vtba/deploy/mondrian.war/WEB-INF/hazelcast.xml
    2012-10-30 20:10:35,291 INFO [pt.webdetails.cdc.HazelcastManager] Launching Hazelcast with D:\ws\yoda\export\home\jboss\server\vtba/deploy/mondrian.war/WEB-INF/hazelcast.xml
    2012-10-30 20:10:35,291 INFO [pt.webdetails.cdc.HazelcastManager] starting hazelcast
    2012-10-30 20:10:43,337 INFO [pt.webdetails.cdc.HazelcastManager] hazelcast running
    2012-10-30 20:10:43,337 INFO [pt.webdetails.cdc.HazelcastManager] registering with mondrian
    2012-10-30 20:10:43,337 DEBUG [pt.webdetails.cdc.HazelcastManager] adding mondrian listener


    ==========================================
    Mondrian Instance 1 (10.206.133.51) communicating with Mondrian Instance 2:
    ==========================================
    2012-10-30 20:10:41,517 DEBUG [pt.webdetails.cdc.HazelcastManager] MEMBER ADDED: MembershipEvent {Member [10.206.133.76]:5701 lite} added
    2012-10-30 20:10:41,517 INFO [pt.webdetails.cdc.HazelcastManager] Adding maps to new member
    2012-10-30 20:10:41,564 INFO [pt.webdetails.cdc.HazelcastConfigHelper] sending map default to Member [10.206.133.76]:5701 lite...OK


    ==========================================
    CDC Instance (10.206.133.99) 1 accepting connections:
    ==========================================
    Oct 30, 2012 7:58:09 PM com.hazelcast.impl.LifecycleServiceImpl
    INFO: [10.206.133.99]:5701 [cdc] Address[10.206.133.99]:5701 is STARTED
    namespace: cdaCache
    hazelcast[cdaCache] > Oct 30, 2012 8:02:11 PM com.hazelcast.nio.SocketAcceptor
    INFO: [10.206.133.99]:5701 [cdc] 5701 is accepting socket connection from /10.206.133.51:1896
    Oct 30, 2012 8:02:11 PM com.hazelcast.nio.ConnectionManager
    INFO: [10.206.133.99]:5701 [cdc] 5701 accepted socket connection from /10.206.133.51:1896
    Oct 30, 2012 8:02:17 PM com.hazelcast.cluster.ClusterManager
    INFO: [10.206.133.99]:5701 [cdc]


    Members [2] {
    Member [10.206.133.99]:5701 this
    Member [10.206.133.51]:5701
    }


    Oct 30, 2012 8:02:19 PM com.hazelcast.impl.PartitionManager
    INFO: [10.206.133.99]:5701 [cdc] Initializing cluster partition table first arrangement...
    Oct 30, 2012 8:10:13 PM com.hazelcast.nio.SocketAcceptor
    INFO: [10.206.133.99]:5701 [cdc] 5701 is accepting socket connection from /10.206.133.76:2650
    Oct 30, 2012 8:10:13 PM com.hazelcast.nio.ConnectionManager
    INFO: [10.206.133.99]:5701 [cdc] 5701 accepted socket connection from /10.206.133.76:2650
    Oct 30, 2012 8:10:19 PM com.hazelcast.cluster.ClusterManager
    INFO: [10.206.133.99]:5701 [cdc]


    Members [3] {
    Member [10.206.133.99]:5701 this
    Member [10.206.133.51]:5701
    Member [10.206.133.76]:5701 lite
    }


    Oct 30, 2012 8:10:20 PM com.hazelcast.impl.PartitionManager
    INFO: [10.206.133.99]:5701 [cdc] Re-partitioning cluster data... Immediate-Tasks: 0, Scheduled-Tasks: 0


    =================================================
    Mondrian Instance 1 - Executing Mondrian Query:
    =================================================
    2012-10-30 20:25:32,814 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: mondrian.rolap.cache.MemorySegmentCache
    2012-10-30 20:25:32,814 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Starting cache instance: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    2012-10-30 20:25:32,829 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    2012-10-30 20:25:32,829 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    . . .
    2012-10-30 20:25:38,501 DEBUG [mondrian.rolap.agg.AggregationManager] generateSqlQuery: sql=select ( ...SQL statement has been removed...)
    2012-10-30 20:25:38,517 DEBUG [mondrian.rolap.RolapUtil] Segment.load: executing sql [( ...SQL statement has been removed...)], exec 9 ms
    2012-10-30 20:25:38,626 DEBUG [pt.webdetails.cdc.HazelcastManager] (local) Mondrian cache entry ADDED: *Segment Header
    Schema:[M3OAnalytics]
    Checksum:[8c6b050d4add5e722b93fbbf7891b41a]
    Cube:[ProcessTrackAndTraceBPM]
    Measure:[Average Processing Time]
    Axes:[
    {VT_PROC_DEF_D.MODEL_ID=('9d9a6fc2-a71f-475c-a3c8-c4ecb710f8e3')}
    {VT_TIME_DEF_D.YEAR=(*)}
    {VT_TIME_DEF_D.QUARTER=(*)}
    {VT_TIME_DEF_D.MONTH=(*)}]
    Excluded Regions:[]
    Compound Predicates:[]
    ID:[2709b76db8bf0ce693926bd2f459e634ca2c0a9cd3021e81c7597fd2ade584e2]


    =================================================
    Mondrian Instance 2 - responding to Mondrian Instance 1 cache changes:
    =================================================
    2012-10-30 20:25:38,804 DEBUG [pt.webdetails.cdc.HazelcastManager] (remote) Mondrian cache entry ADDED: *Segment Header
    Schema:[M3OAnalytics]
    Checksum:[8c6b050d4add5e722b93fbbf7891b41a]
    Cube:[ProcessTrackAndTraceBPM]
    Measure:[Average Processing Time]
    Axes:[
    {VT_PROC_DEF_D.MODEL_ID=('9d9a6fc2-a71f-475c-a3c8-c4ecb710f8e3')}
    {VT_TIME_DEF_D.YEAR=(*)}
    {VT_TIME_DEF_D.QUARTER=(*)}
    {VT_TIME_DEF_D.MONTH=(*)}]
    Excluded Regions:[]
    Compound Predicates:[]
    ID:[2709b76db8bf0ce693926bd2f459e634ca2c0a9cd3021e81c7597fd2ade584e2]


    =================================================
    Mondrian Instance 2 - Executing the same MDX Query, same OLAP Schema and same DB data sources
    =================================================


    2012-10-30 20:39:47,003 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: mondrian.rolap.cache.MemorySegmentCache
    2012-10-30 20:39:47,003 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Starting cache instance: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    2012-10-30 20:39:47,003 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    2012-10-30 20:39:47,003 DEBUG [mondrian.rolap.agg.SegmentCacheWorker] Segment cache initialized: pt.webdetails.cdc.mondrian.SegmentCacheHazelcast
    . . .
    2012-10-30 20:39:48,878 DEBUG [pt.webdetails.cdc.HazelcastManager] (local) Mondrian cache entry ADDED: *Segment Header
    Schema:[M3OAnalytics]
    Checksum:[8c6b050d4add5e722b93fbbf7891b41a]
    Cube:[ProcessTrackAndTraceBPM]
    Measure:[Average Processing Time]
    Axes:[
    {VT_PROC_DEF_D.MODEL_ID=('9d9a6fc2-a71f-475c-a3c8-c4ecb710f8e3')}
    {VT_TIME_DEF_D.YEAR=(*)}
    {VT_TIME_DEF_D.QUARTER=(*)}
    {VT_TIME_DEF_D.MONTH=(*)}]
    Excluded Regions:[]
    Compound Predicates:[]
    ID:[2709b76db8bf0ce693926bd2f459e634ca2c0a9cd3021e81c7597fd2ade584e2]




    =================================================
    Mondrian Instance 1 - responding to Mondrian Instance 2 cache changes:
    =================================================
    2012-10-30 20:39:48,767 DEBUG [pt.webdetails.cdc.HazelcastManager] (remote) Mondrian cache entry ADDED: *Segment Header
    Schema:[M3OAnalytics]
    Checksum:[8c6b050d4add5e722b93fbbf7891b41a]
    Cube:[ProcessTrackAndTraceBPM]
    Measure:[Average Processing Time]
    Axes:[
    {VT_PROC_DEF_D.MODEL_ID=('9d9a6fc2-a71f-475c-a3c8-c4ecb710f8e3')}
    {VT_TIME_DEF_D.YEAR=(*)}
    {VT_TIME_DEF_D.QUARTER=(*)}
    {VT_TIME_DEF_D.MONTH=(*)}]
    Excluded Regions:[]
    Compound Predicates:[]
    ID:[2709b76db8bf0ce693926bd2f459e634ca2c0a9cd3021e81c7597fd2ade584e2]


    =================================================
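    The two log excerpts show identical Segment Header fields and identical 64-hex-digit IDs, which suggests the ID is a hash over the header's contents (the digit count is consistent with SHA-256). The sketch below illustrates the general idea only; it is not Mondrian's actual ID derivation, and the field string is taken from the log, not from Mondrian's source:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class SegmentIdDemo {
    // Hash a header description into a hex ID, illustrating how identical
    // header fields must always map to the identical cache key.
    static String id(String headerDescription) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(headerDescription.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String h1 = "Schema:[M3OAnalytics];Cube:[ProcessTrackAndTraceBPM];Measure:[Average Processing Time]";
        String h2 = "Schema:[M3OAnalytics];Cube:[ProcessTrackAndTraceBPM];Measure:[Average Processing Time]";
        // Identical header fields produce the identical ID, so both instances
        // should be looking up the same cache key, as the logs confirm.
        System.out.println(id(h1).equals(id(h2)));
    }
}
```

    Since the IDs in both logs match, the miss is unlikely to be a key mismatch; it looks more like the lookup path (the index registry discussed earlier in this thread) rather than the key itself.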

  15. #15

    Default

    I was hoping MONDRIAN-1209 would fix the cache reload issue. I took the CI build today but still got the same reload issue.


Copyright © 2005 - 2019 Hitachi Vantara Corporation. All Rights Reserved.