

# GigaSpaces and Hibernate 2nd Level Cache - Sync Replicated

Hi

We are in the process of replacing TreeCache with the GigaSpaces Hibernate 2nd level cache. We are using the sync replicated embedded topology with the following space URLs:

App 1:

```
cache.gigaspaces.url=/./space?cluster_schema=sync_replicated&total_members=2&id=1&groups=xap6
```

App 2:

```
cache.gigaspaces.url=/./space?cluster_schema=sync_replicated&total_members=2&id=2&groups=xap6
```

Our bean definition looks like:

```xml
<os-core:space id="space" url="${cache.gigaspaces.url}"/>
<os-core:giga-space id="gigaSpace" space="space"/>
<os-core:map id="gigaSpaceMap" space="space"/>
```

Replication seems to work fine, but when Tomcat starts up we get this warning:

```
INFO: Space [space_container1:space] trying to perform recovery from [jini://*/space_container2/space?groups=xap6&ignoreValidation=true&state=started]. RecoveryChunkSize=200
13-Oct-2008 11:35:43 com.j_spaces.obf.pw a
WARNING: Member: [space_container1:space] failed to find an available member to recover from.
com.j_spaces.core.client.FinderException: LookupFinder failed to find service using the following service attributes:
 Service attributes: [net.jini.lookup.entry.Name(name=space)]
 Service attributes: [com.j_spaces.lookup.entry.ContainerName(name=space_container2)]
 Service attributes: [com.gigaspaces.cluster.activeelection.core.ActiveElectionState(state=ACTIVE)]
 Service attributes: [com.j_spaces.lookup.entry.State(state=started,electable=null,replicable=null)]
 Lookup timeout: [5000]
 Classes: [interface com.j_spaces.core.service.Service]
 Jini Lookup Groups: [xap6]
 Number of Lookup Services: 1
	at com.j_spaces.core.client.LookupFinder.find(SourceFile:332)
	at com.j_spaces.core.client.SpaceFinder.a(SourceFile:1044)
	at com.j_spaces.core.client.SpaceFinder._find(SourceFile:615)
	at com.j_spaces.core.client.SpaceFinder.find(SourceFile:450)
	at com.j_spaces.obf.pw.a(SourceFile:7524)
	at com.j_spaces.obf.pv.a(SourceFile:129)
	at com.j_spaces.obf.pv.a(SourceFile:24)
	at com.j_spaces.obf.ph.d(SourceFile:304)
	at com.j_spaces.obf.ph.recover(SourceFile:276)
	at com.j_spaces.obf.pv.a(SourceFile:223)
	at com.j_spaces.obf.pv.ia(SourceFile:177)
	at com.j_spaces.core.JSpaceImpl.y(SourceFile:3293)
	at com.j_spaces.core.JSpaceImpl.start(SourceFile:3092)
	at com.j_spaces.core.JSpaceImpl.<init>(SourceFile:350)
	at com.j_spaces.core.JSpaceImpl.<init>(SourceFile:290)
	at com.j_spaces.core.JSpaceContainerImpl.a(SourceFile:2785)
	at com.j_spaces.core.JSpaceContainerImpl.a(SourceFile:2708)
	at com.j_spaces.core.JSpaceContainerImpl.bg(SourceFile:1267)
	at com.j_spaces.core.JSpaceContainerImpl.aZ(SourceFile:661)
	at com.j_spaces.core.JSpaceContainerImpl.<init>(SourceFile:556)
	at com.j_spaces.core.client.SpaceFinder.a(SourceFile:769)
	at com.j_spaces.core.client.SpaceFinder.a(SourceFile:888)
	at com.j_spaces.core.client.SpaceFinder._find(SourceFile:612)
	at com.j_spaces.core.client.SpaceFinder.internalFind(SourceFile:412)
	at com.j_spaces.core.client.SpaceFinder.internalFind(SourceFile:387)
	at com.j_spaces.core.client.SpaceFinder.find(SourceFile:206)
	at org.openspaces.core.space.UrlSpaceFactoryBean.doCreateSpace(UrlSpaceFactoryBean.java:286)
	at org.openspaces.core.space.AbstractSpaceFactoryBean.afterPropertiesSet(AbstractSpaceFactoryBean.java:136)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1367)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1333)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:471)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409)
	at java.security.AccessController.doPrivileged(Native Method)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:220)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeansOfType(DefaultListableBeanFactory.java:308)
	at org.springframework.context.support.AbstractApplicationContext.getBeansOfType(AbstractApplicationContext.java:948)
	at org.springframework.context.support.AbstractApplicationContext.registerListeners(AbstractApplicationContext.java:702)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:378)
	at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
	at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
	at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3843)
	at org.apache.catalina.core.StandardContext.start(StandardContext.java:4342)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:525)
	at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:627)
	at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553)
	at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488)
	at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1149)
	at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311)
	at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
	at org.apache.catalina.core.StandardHost.start(StandardHost.java:719)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
	at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
	at org.apache.catalina.core.StandardService.start(StandardService.java:516)
	at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
	at org.apache.catalina.startup.Catalina.start(Catalina.java:578)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
	at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
```

We also get the following when the second node starts:

```
Replication channel moved to state: CONNECTED [ source space: space_container2:space ] [ target space: space_container1:space ; target space url: jini://*/space_container1/space?groups=xap6&ignoreValidation=true&state=started&timeout=5000 ]
```

The order in which we start up the 2 Tomcat nodes makes no difference. Even if only one of the nodes is started, we still get this error.

Any help would be highly appreciated, as we are hacked off with TreeCache!

Edited by: amin mohammed-coleman on Oct 13, 2008 6:09 AM

#### Attachments

- [result1NodeStandardSettings.txt](/upfiles/13759714182419887.txt)

> This thread was imported from the previous ...


## 2 Answers


Amin,

Something is not clear here.
Can you explain which of the options described here you are using:
http://www.gigaspaces.com/wiki/display/XAP66/GigaSpacesforHibernateORMUsers

Are you using the GigaSpaces API directly (with Hibernate in the backend via the Hibernate External Data Source), or are you still using the Hibernate API with GigaSpaces as the Hibernate 2nd level cache?

If you use the GigaSpaces API directly: are you using the Space API with POJOs, or the Map API?

Shay


## Comments

Hi

We originally tried to use the embedded sync replicated topology, as mentioned in the link you provided. Following your recommendations, we are now using the master-local topology. We are still using the Hibernate API directly but leaving the caching to GigaSpaces. We also have a requirement to cache domain ACLs, for which we are using the GigaMap, configured via Spring.
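For reference, the ACL GigaMap wiring described above could look roughly like this - a minimal sketch only, assuming a remote master space named `space` in lookup group `xap6` (the bean ids and URL here are assumptions, not the poster's actual config):

```xml
<!-- Sketch: proxy to the remote master space (name/group are assumptions) -->
<os-core:space id="space" url="jini://*/*/space?groups=xap6"/>

<!-- Client-side local cache in front of the remote space -->
<os-core:local-cache id="localCacheSpace" space="space"/>

<!-- GigaMap backed by the locally cached space, e.g. for domain ACLs -->
<os-core:map id="aclMap" space="localCacheSpace"/>
```

With this layout, reads of ACL entries are served from the local cache while updates go through to the master space.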

Not sure if that makes sense...

Cheers

(2008-10-14 10:28:19 -0500)

Just another quick question: do you specify a backup of the master cache when starting the master space?

cheers

(2008-10-14 10:56:40 -0500)

It makes sense.

The embedded sync replicated topology is relevant when you have a known number of clients and would like fast reads. The local cache is relevant when you have an unknown number of clients and would like fast reads.

With the second option data will be loaded into the local cache once you read the data.

Can you share your Hibernate config? How do you use the GigaMap? Are you using org.openspaces.hibernate.cache.SimpleMapCacheProvider?

Shay

(2008-10-14 11:18:00 -0500)

Hi Shay

Thanks for your reply and patience! I'm currently at home, so unfortunately I can't provide the Hibernate config. However, we are using SimpleMapCacheProvider, and the GigaMap is configured using Spring (you can see this at the beginning of this post).
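For completeness, wiring SimpleMapCacheProvider into Hibernate typically comes down to two standard Hibernate 3 cache settings - a sketch only, since the actual config isn't shown in this thread; depending on the OpenSpaces version, the provider may also need the backing map/space wired, so check the wiki page linked earlier:

```xml
<!-- Hibernate 2nd level cache settings (sketch, Hibernate 3 property names) -->
<prop key="hibernate.cache.use_second_level_cache">true</prop>
<prop key="hibernate.cache.provider_class">org.openspaces.hibernate.cache.SimpleMapCacheProvider</prop>
```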

We have a period (December) in which there will be a high volume of trades from our limited set of users, so we are currently looking at GigaSpaces to provide the Hibernate 2nd level cache. Although I understand the master-local topology, I am wondering if sync replication might be a better option in this situation, as we will have approximately 4 nodes running 4 apps in a cluster, and changes on one node should be reflected on the other nodes as well. I'm not sure what you think...

We would ideally like to remove the exception on start up with sync replication (if sync replication is the way to go). The other thing I would like to know: could there be a sync replication conflict on the same data, and does the master-local topology remove this?

Thanks again!


(2008-10-14 13:34:29 -0500)

So the question now is which is better:
- sync replication between 4 spaces (a destructive operation replicated to 3 nodes), or
- 2 serial remote calls (from the client to the primary and from the primary to the backup) + a parallel async notify to 3 local cache instances.

The second looks more expensive.

It is true that our sync replication is parallel, and this allows each space to fully recover from one of the others once started and have the data instantly available (without the lazy load you get with the local cache). But a client that performs a destructive operation will need to wait for all nodes to ack the operation. This means the response time depends on the slowest node (one parallel remote call), so it will be hard to get deterministic behavior in this case.

With master-local, destructive operations go to the primary and the backup - 2 nodes (2 serial remote calls).

I guess you will need to test each topology and see which one performs better for you.

Have you tried the parameter I suggested for removing the exception?

With master-local you can't have problems with concurrent updates.

Shay

(2008-10-14 16:03:29 -0500)

Do you see missing objects in the spaces?

It seems this message is thrown because one of the spaces did not have its replica started yet. This should be resolved once the replica becomes available; it usually takes a few seconds (in case you start them at the same time). Once they establish the replication connection you should not see this message.

Shay


## Comments

Hi

Thanks for your reply.

This stack trace is thrown on tomcat start up. Replication between nodes works fine, we can see the state of play using the management console when all nodes are started up.

We were wondering if we have configured our clustering properties incorrectly (although we have followed the example provided). We haven't defined a replica (I presume you mean a backup). We have 2 nodes that are clustered; we start the nodes one by one, and for each node we get this warning (which prints a stack trace), even though all seems to be OK once started up.

Node 1 starts up with:

```
INFO: Space <space_container1:space> with url [/./space?cluster_schema=sync_replicated&total_members=2&id=1&groups=xap6&schema=default&state=started] started successfully
```

Node 2 states that it has connected to node 1. We would like to get rid of this warning, as the stack trace could confuse people into thinking something horrible has happened.

Cheers Amin

(2008-10-13 09:42:22 -0500)

Amin,

Try this:

```
cluster-config.groups.group.repl-policy.repl-find-timeout=20000
```
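One way to pass this override through the Spring bean shown at the top of the thread is via the nested `os-core:properties` element - a sketch only, assuming the OpenSpaces core namespace from the original config; the property could equally be set in a custom properties file:

```xml
<os-core:space id="space" url="${cache.gigaspaces.url}">
    <os-core:properties>
        <props>
            <!-- Give each member more time to find a peer to recover from -->
            <prop key="cluster-config.groups.group.repl-policy.repl-find-timeout">20000</prop>
        </props>
    </os-core:properties>
</os-core:space>
```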

Please remember that the sync-replication cluster schema is not primary-backup. With a sync-replication cluster both spaces are active, which means you need to be aware that the 2 copies might be updated at the same time. With primary-backup this cannot happen.

Shay

(2008-10-13 13:18:26 -0500)

Hi

It may be worth mentioning our situation:

We have a product (A) which is deployed on several nodes, and we use a Hibernate 2nd level cache in order to improve the performance of the application. We currently use sync replication with TreeCache (we have had several issues with TreeCache and have decided it should be replaced). We're not interested in backups; we want changes in the cache on one node to be replicated to the other nodes in the cluster.

Is sync replication the best approach to take for the above scenario or are we missing something in our url configuration?

We would like the ability to add and remove nodes from the cluster without the exception we have noticed. I can understand a warning message being displayed stating that it could not find an available member to recover from, but the stack trace is very misleading.

Cheers

(2008-10-14 03:21:18 -0500)

In this case it sounds like a local cache might be a better option for you than an embedded replicated space. You can have the master space as a partitionedSync2Backup cluster.
- It will allow you to have a local cache with each node - a hub & spoke topology
- You will not get this exception when a node starts
- You will have very fast reads - faster than an embedded space. The local cache in this case is based on a concurrent hashtable, not a space
- It is mostly relevant for read-mostly scenarios
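The local-cache side of that hub-and-spoke topology can also be expressed purely in the client's space URL - a sketch only, assuming the grid is deployed under the name `dataGrid` and uses the same `xap6` lookup group as earlier in the thread (both names are assumptions):

```
jini://*/*/dataGrid?useLocalCache&groups=xap6
```

Each client that connects with `useLocalCache` gets its own in-process cache that is kept up to date by notifications from the master space.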

Shay

(2008-10-14 06:21:40 -0500)

Hi again..

Apologies for coming back to you on this, but we have hit a problem. Based on your recommendations we have decided to try the master-local distributed topology; however, we cannot get it to work even though we have followed the instructions on the wiki. We cannot even start the master space using the following command:

```
gs.bat pudeploy ..\deploy\templates\datagrid
```

The following stack trace is thrown:

```
14-Oct-2008 15:43:14 SEVERE [com.gigaspaces.admin.cli]: Error deploying [..\deploy\templates\datagrid]
java.io.IOException: Server returned HTTP response code: 403 for URL: http://192.168.188.51:1888/..\deploy\...
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1241)
	at java.net.URL.openStream(URL.java:1009)
	at org.openspaces.pu.container.servicegrid.deploy.Deploy.readPUFile(Deploy.java:435)
	at org.openspaces.pu.container.servicegrid.deploy.Deploy.buildOperationalString(Deploy.java:302)
	at org.openspaces.pu.container.servicegrid.deploy.Deploy.deploy(Deploy.java:171)
	at org.openspaces.pu.container.servicegrid.deploy.Deploy.deploy(Deploy.java:167)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:36)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:243)
	at java.beans.Statement.invoke(Statement.java:214)
	at java.beans.Statement.execute(Statement.java:123)
	at com.gigaspaces.admin.cli.PUDeployHandler.process(PUDeployHandler.java:60)
	at com.gigaspaces.admin.cli.GS.main(GS.java:1490)
```

We also could not find the message [INFO [org.jini.rio.cybernode]: Registered to a ProvisionManager] when starting gs-all.bat.

Suppose we make it work. You said we could use partitionedSync2Backup. Does that mean the local cache can use the url "jini:////dataGrid?useLocalCache&groups=gigaspaces-6.0XAP", and that we should set some parameters only in the master space pu.xml (defined in deploy\templates\datagrid\META-INF\spring)? Does the client url not need the schema definition?

Just worth mentioning: we are using gigaspaces-xap-6.5.1-ga and we haven't changed anything in the distribution (apart from what was recommended in the wiki).

Once again thanks for the help!

(2008-10-14 09:57:00 -0500)


## Stats

Asked: 2008-10-13 05:57:24 -0500

Seen: 243 times

Last updated: Oct 14 '08