Space Replication is not working in any of the cluster modes

I have two servers, each running a GSA, and both have the NIC_ADDR and LOOKUPLOCATORS variables set in setenv.sh.

While testing, we found that if we shut down one server, the other server's cache is wiped out (no cache objects remain in memory).

Please find the attached files:
1. space.zip contains the pu.xml / sla.xml used to deploy the space (currently I am using the partitioned-sync2backup cluster schema).
2. applicationContext.xml is used by the application (a Spring app) running on both servers.

Which configuration am I missing here in order to pick up the other server's cache when one server goes down?

I have checked the cache information in the web console (gs-webui.sh); it was cleared.
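For reference, the space definition in pu.xml is roughly along these lines (a simplified sketch; the full file is in the attached space.zip, and the cluster schema and instance counts come from sla.xml at deployment time):

<os-core:space id="space" url="/./upsSpace" />
<os-core:giga-space id="gigaSpace" space="space" />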

Note:
While Tomcat (the application) starts up, I add the elements into the cache.

While deploying upsSpace with the pu.xml and sla.xml files, I have noticed the following:

Case 1:
Refer file 80_Primary.PNG

If upsSpace.1 1 is deployed on 81 (primary): if we stop 80 and start 81, it works fine.
But if we stop 81, the cache is cleared and the same is replicated to 80.

Case 2:
Refer file 81_Primary.PNG

upsSpace.1 1 is deployed on 80 (primary): if we stop 81 and start 80, it works fine.
But if we stop 80, the cache is cleared and the same is replicated to 81.

Attachments

  1. aftershutdwon80.PNG
  2. after80started.PNG
  3. beforeshutdown80.PNG
  4. 8180Primary.PNG
  5. upsSpace.zip
  6. applicationContext.xml
  7. space.zip
  8. objects.zip
  9. 81_Primary.PNG
  10. 80_Primary.PNG

This thread was imported from the previous forum.
For your reference, the original is available here

asked 2013-05-07 17:53:29 -0600 by get4gopim

updated 2013-08-08 09:52:00 -0600 by jaissefsfex

2 Answers


These properties you added are redundant:

<prop key="cluster-config.groups.group.repl-policy.replication-mode">sync</prop>
<prop key="cluster-config.groups.group.repl-policy.sync-replication.throttle-when-inactive">true</prop>

They are the default values.

What is happening is that you have configured the system with a synchronous external data source (read-write).
Do you have an external data source in the system? Where is it configured?

<prop key="space-config.engine.cache_policy">1</prop>
<prop key="space-config.external-data-source.usage">read-write</prop>
<prop key="cluster-config.cache-loader.external-data-source">true</prop>
<prop key="cluster-config.cache-loader.central-data-source">true</prop>

If you set the space to work with a central external database and an LRU eviction policy, it will not replicate data to the backup space, because it assumes the data is in the database; upon failover, when the backup replaces the primary, it can load the data from the database on a cache miss.
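If there is no real external database behind the space, a sketch of a pure in-memory setup would be to drop the external-data-source / cache-loader properties and keep only the cache policy (shown here inside the os-core:space bean; adjust to match your actual pu.xml):

<os-core:space id="space" url="/./upsSpace">
    <os-core:properties>
        <props>
            <!-- 1 = ALL_IN_CACHE: keep all objects in memory, no eviction -->
            <prop key="space-config.engine.cache_policy">1</prop>
        </props>
    </os-core:properties>
</os-core:space>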

Which version are you using?

answered 2013-05-08 09:00:13 -0600 by eitany

I have removed the unnecessary config properties from the pu.xml file and added the properties below for the replication policy.

<prop key="cluster-config.groups.group.repl-policy.replication-mode">sync</prop>
<prop key="cluster-config.groups.group.repl-policy.sync-replication.throttle-when-inactive">true</prop>

After this, I noticed that the replication is happening, but:

Case 1:
Servers 80 and 81 are up and both partitions are working fine.
Note that upsSpace is deployed on 80 (2 instances) and 81 (2 instances).

Case 2:
If server 80 is down, the cache is replicated on both servers after a few seconds.

Case 3:
After that, note that upsSpace is deployed on 81 (2 instances) and 81 (2 instances).

Case 4:
If you start 80 again, upsSpace is not deployed back onto 80.

Case 5:
If you stop 81, then the cache is gone.

I think this problem would be solved if, after 80 and 81 are restored, the deployment changed from
81 (2 instances) and 81 (2 instances)
to
80 (2 instances) and 81 (2 instances).
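One way to force that, presumably, would be to relocate the backup instances back onto 80 with the Admin API once 80 is up again (a rough sketch; the lookup locator and the ".80" host check are placeholders, not my actual values):

import com.gigaspaces.cluster.activeelection.SpaceMode;
import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.gsc.GridServiceContainer;
import org.openspaces.admin.pu.ProcessingUnit;
import org.openspaces.admin.pu.ProcessingUnitInstance;

// Rough sketch: move backup instances back onto the restored server.
public class RebalanceUpsSpace {
    public static void main(String[] args) {
        Admin admin = new AdminFactory().addLocator("lookup-host:4174").createAdmin(); // placeholder locator
        try {
            ProcessingUnit pu = admin.getProcessingUnits().waitFor("upsSpace");
            admin.getGridServiceContainers().waitFor(4); // wait until containers on both servers are discovered
            for (ProcessingUnitInstance instance : pu.getInstances()) {
                // Relocate only backups, so the primaries keep serving requests.
                if (instance.getSpaceInstance() != null
                        && instance.getSpaceInstance().getMode() == SpaceMode.BACKUP) {
                    for (GridServiceContainer gsc : admin.getGridServiceContainers()) {
                        boolean onRestoredServer = gsc.getMachine().getHostAddress().endsWith(".80"); // placeholder check
                        if (onRestoredServer && gsc.getProcessingUnitInstances().length == 0) {
                            instance.relocateAndWait(gsc);
                            break;
                        }
                    }
                }
            }
        } finally {
            admin.close();
        }
    }
}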

Attachments

  1. aftershutdwon80.PNG
  2. after80started.PNG
  3. beforeshutdown80.PNG
  4. 8180Primary.PNG
  5. upsSpace.zip

answered 2013-05-08 08:15:16 -0600 by get4gopim

Comments

Yes, I noticed that the external data source config is not needed. I have removed it; you can find the updated upsSpace.zip attached to my previous post, which is the latest one.

I am inserting the data when the application starts up for the first time (manually, via the gigaSpace.write(...) method).
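Roughly like this (a simplified sketch; CacheEntry and the preloader class are placeholders, not my actual classes):

import com.gigaspaces.annotation.pojo.SpaceId;
import org.openspaces.core.GigaSpace;

// Simplified sketch of the startup preloader.
public class CachePreloader {

    // Placeholder POJO for the cached objects; the real class has more fields.
    public static class CacheEntry {
        private String id;
        private String value;

        public CacheEntry() {
        }

        public CacheEntry(String id, String value) {
            this.id = id;
            this.value = value;
        }

        @SpaceId
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }

        public String getValue() { return value; }
        public void setValue(String value) { this.value = value; }
    }

    private final GigaSpace gigaSpace; // injected via applicationContext.xml (os-core:giga-space)

    public CachePreloader(GigaSpace gigaSpace) {
        this.gigaSpace = gigaSpace;
    }

    // Called once when Tomcat starts the application.
    public void preload() {
        gigaSpace.write(new CacheEntry("1", "first entry"));
        gigaSpace.write(new CacheEntry("2", "second entry"));
    }
}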

get4gopim ( 2013-05-08 09:11:46 -0600 )

I am using:

GigaSpaces XAP: Edition: XAP Premium 9.1.2 GA Build: 7920

get4gopim ( 2013-05-08 09:38:09 -0600 )

If I remove the below settings

<prop key="space-config.engine.cache_policy">1</prop>
<prop key="space-config.external-data-source.usage">read-write</prop>
<prop key="cluster-config.cache-loader.external-data-source">true</prop>
<prop key="cluster-config.cache-loader.central-data-source">true</prop>

Will this work? I need to remove these settings from the applicationContext of my application too, right?

get4gopim ( 2013-05-08 21:51:39 -0600 )

I had missed specifying "max-instances-per-machine=1" in the sla.xml file while deploying.

<os-sla:sla cluster-schema="sync_replicated" number-of-instances="2" number-of-backups="0" max-instances-per-vm="1" max-instances-per-machine="1" />

After this change, if any one of the servers goes down, the replication happens properly.

This works even if I switch to any of the schemas (async_replicated, sync_replicated, partitioned-sync2backup).
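For the partitioned-sync2backup schema, the equivalent SLA would presumably look like this (a sketch for two partitions, each with one backup, with primary and backup forced onto different machines):

<os-sla:sla cluster-schema="partitioned-sync2backup" number-of-instances="2" number-of-backups="1" max-instances-per-vm="1" max-instances-per-machine="1" />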

Edited by: Gopinathan Mani on May 9, 2013 4:16 AM

get4gopim ( 2013-05-09 04:12:49 -0600 )


Stats

Asked: 2013-05-07 17:53:29 -0600

Seen: 185 times

Last updated: May 08 '13