

deployment of multiple PU and dynamic template

Hi All,

I am planning to deploy multiple PUs with embedded spaces in sync_replicated mode (see attached picture), since we would like to get the best read/write performance.
Our feeder application will feed data into the space using an algorithm that sets a routing value. The algorithm is simple: spaceRoutingId = hashKey.hashCode() % numberOfPUs. The routing value matches the number of PUs. Say we have 3 PUs and 3 spaces; the routing value will then be a number between 1 and 3.
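For what it's worth, here is a minimal, self-contained sketch of that routing computation (the key and PU count are placeholders). One caveat: String.hashCode() can be negative, so a plain % can return a negative remainder; Math.floorMod keeps the result in [0, numberOfPUs), i.e. 0-based rather than 1-based.

```java
public final class Routing {
    // Routing value in [0, numberOfPUs) for a given key.
    // Math.floorMod is used because hashCode() may be negative,
    // in which case the plain % operator would return a negative remainder.
    public static int spaceRoutingId(String hashKey, int numberOfPUs) {
        return Math.floorMod(hashKey.hashCode(), numberOfPUs);
    }
}
```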

Now my question: is there a way in GigaSpaces to identify the number (ID) of a PU instance? I would like to use it in the event template of the PU to filter the data in my NotifyContainer.

One more question: is there a way to speed up write performance from the CamelPUs to the cluster? Right now I see write times of about 3 ms!



  1. app.png

This thread was imported from the previous forum.
For your reference, the original is available here

asked 2013-06-24 18:29:33 -0500 by salemi

updated 2013-08-08 09:52:00 -0500 by jaissefsfex

2 Answers



To speed up write performance:
- Write in one-way mode, using the ONEWAY write modifier (no acknowledgement is waited for).
- If you have non-primitive fields, implement Externalizable for them.
- Use writeMultiple to batch write operations.

Are you deploying a partitioned-sync2Backup cluster or a sync_replicated cluster? You should deploy a partitioned-sync2Backup cluster.

With the partitioned-sync2Backup schema you don't need to worry about the number of PUs/spaces when partitioning the data. By default the spaceId field (make sure you use auto-generate = false) is also the routing field, so your data should be evenly distributed across all partitions.

If you collocate the data grid and your business logic, each partition will communicate with its collocated logic automatically (just don't use a clustered GigaSpace proxy). See the hello world example provided, or the basic Maven template.
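As a rough illustration of the batching advice above: the GigaSpaces call itself needs the XAP jars on the classpath, so it is only sketched in a comment (gigaSpace, Data, and the batch size of 1000 are assumptions); the batch-splitting helper is plain Java.

```java
import java.util.ArrayList;
import java.util.List;

public final class BatchWriter {
    // Split a list of entries into fixed-size batches so that each batch
    // can be pushed to the grid in a single writeMultiple call.
    public static <T> List<List<T>> batches(List<T> entries, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < entries.size(); i += batchSize) {
            out.add(entries.subList(i, Math.min(i + batchSize, entries.size())));
        }
        return out;
    }

    // With the XAP jars available, each batch would then be written roughly
    // like this (one-way: no ack is waited for, so latency drops, but write
    // failures are not reported back to the caller):
    //
    //   for (List<Data> batch : batches(pending, 1000)) {
    //       gigaSpace.writeMultiple(batch.toArray(new Data[0]),
    //                               WriteModifiers.ONE_WAY);
    //   }
}
```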


answered 2013-06-27 13:57:18 -0500 by shay hassidim



I have deployed our application as a partitioned-sync2Backup cluster. Unfortunately I can't use writeMultiple. As you recommended, I am distributing the messages.

I will try the ONE_WAY write mode and let you know. I have posted another thread about the write performance; the call stack seems to be causing the issue.

see: /question/217/gigaspaces-write-takes-a-long-time/

Thanks, Ali

salemi ( 2013-07-11 13:42:59 -0500 )

You can use the @ClusterInfoContext annotation to get more information about the partition. More info here: http://wiki.gigaspaces.com/wiki/display/XAP95/ObtainingClusterInformation

But by default, if your notify container is embedded in the space, it will only get notifications for changes on its own partition. Is your notify container running in a remote JVM? Can you move it to the space side?
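To make that concrete, a hedged sketch: the ClusterInfo injection is shown only in a comment (it needs the OpenSpaces jars), and the partition-selection arithmetic assumes GigaSpaces' documented rule of safeAbs(routingValue.hashCode()) % partitions with 1-based instance ids — verify this against your XAP version before relying on it.

```java
public final class PartitionFilter {
    // hashCode() may be Integer.MIN_VALUE, whose Math.abs is still negative.
    static int safeAbs(int v) {
        return v == Integer.MIN_VALUE ? Integer.MAX_VALUE : Math.abs(v);
    }

    // Assumed routing rule: partition = safeAbs(hash) % n (0-based),
    // while instance ids reported by ClusterInfo are 1-based.
    public static int targetInstanceId(Object routingValue, int partitions) {
        return safeAbs(routingValue.hashCode()) % partitions + 1;
    }

    // Inside a PU, the instance id could be obtained via injection, e.g.:
    //
    //   @ClusterInfoContext
    //   private ClusterInfo clusterInfo;
    //   ...
    //   int myId = clusterInfo.getInstanceId();   // 1-based
    //
    // and used to decide whether an entry with a given routing value
    // belongs to this partition:
    public static boolean belongsHere(Object routingValue, int partitions,
                                      int myInstanceId) {
        return targetInstanceId(routingValue, partitions) == myInstanceId;
    }
}
```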

Regarding the write performance: you can run the listener next to the data (embedded in the space) to reduce latency from milliseconds to microseconds.


answered 2013-06-25 13:54:13 -0500 by seankumar




Asked: 2013-06-24 18:29:33 -0500

Seen: 276 times

Last updated: Jun 27 '13