
Repartitioning existing entries with minimum overhead

Assume this scenario:
- 1 space split across 2 GSCs
- simple routing value

I want to perform multiple phases of processing, where the data is co-located by different criteria at each phase.

Clearly I could take all the entries from the space, update the routing value and then write them all back. But this is fairly inefficient.
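For context on why changing the routing value implies moving the entry: the partition that owns an entry is derived from the hash of its routing property. A minimal plain-Java sketch of that mapping (the exact modulo scheme is an assumption based on the documented content-based routing behaviour; the class and method names are illustrative):

```java
// Illustration of content-based routing: the owning partition is derived
// from the hashCode of the entry's routing property, modulo the number of
// partitions. Changing the routing value therefore generally changes the
// owning partition, which is why the entry itself has to move.
public class RoutingSketch {
    public static int partitionIdFor(Object routingValue, int partitionCount) {
        int h = routingValue.hashCode();
        // Guard against Integer.MIN_VALUE, whose Math.abs is still negative
        int safeAbs = (h == Integer.MIN_VALUE) ? Integer.MAX_VALUE : Math.abs(h);
        return safeAbs % partitionCount; // 0-based partition index
    }
}
```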

If I try to use the Change API to update the routing value...

    ISpaceQuery<TestEntry> idQuery = new IdQuery<TestEntry>(TestEntry.class, entry.getId());
    ChangeResult<TestEntry> result = space.change(idQuery, new ChangeSet().set("routingValue", "fish"));

...then I get:

    Operation is rejected - the routing value in the change entry of type 'TestEntry' does not match this space partition id. The value within the entry's routing property named 'routingValue' is fish which matches partition id 1 while current partition id is 2. Having a mismatching routing value would result in a remote client not being able to locate this entry as the routing value will not match the partition the entry is located. (you can disable this protection, though it is not recommended, by setting the following system property: com.gs.protectiveMode.wrongEntryRoutingUsage=false)

I see this: http://docs.gigaspaces.com/xap97adm/t... which explains the warning.

Now, I'm totally fine with a remote client not being able to locate a particular entry - what I'm really looking to do is to repartition the data with the minimum performance overhead.

Any ideas? If I use the property to bypass the check then it doesn't move the data around :(

Am I right to assume that a local takeMultiple/writeMultiple of all entries will have a large overhead?

Thanks in advance for suggestions

asked 2015-04-24 05:32:17 -0600

Paul Hilliar


What do your processes do? Why do you need to update the routing value? Are you using event containers?

Yuval ( 2015-04-26 09:13:36 -0600 )

It's a reconciliation process. Objects are grouped by different fields at different stages of the reconciliation. They need to be co-located for each stage because the objects are speculatively grouped in memory by the hash of a combination of fields.

So the requirement is to move the objects to be co-located several times during the overall process.

takeMultiple/writeMultiple in small batches is my fallback position, but I feel like Gigaspaces may be (should be) able to offer something better than re-writing the whole object in order to move it around.

The garbage collection spike of takeMultiple/writeMultiple is something I really want to avoid if at all possible....

Paul Hilliar ( 2015-04-27 04:42:45 -0600 )

3 Answers


takeMultiple/writeMultiple will work with a small data set that fits in the client JVM heap. It may generate a large garbage collection spike.

You should consider using a distributed transaction in this case.

For large data sets you may consider:

  • Exporting the data into a file (see the export utility best practice), or using the space copy API into a single-partition space with a huge heap or SSD storage
  • Removing the data (clear) from the existing space, or constructing a new data grid
  • Writing the data back in small batches. 1000-5000 objects per batch is a good number. You can also use one-way mode to speed up the data load.
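The batched write-back step above can be sketched in plain Java; `writeBatch` here is an illustrative stand-in for whatever performs the actual write (e.g. a writeMultiple call in a real deployment):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchLoader {
    // Splits a full data set into fixed-size batches and hands each batch
    // to a writer, keeping the per-call memory footprint bounded.
    public static <T> int loadInBatches(List<T> all, int batchSize, Consumer<List<T>> writeBatch) {
        int batches = 0;
        for (int i = 0; i < all.size(); i += batchSize) {
            List<T> batch = all.subList(i, Math.min(i + batchSize, all.size()));
            writeBatch.accept(new ArrayList<>(batch)); // copy: subList is a view
            batches++;
        }
        return batches;
    }
}
```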

Another option that avoids the data export, in case you have persistence enabled, is to change the routing data within the database and restart the data grid. This will initiate a data load that places each object into its correct partition. A custom initial load will speed up this process significantly.

One cool option is to perform this transfer via a Distributed Task. In this case the task iterates over the relevant objects via a matching template (use an indexed boolean field to indicate an object has not been transferred yet), takes them (via the collocated proxy, using takeMultiple with 1000 as the max_objects value) and writes them with the new routing field, setting the transfer boolean field to true (via a remote clustered proxy with writeMultiple). This approach should be relatively fast if you execute multiple Tasks concurrently.
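The core of that task boils down to: take a bounded batch of not-yet-transferred objects from the local partition, flip the flag, and write them through to the partition owning the new routing value. A plain-Java sketch of one iteration of that loop (maps stand in for partitions; all names and the hash-modulo routing rule are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class TransferSketch {
    static class Entry {
        final String id;
        String routingValue;
        boolean transferred;
        Entry(String id, String routingValue) { this.id = id; this.routingValue = routingValue; }
    }

    // One task iteration: take up to maxObjects untransferred entries from
    // the local partition (the "takeMultiple" on a template), mark them
    // transferred so the next iteration skips them, and write them to the
    // partition owning the new routing value (the clustered "writeMultiple").
    static int transferBatch(List<Entry> localPartition, Map<Integer, List<Entry>> cluster,
                             int partitionCount, int maxObjects, String newRouting) {
        List<Entry> taken = new ArrayList<>();
        Iterator<Entry> it = localPartition.iterator();
        while (it.hasNext() && taken.size() < maxObjects) {
            Entry e = it.next();
            if (!e.transferred) { it.remove(); taken.add(e); }
        }
        for (Entry e : taken) {
            e.routingValue = newRouting;
            e.transferred = true; // avoid re-taking on the next iteration
            int target = Math.abs(newRouting.hashCode()) % partitionCount;
            cluster.computeIfAbsent(target, k -> new ArrayList<>()).add(e);
        }
        return taken.size();
    }
}
```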

Will any of these work for you?

answered 2015-04-26 09:30:44 -0600

shay hassidim

updated 2015-04-28 14:22:05 -0600



Thanks. I think the Distributed Task is a good plan for finding the objects that need to be rewritten and updating their routing value.

However, I'm still really concerned about the potential GC spike of a big takeMultiple/writeMultiple and it seems so wasteful.

The ideal API for this task would look something like the Change API, but would:
a) operate on multiple entries in one call (changeMultiple)
b) allow the routing value to be updated

I should add that the call to rewrite the routing values will be made locally, not remotely as in the code above.

Maybe WriteModifiers.PartialUpdate is the answer? http://wiki.gigaspaces.com/wiki/displ...
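For reference, a partial update merges only the populated fields of the written template into the stored entry, leaving null-valued fields untouched. A plain-Java sketch of that merge rule (field maps and names are illustrative, not the actual XAP implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class PartialUpdateSketch {
    // Simulates partial-update semantics: null-valued fields in the update
    // template are ignored; non-null fields overwrite the stored values.
    static Map<String, Object> merge(Map<String, Object> stored, Map<String, Object> template) {
        Map<String, Object> result = new HashMap<>(stored);
        template.forEach((field, value) -> { if (value != null) result.put(field, value); });
        return result;
    }
}
```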

Paul Hilliar ( 2015-04-27 04:56:50 -0600 )

I tried the DistributedTask option, with both variants - readMultiple -> writeMultiple with PartialUpdate, and takeMultiple -> writeMultiple - to update the routing value, and the entries were not moved! Any ideas?

space.execute(new DistributedTask<Integer, Integer>() {
    @TaskGigaSpace
    private transient GigaSpace gs;

    public Integer execute() throws Exception {
        System.out.println("DistributedTask execute takeMultiple gs: " + this.gs);
        TestEntry[] entries = this.gs.readMultiple(new TestEntry());

        List<TestEntry> entriesToWrite = new ArrayList<TestEntry>();
        for (TestEntry entry : entries) {
            TestEntry entryToWrite = new TestEntry();
            // id and new routing value are copied onto entryToWrite here
            entriesToWrite.add(entryToWrite);
        }
        LeaseContext<TestEntry>[] results = this.gs.writeMultiple(entriesToWrite.toArray(new TestEntry[]{}), WriteModifiers.PARTIAL_UPDATE);
        System.out.println("DistributedTask updated: " + results.length + " entries");
        return results.length;
    }

    public Integer reduce(List<AsyncResult<Integer>> results) throws Exception {
        return results.size();
    }
});


Paul Hilliar gravatar imagePaul Hilliar ( 2015-04-27 11:19:24 -0600 )edit

You should use takeMultiple and not readMultiple. You should use a collocated space proxy for the takeMultiple call and a clustered proxy (this.gs.getClustered()) for the writeMultiple. I don't see you set a boolean to avoid taking the same already-transferred object again. Your template should take this into consideration.

shay hassidim ( 2015-04-27 11:40:04 -0600 )

2 things here:
1) When I use gs.getClustered() and use it to write, the entries don't travel to the remote partition, even when I write completely new entries (see below).
2) Surely if you are using PARTIAL_UPDATE then you would be using readMultiple, not takeMultiple?

        space.execute(new DistributedTask<Integer, Integer>() {
            @TaskGigaSpace
            private transient GigaSpace gs;

            public Integer execute() throws Exception {
                System.out.println("Non-clustered space: " + this.gs);
                GigaSpace clusteredGs = this.gs.getClustered();
                System.out.println("Writing new entries using clustered space: " + clusteredGs);
                for (int i = 0; i < 10; i++) {
                    clusteredGs.write(new TestEntry(UUID.randomUUID(), String.valueOf(i), "from embedded task written via " + clusteredGs));
                }
                return 1;
            }

            public Integer reduce(List<AsyncResult<Integer>> results) throws Exception {
                return results.size();
            }
        });

When this runs, none of the generated entries travel to be co-located with others having the same @SpaceRouting value, even though I used getClustered().

Paul Hilliar ( 2015-04-28 06:40:08 -0600 )

Which XAP version are you using? getClustered() with a space proxy injected into a task had issues in 9.7.

You should use a patch release that resolves this issue. Support can provide it.

shay hassidim ( 2015-04-28 07:49:13 -0600 )

What you are doing is very rare.

It's not exactly "repartitioning" IMHO; it's more rerouting, or data migration between partitions. In most cases this is done after you have made major data model changes, where associated objects from different data types become disassociated.

Such data model changes are dramatic and usually done offline, resulting in an entire data load from the database (which has also undergone some changes). This performs the "rerouting" implicitly. That's why there was no incentive to provide an "optimized" approach for such an online data migration process.

With the right takeMultiple batch size and reasonable GC settings, garbage collection should not affect stability, especially when using the recommended distributed task approach, since it cuts the amount of serialization and network activity by half.

How long does it take you to migrate the data with the distributed task approach?

answered 2015-04-28 14:23:44 -0600

shay hassidim


Are the phases you mention part of a workflow? If so, there is a mechanism you can use which will distribute the code to all partitions and only process objects based on search criteria, e.g. processed = false, or processed = true & approved = false. Please see this section and then indicate whether or not it meets your requirements:




answered 2015-04-27 03:20:48 -0600

jb


Code to data doesn't work in this case - the algorithm relies on object co-location to perform in a reasonable way.

Paul Hilliar ( 2015-04-27 04:44:51 -0600 )



Asked: 2015-04-24 05:21:49 -0600

Seen: 536 times

Last updated: Apr 28 '15