
Partitioned space does not redistribute objects during SLA-driven scaling

Hello,

I have a demo based on the feeder/matcher example, with the main modification that the SLA includes a scale-up-policy instead of a relocation policy, i.e.

<os-sla:scale-up-policy monitor="matchingTime" low="50" high="1000" max-instances="3"/>

When I run the demo and trigger some scale-ups and scale-downs, I notice that objects that already exist within a partition are not moved to the partition corresponding to the new routing after a scaling event has changed the number of partitions. For example, during a scale-down, all the objects from the removed partition disappear!

In other words, when designing for SLA-driven scaling, is there something I need to pay particular attention to in order to enable redistribution of existing objects to the new partitioning on a scale-up or scale-down event?

I believe I have forgotten something obvious and I am grateful for any tips you have on this.

Best Regards -Thomas

(This thread was imported from the previous forum. For reference, the original is available at http://forum.openspaces.org/thread.jspa?threadID=2330)

asked 2008-05-12 08:11:17 -0500

updated 2013-08-08 09:52:00 -0500


1 Answer


It is not clear how your services are deployed - i.e. whether you have the services collocated with the spaces or running in separate PUs/JVMs.

If the services are not collocated with the spaces, you should not have any problems; the number of spaces should remain static.

If the services are collocated with the spaces, you should remember that space data is handled at the partition level. When you start another partition or shut down a partition, its data does not move to another existing partition. Newly written objects are routed to a space based on the current number of active partitions, so changing the number of active partitions changes the target-space calculation (the routing field value's hash code modulo the number of active partitions).
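For illustration only, here is a minimal sketch of that target-space calculation (the class name and routing values below are assumptions for the example, not taken from the product):

public class RoutingSketch {

    // Target partition = abs(hash of routing field value) modulo the number of
    // active partitions, so the mapping changes when the partition count changes.
    static int targetPartition(Object routingValue, int activePartitions) {
        int hash = routingValue.hashCode();
        int positiveHash = (hash == Integer.MIN_VALUE) ? 0 : Math.abs(hash);
        return positiveHash % activePartitions;
    }

    public static void main(String[] args) {
        String routingValue = "order-42"; // hypothetical routing field value
        // The same routing value can map to a different partition once the count changes:
        System.out.println(targetPartition(routingValue, 2)); // with 2 active partitions
        System.out.println(targetPartition(routingValue, 3)); // with 3 active partitions
    }
}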

This means that scaling up the number of JVMs storing your data and running your services should be done by provisioning spaces/services into another GSC - i.e. "moving" a space to an empty GSC. It also means you need another space acting as a backup, so the restarted space in the other GSC has something to recover from.

For example:
You have 10 partitions with 1 backup deployed into 5 GSCs.

When you want to scale out and have additional JVMs (3, for example) running your spaces and services, you will need to "move" a few partitions into the newly started GSCs.
You will still have 10 partitions with 1 backup each, but these will be spread over 8 JVMs.
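As a rough sketch only (this assumes the OpenSpaces Admin API is available in your version; the PU name, lookup group, and GSC count below are placeholders for the example), relocating one partition instance to an empty GSC could look roughly like this:

import java.util.concurrent.TimeUnit;

import org.openspaces.admin.Admin;
import org.openspaces.admin.AdminFactory;
import org.openspaces.admin.gsc.GridServiceContainer;
import org.openspaces.admin.pu.ProcessingUnit;
import org.openspaces.admin.pu.ProcessingUnitInstance;

public class RelocatePartitionSketch {
    public static void main(String[] args) {
        // Lookup group name is a placeholder for this sketch.
        Admin admin = new AdminFactory().addGroup("demo-group").createAdmin();
        try {
            // Wait for the deployed PU ("matcher" is a hypothetical name).
            ProcessingUnit pu = admin.getProcessingUnits().waitFor("matcher", 30, TimeUnit.SECONDS);

            // Wait until the newly started GSCs are discovered (8 in the example above).
            admin.getGridServiceContainers().waitFor(8, 30, TimeUnit.SECONDS);

            // Find a GSC that is not running any PU instance yet.
            GridServiceContainer emptyGsc = null;
            for (GridServiceContainer gsc : admin.getGridServiceContainers()) {
                if (gsc.getProcessingUnitInstances().length == 0) {
                    emptyGsc = gsc;
                    break;
                }
            }

            if (pu != null && emptyGsc != null) {
                // Move one partition instance; its data is recovered from its backup copy.
                ProcessingUnitInstance instance = pu.getInstances()[0];
                instance.relocate(emptyGsc);
            }
        } finally {
            admin.close();
        }
    }
}

The same result can be achieved by relocating the instance manually from the management UI; the point is that the partition count itself never changes, only where the partitions run.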

Shay

answered 2008-05-12 23:17:00 -0500 by shay hassidim


Comments

You got it right.
Shay

shay hassidim (2008-05-23 11:09:20 -0500)

Is there a way to have it work the way he was talking about? I.e. I'd like to have it add more partitions as needed and slowly migrate data over to new partitions to re-balance the grid data.

jcarreira (2008-05-27 20:25:02 -0500)

The idea is not to add partitions, but to add GSCs and 'move' partitions running in the same GSC to another, empty one.

Shay

shay hassidim (2008-05-27 22:25:54 -0500)

Sure, but as things continue to grow and you need more nodes than you have partitions, can you have it re-partition the data and migrate it to the new partitions?

jcarreira (2008-05-27 23:24:27 -0500)

It's a question of how dynamic you want to be.
If you want to scale from a certain point up to 10 times that point, you'll need 10 partitions per container and relocate those partitions as you grow, until you reach the point where you have only one partition per container. You can easily do the same if you want the flexibility to grow by a factor of 20 - in real life I never came across a scenario that needed anything beyond that.

Repartitioning is a costly operation that involves a "stop the world" pause to ensure consistency, and is therefore not a practical approach.

natis (2008-05-29 16:09:23 -0500)
