
Space-based Remoting across multiple sites/clusters

Hi gents! I'm trying to understand the geo-redundancy possibilities in an application using Executor-based remoting: http://wiki.gigaspaces.com/wiki/displ...

Assumptions:

  1. I have my services implemented/deployed in 3 different data centers (site=DC)

  2. Each DC is a separate GS cluster (C1, C2, C3) with at least 2 servers, so minimum 6 service PUs.

  3. My Clients run on a separate platform, so there is a chance that a Client in DC1 will not be able to reach the service in C1. This is the situation I intend to solve with geo-redundancy.

My needs are:

  1. The "Client" of my remoting Service should leverage the 6 service PUs in order to achieve HA with geo-redundancy.

  2. The Client should normally prefer the service PUs in its local DC. So a Client in DC1 should primarily use C1, unless all my C1 PUs are down or unreachable.

Questions:

  1. In a setup with 3 DCs and a GS cluster in each, is it possible/recommended to have the LUS of each cluster "aware" of the PUs in the other 2 clusters?

  2. Is it possible for my Client to be aware of the LUS in the 3 clusters, so the remoting proxy can connect to any service PU in any cluster?

Thanks in advance, Diego

asked 2014-01-14 07:01:00 -0600 by dsusa

1 Answer

Hi, Diego.

In a setup with 3 DCs and a GS cluster in each, is it possible/recommended to have the LUS of each cluster "aware" of the PUs in the other 2 clusters?

It is strongly not recommended to have lookup services connect to each other across data centers (all the more so with the same space name in each), as it will introduce split-brain scenarios as well as a mix-up between space instances. It is better to keep everything separate.

Is it possible for my Client to be aware of the LUS in the 3 clusters, so the remoting proxy can connect to any service PU in any cluster?

There are two ways to go about this:

1) Proxy-level basic fail-over: By initializing your proxy with a lookup locator from each data center, for example:

 jini://C1_lus:4174,C2_lus:4174,C3_lus:4174/*/space

If your proxy operation (e.g., space.read()) fails due to a DC failure, retrying the same operation will use the next lookup service in the jini URL list. Please note that while this is functionally possible, it is not a supported product feature yet.
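The retry behavior described above can be sketched as a small helper. This is a hypothetical client-side wrapper (not a GigaSpaces API); it assumes the proxy was already initialized with the multi-locator jini URL shown above, so that the second attempt can bind through the next LUS in the list:

```java
import java.util.concurrent.Callable;

// Hypothetical helper: retries a failed proxy operation once, so the
// second attempt can fail over to the next lookup locator in the
// jini://C1_lus:4174,C2_lus:4174,C3_lus:4174/*/space URL list.
public final class RetryOnce {
    private RetryOnce() {}

    public static <T> T call(Callable<T> operation) throws Exception {
        try {
            return operation.call();   // first attempt, e.g. space.read(...)
        } catch (Exception firstFailure) {
            return operation.call();   // retry; proxy may pick the next LUS
        }
    }
}
```

Usage would be `RetryOnce.call(() -> space.read(template))`, keeping in mind the caveat above that this fail-over path is not an officially supported feature.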

2) Recommended approach, application-level fail-over: create a proxy for each data center in your client, then wrap those in a Facade proxy class. The "wrapper proxy" switches to a different proxy when an operation failure occurs.

Thanks,

Ali

answered 2014-01-16 07:25:26 -0600 by ali

Comments

Thanks for answering, Ali! Both options make sense; however, the 1st approach would be less effort on our side. If (1) is not currently working, do you have any timeline estimate in case it is planned for some future version? Cheers!

dsusa ( 2014-01-17 07:04:53 -0600 )

Regarding the recommended approach (2): as far as I understand, it comes with a configuration drawback:

  • Clients in DC1 need their Wrapper Proxy configured with C1_lus, C2_lus, C3_lus
  • Clients in DC2 need their Wrapper Proxy configured with C2_lus, C1_lus, C3_lus
  • Clients in DC3 need their Wrapper Proxy configured with C3_lus, C1_lus, C2_lus

where the first LUS in this configuration is the preferred one, while the others are backups.

The reason to call it a drawback, and not a feature, is that this configuration is DIFFERENT in each DC :-)

I totally get this might be a MINOR drawback compared to the benefits, so the final question would be: are you aware of GS customers who have used this approach and are happy with the results? Thanks!

dsusa ( 2014-01-22 06:02:50 -0600 )

Diego -- yes, we do have customers who have used this approach successfully, coupled with some cluster instrumentation/alerting features that you may utilize through the XAP Admin API, which could facilitate automating the proxy facade fail-over.
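One way that automation could be shaped: keep a per-DC health flag that an alert callback updates, so the facade can skip a DC proactively instead of waiting for an operation to time out. This is a hypothetical plain-Java sketch; in practice the callback would be wired to alert events from the XAP Admin API, and all names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical health tracker fed by cluster alerts: the facade consults it
// to order (or skip) per-DC proxies before invoking an operation.
public final class DcHealthTracker {
    private final Map<String, Boolean> healthy = new ConcurrentHashMap<>();

    // Would be invoked from an Admin API alert listener in a real setup.
    public void onAlert(String dc, boolean isHealthy) {
        healthy.put(dc, isHealthy);
    }

    public boolean isHealthy(String dc) {
        return healthy.getOrDefault(dc, true); // unknown DCs assumed healthy
    }
}
```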

ali ( 2014-01-22 13:33:06 -0600 )


Stats


Seen: 384 times

Last updated: Jan 16 '14