Partitioned With Backups+HEDS Problem

I'm trying to deploy a PU using the partitioned-sync2backup cluster schema (4 GSCs: 2 for primaries and 2 for backups). The PU uses a Hibernate External Data Source (HEDS) to create a space. The primary partitions are allocated successfully, but the 2 backups throw exceptions: it seems the backups try to insert records into the DB, but the records already exist there. Note that I'm using Hibernate annotations, and Operacion-Medio has a bidirectional ManyToMany relationship. How can I solve this?

Using: GigaSpaces XAP 6.5.1 ga build 2400, Java 1.5, DataSource: org.openspaces.persistency.hibernate.DefaultHibernateExternalDataSource

Thanks

Attachments

[backup_gsc.log|/upfiles/1375971676180870.txt]

{quote}This thread was imported from the previous forum. For your reference, the original is [available here|http://forum.openspaces.org/thread.jspa?threadID=2519]{quote}

asked 2008-08-06 14:23:13 -0500 by nullcipher2

updated 2013-08-08 09:52:00 -0500 by jaissefsfex

2 Answers


Dave,
Can you post the code that does this?
I'm sure others will need it.

Shay

answered 2008-08-06 18:36:19 -0500 by shay hassidim

The OpenSpaces DefaultHibernateExternalDataSource does not distinguish between a space transfer from the primary and a write to the database. We had to extend DefaultHibernateExternalDataSource in order to skip writing through to the database when the space is in backup mode.

answered 2008-08-06 15:40:41 -0500 by davebyrne

Comments

We have modified the Hibernate External Data Source quite a bit in order to support read-through / write-through in partitioned topologies. I would be happy to contribute our patches if there is an appropriate place to post them.

Here is a simplified version which should fix the problem at hand with primary/backup partitioned environments.

{code}
package com.patentvest.datagrid.gigaspaces.persistence;

import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.openspaces.core.cluster.ClusterInfo;
import org.openspaces.core.cluster.ClusterInfoAware;
import org.openspaces.core.space.mode.AfterSpaceModeChangeEvent;
import org.openspaces.persistency.hibernate.DefaultHibernateExternalDataSource;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;

import com.gigaspaces.datasource.BulkItem;
import com.gigaspaces.datasource.DataIterator;
import com.gigaspaces.datasource.DataSourceException;

/**
 * Hibernate external data source implementation with support for partitioned
 * primary/backup topologies.
 *
 * @author Dave Byrne
 */
public class PartitionedHibernateDataSource extends DefaultHibernateExternalDataSource
        implements ClusterInfoAware, ApplicationListener {

    private static Log log = LogFactory.getLog(PartitionedHibernateDataSource.class);

    private ClusterInfo clusterInfo;

    private boolean isPrimary = false;

    public void onApplicationEvent(ApplicationEvent evt) {
        if (evt instanceof AfterSpaceModeChangeEvent) {
            AfterSpaceModeChangeEvent spaceModeEvent = (AfterSpaceModeChangeEvent) evt;
            isPrimary = spaceModeEvent.isPrimary();

            log.info("Space status transferred to "
                    + (isPrimary ? "primary" : "backup") + " mode");
        }
    }

    /**
     * Do an initial load, but only if the backup id is not set; otherwise the
     * objects will be transferred from the primary member.
     */
    public DataIterator initialLoad() throws DataSourceException {
        if (clusterInfo.getBackupId() != null) {
            // This is a backup partition; don't do any initial load.
            if (log.isDebugEnabled())
                log.debug("This is a backup partition; no need to load from the underlying store.");
            return null;
        }
        return super.initialLoad();
    }

    /**
     * Only execute the bulk operation if the data source is in a writable
     * mode, as determined by the primary/backup state.
     */
    public void executeBulk(List<BulkItem> bulkItems) throws DataSourceException {
        if (!isWritable())
            return;
        super.executeBulk(bulkItems);
    }

    private boolean isWritable() {
        return isPrimary;
    }

    public void setClusterInfo(ClusterInfo clusterInfo) {
        this.clusterInfo = clusterInfo;
    }
}
{code}
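
For reference, a data source like this would be wired into the PU's Spring configuration in place of the default one. Below is a minimal pu.xml sketch, assuming the standard OpenSpaces 6.5 namespaces and an already-configured Hibernate sessionFactory bean; the bean ids and the space name are illustrative, not from this thread:

{code}
<!-- Custom data source replacing DefaultHibernateExternalDataSource.
     "sessionFactory" is assumed to be defined elsewhere in the context. -->
<bean id="hibernateDataSource"
      class="com.patentvest.datagrid.gigaspaces.persistence.PartitionedHibernateDataSource">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>

<!-- The space picks up the data source via the external-data-source attribute. -->
<os-core:space id="space" url="/./MAFSpace" schema="persistent"
               external-data-source="hibernateDataSource"/>
{code}

Because the class implements ClusterInfoAware and ApplicationListener, the container injects the ClusterInfo and delivers the space-mode events automatically; no extra wiring should be needed for those.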

davebyrne ( 2008-08-07 10:54:16 -0500 )

Thank you!

I'm assuming you are running with external data source shared mode disabled. In this case data is replicated from the primary to the backup, since the assumption is that each uses a different database instance.

With external data source shared mode enabled, data is not replicated (right now). The backup is considered a "cold backup".

We will resolve this issue in a future version.

Your fix makes sure that only the primary, and not the backup, writes the data to the database.

Shay

Edited by: Shay Hassidim on Aug 7, 2008 3:40 PM

shay hassidim ( 2008-08-07 14:55:49 -0500 )

I can't solve the issue. Note that I'm using com.gigaspaces.datasource.hibernate.HibernateDataSource. When deploying, it seems the backups read the records from the primary spaces and try to write them to the DB, but they already exist there. Why do the backups write the records to the DB? Is it possible to disable this behavior?

Thanks

Backup log:

org.hibernate.engine.loading.Collecti... WARN [pool-1261-thread-3] (LoadContexts.java:108) - fail-safe cleanup (collections) : org.hibernate.engine.loading.Collecti...
(the line above repeats several times)

11/08/2008 12:01:38 PM ADVERTENCIA [com.gigaspaces.core.engine]: Recovery operation failed. Reason: java.util.concurrent.ExecutionException: net.jini.space.InternalSpaceException: com.j_spaces.core.sadapter.SAException: com.j_spaces.core.client.EntryAlreadyInSpaceException: Entry UID=1537812916^37^77^0^0 class=com.pol.maf.modelo.impl.Valor rejected - an entry with the same UID already in space.
Caused by: java.util.concurrent.ExecutionException: net.jini.space.InternalSpaceException: com.j_spaces.core.sadapter.SAException: com.j_spaces.core.client.EntryAlreadyInSpaceException: Entry UID=1537812916^37^77^0^0 class=com.pol.maf.modelo.impl.Valor rejected - an entry with the same UID already in space.

11/08/2008 12:01:38 PM INFO [com.gigaspaces.core.cluster.replication]:

Replication channel moved to state: CLOSED [ source space: MAFSpace_container2_1:MAFSpace ] [ target space: MAFSpace_container2:MAFSpace ; target space url: jini://*/MAFSpace_container2/MAFSpace?groups=gigaspaces-6.5.1-XAP-ga&ignoreValidation=true&backup_id=1&total_members=2,1&cluster_schema=partitioned-sync2backup&id=2&schema=persistent&state=started&timeout=5000 ]

11/08/2008 12:01:38 PM INFO [com.gigaspaces.persistent]: *** PERFORMS COLD INIT *** MAFSpace

11/08/2008 12:01:38 PM INFO [com.gigaspaces.cache]: Cache created with policy [ALL IN CACHE], persistency mode [external]

11/08/2008 12:01:39 PM ADVERTENCIA [com.gigaspaces.core.common]: Space recovery failure. Caused by: java.util.concurrent.ExecutionException: net.jini.space.InternalSpaceException: com.j_spaces.core.sadapter.SAException: com.j_spaces.core.client.EntryAlreadyInSpaceException: Entry UID=1537812916^37^77^0^0 class=com.pol.maf.modelo.impl.Valor rejected - an entry with the same UID already in space. 11/08/2008 12:01:39 PM INFO [com.gigaspaces.core.cluster.replication]:

Replication channel moved to state: CLOSED [ source space: MAFSpace_container2_1:MAFSpace ] [ target space: MAFSpace_container2:MAFSpace ; target space url: jini://*/MAFSpace_container2/MAFSpace?groups=gigaspaces-6.5.1-XAP-ga&ignoreValidation=true&backup_id=1&total_members=2,1&cluster_schema=partitioned-sync2backup&id=2&schema=persistent&state=started&timeout=5000 ]

11/08/2008 12:01:39 PM ADVERTENCIA [com.gigaspaces.core.cluster.replication]:

Replicator attempts to find members... Currently in progress.... Next report in: 30000 milliseconds.

Edited by: Jorge Mario Ulloa Marin on Aug 11, 2008 12:12 PM

nullcipher2 ( 2008-08-11 12:02:00 -0500 )

Use the data source I posted above.

davebyrne ( 2008-08-11 14:36:44 -0500 )

The new org.openspaces.persistency.hibernate.DefaultHibernateExternalDataSource doesn't load children in Hibernate OneToMany relationships, so I'm using com.gigaspaces.datasource.hibernate.HibernateDataSource.
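
(Editor's note, not from this thread: Hibernate loads collections lazily by default, so during an initial load the children of a OneToMany relationship may never be fetched before the session is closed. One possible workaround is to mark the relationship as eager in the annotations, so DefaultHibernateExternalDataSource pulls the children in as well. An untested sketch; the class and field names below are illustrative only:)

{code}
// Hypothetical entity fragment: force the children collection to load
// eagerly instead of Hibernate's default lazy fetching.
@OneToMany(mappedBy = "operacion", fetch = FetchType.EAGER)
private Set<Valor> valores;
{code}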

nullcipher2 ( 2008-08-11 14:54:41 -0500 )
