

Hi all,

I have a big issue here. I hit the wall when I started testing my logic on big volumes of data. Here is the whole configuration. Just one more note about the environment: it is designed to transform huge volumes of data loaded from a source database; it's not a website/web service.


pu 1 :

creates the space with parameters:

   *cluster-schema: partitioned-sync2backup

   *url: /./space?versioned=true

   *mirror: true

   *cache-policy: 0 (LRU)

with configured external data source:

   *external-data-source: true

   *external-data-source.usage: read-only

   *central-data-source: true
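For readers following along, the pu 1 settings above would typically be expressed in the openspaces Spring namespace roughly as follows. This is a hedged sketch only: the exact property keys and the `dataSource` bean name are assumptions for illustration, not taken from the original post.

```xml
<os-core:space id="space"
               url="/./space?cluster_schema=partitioned-sync2backup&amp;versioned=true&amp;mirror=true"
               external-data-source="dataSource">
    <os-core:properties>
        <props>
            <!-- cache-policy 0 = LRU -->
            <prop key="space-config.engine.cache_policy">0</prop>
            <!-- EDS is used for reads only; the mirror handles writes -->
            <prop key="space-config.external-data-source.usage">read-only</prop>
            <prop key="cluster-config.cache-loader.central-data-source">true</prop>
        </props>
    </os-core:properties>
</os-core:space>
```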

pu 2:

connects to the space with the url "jini://*/space?versioned=true"

*creates a gigaSpace bean on the space with a configured distributed-tx-manager:

<os-core:distributed-tx-manager id="transactionManager"/>

<os-core:giga-space id="gigaSpace" space="space" tx-manager="transactionManager"/>

*creates a polling container:

<os-events:polling-container id="PollingEventContainer" giga-space="gigaSpace"
        concurrent-consumers="4" max-concurrent-consumers="12" pass-array-as-is="true">
    <os-events:tx-support tx-manager="transactionManager" tx-timeout="60000"/>
    <os-events:receive-operation-handler>
        <bean class="org.openspaces.events.polling.receive.MultiExclusiveReadReceiveOperationHandler">
            <property name="maxEntries" value="100"/>
            <property name="nonBlocking" value="true"/>
            <property name="nonBlockingFactor" value="10"/>
        </bean>
    </os-events:receive-operation-handler>
    <os-core:template>
        <bean class="com.package.Data">
            <property name="processed" value="false"/>
        </bean>
    </os-core:template>
    <os-events:listener>
        <os-events:delegate ref="refListener"/>
    </os-events:listener>
</os-events:polling-container>

Pu 1 runs on two physical machines, with one instance of the space created on each machine. Pu 2 is deployed on one of those machines as well.


Everything seems to work fine on a small amount of data.

When more data is loaded into the space, some output objects are missing. I assume that when there is a lot of data in the space, the polling container creates more threads to consume it, and this somehow causes some output data to go missing. I noticed that the more threads are run, the more data is missing in the end.

I checked the logs and there are no errors, but what is even more interesting is that the objects which are missing in the end appear to have been processed; I can see entries like this in the logs:

"Started processing of object with id: 1

Object with id: 1 was processed"

If there were an error during the space.write() operation, it would be logged, but there's nothing.

I started thinking that maybe the transaction manager is the problem here. Is it possible that it does a tx.rollback() without reporting it? I cannot remove it from the configuration, because I need transactional support in order to use "MultiExclusiveRead".

What could be the cause of the problem? Any suggestions would be helpful.
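As an aside for readers: the "silent drop" suspected above is characteristic of optimistic locking in general, independent of GigaSpaces. A minimal, self-contained Java sketch (purely illustrative; `TinySpace` and `VersionedEntry` are hypothetical classes, not the GigaSpaces API) of how a stale versioned update can be rejected without any exception reaching the caller's logs:

```java
import java.util.HashMap;
import java.util.Map;

// A payload tagged with an optimistic-locking version number.
class VersionedEntry {
    final String payload;
    final int version;
    VersionedEntry(String payload, int version) { this.payload = payload; this.version = version; }
}

// A toy key/value "space" with version-checked updates.
class TinySpace {
    private final Map<String, VersionedEntry> store = new HashMap<>();

    void write(String id, String payload) { store.put(id, new VersionedEntry(payload, 1)); }

    VersionedEntry read(String id) { return store.get(id); }

    // The update succeeds only if the caller's version matches the stored one;
    // a stale update is silently dropped (returns false, throws nothing).
    boolean update(String id, String payload, int expectedVersion) {
        VersionedEntry cur = store.get(id);
        if (cur == null || cur.version != expectedVersion) return false;
        store.put(id, new VersionedEntry(payload, cur.version + 1));
        return true;
    }
}

public class SilentLossDemo {
    public static void main(String[] args) {
        TinySpace space = new TinySpace();
        space.write("obj-1", "raw");

        VersionedEntry a = space.read("obj-1"); // consumer A reads version 1
        VersionedEntry b = space.read("obj-1"); // consumer B also reads version 1

        boolean aOk = space.update("obj-1", "processed-by-A", a.version); // wins, bumps to v2
        boolean bOk = space.update("obj-1", "processed-by-B", b.version); // stale, dropped

        System.out.println(aOk + " " + bOk + " " + space.read("obj-1").payload);
        // -> true false processed-by-A
    }
}
```

The point is that consumer B's work vanishes with no error: the only evidence is the boolean result, which is easy to ignore in listener code.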

Best regards and thanks for any support, Mateusz

{quote}This thread was imported from the previous forum. For your reference, the original is [available here|http://forum.openspaces.org/thread.jspa?threadID=3097]{quote}

asked 2009-05-20 16:52:23 -0500 by mnemos

updated 2013-08-08 09:52:00 -0500 by jaissefsfex

1 Answer



1. You don't need to deploy the space in versioned mode.
Only the client should run in optimistic locking mode.

2. If you have a large amount of data to process, you should do it via business logic running collocated with the space. This will be much faster, and it avoids the need for a distributed transaction.

3. Does your polling container update the objects that it is reading?

4. Do you see the problem happening when using MultiTakeReceiveOperationHandler?

5. Does it happen when you run in ALL_IN_CACHE mode? The combination of what you are doing with LRU could hurt performance considerably and might cause inconsistency: since the database is updated asynchronously, you might get a cache miss that forces a lazy read from the database for data that has not yet been updated there.
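Point 5 can be made concrete with a small, self-contained Java sketch. It is purely illustrative, not GigaSpaces code: the two maps and the queue below stand in for the space cache, the database, and the async mirror backlog.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy model of answer point 5: LRU eviction plus an asynchronous mirror means
// an evicted entry can be re-read from the database before the mirror has
// flushed the update there, yielding a stale value.
public class StaleReadDemo {
    static Map<String, String> database = new HashMap<>();
    static Map<String, String> cache = new HashMap<>();      // the "space" under LRU
    static Queue<String[]> mirrorQueue = new ArrayDeque<>(); // async replication backlog

    static void spaceWrite(String id, String value) {
        cache.put(id, value);
        mirrorQueue.add(new String[] {id, value}); // the DB update is deferred
    }

    static void evict(String id) { cache.remove(id); }       // simulated LRU pressure

    static String spaceRead(String id) {
        String v = cache.get(id);
        return v != null ? v : database.get(id);             // cache miss -> lazy DB load
    }

    static void flushMirror() {
        String[] op;
        while ((op = mirrorQueue.poll()) != null) database.put(op[0], op[1]);
    }

    public static void main(String[] args) {
        database.put("obj-1", "old");
        spaceWrite("obj-1", "new");   // updated in the space, queued for the mirror
        evict("obj-1");               // LRU evicts it before the mirror flushes
        System.out.println(spaceRead("obj-1")); // prints "old": a stale read
        flushMirror();
        System.out.println(spaceRead("obj-1")); // prints "new" only after the flush
    }
}
```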


answered 2009-05-21 21:41:59 -0500 by shay hassidim


ad 1. You are right. In most branches we have it without versioned mode. Anyway, it doesn't influence functionality.
ad 2. I know it would be faster. Anyway, in the current environment I cannot do it.
ad 3. Yes, there could be a problem here. We recently changed the services to return null, but the source object is updated before the @SpaceDataEvent-annotated method ends. Can that be the reason? If so, how can I return the source object with some lease time? Here is an example:

    public Source[] eventListener(Source[] sourceArr) throws Exception {
        for (Source source : sourceArr) {
            Result target = null;
            try {
                if (validate(source)) {
                    target = transform(source);
                }
            } catch (Exception ex) {
                log.severe("Transformation failed: " + ex.getMessage());
            }

            try {
                if (target != null) {
                    space.write(target, writeLeaseTime, IJSpace.NO_WAIT, UpdateModifiers.WRITE_ONLY);
                }
                space.write(source, updateLeaseTime, IJSpace.NO_WAIT, UpdateModifiers.UPDATE_ONLY);
            } catch (EntryAlreadyInSpaceException e) {
                log.severe("Duplicate record found");
            } catch (Exception e) {
                /* rethrow to roll back the transaction */
                throw e;
            }
        }
        return null;
    }

ad 4. No.
ad 5. I'm aware of this problem. Currently this configuration works for us. We will change it in the future.

mnemos ( 2009-05-26 08:41:19 -0500 )

Hi Shay,

We tested our infrastructure today. We changed our source back to return the source object array rather than null. It doesn't influence the number of results in the database. We checked a lot of things and found that the space contains the correct number of objects, but the Mirror is not sending all results to the database. With a small number of consumers for the polling container everything looks okay, but when we increase the number of consumers for the service, the Mirror fails to write all objects to the database.

What can be the reason for this? I'm confused, because we never played with the Mirror and it always worked correctly.

Regards, Mateusz

mnemos ( 2009-05-26 18:17:13 -0500 )

If you want to change the object lease time you should use the following GigaSpace method: {code} <T> com.j_spaces.core.LeaseContext<T> write(T entry, long lease, long timeout, int modifiers) throws DataAccessException {code}

And provide relevant lease time.

How is your mirror configured? Do you have the following set as part of the data grid config? {code} cluster-config.groups.group.repl-policy.repl-original-state=true {code}

See: http://www.gigaspaces.com/wiki/displa...

If you are still having problems with the data within the database, please contact support.


shay hassidim ( 2009-05-27 11:43:00 -0500 )
