
Memory shortage when passing large amounts of data through GS remoting


I wrote a post related to this issue but forgot to mark it as a question, so I'm asking again. We also have more information about the problem.

The issue we are seeing occurs when retrieving large amounts of data through GS remoting. We have a web service that uses a service in another PU through GS remoting, and this service returns a list of objects. When the list is large, say 2,000 - 4,000 objects, the memory usage in the GSC of the space starts to increase and never decreases, even though the number of objects in the space remains the same.

We don't have the logic collocated with the space, and even when we trigger the garbage collector, memory usage stays very high. This behavior continues until the space breaks.

To verify that the problem was with GS remoting, we gave the web service a reference to the space and, instead of calling another service through GS remoting to get the data, ran the query directly against the space. Done this way, it worked without problems, which is why it looks like there is a memory leak when retrieving large amounts of data using GS remoting.

Please let us know if you need more information or if you see that we are doing something wrong.


Diego


This thread was imported from the previous forum. For your reference, the original is available here: http://forum.openspaces.org/thread.jspa?threadID=3671

asked 2011-05-26 16:43:17 -0500 by dgaviola

updated 2013-08-08 09:52:00 -0500 by jaissefsfex

1 Answer


The problem you are describing sounds like a problem with the remote service method implementation. I'm not aware of any memory leak in the GigaSpaces remoting infrastructure. When a remote client asks for a large amount of data from the space, the result buffer is actually broken into smaller chunks of 64K to avoid allocating a large buffer on the NIO communication channel. You can control the buffer size using the com.gs.transport_protocol.lrmi.maxBufferSize system property.
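If you do tune that property, it is passed as a JVM system property when the GSC is started; a sketch (the 128K value and the GSC_JAVA_OPTIONS variable name are illustrative assumptions, not a recommendation):

```shell
# Illustrative only: raise the LRMI chunk size from the 64K default to 128K
# before starting the GSC (the exact variable name may differ per XAP version).
export GSC_JAVA_OPTIONS="-Dcom.gs.transport_protocol.lrmi.maxBufferSize=131072"
./gsc.sh
```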

Can you please post a test case that reproduces the problem?

As an alternative/workaround you can use the GSIterator to retrieve the result set in chunks. This lets you break the result set into smaller chunks in a more explicit manner. See:
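The GSIterator API itself is not shown here, but the chunking idea it implements can be sketched in plain Java; everything below (class name, batch size, fake data) is illustrative, not GigaSpaces code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Plain-Java sketch of the chunked-iteration pattern (GSIterator does the
// space-side equivalent): pull a bounded batch at a time so no single call
// has to materialize the whole result set in one buffer.
public class ChunkedFetch {

    // Drain an iterator into batches of at most batchSize elements.
    static <T> List<List<T>> toBatches(Iterator<T> source, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        List<T> current = new ArrayList<>(batchSize);
        while (source.hasNext()) {
            current.add(source.next());
            if (current.size() == batchSize) {
                batches.add(current);
                current = new ArrayList<>(batchSize);
            }
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        // 4,000 fake results in batches of 500 -> 8 batches of 500 each
        List<Integer> results = new ArrayList<>();
        for (int i = 0; i < 4000; i++) results.add(i);
        List<List<Integer>> batches = toBatches(results.iterator(), 500);
        System.out.println(batches.size());          // 8
        System.out.println(batches.get(7).size());   // 500
    }
}
```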



  1. memoryleak.tar.bz2

answered 2011-05-26 17:11:07 -0500 by shay hassidim


Hi Shay,

Thanks for your answer. We have already tried using the iterator, but the problem is still there. I have created a test project with the same structure we have, where you can reproduce the issue. If you build it with Maven, you'll get three artifacts to deploy into GigaSpaces:

  • orderGrid
  • serviceLayerApiImpl
  • serviceLayerApiWs

The last one is a simple web application that uses a service from serviceLayerApiImpl through GS remoting. After you have those three PUs deployed, use a web browser or something like RestClient and call the following URL:


This will create 4,000 groups and 4,000 users. Then call the following URL to get the list of users:


Call this last URL several times. In my test environment the GSC where the space was deployed had 256 MB, and after about 60 calls you start getting memory shortage exceptions.

If you uncomment line 38 in RestServiceLayerController.java (in the serviceLayerApiWs project), comment out line 37, and redeploy everything, you'll see that the problem is gone; the only change is that instead of getting the list of users through GS remoting, we query the space directly.

I hope this helps diagnose the problem, and hopefully you can point us to a solution.


Diego


dgaviola (2011-05-26 20:12:48 -0500)

Diego, I've reviewed your code; it was helpful. You are using the wrong configuration: your web service invokes a space service that accesses a remote space, and that is not how you should configure the system.

- You should use executor-based remoting, not event-driven remoting, which should be used only in special cases.
- The service must be colocated with the space. Running the service against a remote space proxy is not supported. This means you should move your service definition into the pu.xml that includes the space definition (/./Space).
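For reference, a rough pu.xml sketch of the colocated, executor-based setup described above (the bean names, service class, and space name are placeholders; the os-core/os-remoting elements assume the OpenSpaces schema of that era):

```xml
<!-- Sketch only: service colocated with the embedded space and exported
     for executor-based remoting. All names are placeholders. -->
<os-core:space id="space" url="/./orderGrid"/>
<os-core:giga-space id="gigaSpace" space="space"/>

<bean id="userService" class="com.example.UserServiceImpl"/>

<os-remoting:service-exporter id="serviceExporter">
    <os-remoting:service ref="userService"/>
</os-remoting:service-exporter>
```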

See: http://www.gigaspaces.com/wiki/displa...


shay hassidim (2011-05-27 00:14:38 -0500)


By design we decided not to collocate the space with the logic, because we wanted to be able to upgrade parts of the logic without bringing down the space. That's why we use the Event Driven Proxy. The GigaSpaces documentation itself states that you should use Event Driven Remoting when you don't want to co-locate the service with the space:

"However, there are two main scenarios where you should prefer Event Driven Remoting on top of Executor Based Remoting:

- When you would like the actual service to not to be co-located with the space. With Executor Based Remoting, the remote service implementation can only be located within the space's JVM(s). With Event Driven Remoting, you can locate the client on a remote machine and use the classic Master/Worker pattern for processing the invocation. This offloads the processing from the space (at the expense of moving your service away from the data it might need to do the processing)."


Let me know if I'm misunderstanding something. Even if this is not the best use case, you are avoiding our question, which is about the problem with the Event Driven Proxy and the memory leak we are seeing.

Let me know if you know how we can fix that issue.



dgaviola (2011-05-27 12:12:02 -0500)

When the business logic is colocated with the space, you can still update the business logic jars/classes without bringing down the space. Here are two options:

- Hot deploy - http://www.gigaspaces.com/wiki/displa...
- Refresh API - http://www.gigaspaces.com/wiki/displa...

Having a service interact with a remote space does not make sense in many cases, since it means every data access operation involves two remote calls. When reading a large amount of data, the data is serialized twice: once from the space to the remote service, and again from the remote service to the client (a web service in your case, which then needs to perform yet another serialization to send the data to its own client). This imposes a performance impact you should consider.

Anyway, to identify the source of the leak, I suggest you run

jmap -histo <jvm process id>

after each call and examine the heap class instance count list. This will allow you to find the objects that remain within the JVM heap after each remote call invocation. If it is a GigaSpaces class, please report this to GigaSpaces support.
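To make the comparison between calls easier, the two histogram snapshots can be diffed programmatically; a plain-Java sketch (the parsing assumes the usual four-column `jmap -histo` layout of rank, instances, bytes, class name, and the sample data is made up):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: diff two `jmap -histo` snapshots to find classes whose
// instance count grew between two remote-call invocations.
public class HistoDiff {

    // Parse lines like "   1:      123456     7890123  java.lang.String".
    static Map<String, Long> parse(String histo) {
        Map<String, Long> counts = new HashMap<>();
        for (String line : histo.split("\n")) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length == 4 && cols[0].endsWith(":")) {
                counts.put(cols[3], Long.parseLong(cols[1]));
            }
        }
        return counts;
    }

    // Classes whose instance count increased from "before" to "after".
    static Map<String, Long> grown(String before, String after) {
        Map<String, Long> b = parse(before);
        Map<String, Long> diff = new HashMap<>();
        parse(after).forEach((cls, n) -> {
            long delta = n - b.getOrDefault(cls, 0L);
            if (delta > 0) diff.put(cls, delta);
        });
        return diff;
    }

    public static void main(String[] args) {
        String before = "   1:      100     6400  java.lang.String\n"
                      + "   2:       10      320  com.example.User\n";
        String after  = "   1:      100     6400  java.lang.String\n"
                      + "   2:     4010   128320  com.example.User\n";
        System.out.println(grown(before, after)); // {com.example.User=4000}
    }
}
```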


btw - I'm not sure how a remote event-driven service supports a partitioned space properly. I will need to check this.

Edited by: Shay Hassidim on May 27, 2011 2:25 PM


shay hassidim (2011-05-27 13:24:01 -0500)

Use the Admin API via the GigaSpaces Management Center (or any other Java tool) to generate a memory dump of the GSC that contains the space you suspect has a leak, and look at that dump to see what is holding most of the memory.


eitany (2011-05-28 01:47:30 -0500)
