

Memory shortage error messages

I'm seeing exceptions like this (they show up on the server as well as in client apps): com.j_spaces.core.exception.internal.EngineInternalSpaceException: Memory shortage at: host: fooo.com, container: MySpace_container1, space MySpace, total memory: 978 mb, used memory: 787 mb

Which part of GS is reporting that it is low on memory here? I presume this means some GS process is low on memory, but it does not mean that the space is "full" (which I'm not sure even makes sense if we're using LRU)?

thanks,

-dave

{quote}This thread was imported from the previous forum. For your reference, the original is [available here|http://forum.openspaces.org/thread.jspa?threadID=3381]{quote}

asked 2010-03-05 09:57:51 -0600 by jazzbutcher

updated 2013-08-08 09:52:00 -0600 by jaissefsfex

1 Answer


If you are running in LRU cache policy mode, it probably means the space does not evict fast enough. The machine might be too slow or too busy, or you are pushing new objects in very quickly.

You will need to tune the eviction settings. See more here:
http://www.gigaspaces.com/wiki/display/SBP/MovingintoProductionChecklist#MovingintoProductionChecklist-MemoryManagement
http://www.gigaspaces.com/wiki/display/XAP7/MemoryManagementFacilities#MemoryManagementFacilities-HowtheLRUEvictionWorks%3F

Here are settings you might want to try:

space-config.engine.cache_policy=0
space-config.engine.cache_size=10000000
space-config.engine.memory_usage.enabled=true
space-config.engine.memory_usage.high_watermark_percentage=70
space-config.engine.memory_usage.write_only_block_percentage=68
space-config.engine.memory_usage.write_only_check_percentage=65
space-config.engine.memory_usage.low_watermark_percentage=60
space-config.engine.memory_usage.eviction_batch_size=2000
space-config.engine.memory_usage.retry_count=20
space-config.engine.memory_usage.explicit-gc=false
space-config.engine.memory_usage.retry_yield_time=100
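For reference, one common way to apply such settings is to pass them as JVM system properties when starting the GSC. This is only a sketch: the EXT_JAVA_OPTIONS variable and the gsc.sh script name are assumptions based on the default XAP launch scripts, so adjust for your deployment. Note also that the watermark values are meant to be ordered, high >= write_only_block >= write_only_check >= low, as they are above.

```shell
# Sketch: pass the memory-manager settings as -D system properties when
# launching the GSC. EXT_JAVA_OPTIONS and ./gsc.sh are assumptions based
# on the default XAP launch scripts; adapt to your own startup scripts.
export EXT_JAVA_OPTIONS="\
 -Dspace-config.engine.cache_policy=0 \
 -Dspace-config.engine.cache_size=10000000 \
 -Dspace-config.engine.memory_usage.enabled=true \
 -Dspace-config.engine.memory_usage.high_watermark_percentage=70 \
 -Dspace-config.engine.memory_usage.write_only_block_percentage=68 \
 -Dspace-config.engine.memory_usage.write_only_check_percentage=65 \
 -Dspace-config.engine.memory_usage.low_watermark_percentage=60 \
 -Dspace-config.engine.memory_usage.eviction_batch_size=2000 \
 -Dspace-config.engine.memory_usage.retry_count=20 \
 -Dspace-config.engine.memory_usage.explicit-gc=false \
 -Dspace-config.engine.memory_usage.retry_yield_time=100"
./gsc.sh   # start the GSC with the settings above
```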

Shay

answered 2010-03-05 10:06:40 -0600 by shay hassidim

Comments

We are not pumping a high volume of entries into the space, so it is hard to believe that the eviction policy is related.

Can you explain which component of GS reported the error I saw (i.e., is it the GSM, a GSC, etc.)? Is it complaining about memory used internally in these processes, or about memory that is mapped to objects in the space?

I would like to know the source of this error message before I try tuning a bunch of parameters.

thanks,

-dave

jazzbutcher ( 2010-03-05 10:51:04 -0600 )

The space itself reports this message.

Shay

shay hassidim ( 2010-03-05 10:53:46 -0600 )

What are the default values for the params you mentioned? I don't see them specified anywhere in the GS folder, except as commented-out params in gs.properties.

We have a very fast Dell server with 4G RAM on it. The number of entries written to the space at any given time is not large (i.e., maybe a few thousand), and the overall throughput of objects is not large either. We use LRU.

thanks,

-dave

jazzbutcher ( 2010-03-05 14:41:20 -0600 )

The defaults can be found here: http://www.gigaspaces.com/wiki/displa...

Maybe the space itself does not consume a large amount of memory, but something else does. That might explain why eviction does not reduce the amount of memory used.

What does the output of jmap -histo <jvm pid> look like? Let's see if we can find something unusual in the JVM heap.
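For anyone following along, a rough sketch of that diagnostic (the pid placeholder must be replaced with the actual process id of the JVM showing the error; jps is the standard JDK tool for finding it):

```shell
# Sketch: locate the GSC's JVM, then histogram its heap.
jps -l                        # list local JVM pids with their main classes
jmap -histo <pid> | head -25  # top heap consumers by instance count and bytes
```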

Shay

shay hassidim ( 2010-03-06 23:22:49 -0600 )

What else might be using up GSC memory? Are there other parts of the system outside the space itself that would use GSC memory?

When the system starts up, the GSC is claiming to use 25M of RAM. After a cluster failover and back, it has gone up to 60-70M, but then after an hour or two it goes up to 500M and the error messages start to show up. We seem to see this mostly in a clustered environment, as opposed to on a single machine.

I will try another test and use jmap.

thanks,

-dave

Edited by: jazzbutcher on Mar 12, 2010 2:58 PM

jazzbutcher ( 2010-03-12 09:29:30 -0600 )
