Interaction between PU spaces, cache spaces, and running processes

I'm new to GigaSpaces and have just worked my way through the tutorial material. I'm evaluating whether it's suitable for inclusion in our architecture. We have several problems to solve. First, we need a partitionable, queryable caching solution, which GigaSpaces appears to provide. Second, we need infrastructure that can look after processes that (a) perform work in an ongoing manner and (b) respond to external callbacks (think message-queue callbacks). Third, we need to be able to observe events that occur on the data.

We have a number of upstream feed providers from various different sources. Once all the events for an item have arrived, they combine to produce a complete view of our data. I was thinking I would have a number of processes insert the raw data into a PU space; the various PUs would then pick up the data and normalise it, or, if need be, wait for more events to arrive. This leads me to my first question, about how the data in a PU is organised and how routing is performed. For the data to be available to a particular PU, the @SpaceRouting field would have to co-locate related data in the same physical instance. When it comes to calling take() to see if other data has arrived, if I call take() with a template object where the objects share a natural key, such as a trade id, I'm assuming that for the template to work it must populate a sensible value for the space-routing field, and that I will then be able to get hold of data in different physical partitions of the same space?
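To make my mental model concrete, here is a sketch of how I imagine routing works. The @SpaceRouting annotation and the partition rule (hash of the routing value modulo the partition count) are my assumptions from the tutorial, not something I've verified, and the annotation below is a local stand-in rather than the real GigaSpaces one:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Stand-in for the real com.gigaspaces @SpaceRouting annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface SpaceRouting {}

class Trade {
    private final String tradeId;

    Trade(String tradeId) { this.tradeId = tradeId; }

    // In the real API this getter would carry @SpaceRouting, so that
    // all raw events for one trade hash to the same partition and a
    // PU instance can take() its siblings locally.
    @SpaceRouting
    public String getTradeId() { return tradeId; }
}

class RoutingSketch {
    // My guess at the routing rule: partition = abs(hash) % partitionCount.
    static int partitionFor(Object routingValue, int partitionCount) {
        return Math.abs(routingValue.hashCode() % partitionCount);
    }
}
```

If that rule is right, two events sharing a trade id always land in the same partition, which is exactly the co-location I'm after.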

Once the normalisation has occurred, I'm thinking of "promoting" the usable data into a cache space from within the PU. I take it that also shouldn't be a problem, but how do transactions span (or not span) spaces? That isn't clear to me. If I've just missed it in the documentation, can someone point me in the right direction?
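What I'd like to be able to write is something like the following, with one transaction covering both the take() from the PU space and the write() to the cache space. The types here are hypothetical stand-ins, not the real OpenSpaces API; I'm just trying to show the shape of the question:

```java
// Hypothetical stand-in for a (distributed?) space transaction.
interface Txn {
    void commit();
    void rollback();
}

// Hypothetical stand-in for a space proxy supporting transactional ops.
interface SpaceProxy {
    Object take(Object template, Txn txn);
    void write(Object entry, Txn txn);
}

class Promoter {
    // Promote normalised data from the PU space to the cache space,
    // ideally atomically. Does a single transaction really span both
    // spaces, or do I need something else here?
    static void promote(SpaceProxy puSpace, SpaceProxy cacheSpace,
                        Object template, Txn txn) {
        Object normalised = puSpace.take(template, txn);
        cacheSpace.write(normalised, txn);
        txn.commit();
    }
}
```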

The last question relates to running persistent processes inside the PU (or any other appropriate space). I'm looking for patterns and guidance on the right way to fire up a thread or listener and have it persist. I appreciate this might not be a use case PU spaces are geared towards. My assumption is that, because a PU is event driven, a PU is instantiated on both the primary and the backup, and the cluster ensures that data is written to the right space (namely the primary) and that PU callbacks are invoked on the primary only, which means my idea of starting the process inside the constructor is flawed. Is there any way inside a PU, perhaps by implementing an interface, to find out whether it's on the primary or the backup, and to be notified if that status changes? Ultimately I would like our MQ listeners and various other processes to live inside the grid.
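The shape I'm imagining is a pair of lifecycle callbacks fired on mode changes, so the MQ consumer only runs on the primary. The annotation names below are pure guesses at whatever mechanism GigaSpaces actually provides, declared locally just to sketch the idea:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Guessed stand-ins for whatever mode-change callbacks GigaSpaces offers.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface PostPrimary {}

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface PostBackup {}

class MessagingBootstrapper {
    private volatile boolean running;

    // Hoped-for behaviour: invoked when this PU instance becomes
    // primary; only then do we start consuming from MQ.
    @PostPrimary
    public void afterPrimary() {
        running = true;
        // start the MQ listener thread here
    }

    // Invoked when the instance is (or becomes) a backup: stop
    // consuming so only the primary processes messages.
    @PostBackup
    public void afterBackup() {
        running = false;
        // stop the MQ listener thread here
    }

    public boolean isRunning() { return running; }
}
```

If something along these lines exists, it would solve the "don't start work in the constructor" problem cleanly.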

I'm very early on in my evaluation, but I haven't got a huge amount of time to put together the case for this or another product (one of which we already have a lot more in-house experience with), so any help is greatly appreciated.

Regards,
Max

This thread was imported from the previous forum.
For your reference, the original is available here

asked 2009-06-01 10:28:22 -0500

updated 2013-08-08 09:52:00 -0500
