
Setting up integration tests with multiple embedded spaces

I have been struggling for the last few hours to set up our integration-test framework to create N embedded spaces plus a cluster proxy for accessing those spaces.

Now it seems to work, but there are a few things I don't understand.

  1. Creation of each embedded space blocks for the 6-second lookup timeout, which I don't understand. I never want to look up an existing space but always create a new one. Is there a way to specify this in the configurer?
  2. How should I set up the lookup service so that the cluster proxy can find the embedded spaces? All spaces run in the same process, so I wonder whether I even need to configure lookup groups/locators and start an embedded LUS.
  3. What "schema" shall I use?
  4. What "cluster_schema" shall I use?
  5. What other properties will make the space setup fast and stable?

In general, I would like a lightweight space setup so that the tests are stable and fast.

This is what I do now (I also attached the code in a file): C:\fakepath\MultiPartition.zip

public class MultiPartitionTest {
private List<Runnable> closeAfter = new ArrayList<>();
private PlatformTransactionManager txManager;

@Before
public void beforeTest() throws Exception {
    txManager = new DistributedJiniTxManagerConfigurer().transactionManager();
}

@After
public void afterTest() throws Exception {
    closeAfter.forEach(Runnable::run);
}

private LinkedHashMap<Integer, GigaSpace> createPartitionedSpace(final int numberOfPartitions, final String spaceName) {
    LinkedHashMap<Integer, GigaSpace> cluster = new LinkedHashMap<>(numberOfPartitions);

    for (int partitionId = 1; partitionId <= numberOfPartitions; partitionId++) {
        String url = spaceName
                + "?cluster_schema=partitioned"
                + "&create=true" // what does this do? it seems it has no effect
                + "&id=" + partitionId
                + "&total_members=" + numberOfPartitions;
        EmbeddedSpaceConfigurer configurer = new EmbeddedSpaceConfigurer(url)
                .versioned(true)
                .schema("cache") // what is schema good for?
                .lookupGroups("Test") // is this necessary?
                .lookupTimeout(90) // default is a 6-second lookup timeout, but I don't need a timeout here
                .addProperty("fifo", "true")
                .addProperty("space-config.engine.cache_policy", "0") // what is this good for?
                .addProperty("space-config.lease_manager.expiration_time_interval", "500")
                .addProperty("com.j_spaces.core.container.directory_services.jini_lus.start-embedded-lus", "true") // is this necessary?
                .addProperty("com.j_spaces.core.container.directory_services.jini_lus.enabled", "true") // is this necessary?
                .addProperty("com.j_spaces.core.container.directory_services.jndi.enabled", "false")
                .addProperty("com.j_spaces.core.container.embedded-services.httpd.enabled", "false");

        closeAfter.add(configurer::close);

        GigaSpace space = new GigaSpaceConfigurer(configurer).transactionManager(txManager).clustered(false).gigaSpace();
        cluster.put(partitionId, space);
    }

    return cluster;
}

private GigaSpace createClusterProxy(final String spaceName) {
    String url = "jini://*/*/" + spaceName;
    UrlSpaceConfigurer urlSpaceConfigurer = new UrlSpaceConfigurer(url).lookupGroups("Test").lookupTimeout(2500);
    closeAfter.add(urlSpaceConfigurer::close);

    return new GigaSpaceConfigurer(urlSpaceConfigurer).transactionManager(txManager).clustered(true).gigaSpace();
}

@Test
public void readingFromMultipleEmbeddedSpaces() {
    // Given
    LinkedHashMap<Integer, GigaSpace> spaceCluster = createPartitionedSpace(2, "SpaceA");
    GigaSpace clusterProxy = createClusterProxy("SpaceA");

    // When
    for (int partitionId = 1; partitionId <= spaceCluster.size(); partitionId++) {
        SomePojo somePojo = new SomePojo(partitionId - 1);
        spaceCluster.get(partitionId).write(somePojo);
    }

    // Then
    for (int partitionId = 1; partitionId <= spaceCluster.size(); partitionId++) {
        SomePojo template = new SomePojo(partitionId - 1);
        SomePojo actual = clusterProxy.read(template, 3000);
        assertThat("Reading from partition " + partitionId, actual, notNullValue());
    }
}

public static class SomePojo {
    private String id;
    private Integer routingKey;

    public SomePojo() {
    }

    public SomePojo(final Integer routingKey) {
        this.routingKey = routingKey;
    }

    @SpaceId(autoGenerate = true)
    public String getId() {
        return id;
    }

    public void setId(final String id) {
        this.id = id;
    }

    @SpaceRouting
    public Integer getRoutingKey() {
        return routingKey;
    }

    public void setRoutingKey(final Integer routingKey) {
        this.routingKey = routingKey;
    }
}

}
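As an aside on why the writes above land where expected: XAP's content-based routing is commonly described as mapping a routing value to partition `(abs(routingValue.hashCode()) % total_members) + 1`, 1-based. Below is a minimal, self-contained sketch of that assumed mapping; the helper name is mine, not a XAP API.

```java
// Hedged sketch (not the actual XAP implementation): the commonly documented
// content-based routing rule, partition = (abs(hash) % totalMembers) + 1.
public class RoutingSketch {
    static int targetPartition(Object routingValue, int totalMembers) {
        int h = routingValue.hashCode();
        // guard against Integer.MIN_VALUE, whose Math.abs is still negative
        int safe = (h == Integer.MIN_VALUE) ? 0 : Math.abs(h);
        return (safe % totalMembers) + 1;
    }

    public static void main(String[] args) {
        System.out.println(targetPartition(0, 2)); // prints 1
        System.out.println(targetPartition(1, 2)); // prints 2
    }
}
```

With 2 partitions, routing values 0 and 1 map to partitions 1 and 2, which matches writing `SomePojo(partitionId - 1)` directly into partition `partitionId` in the test above.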

asked 2015-10-19 11:12:21 -0500 by leozilla

2 Answers


What version of XAP are you testing this against?

If your intent is to create a cluster with multiple partitions, you can use a ProcessingUnitContainer.

Also, you don't have to create the partitions individually. Note the following from the documentation for the IntegratedProcessingUnitContainer (IPUC) ( http://docs.gigaspaces.com/xap102/run... ):

When using a cluster and not specifying an instance Id (see setInstanceId()), the createContainer() method will return a CompoundProcessingUnitContainer filled with processing unit containers (IntegratedProcessingUnitContainer) for each instance in the cluster

You will need to set the following system properties:

System.setProperty("com.gs.jini_lus.groups", "dixson_1");
System.setProperty("com.gs.jini_lus.locators", "10.10.10.111");

To instantiate the IntegratedProcessingUnitContainer:

IntegratedProcessingUnitContainerProvider provider = new IntegratedProcessingUnitContainerProvider();

// provide cluster information for the specific PU instance
ClusterInfo clusterInfo = new ClusterInfo();
clusterInfo.setSchema("partitioned-sync2backup");
clusterInfo.setNumberOfInstances(2);
//clusterInfo.setInstanceId(1);
provider.setClusterInfo(clusterInfo);

// set the config location (override the default one - classpath:/META-INF/spring/pu.xml)
provider.addConfigLocation("classpath:/test/my-pu.xml");

// Build the Spring application context and "start" it
ProcessingUnitContainer container = provider.createContainer();

// ...

container.close();
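If each partition needs to be addressed individually in a test (as in the question's LinkedHashMap approach), the compound container returned by createContainer() can be unpacked. This is a sketch only, assuming the XAP 10.x OpenSpaces API; the class name and pu.xml location are placeholders, and package locations may differ between versions.

```java
import org.openspaces.core.cluster.ClusterInfo;
import org.openspaces.pu.container.ProcessingUnitContainer;
import org.openspaces.pu.container.integrated.IntegratedProcessingUnitContainerProvider;
import org.openspaces.pu.container.support.CompoundProcessingUnitContainer;

public class CompoundContainerSketch {
    public static void main(String[] args) throws Exception {
        IntegratedProcessingUnitContainerProvider provider = new IntegratedProcessingUnitContainerProvider();
        ClusterInfo clusterInfo = new ClusterInfo();
        clusterInfo.setSchema("partitioned");
        clusterInfo.setNumberOfInstances(2); // no setInstanceId -> compound container
        provider.setClusterInfo(clusterInfo);
        provider.addConfigLocation("classpath:/test/my-pu.xml"); // hypothetical pu.xml

        ProcessingUnitContainer container = provider.createContainer();
        try {
            if (container instanceof CompoundProcessingUnitContainer) {
                // one nested container per partition instance
                ProcessingUnitContainer[] parts =
                        ((CompoundProcessingUnitContainer) container).getProcessingUnitContainers();
                System.out.println("started " + parts.length + " instances");
            }
        } finally {
            container.close(); // closes all nested containers
        }
    }
}
```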

If this isn't what you are looking for, please let me know. C:\fakepath\IPUCTestv2.zip Thanks,

Dixson

answered 2015-10-19 15:45:55 -0500 by Dixson Huie

updated 2015-10-19 15:50:55 -0500

Comments

Thanks for your answer,
We are testing with: 10.1.1-12800-RELEASE

We previously tried to use a ProcessingUnitContainer, but it seemed to have too much overhead for our tests, and one of our colleagues struggled with it. In the case of our example, would we need to start a GSA/GSCs beforehand, or does the integrated PU container run entirely in the JVM that executes the JUnit tests?

What's the main difference between my approach and the ProcessingUnitContainer?

Currently 100 tests run in approx. 30 seconds on good hardware; we would like to keep this speed.

leozilla ( 2015-10-20 01:59:18 -0500 )

It's fine to use the approach you are using. However, the IPUC is designed with the goal of letting you test and debug your cluster within the IDE. By creating the entire cluster within the IPUC, it simplifies your test cases.

Dixson Huie ( 2015-10-20 15:50:16 -0500 )

I just realized one drawback of my approach: a DistributedTask is not executed once per embedded space partition but only once overall. So if I create an embedded space with 4 partitions and execute a DistributedTask, its execute method seems to run only once instead of four times.

Does DistributedTask work with the IPUC? Is there a way to get DistributedTask to work with my approach?
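For context on what such a task looks like: a distributed task is a Task plus a reducer. The following is a minimal sketch against the OpenSpaces executor API, assuming XAP 10.x; the class name is mine.

```java
// Hedged sketch of a DistributedTask whose execute() runs once per partition
// and whose reduce() sums the per-partition results on the client side.
import java.io.Serializable;
import java.util.List;

import org.openspaces.core.executor.DistributedTask;
import com.gigaspaces.async.AsyncResult;

public class CountPartitionsTask implements DistributedTask<Integer, Integer>, Serializable {

    @Override
    public Integer execute() throws Exception {
        return 1; // contributes 1 from each partition it runs on
    }

    @Override
    public Integer reduce(List<AsyncResult<Integer>> results) throws Exception {
        int sum = 0;
        for (AsyncResult<Integer> result : results) {
            if (result.getException() != null) {
                throw result.getException();
            }
            sum += result.getResult();
        }
        return sum;
    }
}
```

Submitted through a genuinely clustered proxy, e.g. `clusterProxy.execute(new CountPartitionsTask()).get()`, reduce should see one AsyncResult per partition; against a single embedded space it runs only once, which matches the behavior described above.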

leozilla ( 2015-10-22 10:18:01 -0500 )

I've attached an example of running a distributed task within an IntegratedProcessingUnitContainer. There are two parts, the space and the client. For the space, you should add the following command-line arguments: -cluster schema=partitioned total_members=2
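For the space side, that typically means launching the IPUC main class with those arguments. A hypothetical launch line follows; the classpath entries are placeholders, and only the class name and -cluster arguments come from the standard XAP tooling.

```shell
# Placeholder paths; assumes the XAP jars and the compiled PU classes are on the classpath
java -cp "$XAP_HOME/lib/required/*:build/classes" \
  org.openspaces.pu.container.integrated.IntegratedProcessingUnitContainer \
  -cluster schema=partitioned total_members=2
```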

Dixson Huie ( 2015-10-22 16:20:00 -0500 )

IntegratedProcessingUnitContainer example with DistributedTask attached. C:\fakepath\ipucdistributedtask.zip

answered 2015-10-22 16:23:04 -0500 by Dixson Huie


Stats

Asked: 2015-10-19 11:12:21 -0500

Seen: 1,086 times

Last updated: Oct 22 '15