Configuration
The host supplier associates the connection pool with a dynamic host registry. The connection pool will frequently poll this supplier for the current list of hosts and update its internal host connection pools to account for new or removed hosts. This is very useful when running with potentially ephemeral hosts that don't have fixed IP addresses or host names. A host supplier is also required if you wish to recover from catastrophic outages where the entire cluster has been terminated and none of the seeds are relevant.
The host supplier can also be used in conjunction with describe_ring to filter out hosts in the ring. This can be used to force the client to only connect to hosts in a specific cloud zone or region.
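As a concrete illustration of zone filtering, the sketch below wraps one supplier in another that keeps only hosts in a desired zone. It uses plain Java collections; `ZonedHost` and the zone tag are hypothetical stand-ins for Astyanax's actual `Host` type, not its real API:

```java
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Collectors;

// A minimal sketch of zone filtering. "ZonedHost" is a hypothetical
// stand-in for Astyanax's Host class, with an explicit zone tag.
public class ZoneFilterSupplier {
    record ZonedHost(String name, String zone) {}

    // Wraps a delegate supplier and keeps only hosts in the wanted zone.
    static Supplier<List<ZonedHost>> filterByZone(Supplier<List<ZonedHost>> delegate, String zone) {
        return () -> delegate.get().stream()
                .filter(h -> h.zone().equals(zone))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Supplier<List<ZonedHost>> all = () -> List.of(
                new ZonedHost("cass1", "us-east-1a"),
                new ZonedHost("cass2", "us-east-1b"),
                new ZonedHost("cass3", "us-east-1a"));
        List<ZonedHost> local = filterByZone(all, "us-east-1a").get();
        System.out.println(local.size()); // prints 2
    }
}
```

Because the filter is applied inside the supplier, the connection pool's periodic polling automatically picks up the filtered view.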
The following example shows how we use the host supplier at Netflix. We have a discovery service with which all hosts must register. The NetflixDiscoveryHostSupplier returns only hosts that are up and registered with our discovery service.
AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
.forCluster("SomeCluster")
.forKeyspace("SomeKeyspace")
.withHostSupplier(new NetflixDiscoveryHostSupplier("SomeCluster"))
...
.buildKeyspace(ThriftFamilyFactory.getInstance());
When implementing your own host supplier, note that it returns a Map<BigInteger, List<Host>>. The BigInteger is the start token in the ring, and the list contains the hosts that own the token range starting at that token. If you wish to defer token discovery to the internal describe_ring, simply provide a map with a single entry for token "0".
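The expected map shape can be sketched with plain Java collections; here, strings stand in for Astyanax's Host objects, and the supplier simply places the full host list under token "0" to defer token discovery as described above:

```java
import java.math.BigInteger;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// A minimal sketch of the Map<BigInteger, List<...>> shape a host supplier
// returns, with plain strings standing in for Astyanax's Host objects.
public class SimpleHostSupplier {

    // Defers token discovery: a single entry under token "0" owning all hosts.
    static Supplier<Map<BigInteger, List<String>>> allHostsUnderTokenZero(List<String> hosts) {
        return () -> Collections.singletonMap(BigInteger.ZERO, hosts);
    }

    public static void main(String[] args) {
        Map<BigInteger, List<String>> ring =
                allHostsUnderTokenZero(List.of("127.0.0.1:9160", "127.0.0.2:9160")).get();
        System.out.println(ring.keySet());                    // prints [0]
        System.out.println(ring.get(BigInteger.ZERO).size()); // prints 2
    }
}
```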
The following example shows how to set up the token aware connection pool. The first example sets up a basic token aware pool which will round robin all hosts within a token range.
AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
.forCluster("ClusterName")
.forKeyspace("KeyspaceName")
.withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
.setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
.setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE)
)
.withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
.setPort(9160)
.setMaxConnsPerHost(3)
.setSeeds("127.0.0.1:9160")
)
.withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
.buildKeyspace(ThriftFamilyFactory.getInstance());
context.start();
Keyspace keyspace = context.getClient();
This example shows how to combine token awareness with latency awareness, using a simple moving average to score and sort hosts within a partition. (Note: SmaLatencyScoreStrategyImpl will be reworked soon to not depend on the config object.) You will likely want to experiment with these values for your specific use case. Depending on the type of operations you perform, there may be huge variance in latency, so you may want to increase the latency-aware window size to smooth out the averages.
ConnectionPoolConfigurationImpl poolConfig = new ConnectionPoolConfigurationImpl("MyConnectionPool")
.setPort(9160)
.setMaxConnsPerHost(1)
.setSeeds("127.0.0.1:9160")
.setLatencyAwareUpdateInterval(10000)  // Re-sorts hosts within each token partition every 10 seconds
.setLatencyAwareResetInterval(10000)   // Clears the latency samples every 10 seconds. In practice this is set to 0, which is the default and usually the better choice.
.setLatencyAwareBadnessThreshold(2)    // Sorts hosts and always assigns connections to the fastest host when a host is more than 100% slower than the best; otherwise uses round robin
.setLatencyAwareWindowSize(100)        // Uses the last 100 latency samples, kept in a FIFO queue that cycles on its own
;
poolConfig.setLatencyScoreStrategy(new SmaLatencyScoreStrategyImpl(poolConfig)); // Enables SMA. Omit this line to use round robin within a token range
AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
.forCluster("ClusterName")
.forKeyspace("KeyspaceName")
.withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
.setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
.setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE)
)
.withConnectionPoolConfiguration(poolConfig)
.withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
.buildKeyspace(ThriftFamilyFactory.getInstance());
context.start();
Keyspace keyspace = context.getClient();
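To make the scoring idea concrete, here is a sketch of a fixed-window simple moving average, analogous to what a latency score strategy computes per host. This is an illustration of the technique, not Astyanax's actual SmaLatencyScoreStrategyImpl implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch of simple-moving-average latency scoring over a
// fixed-size FIFO window of samples (one such window per host).
public class SmaScore {
    private final Deque<Long> samples = new ArrayDeque<>();
    private final int windowSize;
    private long sum;

    SmaScore(int windowSize) { this.windowSize = windowSize; }

    // Records a latency sample; the FIFO window evicts the oldest entry when full.
    void addSample(long latencyMs) {
        samples.addLast(latencyMs);
        sum += latencyMs;
        if (samples.size() > windowSize) {
            sum -= samples.removeFirst();
        }
    }

    // The score is the average of the samples currently in the window;
    // hosts with lower scores are preferred.
    double score() {
        return samples.isEmpty() ? 0.0 : (double) sum / samples.size();
    }

    public static void main(String[] args) {
        SmaScore s = new SmaScore(3);
        s.addSample(10); s.addSample(20); s.addSample(30);
        System.out.println(s.score()); // prints 20.0
        s.addSample(40);               // evicts 10; window is now {20, 30, 40}
        System.out.println(s.score()); // prints 30.0
    }
}
```

A larger window smooths out spikes at the cost of reacting more slowly, which is why the text above suggests increasing the window size when operation latencies vary widely.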
Here, host refers to each Cassandra node. Hence, if you have a 6-node cluster and set maxConnsPerHost=5, a single Astyanax client instance will have a total of 30 (6×5) connections to the entire cluster.
Typical configurations set this value to 3. Consider your total throughput and latency requirements when setting it.
For example, say you need a total of 6K rps of throughput from a 6-node cluster from a single client instance. Assuming random distribution of requests, this translates to 1K rps per node. If each read takes 10 ms, then a single connection can do 100 rps to a node, so you will need 10 connections per host.
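The sizing arithmetic above can be checked directly. All figures are the example's own assumptions (6,000 rps total, 6 nodes, 10 ms per read):

```java
// Verifies the sizing example: 6K rps over 6 nodes at 10 ms per read.
public class ConnSizing {
    static int connsPerHost(int totalRps, int nodes, int readLatencyMs) {
        int rpsPerNode = totalRps / nodes;     // 6000 / 6  = 1000 rps per node
        int rpsPerConn = 1000 / readLatencyMs; // 1000 / 10 = 100 rps per connection
        return rpsPerNode / rpsPerConn;        // 1000 / 100 = 10 conns per host
    }

    public static void main(String[] args) {
        System.out.println(connsPerHost(6000, 6, 10)); // prints 10
    }
}
```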
ConnectTimeout: the default is 2 seconds. This is how long your app will wait for a connection to a node after selecting the node with either the ROUND_ROBIN or TOKEN_AWARE strategy.
SocketTimeout: the default is 10 seconds. This is the regular SO_TIMEOUT for the socket.
AstyanaxContext<Keyspace> ctx = new AstyanaxContext.Builder()
.withHostSupplier(hostSupplier)
.forKeyspace("MY KEYSPACE")
.withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
.setDiscoveryType(NodeDiscoveryType.DISCOVERY_SERVICE)
.setDefaultReadConsistencyLevel(ConsistencyLevel.CL_ONE)
.setDefaultWriteConsistencyLevel(ConsistencyLevel.CL_ONE)
.setConnectionPoolType(ConnectionPoolType.ROUND_ROBIN))
.withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MY CONN POOL")
.setPort(7102)
.setConnectTimeout(2000)
.setSocketTimeout(10000)
.setMaxConnsPerHost(8))
.buildKeyspace(ThriftFamilyFactory.getInstance());
ctx.start();
Keyspace keyspace = ctx.getClient();