Thrift frame size out of bound #1125

Closed
mishraawake opened this issue Jul 23, 2015 · 5 comments

@mishraawake

We are facing a problem here: for some shallow nodes, the default Thrift frame size exceeds the limit defined in the Astyanax client.
Is Titan planning to include the latest Astyanax library? The new Astyanax library provides a way to configure this particular setting, which we need to increase.

The version of Titan that we are using is 0.5.4.

Another workaround for this problem is to use cassandrathrift as the backend configuration, but I don't think that is recommended for a production environment.

For your information, we are getting the following exception.

```
com.thinkaurelius.titan.core.TitanException: Could not execute operation due to backend exception
   at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:44)
   at com.thinkaurelius.titan.diskstorage.BackendTransaction.executeRead(BackendTransaction.java:428)
   at com.thinkaurelius.titan.diskstorage.BackendTransaction.edgeStoreMultiQuery(BackendTransaction.java:269)
   at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.edgeMultiQuery(StandardTitanGraph.java:375)
   at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.executeMultiQuery(StandardTitanTx.java:1002)
   at com.thinkaurelius.titan.graphdb.query.vertex.MultiVertexCentricQueryBuilder.execute(MultiVertexCentricQueryBuilder.java:92)
   at com.thinkaurelius.titan.graphdb.query.vertex.MultiVertexCentricQueryBuilder.titanEdges(MultiVertexCentricQueryBuilder.java:112)
   at com.til.cms.graph.titan.read.dao.GraphReadDaoImpl.getChildHierarchyEdgesBasedOnRank(GraphReadDaoImpl.java:205)
   at com.til.cms.graph.titan.services.TitanGraphRead.getContentListNodes(TitanGraphRead.java:964)
   at com.til.cms.graph.titan.services.TitanGraphRead.getContentListByRank(TitanGraphRead.java:313)
   at com.til.cms.graph.controller.JCMSController.getSpecialSections(JCMSController.java:233)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:497)
   at com.til.cms.graph.titan.netty.AnnotationProcessor.executeMethod(AnnotationProcessor.java:81)
   at com.til.cms.graph.titan.netty.RequestProcessor.handleRequest(RequestProcessor.java:18)
   at com.til.cms.graph.titan.netty.CMSGraphServerHandler.messageReceived(CMSGraphServerHandler.java:87)
   at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
   at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:74)
   at io.netty.channel.DefaultChannelHandlerInvoker$6.run(DefaultChannelHandlerInvoker.java:143)
   at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:57)
   at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:794)
   at java.lang.Thread.run(Thread.java:745)
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Could not successfully complete backend operation due to repeated temporary exceptions after Duration[10 s]
   at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:86)
   at com.thinkaurelius.titan.diskstorage.util.BackendOperation.execute(BackendOperation.java:42)
   ... 23 more
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Temporary failure in storage backend
   at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:114)
   at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getSlice(AstyanaxKeyColumnValueStore.java:73)
   at com.thinkaurelius.titan.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:70)
   at com.thinkaurelius.titan.diskstorage.keycolumnvalue.KCVSProxy.getSlice(KCVSProxy.java:70)
   at com.thinkaurelius.titan.diskstorage.BackendTransaction$2.call(BackendTransaction.java:272)
   at com.thinkaurelius.titan.diskstorage.BackendTransaction$2.call(BackendTransaction.java:269)
   at com.thinkaurelius.titan.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:56)
   ... 24 more
Caused by: com.netflix.astyanax.connectionpool.exceptions.TransportException: TransportException: [host=192.168.34.91(192.168.34.91):9160, latency=185(185), attempts=1]org.apache.thrift.transport.TTransportException: Frame size (20843333) larger than max length (16384000)!
   at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:197)
   at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65)
   at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28)
   at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151)
   at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:119)
   at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:338)
   at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4.execute(ThriftColumnFamilyQueryImpl.java:527)
   at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxKeyColumnValueStore.getNamesSlice(AstyanaxKeyColumnValueStore.java:112)
   ... 30 more
Caused by: org.apache.thrift.transport.TTransportException: Frame size (20843333) larger than max length (16384000)!
   at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:137)
   at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
   at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
   at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at org.apache.cassandra.thrift.Cassandra$Client.recv_multiget_slice(Cassandra.java:732)
   at org.apache.cassandra.thrift.Cassandra$Client.multiget_slice(Cassandra.java:716)
   at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:533)
   at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$4$1.internalExecute(ThriftColumnFamilyQueryImpl.java:530)
   at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
   ... 36 more
```
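A quick back-of-the-envelope check (editor's sketch, not from the original thread) converts the two byte counts in the final `TTransportException` into megabytes, showing how far past the cap the response frame is:

```python
# Values taken from the TTransportException in the trace above (bytes).
frame_size = 20843333   # frame Cassandra tried to send back
max_length = 16384000   # Astyanax client's frame cap

MB = 1024 * 1024
print("frame: %.1f MB, cap: %.1f MB" % (frame_size / MB, max_length / MB))
# → frame: 19.9 MB, cap: 15.6 MB
```

So the client-side cap would need to be raised to at least ~20 MB (or the multiget batched into smaller slices) for this read to succeed.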
@dalaro (Member) commented Aug 27, 2015

Is this the Astyanax feature that we need? I know they have a Thrift line and a CQL line, and I don't remember which one master corresponds to off the top of my head. I'll take a closer look later if I don't hear back from you.

https://github.com/Netflix/astyanax/pull/547/files

@concretevitamin

@dalaro We got hit by this issue as well. Looking at that pull request, I think it does exactly that. Is it possible to make use of that in Titan? (We're using 0.5.4.)

@concretevitamin

@mishraawake Did you find a workable workaround? I'm aware of using cassandrathrift (which allows the frame size to be enlarged); however, its performance seems to degrade to an unacceptable level.
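[Editor's note: the cassandrathrift workaround discussed in this thread would look roughly like the following in a Titan 0.5 properties file. This is a sketch under assumptions: `storage.cassandra.frame-size-mb` is the frame-size key documented for the thrift-based backend in this era of Titan, but the exact name and its applicability should be verified against the 0.5.4 configuration reference; the hostname is taken from the stack trace, and 32 MB is an arbitrary value above the ~20 MB frame seen there.]

```properties
# Sketch of the cassandrathrift workaround -- verify key names against
# the Titan 0.5.4 configuration reference.
storage.backend=cassandrathrift
storage.hostname=192.168.34.91
# Raise the client-side Thrift frame cap above the ~20 MB frame in the trace.
storage.cassandra.frame-size-mb=32
```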

@graben1437 (Contributor)

Same question I had on another issue: Astyanax doesn't look like it supports Cassandra 2.2.x.

Are we going to get into submitting pull requests there to upgrade the Astyanax/Thrift version of their driver, or what is the thinking on how to continue using Thrift with Cassandra 2.2.x+, given the non-backwards-compatible changes that were made in Cassandra? The Thrift driver still works, but per remarks here, it isn't always the preferred driver.

@dalaro (Member) commented Dec 18, 2015

Titan release 1.0.0 contains this change, which adds an option to control the Astyanax thrift frame size: 1001f3a. However, this doesn't help the 0.5 line. This open PR extends the same option to 0.5: #1146. I think that linked PR subsumes the remaining work on this issue, so I'm going to close this issue (but keep the PR open until it is merged). Feel free to reopen if there's a separate problem in this issue that I missed.
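[Editor's note: for readers on 1.0.0, using the new option might look like the following sketch. The key name `storage.cassandra.frame-size-mb` is an assumption inferred from dalaro's description of 1001f3a and should be double-checked against the 1.0.0 configuration reference; the hostname and the 32 MB value are illustrative.]

```properties
# titan.properties sketch -- verify key names against the Titan 1.0.0 docs.
storage.backend=cassandra            # the Astyanax-based backend
storage.hostname=192.168.34.91
storage.cassandra.frame-size-mb=32   # must cover the largest expected frame
```

The server must accept frames at least that large as well; in `cassandra.yaml` that is `thrift_framed_transport_size_in_mb` (default 15), which should be raised to match.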

@dalaro dalaro closed this as completed Dec 18, 2015