timeout creating table

timeout creating table

Jimmy Lin
Hi,
we have some unit tests that run in parallel; they create temporary keyspaces and tables, then drop them after the tests are done.

From time to time, our CREATE TABLE statements run into an "All host(s) for query failed... Timeout during read" error from the DataStax driver.

We later turned on tracing and captured the output below.
Between the "===" markers, there is a gap of about 16 seconds between the Native-Transport-Requests thread and the MigrationStage thread.

Any idea what Cassandra was doing for those 16 seconds? We can work around it by increasing our DataStax driver timeout value, but is there a better way to solve this?
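
For reference, the timeout workaround would look roughly like this. This is a minimal sketch assuming the 2.x DataStax Java driver; the contact point and the 20-second value are placeholders, not our real settings:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;

public class ClusterFactory {
    public static Cluster build() {
        // Raise the per-request read timeout above the ~16s schema pause
        // (the 2.x driver default is 12000 ms, which the pause exceeds).
        SocketOptions socketOptions = new SocketOptions()
                .setReadTimeoutMillis(20000);
        return Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withSocketOptions(socketOptions)
                .build();
    }
}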

thanks



---------------- tracing ----------


5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d97-e6e2-11e4-823d-93572f3db015 |                                                                               Key cache hit for sstable 95588 | 127.0.0.1 |           1592 | Native-Transport-Requests:102
5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d98-e6e2-11e4-823d-93572f3db015 |                                                                   Seeking to partition beginning in data file | 127.0.0.1 |           1593 | Native-Transport-Requests:102
5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d99-e6e2-11e4-823d-93572f3db015 |                                                                    Merging data from memtables and 3 sstables | 127.0.0.1 |           1595 | Native-Transport-Requests:102

=====================
5872bf70-e6e2-11e4-823d-93572f3db015 | 58730d9a-e6e2-11e4-823d-93572f3db015 |                                                                            Read 3 live and 0 tombstoned cells | 127.0.0.1 |           1610 | Native-Transport-Requests:102
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a40-e6e2-11e4-823d-93572f3db015 |               Executing seq scan across 1 sstables for (min(-9223372036854775808), min(-9223372036854775808)] | 127.0.0.1 |       16381594 |              MigrationStage:1
=====================

5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a41-e6e2-11e4-823d-93572f3db015 |                                                                   Seeking to partition beginning in data file | 127.0.0.1 |       16381782 |              MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a42-e6e2-11e4-823d-93572f3db015 |                                                                            Read 0 live and 0 tombstoned cells | 127.0.0.1 |       16381787 |              MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a43-e6e2-11e4-823d-93572f3db015 |                                                                   Seeking to partition beginning in data file | 127.0.0.1 |       16381789 |              MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a44-e6e2-11e4-823d-93572f3db015 |                                                                            Read 0 live and 0 tombstoned cells | 127.0.0.1 |       16381791 |              MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a45-e6e2-11e4-823d-93572f3db015 |                                                                   Seeking to partition beginning in data file | 127.0.0.1 |       16381792 |              MigrationStage:1
5872bf70-e6e2-11e4-823d-93572f3db015 | 62364a46-e6e2-11e4-823d-93572f3db015 |                                                                            Read 0 live and 0 tombstoned cells | 127.0.0.1 |       16381794 |              MigrationStage:1
.
.
.


Re: timeout creating table

Jim Witschey
Jimmy,

What's the exact command that produced this trace? Are you saying that
the 16-second wait in your trace is what times out your CREATE TABLE
statements?

Jim Witschey

Software Engineer in Test | [hidden email]


Re: timeout creating table

Jimmy Lin
Yes, sometimes it is CREATE TABLE and sometimes it is CREATE INDEX.
It doesn't happen all the time, but it feels like when multiple tests try to make schema changes (create or drop) at the same time, Cassandra has a long delay on the schema change statements.

I also just read about "auto_snapshot" and turned it off, but still no luck.
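
One thing we may try is serializing the schema changes on the client side, so only one DDL statement is in flight at a time. A minimal sketch, assuming the 2.x Java driver; the SchemaDdl helper is hypothetical test-harness code, not something we have today:

import com.datastax.driver.core.Session;

public class SchemaDdl {
    // One lock shared by all parallel tests, so only one schema change runs at a time.
    private static final Object SCHEMA_LOCK = new Object();

    public static void execute(Session session, String ddl) {
        synchronized (SCHEMA_LOCK) {
            session.execute(ddl);
        }
    }
}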




Re: timeout creating table

Sebastian Estevez
Can you grep for GCInspector in your system.log? Maybe you have long GC pauses.

All the best,



Sebastián Estévez

Solutions Architect | 954 905 8615 | [hidden email]



Re: timeout creating table

Jimmy Lin
Hi,
there were only a few GCInspector entries (4 of them across 4 minutes, each around 200 ms), so that shouldn't be the reason.

The system log has tons of
 INFO [MigrationStage:1] 2015-04-20 11:03:21,880 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-schema_keyspaces@2079381036(138/1215 serialized/live bytes, 3 ops)
 INFO [MigrationStage:1] 2015-04-20 11:03:21,900 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-schema_columnfamilies@1283263314(1036/3946 serialized/live bytes, 24 ops)
 INFO [MigrationStage:1] 2015-04-20 11:03:21,921 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-schema_columns

But that could just be normal, given that our unit tests do a lot of dropping and creating of keyspaces and tables.

I read that the MigrationStage thread pool defaults to a single thread, so I wonder if that could be the reason: it may be doing something that blocks the others?
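
If that is the case, maybe waiting for schema agreement after each change would at least keep the tests from piling DDL onto that one thread. A sketch, assuming a 2.x Java driver version that exposes Metadata.checkSchemaAgreement(); the retry count and sleep interval are arbitrary:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SchemaSync {
    // Poll until all live nodes report the same schema version, or give up.
    public static void awaitSchemaAgreement(Session session) throws InterruptedException {
        Cluster cluster = session.getCluster();
        for (int i = 0; i < 100; i++) {
            if (cluster.getMetadata().checkSchemaAgreement()) {
                return;
            }
            Thread.sleep(200);
        }
        throw new IllegalStateException("schema agreement not reached");
    }
}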




Re: timeout creating table

Jimmy Lin
Also, I'm not sure whether it matters, but I just realized the keyspace was created with a replication factor of 2, while my Cassandra cluster is really just a single node.
Is Cassandra smart enough to ignore the RF of 2 and work with only a single node?




Re: timeout creating table

Sebastian Estevez

That is a problem: you should not have RF > N (more replicas than nodes).

Do an ALTER KEYSPACE to fix the replication factor.

This will affect your reads and writes if you're doing anything above CL ONE --> timeouts.
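
A minimal sketch of that fix, assuming the 2.x Java driver; the keyspace name test_ks is hypothetical:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class FixReplication {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // Bring the RF down to match the single-node cluster.
        session.execute("ALTER KEYSPACE test_ks WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        cluster.close();
    }
}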


Re: timeout creating table

Jimmy Lin
Well, I am pretty sure our CL is ONE, and the long pause seems to happen somewhat randomly.
But do CREATE KEYSPACE or CREATE TABLE statements get different treatment in terms of CL that might explain the long pause?
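
In case it matters, this is roughly how we would pin CL ONE for everything; a sketch, assuming the 2.x Java driver:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.QueryOptions;

public class ClusterWithClOne {
    public static Cluster build() {
        // Default every statement to CL ONE unless overridden per statement.
        QueryOptions queryOptions = new QueryOptions()
                .setConsistencyLevel(ConsistencyLevel.ONE);
        return Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withQueryOptions(queryOptions)
                .build();
    }
}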

thanks



RE: timeout creating table

Matthew Johnson

Hi Jimmy,

 

I have very limited experience with Cassandra so far, but from following some tutorials to create keyspaces, create tables, and insert data, it definitely seems to me like creating keyspaces and tables is way slower than inserting data. Perhaps a more experienced user can confirm if this is expected.

 

Cheers,

Matt

 

 
