
Exception while running cassandra stress client


ankit tyagi
Hi All,
 
While running the cassandra-stress tool shipped with Cassandra 2.0.4, I am getting the following error:

./bin/cassandra-stress user profile=./bin/test.yaml
Application does not allow arbitrary arguments: user, profile=./bin/test.yaml

I am stuck on this and cannot figure out why this error occurs.


Re: Exception while running cassandra stress client

Abhinav Ranjan

Hi,

We got the same error too. Use the cassandra-stress tool shipped with Cassandra 2.1.x to run that kind of test; as far as I know, the YAML user-profile mode only exists in the 2.1 stress tool.
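
For reference, a sketch of what the 2.1-style invocation can look like. The tool path, operation mix, and counts below are illustrative rather than taken from the original post, so check cassandra-stress help for the exact options in your build:

# in the 2.1.x binary tarball the rewritten stress tool usually lives under tools/bin
# "insert" is the built-in write operation; queries defined in the profile can be mixed in via ops(...)
./tools/bin/cassandra-stress user profile=./bin/test.yaml "ops(insert=1)" n=100000 -node 127.0.0.1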

Regards
Abhinav



Re: Exception while running cassandra stress client

ankit tyagi
Thanks a lot for helping me out here.

I have one more question about the cassandra-stress tool: what exactly is the cluster distribution in the column distribution specification when defining a YAML-based profile?
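
In case a concrete fragment helps frame the question, this is the kind of profile section it refers to. The table and column names below are made up, and my understanding is that the cluster distribution applies to clustering columns and controls how many values of that column (i.e. rows) each partition gets:

# illustrative fragment of a stress profile, written from a shell (all names are placeholders)
cat > ./example-profile.yaml <<'EOF'
table: events
table_definition: |
  CREATE TABLE events (
    sensor_id text,
    event_time timeuuid,
    value blob,
    PRIMARY KEY (sensor_id, event_time)
  )
columnspec:
  - name: event_time
    cluster: uniform(1..1000)    # roughly 1 to 1000 event_time rows per sensor_id partition
  - name: value
    size: gaussian(64..512)      # payload size in bytes
EOF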




How much disk is needed to compact Leveled compaction?

Jean Tremblay
Hi,
I have a cluster of 5 nodes. We use cassandra 2.1.3.

Each of the 5 nodes uses about 50-57% of its 1 TB SSD.
One node managed to compact all its data; during one compaction it used almost 100% of the drive. The other nodes refuse to continue compacting, claiming there is not enough disk space.

From the documentation, LeveledCompactionStrategy should be able to compact my data, or at least that is how I understand it.

<<Size-tiered compaction requires at least as much free disk space for compaction as the size of the largest column family. Leveled compaction needs much less space for compaction, only 10 * sstable_size_in_mb. However, even if you’re using leveled compaction, you should leave much more free disk space available than this to accommodate streaming, repair, and snapshots, which can easily use 10GB or more of disk space. Furthermore, disk performance tends to decline after 80 to 90% of the disk space is used, so don’t push the boundaries.>>

This is the disk usage. Node 4 is the only one that could compact everything.
node0: /dev/disk1 931Gi 534Gi 396Gi 57% /
node1: /dev/disk1 931Gi 513Gi 417Gi 55% /
node2: /dev/disk1 931Gi 526Gi 404Gi 57% /
node3: /dev/disk1 931Gi 507Gi 424Gi 54% /
node4: /dev/disk1 931Gi 475Gi 456Gi 51% /

When I try to compact the other ones I get this:

objc[18698]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/bin/java and /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
error: Not enough space for compaction, estimated sstables = 2894, expected write size = 485616651726
-- StackTrace --
java.lang.RuntimeException: Not enough space for compaction, estimated sstables = 2894, expected write size = 485616651726
at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace(CompactionTask.java:293)
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:127)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:76)
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:512)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

I did not set sstable_size_in_mb; I use the 160 MB default.

Is it normal that compaction needs so much disk space? What would be the best way to overcome this problem?

Thanks for your help


Re: How much disk is needed to compact Leveled compaction?

daemeon reiydelle

You appear to have multiple java binaries in your path. That needs to be resolved.
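
A couple of quick ways to check that; the objc warning in the log suggests a Mac, so the second command assumes OS X:

# list every java on the PATH, the JVMs the OS knows about, and what JAVA_HOME points to
which -a java
/usr/libexec/java_home -V      # OS X only
echo "$JAVA_HOME"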

sent from my mobile
Daemeon C.M. Reiydelle
USA 415.501.0198
London +44.0.20.8144.9872



Re: How much disk is needed to compact Leveled compaction?

DuyHai Doan

If you have SSDs, you can afford to switch to the leveled compaction strategy, which requires much less free space than 50% of the current dataset.
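
For completeness, the switch itself is a per-table setting; a sketch with placeholder keyspace/table names (the sstable_size_in_mb shown is just the usual default):

# change one table's compaction strategy; existing sstables are then re-levelled by background compaction
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};"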



Re: How much disk is needed to compact Leveled compaction?

Ali Akhtar
I may have misunderstood, but it seems that he was already using LeveledCompaction.




Re: How much disk is needed to compact Leveled compaction?

Bryan Holladay
In reply to this post by DuyHai Doan

What other storage-impacting commands or nuances do you have to consider when you switch to leveled compaction? For instance, the documentation for nodetool cleanup says:

"Running the nodetool cleanup command causes a temporary increase in disk space usage proportional to the size of your largest SSTable."

Are sstables smaller with leveled compaction, making this a non-issue?

How can you determine what the new threshold for storage space is?
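
One way to put numbers on both questions is to look at the sstables the table actually has on disk; the data path and keyspace/table names below are placeholders:

# largest data files for the table, and the per-table sstable/space figures nodetool reports
ls -lhS /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db | head
nodetool cfstats my_keyspace.my_table | grep -i -e 'sstable' -e 'space used'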

Thanks,
Bryan



Re: How much disk is needed to compact Leveled compaction?

Jean Tremblay
I am only using LeveledCompactionStrategy, and as I describe in my original mail, I don’t understand why C* is complaining that it cannot compact when I have more than 40% free disk space.
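
A back-of-the-envelope reading of that error (my interpretation, not something stated in the thread): the expected write size is almost exactly the estimated sstable count times the default 160 MiB target, i.e. the manual compaction wants to rewrite roughly the whole table in one pass:

# 2894 estimated input sstables at ~160 MiB each
echo $(( 2894 * 160 * 1024 * 1024 ))   # 485532631040 bytes, ~452 GiB, within 0.1% of the 485616651726 in the error

That is more than the 396-424 Gi free on nodes 0-3, but just under the 456 Gi free on node4, which would line up with node4 being the only node that got through.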






Re: How much disk is needed to compact Leveled compaction?

Jeff Ferland
In reply to this post by Jean Tremblay
Check the size of your individual files. If your largest file is already more than half of the disk, then you can’t compact it using leveled compaction either. You can take the system offline, split the largest file (I believe there is an sstablesplit utility, and I imagine it lets you take off the tail of the source file, truncate it, and repeat until you have several smaller files), and then try again. It might also take time until L1 drains down small enough, since anything coming up from L0 has to merge with anything in L1 that it overlaps. If none of this explains your situation, debug logging may be helpful.
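
In case it saves someone a lookup, the offline split Jeff mentions looks roughly like this; the node has to be stopped first, the file path is a placeholder, and the flags are from memory, so check the tool's --help on your version:

# split an oversized sstable back down to ~160 MB pieces while Cassandra is stopped on that node
./tools/bin/sstablesplit --no-snapshot -s 160 /var/lib/cassandra/data/my_keyspace/my_table-*/my_keyspace-my_table-ka-12345-Data.db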

I’ve also seen Cassandra make some decisions I don’t yet understand about what to compact. For example: "Compacted 30 sstables to [*snip* 30 tables]. 5,033,556,055 bytes to 4,981,255,900 (~98% of original) in 460,670ms = 10.312145MB/s. 10,726,194 total partitions merged to 10,646,039. Partition merge counts were {1:10565884, 2:80155}." Not exactly 10x. I’m not sure if it would recalculate if space-constrained. I’m also not sure why it would take such large chunks. I have somewhere in my logs a ~76 table compaction to move one table from L3 to L4. I’m guessing the selector doesn’t take the most dense table from a given level when it makes its choice.

-Jeff
 

