CREATE TABLE tomb_test (
  guid text,
  content text,
  range text,
  rank int,
  PRIMARY KEY (guid, content, range, rank)
);
Sometimes I delete rows using the Cassandra Java driver with this query
DELETE FROM tomb_test WHERE guid=? and content=? and range=?
in an UNLOGGED batch statement. The consistency level is LOCAL_ONE.
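For reference, the batched delete corresponds to this plain CQL (a sketch; the literal values are taken from the SELECT examples below, and the actual bound values will vary):

```
BEGIN UNLOGGED BATCH
  DELETE FROM tomb_test WHERE guid='guid-1' AND content='content-1' AND range='week';
APPLY BATCH;
```

Note that an UNLOGGED batch skips the batch log, so there is no atomicity guarantee across partitions.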
But if I run either of these:
SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and range='week'
SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1' and range='week' and rank = 1
the results still show the deleted rows.
If I run this select instead, the deleted rows are not shown:
SELECT * FROM tomb_test WHERE guid='guid-1' and content='content-1'
If I run the delete statement in cqlsh, the deleted rows won't show up.
How can I fix this?
Increase the read consistency level to QUORUM and you should get correct results.
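In cqlsh, the read consistency level can be raised for the session before re-running the select (CONSISTENCY is a cqlsh command, not CQL):

```
CONSISTENCY QUORUM;
SELECT * FROM tomb_test WHERE guid='guid-1' AND content='content-1' AND range='week';
```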
How many nodes do you have in the cluster and what is the replication factor for the keyspace?
On Mon, Mar 30, 2015 at 7:41 PM, Benyi Wang <[hidden email]> wrote:
Thanks for replying.
In cqlsh, if I change to QUORUM (CONSISTENCY QUORUM), sometimes the select returns the deleted row, sometimes not.
I have two virtual data centers: service (3 nodes) and analytics (4 nodes co-located with Hadoop DataNodes). The table has 3 replicas in service and 2 in analytics. When I wrote, I wrote into analytics using LOCAL_ONE. So I guess the data may not have replicated to all nodes yet.
I will try to use stronger consistency for writes.
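For the record, with this topology the consistency arithmetic explains the flaky reads. A cross-DC QUORUM over RF 3 (service) + RF 2 (analytics) needs

\[
\text{quorum} = \left\lfloor \frac{3 + 2}{2} \right\rfloor + 1 = 3
\]

replicas, and overlapping read/write sets require $R + W > RF_{\text{total}}$. With LOCAL_ONE writes ($W = 1$) and QUORUM reads ($R = 3$), $1 + 3 = 4 \le 5$, so a quorum read can still miss the delete until replication catches up.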
On Mon, Mar 30, 2015 at 11:59 AM, Prem Yadav <[hidden email]> wrote:
What version of Cassandra are you running? Are you by any chance running repairs on your data?
On Mon, Mar 30, 2015 at 5:39 PM, Benyi Wang <[hidden email]> wrote:
In reply to this post by Benyi Wang
Have you checked time sync across all servers? The fact that you've changed consistency levels and you're getting different results may indicate something inherently wrong with the cluster, such as writes being dropped or time differences between the nodes. A brute-force approach to better understand what's going on (especially if you have an example of the wrong data being returned) is to run sstable2json on all your tables and simply grep for an example key.
On Mon, Mar 30, 2015 at 4:39 PM, Benyi Wang <[hidden email]> wrote:
In reply to this post by Paulo Ricardo Motta Gomes
Unfortunately I'm using 2.1.2. Is it possible to downgrade to 2.0.13 without wiping out the data? I'm worried there may be a bug in 2.1.2.
On Tue, Mar 31, 2015 at 4:37 AM, Paulo Ricardo Motta Gomes <[hidden email]> wrote:
In reply to this post by Ken Hancock
All servers are running ntpd, so the time should be synced across them.
My dataset is too large for sstable2json; it would take a long time.
I will try a repair to see if the issue goes away.
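A minimal repair of just this table would look like the following; the keyspace name `my_ks` is a placeholder, since the actual keyspace is not named in the thread:

```
nodetool repair my_ks tomb_test
```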
On Tue, Mar 31, 2015 at 7:49 AM, Ken Hancock <[hidden email]> wrote: