"Is BATCH the recommended way of updating all three tables at one go so that the information between the three tables is consistent ? "
If you're thinking about "atomicity", no, it is not atomic. With logged batches, what you gain is an automatic retry performed for you by the coordinator if a failure occurs in the middle of the batch, and that's pretty much it. A logged batch relieves the developer of the burden of setting up a retry strategy client side.
In general, it is recommended not to put too much data or too many statements in a batch, because the coordinator blocks until it has received an ack for every statement in the batch. Putting thousands of statements in the same batch, or a few statements with huge payloads, is definitely a bad idea.
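For illustration, a small logged batch in CQL looks like this (the three tables and their columns are hypothetical stand-ins for your denormalized views of the same data):

```cql
BEGIN BATCH
  INSERT INTO users_by_id (id, name, email) VALUES (42, 'alice', 'alice@example.com');
  INSERT INTO users_by_name (name, id) VALUES ('alice', 42);
  INSERT INTO users_by_email (email, id) VALUES ('alice@example.com', 42);
APPLY BATCH;
```

The coordinator first writes the batch to the batchlog, then replays any statement that fails, which is the retry guarantee described above; it does not make the three inserts visible atomically to readers.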
"Are there alternate approaches and recommendations?"
If you don't mind managing retries yourself (or relying on the retry policy of the driver), use executeAsync() to dispatch the statements across the nodes of your cluster.
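A minimal sketch of that approach with the DataStax Java driver, assuming an already-connected Session; the table and column names are hypothetical and failure handling is deliberately left to the client, since that is the trade-off versus a logged batch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class AsyncUserWriter {

    // Writes the same user into two hypothetical denormalized tables.
    static void writeUser(Session session, long id, String name) {
        List<Statement> statements = Arrays.asList(
            QueryBuilder.insertInto("users_by_id")
                        .value("id", id).value("name", name),
            QueryBuilder.insertInto("users_by_name")
                        .value("name", name).value("id", id)
        );

        // Fire each statement independently; a token-aware driver routes
        // each write to a replica, so no single coordinator blocks on all
        // of them as it would for a batch.
        List<ResultSetFuture> futures = new ArrayList<>();
        for (Statement st : statements) {
            futures.add(session.executeAsync(st));
        }

        // Retry is now the client's responsibility: block (or attach a
        // callback) per future and re-submit the statements that failed.
        for (ResultSetFuture f : futures) {
            f.getUninterruptibly();
        }
    }
}
```

Because each statement travels on its own, a crash between the two writes can leave the tables temporarily inconsistent; the client must detect the failed future and replay that statement to converge.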