The bad news is that Google Docs has just encountered an error.
The good news is that you've helped us find a bug, which we are now looking into.
Sorry, and thanks for your help!
- The Google Docs Team
Can't prepare without a quorum of living replicas

Trace:
[ 0.000] LogSession {[logKey="dchc522j_15cw6c8ng5", logSpace=logspace:global/writely.production-data.data.logspace]}
+-------------------------------------------------------
| Started 0.000 after root (absolute time 1276770846.526 seconds)
| [ 0.000] CoordinatedLog.init key="dchc522j_15cw6c8ng5" replica=E params=op=WRITE, piggybackRead=true, dl=null, localReadPatienceMs=750, allowFailover=true
| [ 0.000] Transitioning from=Init to=AskCoordinator
| [ 0.000] SEND [#0] E.isUpToDate()
| [ 0.000] Transitioning from=AskCoordinator to=FindPos
| [ 0.000] SEND [#1] E.queryLog(queryMode=DEFAULT, piggybackReadRequest=[#8:=]com.google.rewritely.storage.ReplicatedReader$PiggybackReadRequest@f8125b, logPos=null, dl=11.000)
| [ 0.000] Predicted local read (5.196 mdev=1.122) as fast as quorum (90.646 mdev=32.633); waiting 750ms before issuing majority read.
| [ 0.001] CB [#0] E.isUpToDate => (up-to-date: true, pos: 216)
| [ 0.004] CB [#1] E.queryLog => (appliedPos: 249, appliedTime: 1276769410763000, nextLeader: E, lastMustRoll: 249, LWM: {D=(low=228,high=228), E=(low=249,high=249), A=(low=248,high=249), B=(low=228,high=228), C=(low=248,high=249)}, logEntry: null, piggyRead: com.google.rewritely.storage.ReplicatedReader$PiggybackReadResponse@7d3de4)
| [ 0.004] Transitioning from=FindPos to=FindFirstReplica
| [ 0.004] Transitioning from=FindFirstReplica to=Querying
+-------------------------------------------------------
| *** trace being toString'd ***
| Started 0.004 after root (absolute time 1276770846.530 seconds)
| [ 0.000] CoordinatedLog.write key="dchc522j_15cw6c8ng5" replica=E logspace=logspace:global/writely.production-data.data.logspace
| [ 0.000] Constructed proposer Proposer {[localReplica=E, id=8130, logPos=250]} timestamp=1276770846530000 syncApply=[E] syncLearn=[]
| [ 0.000] Transitioning from=Init to=AcceptLdr
| [ 0.000] SEND [#0] E.accept(logPos=250, proposal=[#18:=](0, 0): (nextLdr E, hash f811b883b75d1f21, @1276770846530000, logPos 250, 126 bytes), mustRollForward=false, logState=[#19:=](PS: Unready, Acpt=[], Inv=[], Mrf=[], View=replicas={A=dv, B=dv, C=dv, D=dv, E=dv}, gen=0, time=0, LWM: {D=(low=228,high=228), E=(low=249,high=249), A=(low=248,high=249), B=(low=228,high=228), C=(low=248,high=249)}), dl=1.001)
| [ 0.008] CB [#0] E.accept => EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 36631910110495.500 of its own 108851651149824.000 disk:.|OVER_QUOTA|/bigtable/srv-gd/writely.production-data
| [ 0.008] Transitioning from=AcceptLdr to=Prepare
| [ 0.009] SEND [#1] A.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#2] B.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#3] C.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#4] D.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.009] SEND [#5] E.prepare(logPos=250, proposalNum=(298744293, 7902741955929074510), mustRollForward=false, logState=[#19], dl=6.991)
| [ 0.039] CB [#4] D.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.230.37.70:25685 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 54750724655241.000 of its own 134580222820352.000 disk:.|OVER_QUOTA|/bigtable/srv-vb/writely.production-data
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65)
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46)
| at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89)
| at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288)
| at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254)
| at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909)
| at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113)
| at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357)
| at com.google.storage.megast ... [exception text truncated by msrepl; total 6685 characters]
| [ 0.058] CB [#3] C.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.13.114.19:25782 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 55970616351910.500 of its own 115976485994496.000 disk:.|OVER_QUOTA|/bigtable/srv-ia/writely.production-data
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65)
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46)
| at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89)
| at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288)
| at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254)
| at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909)
| at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113)
| at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357)
| at com.google.storage.megast ... [exception text truncated by msrepl; total 6685 characters]
| [ 0.059] CB [#2] B.prepare => EXCEPTION / com.google.storage.megastore.replication.StorageException: Remote RPC failed {[status=EXCEPTION, replicationStatus=MISC, stubbyErr=/AcceptorService.Prepare to 10.224.115.11:25699 [APPLICATION_ERROR(4)] << EXCEPTION / com.google.bigtable.BigtableOverQuotaException: OVER_QUOTA: owner class (@production) exceeds its quota, while the owner (chubby!mdb/writely) is using 54920498505744.000 of its own 124683614683136.000 disk:.|OVER_QUOTA|/bigtable/srv-qa/writely.production-data
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:65)
| at com.google.storage.megastore.replication.StorageException.wrap(StorageException.java:46)
| at com.google.storage.megastore.replication.monitor.RootCallback.failure(RootCallback.java:89)
| at com.google.storage.megastore.replication.health.HealthTracker$HealthTrackingCallback.failure(HealthTracker.java:288)
| at com.google.storage.megastore.replication.monitor.ApiTrace$ApiTraceCbk.failure(ApiTrace.java:254)
| at com.google.storage.megastore.replication.acceptor.AcceptorImpl$15.failure(AcceptorImpl.java:1909)
| at com.google.storage.megastore.replication.net.BtCallbackWrapper$2.run(BtCallbackWrapper.java:113)
| at com.google.storage.megastore.replication.net.RequestContainer$ExceptionForwardingRunnable.run(RequestContainer.java:357)
| at com.google.storage.megas ... [exception text truncated by msrepl; total 6686 characters]
| [ 0.059] Proposer failing: Can't prepare without a quorum of living replicas
| [ 0.059] Transitioning from=Prepare to=Failed
| [ 0.061] CB [#1] A.prepare => CANCELLED
| 1 unfired callbacks: {5}
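The headline failure comes down to simple majority arithmetic: a Paxos proposer needs Prepare acks from a majority of replicas, and in the trace three of the five replicas (B, C, D) fail with OVER_QUOTA, so a majority of 3 acks is no longer reachable and the proposer gives up. A minimal sketch of that quorum check, with hypothetical names (this is not the Megastore implementation):

```java
// Hypothetical sketch of the quorum arithmetic behind
// "Can't prepare without a quorum of living replicas".
public class QuorumCheck {
    /** Smallest number of replicas that constitutes a majority. */
    static int majority(int replicas) {
        return replicas / 2 + 1;
    }

    /**
     * True once so many replicas have failed that the survivors can no
     * longer supply a majority of Prepare acks. In the trace: 5 replicas,
     * 3 OVER_QUOTA failures, so at most 2 acks remain possible and the
     * proposer can fail fast instead of waiting for the deadline.
     */
    static boolean quorumUnreachable(int replicas, int failures) {
        return replicas - failures < majority(replicas);
    }
}
```

This is why the proposer transitions Prepare to Failed at 0.059s, before the outstanding calls return: with B, C, and D down, even acks from both A and E would total only 2 of the required 3, so it cancels A's call and leaves E's callback unfired.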