When using MySQL Group Replication, some members may lag behind the group because of load, hardware limitations, and so on. This lag makes it harder to keep certification performing well and to keep the number of certification failures as low as possible. The larger the applying queue, the greater the risk of conflicting with transactions that have not yet been applied (which is a problem on Multi-Primary Groups).
Galera users are already familiar with this concept (flow control). The MySQL Group Replication implementation differs from Galera in two main ways:
the Group is never totally stalled
the node having issues doesn't send flow control messages to the rest of the group asking it to slow down
In practice, every member of the group sends the other members some statistics about its queues (the applier queue and the certification queue). Each node then decides whether to slow down once it sees that another node has reached the threshold for one of those queues:
So if group_replication_flow_control_mode is set to QUOTA on a node, and that node sees one of the other members of the cluster lagging behind (hitting the threshold), it throttles its write operations to the minimum quota. This quota is calculated from the number of transactions applied during the last second, and it is then reduced by subtracting the "over the quota" messages from the last period.
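The sketch below is a minimal Python model of this QUOTA behaviour as described above, not the server's actual implementation: the MemberStats class, the function name, and the statistics values are invented for illustration, and the 25000 default merely stands in for the group_replication_flow_control_applier_threshold / group_replication_flow_control_certifier_threshold system variables.

```python
# Illustrative model of the QUOTA flow-control decision described above.
# The data structure and numbers are stand-ins, not the server's internals.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MemberStats:
    member_id: str
    certifier_queue: int   # transactions waiting for certification
    applier_queue: int     # transactions waiting to be applied

def compute_write_quota(applied_last_second: int,
                        over_quota_last_period: int,
                        peer_stats: List[MemberStats],
                        queue_threshold: int = 25000) -> Optional[int]:
    """Return a per-second write quota, or None when no throttling is needed.

    If any member reports a queue above the threshold, the writer limits
    itself to the number of transactions it applied during the last second,
    minus the amount it went over quota in the previous period.
    """
    someone_lagging = any(
        s.certifier_queue > queue_threshold or s.applier_queue > queue_threshold
        for s in peer_stats
    )
    if not someone_lagging:
        return None  # group healthy: no quota, and the group is never fully stalled

    quota = applied_last_second - over_quota_last_period
    return max(quota, 1)  # throttle writes, but never stop them completely

# Example: node2 reports a large applier queue, so the writer throttles itself.
stats = [MemberStats("node2", certifier_queue=120, applier_queue=40000)]
print(compute_write_quota(applied_last_second=900,
                          over_quota_last_period=150,
                          peer_stats=stats))   # -> 750
```

In a real deployment the peer statistics come from the group communication layer rather than being passed in by hand, and the queue sizes can be inspected through the performance_schema.replication_group_member_stats table.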
Galera Cluster stores write-sets in a special cache called the Write-set Cache (or GCache). The GCache is a memory allocator for write-sets, and its main purpose is to minimize the write-set footprint in RAM. Galera Cluster achieves this by offloading write-set storage to disk.
GCache uses three types of storage:
Permanent In-Memory Store: write-sets are allocated using the default memory allocator for the operating system. This is useful on systems that have spare RAM. The store has a hard size limit.
Permanent Ring-Buffer File: write-sets are pre-allocated to a file on disk during cache initialization; this is intended to be the main write-set store.
On-Demand Page Store: write-sets are allocated to page files on disk at runtime, as needed, limited only by the available disk space.
Galera Cluster uses an allocation algorithm that attempts to store write-sets in the above order. That is, first it attempts to use permanent in-memory store. If there is not enough space for the write-set, it attempts to store to the permanent ring-buffer file. The page store always succeeds, unless the write-set is larger than the available disk space.
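As a rough illustration of that fallback order, here is a toy Python model. The class name, sizes, and units are invented; the real GCache is an allocator inside the Galera provider (configured through wsrep_provider_options such as gcache.size), and this sketch only models the allocation order, not ring-buffer recycling.

```python
# Toy model of the GCache allocation order: in-memory store first, then the
# pre-allocated ring-buffer file, then on-demand page files on disk.
class GCacheModel:
    def __init__(self, mem_limit: int, ring_buffer_size: int, free_disk: int):
        self.mem_used = 0
        self.mem_limit = mem_limit        # hard limit of the in-memory store
        self.rb_used = 0
        self.rb_size = ring_buffer_size   # size of the permanent ring-buffer file
        self.free_disk = free_disk        # disk space available for page files

    def store(self, write_set_size: int) -> str:
        if self.mem_used + write_set_size <= self.mem_limit:
            self.mem_used += write_set_size
            return "in-memory store"
        if self.rb_used + write_set_size <= self.rb_size:
            self.rb_used += write_set_size
            return "ring-buffer file"
        if write_set_size <= self.free_disk:
            self.free_disk -= write_set_size
            return "page store"           # succeeds unless the disk itself is full
        raise OSError("write-set larger than the available disk space")

gcache = GCacheModel(mem_limit=128, ring_buffer_size=256, free_disk=1024)
for size in (100, 100, 200, 500):
    print(size, "->", gcache.store(size))
```

In this toy run the first write-set fits in memory, the second lands in the ring-buffer file, and the two larger ones spill over to the page store.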