Uber Achieves 150 Million Reads per Second Through CacheFront Improvements

  • Main point: Uber engineers updated the CacheFront architecture to serve over 150 million reads per second with stronger consistency, addressing stale-read issues and supporting growing demand.
  • Key information:

    • The earlier design achieved 40 million reads per second through deduplication but lacked end-to-end consistency.
    • The new implementation introduces a write-through consistency protocol, a deduplication layer, and tombstone markers.
    • After a transaction commits, the storage engine returns the commit timestamp and affected row keys, which are used to invalidate cached entries.
    • Flux continues tailing MySQL binlogs and performing asynchronous cache fills.
    • Engineers cited growing demand for higher cache hit rates and stronger consistency as the motivation.
    • They deprecated a dedicated API and enhanced telemetry and observability dashboards.
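The invalidation flow described above can be sketched in a few lines. This is a minimal, hypothetical model (the `Cache`, `invalidate`, and `fill` names are illustrative, not Uber's actual API): the write path stamps tombstones with the commit timestamp returned by the storage engine, so a slower asynchronous fill from the binlog tailer cannot overwrite a newer invalidation with stale data.

```python
class Cache:
    """Toy write-through invalidation with tombstone markers.

    Hypothetical sketch of the pattern described in the article;
    names and shapes are assumptions, not Uber's implementation.
    """

    TOMBSTONE = object()

    def __init__(self):
        self._store = {}  # key -> (timestamp, value or TOMBSTONE)

    def invalidate(self, keys, commit_ts):
        # Write path: the storage engine returns the commit timestamp
        # and affected row keys after the transaction commits.
        for key in keys:
            ts, _ = self._store.get(key, (-1, None))
            if commit_ts > ts:
                self._store[key] = (commit_ts, self.TOMBSTONE)

    def fill(self, key, value, source_ts):
        # Async fill from the CDC stream (e.g. a binlog tailer).
        # A fill older than an existing entry or tombstone is dropped,
        # so a slow fill cannot resurrect stale data.
        ts, _ = self._store.get(key, (-1, None))
        if source_ts > ts:
            self._store[key] = (source_ts, value)
            return True
        return False

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[1] is self.TOMBSTONE:
            return None  # cache miss: caller falls back to storage
        return entry[1]
```

The timestamp comparison is what makes the design safe under races: invalidations and fills can arrive in any order, and the entry with the highest timestamp always wins.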
  • Important details:

    • Previously, cache invalidation relied on TTLs and change data capture (CDC), resulting in eventual consistency and delayed visibility of updates.
    • This caused read-your-own-writes and read-your-own-inserts inconsistencies.
    • Cache shards were reorganized for more even load distribution.
    • The Cache Inspector tool compares binlog events with cache entries.
    • TTLs for tables can be extended up to 24 hours while maintaining a high cache hit rate and low latency.
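A consistency audit in the spirit of Cache Inspector can be sketched as follows. This is an illustrative assumption about how such a comparison might work, not Uber's tool: derive the latest committed value per row key from binlog events, then flag cache entries that disagree.

```python
def find_stale_entries(binlog_events, cache_snapshot):
    """Toy audit: compare binlog-derived state with cache contents.

    Hypothetical sketch; the event dict shape ("key", "commit_ts",
    "value") and function name are assumptions for illustration.
    """
    # Replay events in commit order to find the latest value per key.
    latest = {}
    for event in sorted(binlog_events, key=lambda e: e["commit_ts"]):
        latest[event["key"]] = event["value"]

    # Any cached value that disagrees with the latest commit is stale.
    stale = []
    for key, cached_value in cache_snapshot.items():
        if key in latest and cached_value != latest[key]:
            stale.append(key)
    return sorted(stale)
```

Running such a comparison continuously against the binlog stream gives an observable staleness metric, which pairs naturally with the enhanced telemetry dashboards mentioned above.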