Chapter 2 Memory Hierarchy Design

Memory Technology and Optimizations

Performance metrics

  • Latency
    • Access time
      • Time between a read request and when the desired word arrives
    • Cycle time
      • Minimum time between unrelated requests to memory
  • Bandwidth
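
For example (hypothetical numbers, not from the notes): a DRAM with a 50 ns access time and a 100 ns cycle time delivers the requested word 50 ns after a read request, but cannot accept another unrelated request until 100 ns have passed, so sustained random-access bandwidth is limited by the cycle time rather than the access time.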

![[Pasted image 20241220174926.png|500]]

High Bandwidth Memory (HBM)

  • A packaging innovation rather than a circuit innovation
  • Reduces access latency by shortening the interconnect delay between the DRAM and the processor
  • Interposer stacking (2.5D) is available; vertical stacking (3D) is still under development due to heat constraints

![[Pasted image 20241220175322.png|650]]

Six Basic Cache Optimizations

Average memory access time:

  • Average memory access time = Hit time + Miss rate × Miss penalty
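
For example (hypothetical numbers): with a 1-cycle hit time, a 5% miss rate, and a 20-cycle miss penalty, the average memory access time is 1 + 0.05 × 20 = 2 cycles, so even a small miss rate doubles the effective access time.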

Six basic cache optimizations:

  • Reduce the miss rate
    • Larger block size (reduces compulsory misses)
      • Increases conflict misses and the miss penalty
    • Larger total cache capacity (reduces capacity misses)
      • Increases hit time, cost, and power
    • Higher associativity (reduces conflict misses; see the toy simulation after the figures below)
      • Increases hit time
        • More tag comparisons
      • Increases power
  • Reduce the miss penalty
    • More levels of cache
      • The L1 miss penalty is reduced by introducing an L2 cache
      • Balances fast hits (small, fast L1) against few misses (larger L2)
    • Giving priority to read misses over writes
      • A write buffer creates a "read-after-write" hazard: a read miss could fetch stale data before a buffered write reaches memory
        • Either let the read wait until the write buffer is empty
        • Or check the contents of the write buffer on a read miss (see the sketch after this list)
          • Conflict -> wait
          • No conflict -> handle the read miss first
        • Prioritizing reads reduces unnecessary stalling of read misses
  • Reduce the time to hit in the cache
    • Avoiding address translation when indexing the cache
      • A virtual cache avoids address translation
        • Virtual address -> cached block
          • The virtual address is used for both set indexing and tag comparison
      • Obstacles to realizing a virtual cache
        • Ensuring protection
          • Copy the protection bits from the TLB into an added cache field on a miss
        • Supporting process switching
          • Add a PID field to each cache entry
        • Handling aliases, i.e. multiple virtual addresses mapping to the same physical block (cache consistency)
          • Antialiasing: compare the physical address of the fetched block with those of the existing blocks
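
As an illustration of the write-buffer check referenced above, here is a minimal sketch (not from the notes; `WriteBufEntry`, `check_write_buffer`, the buffer depth, and the 64-byte block size are all assumptions) of comparing a read miss against pending stores:

```c
/* Minimal sketch of checking the write buffer on a read miss.
 * If a pending store targets the same block as the read, the read must wait
 * (or the buffered value can be forwarded); otherwise the read miss is
 * serviced immediately instead of waiting for the buffer to drain. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WRITE_BUF_SLOTS 4
#define BLOCK_MASK (~(uint64_t)63)          /* assume 64-byte cache blocks */

typedef struct {
    bool     valid;
    uint64_t addr;                          /* address of the pending store */
    uint64_t data;
} WriteBufEntry;

static WriteBufEntry write_buf[WRITE_BUF_SLOTS];

/* On a read miss: returns true if a buffered store conflicts with the read
 * (same cache block) and forwards its data; returns false if there is no
 * conflict, so the read miss may be handled before the writes. */
static bool check_write_buffer(uint64_t read_addr, uint64_t *data_out)
{
    for (int i = 0; i < WRITE_BUF_SLOTS; i++) {
        if (write_buf[i].valid &&
            (write_buf[i].addr & BLOCK_MASK) == (read_addr & BLOCK_MASK)) {
            *data_out = write_buf[i].data;  /* conflict: wait, or forward   */
            return true;
        }
    }
    return false;                           /* no conflict: read goes first */
}

int main(void)
{
    write_buf[0] = (WriteBufEntry){ .valid = true, .addr = 0x1000, .data = 42 };

    uint64_t v;
    if (check_write_buffer(0x1008, &v))     /* same 64-byte block as 0x1000 */
        printf("conflict: buffered value %llu forwarded\n", (unsigned long long)v);
    if (!check_write_buffer(0x2000, &v))    /* unrelated block              */
        printf("no conflict: read miss serviced before writes drain\n");
    return 0;
}
```

If the scan finds no block-address match, the read miss can be sent to the next memory level right away instead of stalling behind buffered writes; real hardware would typically perform this comparison in parallel across all buffer entries rather than with a sequential loop.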

![[Pasted image 20241220180510.png|650]]

![[Pasted image 20241220175802.png]]
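
To make the "higher associativity reduces conflict misses" point concrete, the following toy cache model (a sketch, not part of the original notes; the geometry and LRU policy are assumptions for illustration) counts misses when two blocks compete for the same set:

```c
/* Toy illustration of conflict misses vs. associativity: two blocks that map
 * to the same set are accessed alternately. Direct-mapped (1 way) evicts one
 * to fetch the other every time; with 2 ways both fit and only the two
 * compulsory misses remain. Cache geometry here is made up. */
#include <stdio.h>
#include <stdint.h>

#define NUM_SETS   64                       /* sets in the toy cache  */
#define MAX_WAYS   4
#define BLOCK_BITS 6                        /* 64-byte blocks         */

typedef struct {
    uint64_t tag[MAX_WAYS];
    int      valid[MAX_WAYS];
    int      age[MAX_WAYS];                 /* larger = less recently used */
} Set;

/* Simulate one access with LRU replacement; returns 1 on miss, 0 on hit. */
static int access_cache(Set *sets, int ways, uint64_t addr)
{
    uint64_t block = addr >> BLOCK_BITS;
    Set     *s     = &sets[block % NUM_SETS];
    uint64_t tag   = block / NUM_SETS;

    for (int w = 0; w < ways; w++) {
        if (s->valid[w] && s->tag[w] == tag) {      /* hit */
            for (int k = 0; k < ways; k++) s->age[k]++;
            s->age[w] = 0;
            return 0;
        }
    }

    int victim = 0;                                 /* miss: pick invalid or LRU way */
    for (int w = 0; w < ways; w++) {
        if (!s->valid[w]) { victim = w; break; }
        if (s->age[w] > s->age[victim]) victim = w;
    }
    for (int k = 0; k < ways; k++) s->age[k]++;
    s->valid[victim] = 1;
    s->tag[victim]   = tag;
    s->age[victim]   = 0;
    return 1;
}

static int run(int ways)
{
    Set sets[NUM_SETS] = {0};
    /* Two addresses exactly one cache-size stride apart -> same set, different tags. */
    uint64_t a = 0, b = (uint64_t)NUM_SETS << BLOCK_BITS;
    int misses = 0;
    for (int i = 0; i < 100; i++) {
        misses += access_cache(sets, ways, a);
        misses += access_cache(sets, ways, b);
    }
    return misses;
}

int main(void)
{
    printf("1-way (direct-mapped) misses: %d / 200\n", run(1));  /* 200: all conflict  */
    printf("2-way set-associative misses: %d / 200\n", run(2));  /* 2: compulsory only */
    return 0;
}
```

With one way, the two blocks evict each other on every access, so all 200 accesses miss; with two ways both blocks fit and only the two compulsory misses remain. The cost, as noted in the list above, is more tag comparisons and power per access.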

Ten Advanced Cache Optimizations

Advanced Cache Optimizations

  • Reducing the hit time
  • Reducing the miss penalty
  • Reducing the miss rate
  • Increasing cache bandwidth
  • Reducing the miss penalty or miss rate via parallelism
