UH-MEM: Utility-Based Hybrid Memory Management

Yang Li, Saugata Ghose, Jongmoo Choi, Jin Sun, Hui Wang, Onur Mutlu
Executive Summary

- DRAM faces significant technology scaling difficulties
- Emerging memory technologies overcome these difficulties (e.g., high capacity, low idle power), but have other shortcomings (e.g., slower than DRAM)
- A hybrid memory system pairs DRAM with an emerging memory technology
  - Goal: combine the benefits of both memories in a cost-effective manner
  - Problem: Which memory do we place each page in, to optimize system performance?
- Our approach: UH-MEM (Utility-based Hybrid MEmory Management)
  - Key Idea: for each page, estimate the utility (i.e., performance impact) of migrating the page, then use utility to guide page placement
  - Key Mechanism: a comprehensive model that estimates utility using memory access characteristics and each application's impact on system performance
- UH-MEM improves performance by 14% on average over the best of three state-of-the-art hybrid memory managers
Outline

- Background
- Existing Solutions
- UH-MEM: Goal and Key Idea
- Key Mechanisms of UH-MEM
- Evaluation
- Conclusion
State-of-the-Art Memory Technologies

- DRAM scaling has already become increasingly difficult
  - Increasing cell leakage current, reduced cell reliability, increasing manufacturing difficulties [Kim+ ISCA 2014], [Liu+ ISCA 2013], [Mutlu IMW 2017], [Mutlu DATE 2017]
  - Difficult to significantly improve capacity, speed, energy
- Emerging memory technologies are promising:
  - 3D-Stacked DRAM: higher bandwidth, smaller capacity
  - Reduced-Latency DRAM (e.g., RLDRAM, TL-DRAM): lower latency, higher cost
  - Low-Power DRAM (e.g., LPDDR3, LPDDR4): lower power, higher latency
  - Non-Volatile Memory (NVM) (e.g., PCM, STT-RAM, ReRAM, 3D XPoint): larger capacity, higher latency, higher dynamic power, lower endurance
Hybrid Memory System

- Pair multiple memory technologies together to exploit the advantages of each memory
  - e.g., DRAM-NVM: DRAM-like latency, NVM-like capacity

[Diagram: Cores/Caches connect to Memory Controllers; Channel A leads to Memory A (Fast, Small), Channel B to Memory B (Large, Slow)]
Hybrid Memory System (cont.)

- Separate channels: Memory A and Memory B can be accessed in parallel

[Diagram: as before, with Memory A labeled DRAM and Memory B labeled NVM]
Hybrid Memory System: Background

- Bank: contains multiple arrays of cells
  - Multiple banks can serve accesses in parallel
- Row buffer: internal buffer that holds the open row in a bank
  - Row buffer hits: serviced ~3x faster, with similar latency in both Memory A and Memory B
  - Row buffer misses: latency is higher for Memory B

[Diagram: each bank in Memory A (Fast, Small) and Memory B (Large, Slow) has a row buffer; both memories connect through the memory controllers to the cores/caches]
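The row buffer behavior above can be sketched as a toy latency model. The latency values and the hit-detection rule are illustrative assumptions, not the paper's parameters:

```python
# Toy model of row buffer behavior in a hybrid memory system.
# Latency values are illustrative assumptions, not the paper's timings.
HIT_LATENCY_NS = 15                       # row buffer hit: similar latency in both memories
MISS_LATENCY_NS = {"A": 45, "B": 135}     # row buffer miss: higher latency in slow Memory B

def access_latency_ns(memory: str, open_row: int, requested_row: int) -> int:
    """Latency of one access: a hit occurs when the requested row is already open."""
    if requested_row == open_row:
        return HIT_LATENCY_NS             # hits are ~3x faster and equal in Memory A and B
    return MISS_LATENCY_NS[memory]        # misses depend on which memory holds the page
```

This is why only row buffer misses matter for placement: a hit costs the same regardless of which memory the page lives in.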
Hybrid Memory System: Problem

- Memory A is fast, but small
- Load should be balanced across both channels
- Page migrations have performance and energy overhead

Which memory do we place each page in, to maximize system performance?

[Diagram: Page 1 and Page 2 both sit in Memory B, keeping Channel B busy while Channel A is idle]
Existing Solutions

- ALL [Qureshi+ ISCA 2009]
  - Migrate every page from slow memory to fast memory when the page is accessed
  - Fast memory acts as an LRU cache: pages are evicted when it is full
- FREQ: access-frequency-based approach [Ramos+ ICS 2011], [Jiang+ HPCA 2010]
  - Access frequency: number of times a page is accessed
  - Keep pages with high access frequency in fast memory
- RBLA: access frequency and row buffer locality based approach [Yoon+ ICCD 2012]
  - Row buffer locality: fraction of row buffer hits out of all memory accesses to a page
  - Keep pages with high access frequency and low row buffer locality in fast memory
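The differences between the three policies can be sketched as per-page decision rules. The thresholds below are illustrative assumptions; the actual proposals differ in detail:

```python
# Simplified per-page migration decisions for the three prior policies.
# Threshold values are illustrative assumptions, not from the cited papers.

def all_policy(page_in_fast: bool) -> bool:
    """ALL: migrate every slow-memory page when it is accessed
    (fast memory acts as an LRU cache)."""
    return not page_in_fast

def freq_policy(access_freq: int, freq_threshold: int = 8) -> bool:
    """FREQ: migrate pages whose access frequency is high."""
    return access_freq >= freq_threshold

def rbla_policy(access_freq: int, rb_hits: int, rb_misses: int,
                freq_threshold: int = 8, rbl_threshold: float = 0.5) -> bool:
    """RBLA: migrate frequently accessed pages with LOW row buffer locality,
    since row buffer hits are equally fast in both memories."""
    rbl = rb_hits / max(rb_hits + rb_misses, 1)
    return access_freq >= freq_threshold and rbl < rbl_threshold
```

Note that each rule tracks progressively more state (nothing, a counter, two counters) but none of them models the system-level performance impact of a migration.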
Weaknesses of Existing Solutions

- They are all heuristics that consider only a limited part of memory access behavior
- They do not directly capture the overall system performance impact of data placement decisions
- Example: none of them captures memory-level parallelism (MLP)
  - MLP: the number of concurrent memory requests from the same application when a page is accessed
  - MLP affects how much a page migration helps performance
Importance of Memory-Level Parallelism

[Timeline diagrams comparing stall time before and after migration]

- Low MLP: requests to Page 2 in Memory B do not overlap with other requests. Migrating this one page reduces stall time by T.
- High MLP: requests to Page 1 and Page 3 in Memory B overlap. Two pages must be migrated to reduce stall time by T; migrating one page alone does not help.

Page migration decisions need to consider MLP
Our Goal

A generalized mechanism that
1. Directly estimates the performance benefit of migrating a page between any two types of memory
2. Places only the performance-critical data in the fast memory
Utility-Based Hybrid Memory Management

- A memory manager that works for any hybrid memory
  - e.g., DRAM-NVM, DRAM-RLDRAM
- Key Idea
  - For each page, use comprehensive characteristics to calculate the estimated utility (i.e., performance impact) of migrating the page from one memory to the other
  - Migrate only the pages with the highest utility (i.e., the pages that improve system performance the most when migrated)
Key Mechanisms of UH-MEM

- For each page, estimate utility using a performance model:

  Utility_i = ΔStallTime_i × Sensitivity_i

  - Application stall time reduction (ΔStallTime_i): how much would migrating a page benefit the performance of the application that the page belongs to?
  - Application performance sensitivity (Sensitivity_i): how much does improving a single application's performance increase overall system performance?
- Migrate only pages whose utility exceeds the migration threshold from slow memory to fast memory
- Periodically adjust the migration threshold
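The selection step can be sketched as follows, assuming the two per-page estimates are already available. This is a minimal illustration of Utility_i = ΔStallTime_i × Sensitivity_i, not the full mechanism:

```python
def pages_to_migrate(page_estimates: dict, threshold: float) -> list:
    """Migrate only pages whose estimated utility exceeds the migration threshold.
    page_estimates maps page id -> (delta_stall_time, sensitivity);
    utility = delta_stall_time * sensitivity."""
    return [pid for pid, (d_stall, sens) in page_estimates.items()
            if d_stall * sens > threshold]
```

A page with a large stall-time reduction but belonging to an application the system is insensitive to can thus lose out to a page with a smaller reduction in a more performance-critical application.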
Estimating Application Stall Time Reduction

- Use access frequency, row buffer locality, and MLP to estimate the application's stall time reduction
- Access frequency and row buffer locality together determine the number of row buffer misses, which gives the access latency reduction for the page if it is migrated
- Why not row buffer hits? Hits have equal latency in both memories, so there is no need to track them.
Calculating Total Access Latency Reduction

- Use a counter to track the number of row buffer misses (#RBMiss)
- Calculate the change in the number of memory cycles if the page is migrated:

  ΔAccessLatency_read  = #RBMiss_read  × (t_MemB,read  − t_MemA,read)
  ΔAccessLatency_write = #RBMiss_write × (t_MemB,write − t_MemA,write)

  (t_MemB − t_MemA is the reduction in miss latency if we migrate from Memory B to Memory A)
- Separate counters are needed for reads and writes
  - Each memory can have different latencies for reads and for writes
  - This allows us to generalize UH-MEM to any memory
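The two equations translate directly into code. The counter values and timings below are illustrative (the 67.5 ns vs. 15 ns row activation gap comes from the baseline configuration later in the talk):

```python
def delta_access_latency(rb_miss_count: int, t_miss_mem_b: float, t_miss_mem_a: float) -> float:
    """Reduction in total access latency if the page migrates from Memory B to Memory A.
    Only row buffer MISSES are counted: hits cost the same in both memories.
    Called separately for reads and writes, since each memory may have
    different read and write latencies."""
    return rb_miss_count * (t_miss_mem_b - t_miss_mem_a)

# Separate read/write counters per page (values here are illustrative):
d_read = delta_access_latency(rb_miss_count=40, t_miss_mem_b=67.5, t_miss_mem_a=15.0)
d_write = delta_access_latency(rb_miss_count=10, t_miss_mem_b=180.0, t_miss_mem_a=15.0)
```

Keeping the per-direction timings as parameters is what lets the same formula cover any pair of memory technologies.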
Estimating Application Stall Time Reduction (cont.)

- Combining the access latency reduction for the page due to migration with MLP yields the application stall time reduction
- MLP characterizes the number of accesses that overlap; how do overlaps affect application performance?
- Determining MLP_read and MLP_write for each page:
  - Count concurrent read/write requests from the same application
  - The amount of concurrency may vary over time → use the average concurrency over an interval
Calculating Stall Time Reduction

  ΔStallTime_i = ΔAccessLatency_read / MLP_read + p × ΔAccessLatency_write / MLP_write

- ΔAccessLatency is the total number of cycles reduced across all requests
- Dividing by MLP accounts for overlap: when more requests overlap, their latency reductions also overlap, so migrating the page yields a smaller improvement
- Not all writes fall on the critical path of execution, due to write queues inside memory: p is the probability that the application stalls while the queue is being drained
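The equation on this slide can be sketched in code; the inputs below are illustrative values, not measurements:

```python
def delta_stall_time(d_lat_read: float, mlp_read: float,
                     d_lat_write: float, mlp_write: float, p: float) -> float:
    """DeltaStallTime_i = DeltaAccessLatency_read / MLP_read
                        + p * DeltaAccessLatency_write / MLP_write.
    Dividing by MLP models overlap: when requests overlap, their latency
    reductions overlap too, so the stall-time benefit of migration shrinks.
    p is the probability that the application stalls on a write-queue drain."""
    return d_lat_read / mlp_read + p * d_lat_write / mlp_write
```

With MLP = 1 and p = 1 this degenerates to the raw latency reduction; higher MLP or a lower stall probability discounts the benefit accordingly.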
Estimating Application Performance Sensitivity

- Objective: improve system job throughput
- Weighted speedup for multiprogrammed workloads: proportional to the number of jobs (i.e., applications) completed per unit time [Eyerman+ IEEE Micro 2008]
  - For each application i: Speedup_i = T_alone,i / T_shared,i
  - WeightedSpeedup = Σ_{i=1}^{N} Speedup_i
- How does each application contribute to the weighted speedup?
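The weighted speedup definition can be written down directly (the runtimes below are illustrative):

```python
def weighted_speedup(t_alone: list, t_shared: list) -> float:
    """WeightedSpeedup = sum over applications i of T_alone,i / T_shared,i,
    where T_alone,i is app i's runtime when run alone on the system and
    T_shared,i is its runtime when sharing the system with the others."""
    return sum(a / s for a, s in zip(t_alone, t_shared))
```

Each application's term ranges from near 0 (heavily slowed down) to 1 (no interference), so the metric rewards placements that keep every application close to its stand-alone performance.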
Estimating Application Performance Sensitivity (cont.)

- Sensitivity: for each cycle of stall time saved, how much does our target metric improve?

  Sensitivity_i = ΔPerformance_i / ΔT_shared,i = Speedup_i / T_shared,i

  (derived mathematically; see the paper for details. ΔT_shared,i is the number of cycles saved in the last interval; ΔPerformance_i is how much of the weighted speedup the application gained in the last interval)
- T_shared,i can be counted at runtime
- Speedup_i can be estimated using previously proposed models [Moscibroda+ USENIX Security 2007], [Mutlu+ MICRO 2007], [Ebrahimi+ ASPLOS 2010], [Ebrahimi+ ISCA 2011], [Subramanian+ HPCA 2013], [Subramanian+ MICRO 2015]
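Once Speedup_i (from the cited estimation models) and T_shared,i (counted at runtime) are available, the sensitivity estimate is a one-liner:

```python
def sensitivity(speedup_i: float, t_shared_i: float) -> float:
    """Sensitivity_i = Speedup_i / T_shared,i: the weighted-speedup gain
    per cycle of stall time saved for application i."""
    return speedup_i / t_shared_i
```

Intuitively, saving a cycle matters more for an application that already runs close to its stand-alone speed over a short shared runtime than for one that is heavily slowed down over a long one.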
Key Mechanisms of UH-MEM (recap)

- For each page, estimate utility using the performance model: Utility_i = ΔStallTime_i × Sensitivity_i
- Migrate only pages whose utility exceeds the migration threshold from slow memory to fast memory
- Periodically adjust the migration threshold
  - A hill-climbing algorithm balances migrations and channel load
  - More details in the paper
System Configuration

- Cycle-accurate hybrid memory simulator
  - Models the memory system in detail
  - 8 cores, 2.67 GHz
  - LLC: 2 MB, 8-way, 64 B cache blocks
  - We will release the simulator on GitHub: https://github.com/CMU-SAFARI/UHMEM
- Baseline configuration (DRAM-NVM)
  - Memory latencies based on real products and prior studies
  - Energy numbers derived from prior work [Lee+ ISCA 2009]

  Memory Timing Parameter       | DRAM  | NVM
  row activation time (t_RCD)   | 15 ns | 67.5 ns
  column access latency (t_CL)  | 15 ns | 15 ns
  write recovery time (t_WR)    | 15 ns | 180 ns
  precharge latency (t_RP)      | 15 ns | 15 ns
Benchmarks

- Individual applications
  - From the SPEC CPU2006 and YCSB benchmark suites
  - Applications classified as memory-intensive or non-memory-intensive based on last-level cache MPKI (misses per kilo-instruction)
- 40 multiprogrammed workloads, each consisting of 8 applications
- Workload memory intensity: the proportion of memory-intensive applications within the workload
Results: System Performance

[Chart: normalized weighted speedup vs. workload memory intensity category (0%, 25%, 50%, 75%, 100%) for ALL, FREQ, RBLA, and UH-MEM; UH-MEM's gains are annotated at 3%, 5%, 9%, and 14%]

UH-MEM improves system performance over the best state-of-the-art hybrid memory manager
Results: Sensitivity to Slow Memory Latency

- We vary t_RCD and t_WR of the slow memory

[Chart: weighted speedup vs. slow memory latency multiplier (t_RCD: x3.0, x4.0, x4.5, x6.0, x7.5; t_WR: x3.0, x4.0, x12, x16, x20) for ALL, FREQ, RBLA, and UH-MEM; UH-MEM's gains are annotated at 6%, 8%, 13%, 13%, and 14%]

UH-MEM improves system performance for a wide variety of hybrid memory systems
More Results in the Paper

- UH-MEM reduces energy consumption and achieves similar or better fairness compared with prior proposals
- Sensitivity study on the size of fast memory
  - We vary the size of fast memory from 256 MB to 2 GB
  - UH-MEM consistently outperforms prior proposals
- Hardware overhead: 42.9 kB (~2% of the last-level cache)
  - Main overhead: hardware structures to store access frequency, row buffer locality, and MLP
Conclusion

- DRAM faces significant technology scaling difficulties
- Emerging memory technologies overcome these difficulties (e.g., high capacity, low idle power), but have other shortcomings (e.g., slower than DRAM)
- A hybrid memory system pairs DRAM with an emerging memory technology
  - Goal: combine the benefits of both memories in a cost-effective manner
  - Problem: Which memory do we place each page in, to optimize system performance?
- Our approach: UH-MEM (Utility-based Hybrid MEmory Management)
  - Key Idea: for each page, estimate the utility (i.e., performance impact) of migrating the page, then use utility to guide page placement
  - Key Mechanism: a comprehensive model that estimates utility using memory access characteristics and each application's impact on system performance
- UH-MEM improves performance by 14% on average over the best of three state-of-the-art hybrid memory managers
UH-MEM: Utility-Based Hybrid Memory Management

Yang Li, Saugata Ghose, Jongmoo Choi, Jin Sun, Hui Wang, Onur Mutlu
Results: Fairness

[Chart: normalized unfairness vs. workload memory intensity category (0%–100%) for ALL, FREQ, RBLA, and UH-MEM]

UH-MEM achieves similar or better fairness compared with prior proposals
Results: Total Stall Time

[Chart: average application stall time (×10^9 cycles) vs. workload memory intensity category (0%–100%) for ALL, FREQ, RBLA, and UH-MEM]

UH-MEM reduces application stall time compared with prior proposals
Results: Energy Consumption

[Charts: memory energy (J) vs. workload memory intensity category (0%–100%) for ALL, FREQ, RBLA, and UH-MEM]

UH-MEM reduces energy consumption for data-intensive workloads
Results: Sensitivity to Fast Memory Capacity

[Chart: weighted speedup vs. fast memory capacity (256 MB, 512 MB, 1 GB, 2 GB) for ALL, FREQ, RBLA, and UH-MEM]

UH-MEM outperforms prior proposals under different fast memory capacities
MLP Distribution for All Pages

[Histograms of per-page MLP frequency (%) for (a) soplex, (b) xalancbmk, and (c) YCSB-B]
Main Hardware Cost

[Table not captured in the transcript]
Baseline System Parameters

[Table not captured in the transcript]
Impact of Different Factors on Stall Time

[Table: correlation coefficients between the average stall time per page and different factors (AF: access frequency; RBL: row buffer locality; MLP: memory-level parallelism)]
Migration Threshold Determination

- Use hill climbing to determine the threshold
- At the end of the current period, compare Performance_CurrentPeriod with Performance_LastPeriod
- If system performance improved:
  - the threshold adjustment at the end of the last period benefited system performance
  - adjust the threshold in the same direction
- Otherwise:
  - adjust the threshold in the opposite direction
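The hill-climbing rule can be sketched as follows; the step size and the non-negativity clamp are assumptions for illustration:

```python
def adjust_threshold(threshold: float, step: float, direction: int,
                     perf_current: float, perf_last: float):
    """Hill-climbing adjustment of the migration threshold (sketch).
    If performance improved since the last period, keep moving the threshold
    in the same direction; otherwise reverse direction."""
    if perf_current < perf_last:
        direction = -direction                   # last adjustment hurt: reverse course
    threshold = max(0.0, threshold + direction * step)
    return threshold, direction
```

The caller would invoke this once per period, feeding back the measured system performance of the period that just ended.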
Overall UH-MEM Mechanism

- Period-based approach: each period has a migration threshold
- Action 1: recalculate the page's utility
- Action 2: compare the page's utility with the migration threshold, and decide whether to migrate
- Action 3: adjust the migration threshold at the end of a quantum

[Diagram: the memory controllers recalculate page utility, compare it against the threshold to decide on migration between Memory A (Fast, Small) and Memory B (Large, Slow), and adjust the migration threshold each period]
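One period of the mechanism might be sketched as below; `estimate_utility` and `migrate` are hypothetical hooks standing in for the utility model and the migration machinery described earlier:

```python
def run_period(pages, threshold, estimate_utility, migrate):
    """One UH-MEM period: recompute each page's utility (Action 1) and migrate
    pages whose utility exceeds the threshold (Action 2). Threshold adjustment
    (Action 3) happens once per period, outside this loop.
    `estimate_utility` and `migrate` are caller-supplied hooks (assumptions)."""
    migrated = []
    for page in pages:
        utility = estimate_utility(page)      # Action 1
        if utility > threshold:               # Action 2
            migrate(page)
            migrated.append(page)
    return migrated
```

In hardware these actions would be performed by the memory controllers rather than software, but the control flow per period is the same.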
UH-MEM: Putting It All Together