Category: LINUX

2012-12-23 20:07:32

This time, building on the previous post, I took a closer look at the details of identifying bottlenecks and accelerating them.
Implementation Details
1. Tracking Dependent and Nested Bottlenecks
Sometimes a thread has to wait for one bottleneck while it is executing another bottleneck.
Similar situations occur when bottlenecks are nested.
The thread's waiting cycles should be attributed to the bottleneck that is the root cause of the wait.
 
Determining the root-cause bottleneck
To determine the bottleneck Bj that is the root cause of the wait for each bottleneck Bi, we need to follow the dependency chain between bottlenecks until a bottleneck Bj is found that is not waiting for a different bottleneck.
To follow the dependency chain we need to know (a) which thread is executing a bottleneck and (b) which bottleneck that thread is currently waiting for.
      To know (a), we add an executer_vec bit vector to each BT entry that records all current executers of each bottleneck. (I do not fully understand this part.) BT: the Bottleneck Table.
      To know (b), we add a small Current Bottleneck Table, associated with the BT and indexed by hardware thread ID, that gives the bid the thread is currently waiting for.
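To make the mechanism concrete, here is a minimal software sketch of the dependency-chain walk. The structure names, field names, and table sizes (bt_entry_t, CBT, attribute_wait, and so on) are assumptions made for illustration; the actual BIS design does this in hardware with the BT and the Current Bottleneck Table.

#include <stdint.h>

#define MAX_BOTTLENECKS 32
#define MAX_THREADS     64
#define NO_BID          (-1)

/* One Bottleneck Table (BT) entry: which hardware threads are currently
 * executing this bottleneck, and the cycles charged to it so far. */
typedef struct {
    uint64_t executer_vec;    /* bit t set => hardware thread t executes it   */
    uint64_t waiting_cycles;  /* waiting cycles attributed to this bottleneck */
} bt_entry_t;

static bt_entry_t BT[MAX_BOTTLENECKS];

/* Current Bottleneck Table: indexed by hardware thread id, gives the bid
 * that thread is currently waiting for (NO_BID if it is not waiting). */
static int CBT[MAX_THREADS];

/* Return one hardware thread currently executing bottleneck bid, or -1. */
static int first_executer(int bid)
{
    for (int t = 0; t < MAX_THREADS; t++)
        if (BT[bid].executer_vec & (1ULL << t))
            return t;
    return -1;
}

/* Follow the dependency chain starting at bottleneck bid until we reach a
 * bottleneck whose executer is not itself waiting; that bottleneck is the
 * root cause of the wait and receives the waiting cycles. The depth bound
 * stops the walk if the waits happen to form a cycle. */
void attribute_wait(int bid, uint64_t cycles)
{
    for (int depth = 0; depth < MAX_BOTTLENECKS; depth++) {
        int t = first_executer(bid);
        if (t < 0 || CBT[t] == NO_BID)
            break;            /* executer is not waiting: root cause found    */
        bid = CBT[t];         /* executer waits on another bottleneck: follow */
    }
    BT[bid].waiting_cycles += cycles;
}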
 
Bottleneck Table (BT)
Hardware cost
 
 
The paper's hardware-cost table shows that, for a CMP with 2 large cores and 56 small cores, these structures occupy 18.7 KB of storage in total. That 18.7 KB covers the Bottleneck Table, the Current Bottleneck Table, the Acceleration Index Tables, and the Scheduling Buffers.
 
Handling interrupts
The operating system can interrupt the cores. If a small core is interrupted while it is waiting for the large core to execute a bottleneck, it (the small core) does not service the interrupt until a BottleneckDone or BottleneckCallAbort is received. (I do not fully understand these two terms.)
If a large core gets an interrupt while accelerating a bottleneck, it aborts all bottlenecks in its Scheduling Buffer, finishes the current bottleneck, and then services the interrupt.
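The two cases are easy to state as a small sketch. All function and message names below (wait_for_large_core_message, service_interrupt, and so on) are assumptions made for illustration, not interfaces defined in the paper.

/* Hypothetical message types and helpers for this sketch only. */
typedef enum { MSG_BOTTLENECK_DONE, MSG_BOTTLENECK_CALL_ABORT, MSG_OTHER } msg_t;

extern msg_t wait_for_large_core_message(void);  /* block for next message   */
extern void  service_interrupt(void);            /* jump to the OS handler   */
extern void  abort_scheduling_buffer(void);      /* drop queued bottlenecks  */
extern void  finish_current_bottleneck(void);    /* run in-flight one to end */

/* Small core: interrupted while waiting for the large core to execute its
 * bottleneck. The interrupt is deferred until the large core either finishes
 * the bottleneck (BottleneckDone) or bounces the call back to the small core
 * (BottleneckCallAbort). */
void small_core_on_interrupt(void)
{
    msg_t m;
    do {
        m = wait_for_large_core_message();
    } while (m != MSG_BOTTLENECK_DONE && m != MSG_BOTTLENECK_CALL_ABORT);
    service_interrupt();
}

/* Large core: interrupted while accelerating bottlenecks. It aborts every
 * request still queued in its Scheduling Buffer, finishes the bottleneck
 * already in flight, and only then services the interrupt. */
void large_core_on_interrupt(void)
{
    abort_scheduling_buffer();
    finish_current_bottleneck();
    service_interrupt();
}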
 
 
Transfer of Cache State to the Large Core
A bottleneck executing remotely on the large core may require data that resides in the small core, thereby producing cache misses that reduce the benefit of acceleration. Data Marshalling has been proposed to reduce these cache misses by identifying and marshalling the cache lines required by the remote core.
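As a rough software analogue of the idea, the sketch below tracks the cache lines written by the small core before the bottleneck call and pushes them to the large core when the bottleneck is shipped. The helper push_line_to_core and the set sizes are assumptions for illustration; the actual Data Marshalling proposal does the tracking and the transfer in hardware.

#include <stdint.h>
#include <stddef.h>

#define LINE_SIZE       64   /* assumed cache-line size in bytes          */
#define MARSHAL_SET_MAX 16   /* assumed capacity of the tracked-line set  */

/* Hypothetical hook: copy one cache line into the destination core's cache. */
extern void push_line_to_core(int dest_core, uintptr_t line_addr);

static uintptr_t marshal_set[MARSHAL_SET_MAX];
static size_t    marshal_count;

/* Record a store made on the small core before the bottleneck call; these
 * are the lines the large core is likely to touch when it runs the bottleneck. */
void record_store(uintptr_t addr)
{
    uintptr_t line = addr & ~(uintptr_t)(LINE_SIZE - 1);
    for (size_t i = 0; i < marshal_count; i++)
        if (marshal_set[i] == line)
            return;                          /* line already tracked          */
    if (marshal_count < MARSHAL_SET_MAX)
        marshal_set[marshal_count++] = line; /* drop extras if the set is full */
}

/* When the bottleneck is shipped to the large core, push the tracked lines
 * so the remote execution hits in its cache instead of missing. */
void marshal_to(int large_core_id)
{
    for (size_t i = 0; i < marshal_count; i++)
        push_line_to_core(large_core_id, marshal_set[i]);
    marshal_count = 0;
}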
 
 
 