Jacob Brock
ShareJIT: JIT Code Cache Sharing across Processes and Its Practical Implementation
Just-in-time (JIT) compilation coupled with code caching is widely used to improve performance in dynamic programming language implementations. These code caches, along with the associated profiling data for the hot code, however, consume…
Prediction and bounds on shared cache demand from memory access interleaving
Cache in multicore machines is often shared, and the cache performance depends on how memory accesses belonging to different programs interleave with one another. The full range of performance possibilities includes all possible interleavi…
Cache Exclusivity and Sharing
A problem on multicore systems is cache sharing, where the cache occupancy of a program depends on the cache usage of peer programs. Exclusive cache hierarchy as used on AMD processors is an effective solution to allow processor cores to h…
LD
Data race detection has become an important problem in GPU programming. Previous designs of CPU race-checking tools are mainly task parallel and incur high overhead on GPUs due to access instrumentation, especially when monitoring many tho…
Hardware support for protective and collaborative cache sharing
Shared caches are generally optimized to maximize the overall throughput, fairness, or both, among multiple competing programs. In shared environments and compute clouds, users are often unrelated to each other. In such circumstances, an o…