
Chapter 3

Proposed Work

3.1 Problem Definition:

Secondary storage has always been far more abundant than primary memory, and it is also very cheap. However, it suffers from a serious drawback: secondary storage devices are very slow compared to RAM. Read/write access to the hard disk has always been a performance bottleneck, and therefore, for optimal overall system performance, access to secondary memory must be minimized. The same problem is present in traditional paging as well. Note that programs cannot run until the data they need is present in RAM.

Therefore, every time an application needs data that is not present in RAM, that data must be brought into RAM, and the paging process takes place. Since a program cannot run if its data is not in RAM, overall performance suffers during paging, because existing pages must be paged out before new pages can be brought in. However, paging is crucial, as it allows the OS to use more memory than is physically available. If paging were performed not on secondary storage but on a faster device, this problem could be solved.

3.2 Features:

1. High system performance: Because the slower disk is accessed less often for paging, the overall paging process is sped up.

2. Lower memory usage: Under low memory load, memory usage stays low. The process is adaptive in the sense that the size of the compressed buffer adjusts dynamically to memory load conditions.

3.3 Project Scope:

Adaptive Compressed Paging is a simulation program that aims to improve the existing paging mechanism. We implement it in two segments: in the first segment we implement the traditional paging system, and in the second, Adaptive Compressed Paging. We then compare the performance of the two systems and present the results. Presenting the results visually helps one clearly see the advantage of such a system over traditional paging. This project does not aim to be a commercial product; rather, it is a research subject. Our goal is to provide strong evidence that adaptive compressed paging has an advantage over the traditional system.

3.4 Goals:

The main goal of this project is to implement Adaptive Compressed Paging. We have tried to show how an adaptive approach to compressed paging can improve system performance by minimizing access to secondary storage, i.e. the hard disk. The project is essentially a simulation of adaptive compressed paging in user space; in a true implementation, such a system must be built inside the kernel itself. In this section, we have covered the scope, target user groups, operating environment, and some design and implementation constraints.



3.4.1 Project Statement

In the existing system, the paging process is carried out on the hard disk, which slows down system performance and increases CPU idle time. To resolve this problem, we introduce a new location for the paging process: RAM itself, rather than the hard disk, which is very slow compared to RAM.

3.5 Objective

The objective of this project is to improve upon the traditional paging technique by storing pages that are to be removed from memory in a special memory area rather than on disk. In this way, pages that were previously paged out need not be fetched from disk; instead, they are decompressed and loaded directly from RAM. We also try to improve this compressed caching technique further by making it adaptive. We use the LZO algorithm for compression and decompression because of its very high compression and decompression speed, which further improves system performance.
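To make the per-page round trip concrete, the following is a minimal sketch of compressing and decompressing one 4 KiB page with the miniLZO library. It assumes miniLZO is available and linked; the buffer sizes and the test pattern are our own illustration, not code from this project.

    /* Compress and decompress one page with miniLZO (illustrative sketch). */
    #include <stdio.h>
    #include <string.h>
    #include "minilzo.h"

    #define PAGE_SIZE 4096u

    /* LZO may expand incompressible input by up to len/16 + 64 + 3 bytes. */
    static unsigned char out[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
    static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];

    int main(void)
    {
        unsigned char page[PAGE_SIZE];
        memset(page, 'A', sizeof page);      /* a highly compressible page */

        if (lzo_init() != LZO_E_OK)
            return 1;

        lzo_uint clen = 0;
        lzo1x_1_compress(page, PAGE_SIZE, out, &clen, wrkmem);
        printf("page compressed from %u to %lu bytes\n",
               PAGE_SIZE, (unsigned long)clen);

        unsigned char back[PAGE_SIZE];
        lzo_uint dlen = PAGE_SIZE;
        if (lzo1x_decompress(out, clen, back, &dlen, NULL) == LZO_E_OK &&
            dlen == PAGE_SIZE && memcmp(page, back, PAGE_SIZE) == 0)
            printf("round trip OK\n");
        return 0;
    }

LZO's compression ratio is modest, but a single page compresses and decompresses in microseconds, which is the property that matters when the alternative is a millisecond-scale disk access.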

3.6 Constraints

In this section we list major design and implementation issues:

3.6.1 Page Cache

Compressed caching has a strong tendency to influence the page cache, as the page cache is commonly larger than other caches. Pages holding data from blocks of all backing stores (buffer data, regular file data, file system metadata, and even pages with data from swap) are stored in the page cache. Like other system caches, the page cache may be smaller on a system whose compressed cache stores only pages backed by swap. As a consequence of this possible reduction, blocks (usually from regular files) will have fewer pages with their data cached in memory, which is likely to increase overall I/O. This is a sign that compressed caching should be aware not only of its usefulness to the virtual memory system, but also of how it might degrade system performance. Instead of letting the page cache and the compressed cache compete for memory, our approach is to also store other pages from the page cache (besides the ones holding swap data) in the compressed cache. This effectively increases the memory available to all pages in the page cache, not only to those backed by swap.

3.6.2 Page Ordering

In the compressed cache, our primary concern regarding page ordering is to keep the compressed pages in the order in which the virtual memory system evicted them. As we verified in experiments on Linux, which uses an approximation of least recently used (LRU) replacement, failing to keep the order in which the compressed pages are stored rarely improves system performance and usually degrades it severely. Like most operating systems, when a block is read from the backing store, Linux also reads adjacent blocks in advance, because reading these subsequent blocks is usually much cheaper than reading the first one. Reading blocks in advance is known as read-ahead, and the blocks read ahead are stored in pages in non-compressed memory.

Read-ahead operations alter the LRU ordering, since the pages read in advance are treated as more recently used than the ones stored in the compressed cache, even though they may never be used. As a consequence, this change may force the release of pages in violation of the page replacement algorithm. For this reason, whenever a page is read from the compressed cache, read-ahead must not be performed. It is not worthwhile to read pages from the compressed cache in advance, since there is no performance penalty for fetching the pages at different moments. Furthermore, compressed pages read ahead from swap are only decompressed when explicitly reclaimed by the virtual memory system. In contrast to pages read only due to a read-ahead operation, a compressed page reclaimed for immediate use preserves LRU page ordering, since it will be more recently used than any page in the compressed cache. We also consider it essential to preserve the order in which the pages were compressed, so that we can verify the efficiency of compressed caching; otherwise the results would be influenced by this extra factor, possibly misleading our conclusions.
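The following sketch illustrates this ordering discipline. It is our own illustration, not code from this project: names such as ccache_store and ccache_reclaim are hypothetical. New evictions are appended at the tail, so the head is always the least recently evicted page, and a reclaim fetches exactly one page with no read-ahead.

    /* Keep compressed pages in eviction (approximate LRU) order. */
    #include <stdlib.h>

    struct cpage {
        unsigned long  pfn;     /* which page frame this copy backs   */
        size_t         clen;    /* compressed length in bytes         */
        unsigned char *cdata;   /* compressed payload                 */
        struct cpage  *next;
    };

    static struct cpage *head, *tail;   /* head = oldest eviction */

    /* Called when the VM evicts a page: enqueue at the tail. */
    void ccache_store(struct cpage *cp)
    {
        cp->next = NULL;
        if (tail) tail->next = cp; else head = cp;
        tail = cp;
    }

    /* Called on a fault that hits the compressed cache: unlink and return
     * the entry so the caller can decompress it.  Only the requested page
     * is fetched -- no read-ahead -- because decompressing a neighbouring
     * page later costs no extra seek, unlike a disk access. */
    struct cpage *ccache_reclaim(unsigned long pfn)
    {
        struct cpage **pp = &head;
        for (; *pp; pp = &(*pp)->next) {
            if ((*pp)->pfn == pfn) {
                struct cpage *cp = *pp;
                *pp = cp->next;
                if (cp == tail) {          /* removed the last node */
                    struct cpage *t = head;
                    while (t && t->next) t = t->next;
                    tail = t;
                }
                return cp;
            }
        }
        return NULL;    /* not cached: fall back to the swap device */
    }

When the compressed cache itself must shrink, pages are written to swap from the head of this list, so the order chosen by the virtual memory system is preserved end to end.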

3.6.3 Cells with Contiguous Memory Pages

To mitigate the problem of poor compression ratios, we propose the adoption of cells composed of contiguous memory pages. With larger cells, it is more likely that we gain memory space even if most pages do not compress very well. For example, if pages compress to 65% of their size on average, we still gain space if we use cells composed of at least two contiguous memory pages; in that case, it is possible to store three compressed pages in one cell. However, allocating contiguous memory pages has trade-offs. The greater the number of contiguous pages, the greater the probability of an allocation failure, given system memory fragmentation. Furthermore, the larger the cell, the greater the probability of fragmentation within it and the cost of compacting its compressed pages. As a beneficial side effect, since part of our metadata stores data about the cells, using larger cells reduces the size of these data structures. Experimentally, we have concluded that allocating two contiguous pages per cell achieves the best results in our implementation.
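The arithmetic behind the example: at a 65% ratio a compressed page needs about 0.65 x 4096 = 2663 bytes, so three of them (7989 bytes) fit in one two-page cell of 8192 bytes, a net gain of one page frame. A minimal sketch of such a cell follows; the struct and function names are ours, for illustration only.

    /* A cell built from two contiguous 4 KiB pages (illustrative sketch). */
    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE  4096u
    #define CELL_PAGES 2u                       /* experimentally the best value */
    #define CELL_SIZE  (CELL_PAGES * PAGE_SIZE)

    /* In a kernel implementation this memory would come from a
     * contiguous-page allocator such as __get_free_pages(GFP_KERNEL, 1). */
    struct cell {
        unsigned char data[CELL_SIZE];
        size_t used;                 /* bytes occupied by compressed pages */
    };

    /* Append one compressed page if it fits; returns its offset within the
     * cell, or -1 so the caller can try the next cell. */
    long cell_store(struct cell *c, const unsigned char *cdata, size_t clen)
    {
        if (c->used + clen > CELL_SIZE)
            return -1;
        memcpy(c->data + c->used, cdata, clen);
        long off = (long)c->used;
        c->used += clen;
        return off;
    }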

3.6.4 Disabling Clean Page Compression

If we release clean pages without compressing them into the compressed cache, the LRU page ordering is changed, because some of the pages freed by the virtual memory system are stored in the compressed cache and others are not. Nevertheless, since few of the clean pages were being reclaimed by the system, most of them would be freed anyway. Hence, releasing them earlier is not expected to have a major impact on system performance, and the metadata and processing overhead introduced by this heuristic are insignificant.
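The heuristic reduces to a single branch on eviction, sketched below with stand-in names of our own (the structures and functions are illustrative, not this project's API): a clean page can always be re-read from its backing store, so it is released immediately instead of spending CPU time and cache space on it.

    #include <stdbool.h>
    #include <stdio.h>

    struct page { unsigned long pfn; bool dirty; };

    /* Stand-ins for the real compressed-cache and allocator calls. */
    static void compress_into_ccache(struct page *pg)
    {
        printf("pfn %lu: dirty, compressed into cache\n", pg->pfn);
    }

    static void release_page_frame(struct page *pg)
    {
        printf("pfn %lu: clean, freed directly\n", pg->pfn);
    }

    /* Only dirty pages enter the compressed cache. */
    static void on_evict(struct page *pg)
    {
        if (pg->dirty)
            compress_into_ccache(pg);
        else
            release_page_frame(pg);
    }

    int main(void)
    {
        struct page a = { 1, true }, b = { 2, false };
        on_evict(&a);
        on_evict(&b);
        return 0;
    }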

3.6.5 Variable Compressed Cache Size

A static compressed cache size is not beneficial, because under low memory load the memory remains allocated to the cache even though it is hardly used. An adaptive policy that determines the compressed cache size at runtime is close to the optimal solution, even though designing one is usually a complex task.
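One simple form such a policy can take is sketched below. This is an assumed illustration, not the exact algorithm used in this project: the pressure metric, the 25/75 thresholds, and the ceiling are all hypothetical values chosen to show the grow-under-load, shrink-when-idle behaviour.

    #include <stdio.h>

    #define MAX_CELLS 4096u        /* hard ceiling on compressed cache growth */

    static unsigned ncells;        /* current number of allocated cells */

    /* memory_pressure: a 0..100 load indicator, e.g. the fraction of
     * recent allocations that triggered an eviction. */
    static void adapt_cache_size(unsigned memory_pressure)
    {
        if (memory_pressure > 75 && ncells < MAX_CELLS)
            ncells++;              /* heavy paging: grow the cache */
        else if (memory_pressure < 25 && ncells > 0)
            ncells--;              /* idle memory: give a cell back */
    }

    int main(void)
    {
        unsigned load[] = { 90, 90, 90, 10, 10 };
        for (unsigned i = 0; i < 5; i++) {
            adapt_cache_size(load[i]);
            printf("pressure %3u -> %u cells\n", load[i], ncells);
        }
        return 0;
    }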

3.6.6 Performance Issues

Here is a list of the major issues that impose a performance penalty in some way:

1. Compression and decompression of pages.
2. Compaction of pages in compressed cache cells.
3. Compression of clean pages.
4. Reduction in the effective memory available to programs.



3.7 Proposed System

Our project implements a simulation program that shows the performance gain of using a compressed cache over traditional paging. The project consists of two parts: one implements traditional paging, and the other paging with a compressed cache. The compressed cache that we use is special in the sense that the buffer where compressed pages are stored adjusts its size dynamically: the initial cache size is very small and grows as needed, though there is a limit beyond which it will not grow. In general, we try to achieve better memory usage and a better compression ratio so that we can store more data in a given memory area. ACP also attempts to compress kernel file caches, since these are stored in kernel memory data structures anyway; by storing them in the compressed cache and modifying the kernel so that it does not cache file data separately, further optimization can be carried out. We clearly point out the difference in performance and time lag between traditional paging and our Adaptive Compressed Paging, along the lines of the sketch below.
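The skeleton of the comparison is shown here. The timing constants and hit rate are illustrative assumptions, not measured results from this project; the real simulation replays an actual page-fault trace against both pagers and times them.

    /* Compare total fault-service time: disk-only vs. compressed cache. */
    #include <stdio.h>

    #define DISK_FAULT_US    5000.0  /* assumed cost of a swap-in from disk */
    #define CCACHE_FAULT_US    50.0  /* assumed decompress-from-RAM cost    */
    #define NFAULTS          10000
    #define CCACHE_HIT_RATE    0.80  /* assumed fraction served from cache  */

    int main(void)
    {
        double disk_total   = NFAULTS * DISK_FAULT_US;
        double ccache_total = NFAULTS *
            (CCACHE_HIT_RATE * CCACHE_FAULT_US +
             (1.0 - CCACHE_HIT_RATE) * DISK_FAULT_US);

        printf("traditional paging : %.0f us\n", disk_total);
        printf("compressed caching : %.0f us\n", ccache_total);
        printf("speedup            : %.1fx\n", disk_total / ccache_total);
        return 0;
    }

Even under these rough assumptions the disk term dominates, which is why reducing the number of faults that reach the disk, rather than the cost of compression itself, drives the overall gain.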
