There are a number of reasons for defragmenting physical RAM, two of which are:

• Many device drivers require physical RAM to use for buffers etc. For example, a camera driver may require a physically contiguous buffer for holding the output of the CCD. Currently such memory is typically pre-allocated at boot time. This practice increases initial RAM consumption after boot. Total RAM consumption can be reduced if memory for such buffers is only allocated as required, rather than at boot time.

• Typically, memory consumption of a phone while it is idle is less than the total memory available. Idle and active power consumption can be decreased by powering down unused RAM chips or unused RAM banks within a RAM chip.

2.3 File Caching

Files are now cached even more intelligently in Symbian OS v9.4. Read-ahead caching speeds up file access, particularly for sequential file reads. This can have a beneficial effect on device boot and application start-up speeds by reducing the time required to read in resource files. “Lazy write” caching can improve the performance of applications which stream data to disk, and also reduce the need for applications to implement their own buffering. File caching can also improve battery life, for example by allowing an SD card containing MP3 files to be powered down more often, as the data is read from the card in larger chunks.

2.3.1 Improved file system performance with server-side caching

The Symbian OS file server has been enhanced to cache file and file system data, and to "read ahead" file data. The caching and read-ahead strategies are tuned to recognise and deal appropriately with typical use cases, including streaming of large media files. The file system and media driver HAIs (hardware adaptation interfaces) have been extended to report information required by the file server to determine the optimal caching strategy.
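Read caching and read-ahead are managed by the file server itself, but a client can also hint at its intended access pattern when it opens a file. The following is a minimal, illustrative sketch of reading a media file sequentially with such hints; the flag names EFileReadBuffered and EFileReadAheadOn are taken from the public f32file.h header of later Symbian OS releases, and the file name and buffer size are invented for the example, so treat the details as assumptions rather than as the definitive API described here.

    #include <f32file.h>     // RFs, RFile, TFileMode flags
    #include <e32base.h>     // cleanup stack helpers

    TInt ReadTrackL(RFs& aFs)
        {
        _LIT(KTrack, "E:\\Sounds\\track.mp3");    // hypothetical file
        RFile file;
        // Hint that access will be sequential so the file server can
        // buffer and read ahead; the server may ignore hints it cannot honour.
        User::LeaveIfError(file.Open(aFs, KTrack,
            EFileRead | EFileStream | EFileReadBuffered | EFileReadAheadOn));
        CleanupClosePushL(file);

        TBuf8<4096> chunk;
        TInt total = 0;
        for (;;)
            {
            // Each Read() is likely satisfied from the server-side cache
            // once read-ahead has filled it, reducing media accesses.
            User::LeaveIfError(file.Read(chunk));
            if (chunk.Length() == 0)
                {
                break;                            // end of file
                }
            total += chunk.Length();
            }

        CleanupStack::PopAndDestroy(&file);       // closes the file
        return total;
        }

A caller would typically connect a single RFs session (RFs::Connect()) and pass it to functions like this; the same pattern applies to resource files read at device boot or application start-up.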
The file server uses default values (which may not be optimal) if the file system and/or media driver does not supply an implementation of this HAI. The file server uses otherwise unallocated memory for its caches, which are automatically reclaimed if required. In low memory situations, performance degrades to no worse than current levels.

Additionally, the file server provides an API to enable write caching on a per-session basis and on a system-wide basis. Write caching is disabled by default.

As part of its operation, the file server provides "fair" scheduling of client accesses to media; it breaks up long-running client requests into multiple smaller requests to ensure that no long-running client request can cause other clients' requests to be blocked for an unbounded period of time.

As a result of these enhancements, application developers should generally see faster and more consistent file read performance. Applications that use the new API to enable write caching will experience faster write performance, but are expected to cope with the consequences of data loss due to events such as power loss or media removal.
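As a sketch of the write-side trade-off, the example below opts in to write caching for a single file and flushes explicitly before closing, so that the amount of data at risk from power loss or media removal is bounded. It assumes the EFileWriteBuffered open-mode flag from f32file.h and an invented file name; the per-session and system-wide controls mentioned above are not shown here.

    #include <f32file.h>     // RFs, RFile, TFileMode flags
    #include <e32base.h>     // cleanup stack helpers

    void SaveSamplesL(RFs& aFs, const TDesC8& aSamples)
        {
        _LIT(KDataFile, "C:\\data\\samples.dat");      // hypothetical file
        RFile file;
        // Opt in to write caching for this file; writes may be coalesced
        // in the file server cache instead of going straight to the media.
        User::LeaveIfError(file.Replace(aFs, KDataFile,
            EFileWrite | EFileStream | EFileWriteBuffered));
        CleanupClosePushL(file);

        User::LeaveIfError(file.Write(aSamples));

        // Flush commits cached data to the media, so everything written
        // before this point survives power loss or media removal.
        User::LeaveIfError(file.Flush());

        CleanupStack::PopAndDestroy(&file);            // closes the file
        }

Applications that stream data would typically batch several writes between flushes, choosing a flush interval that balances write performance against the amount of data they can afford to lose.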