
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 4372
  • Last Modified:

ehcache overflow to disk

hi guys

I am using ehcache for caching. A couple of questions.

These are my ehcache settings:

    <diskStore path="java.io.tmpdir"/>
      <cache name="myCache" maxElementsInMemory="10000"
            eternal="true"
            timeToIdleSeconds="0"
            timeToLiveSeconds="0"
            overflowToDisk="true"
            memoryStoreEvictionPolicy="LRU"
            diskExpiryThreadIntervalSeconds="120">
      </cache>

I am populating the cache using cache.put(new Element(myItem.getId(), myItem));
I have overflowToDisk="true", so once the in-memory store is full the overflow objects get stored on disk (as I understand it). Any idea where on disk the objects are stored?
Is the location controlled by java.io.tmpdir ?

thanks
royjayd Asked:
2 Solutions
 
objectsCommented:
It's controlled by the diskStore configuration:
http://ehcache.org/documentation/storage_options.html#Storage
 
for_yanCommented:

This is from

http://ehcache.org/documentation/storage_options.html#Some_Configuration_Examples
Storage
Files

The disk store creates a data file for each cache on startup called "cache_name.data". If the DiskStore is configured to be persistent, an index file called "cache name.index" is created on flushing of the DiskStore either explicitly using Cache.flush or on CacheManager shutdown.
Storage Location

Files are created in the directory specified by the diskStore configuration element. The diskStore configuration for the ehcache-failsafe.xml and bundled sample configuration file ehcache.xml is "java.io.tmpdir", which causes files to be created in the system's temporary directory.
diskStore Element

The diskStore element has one attribute called path, e.g. <diskStore path="java.io.tmpdir"/>. Legal values for the path attribute are legal file system paths, e.g. for Unix:

    /home/application/cache

The following system properties are also legal, in which case they are translated:

    * user.home - User's home directory
    * user.dir - User's current working directory
    * java.io.tmpdir - Default temp file path
    * ehcache.disk.store.dir - A system property you would normally specify on the command line e.g. java -Dehcache.disk.store.dir=/u01/myapp/diskdir ...

      Subdirectories can be specified below the system property e.g.

          java.io.tmpdir/one

          becomes, on a Unix system,

          /tmp/one
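The property translation described above can be sketched in plain Java, without Ehcache on the classpath. `resolveDiskStorePath` is a hypothetical helper that mimics how Ehcache expands a configured path such as java.io.tmpdir/one; it is not part of the Ehcache API:

```java
// Sketch: expand an ehcache diskStore path the way the quoted docs describe.
// resolveDiskStorePath is a hypothetical helper, not part of the Ehcache API.
public class DiskStorePathDemo {
    static final String[] KNOWN_PROPS = {
        "user.home", "user.dir", "java.io.tmpdir", "ehcache.disk.store.dir"
    };

    static String resolveDiskStorePath(String configured) {
        for (String prop : KNOWN_PROPS) {
            if (configured.startsWith(prop)) {
                String value = System.getProperty(prop);
                if (value == null) break;  // property not set; treat as a literal path
                // Append any subdirectory suffix, e.g. "java.io.tmpdir/one" -> "/tmp/one"
                return value + configured.substring(prop.length());
            }
        }
        return configured;  // a plain file system path, used as-is
    }

    public static void main(String[] args) {
        System.out.println(resolveDiskStorePath("java.io.tmpdir/one"));
        System.out.println(resolveDiskStorePath("/home/application/cache"));
    }
}
```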
 
objectsCommented:
> Is the location controlled by java.io.tmpdir ?

In your case, yes. It will use the location given by System.getProperty("java.io.tmpdir").
See the link I posted above for other options.
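Given your config, you can check where the data file should land with a few lines of plain Java. This is a sketch based on the docs quoted above (data file named "cache_name.data" in the diskStore directory); `expectedDataFile` is an illustrative helper, not an Ehcache API:

```java
import java.io.File;

// Sketch: where the disk store file for a cache named "myCache" should appear,
// given the diskStore path "java.io.tmpdir" from the question's configuration.
public class FindCacheFile {
    static File expectedDataFile(String cacheName) {
        // Per the docs, Ehcache names the data file "<cacheName>.data"
        // inside the configured diskStore directory.
        return new File(System.getProperty("java.io.tmpdir"), cacheName + ".data");
    }

    public static void main(String[] args) {
        File f = expectedDataFile("myCache");
        System.out.println("Expected location: " + f.getAbsolutePath());
        System.out.println("Exists yet: " + f.exists());
    }
}
```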
 
royjaydAuthor Commented:
what i am trying to understand is: if i specify overflowToDisk="true", the objects are stored on disk
and used from the disk.
If i say overflowToDisk="false", what happens?

for example
<diskStore path="java.io.tmpdir"/>
      <cache name="myCache" maxElementsInMemory="10000"
            eternal="true"
            timeToIdleSeconds="0"
            timeToLiveSeconds="0"
           overflowToDisk="false"
            memoryStoreEvictionPolicy="LRU"
            diskExpiryThreadIntervalSeconds="120">
      </cache>

what happens to the 10,001st object?

thx
 
for_yanCommented:
The data file is updated continuously during operation of the Disk Store if overflowToDisk is true. Otherwise it is not updated until either cache.flush() is called or the cache is disposed.
 
for_yanCommented:


that is what they write in the Persistence section of the same page:


DiskStore persistence is controlled by the diskPersistent configuration element. If false or omitted, DiskStores will not persist between CacheManager restarts. The data file for each cache will be deleted, if it exists, both on shutdown and startup. No data from a previous instance CacheManager is available.

If diskPersistent is true, the data file, and an index file, are saved. Cache Elements are available to a new CacheManager. This CacheManager may be in the same VM instance, or a new one.

The data file is updated continuously during operation of the Disk Store if overflowToDisk is true. Otherwise it is not updated until either cache.flush() is called or the cache is disposed.

In all cases the index file is only written when dispose is called on the DiskStore. This happens when the CacheManager is shut down, a Cache is disposed, or the VM is being shut down. It is recommended that the CacheManager shutdown() method be used. See Virtual Machine Shutdown Considerations for guidance on how to safely shut the Virtual Machine down.

When a DiskStore is persisted, the following steps take place:

    * Any non-expired Elements of the MemoryStore are flushed to the DiskStore
    * Elements awaiting spooling are spooled to the data file
    * The free list and element list are serialized to the index file

On startup the following steps take place:

    * An attempt is made to read the index file. If it does not exist or cannot be read successfully, due to disk corruption, upgrade of ehcache, change in JDK version etc, then the data file is deleted and the DiskStore starts with no Elements in it.
    * If the index file is read successfully, the free list and element list are loaded into memory. Once this is done, the index file contents are removed. This way, if there is a dirty shutdown, when restarted, Ehcache will delete the dirty index and data files.
    * The DiskStore starts. All data is available.
    * The expiry thread starts. It will delete Elements which have expired.

These actions favour safety over persistence. Ehcache is a cache, not a database. If a file gets dirty, all data is deleted. Once started there is further checking for corruption. When a get is done, if the Element cannot be successfully deserialized, it is deleted, and null is returned. These measures prevent corrupt and inconsistent data being returned.

    * Fragmentation

      Expiring an element frees its space on the file. This space is available for reuse by new elements. The element is also removed from the in-memory index of elements.
    * Serialization

      Writes to and from the disk use ObjectInputStream and the Java serialization mechanism. This is not required for the MemoryStore. As a result the DiskStore can never be as fast as the MemoryStore.

      Serialization speed is affected by the size of the objects being serialized and their type. It has been found in the ElementTest test that:
          o The serialization time for a Java object being a large Map of String arrays was 126ms, where the serialized size was 349,225 bytes.
          o The serialization time for a byte[] was 7ms, where the serialized size was 310,232 bytes

      Byte arrays are 20 times faster to serialize. Make use of byte arrays to increase DiskStore performance.
    * RAMFS

      One option to speed up disk stores is to use a RAM file system. On some operating systems there are a plethora of file systems to choose from. For example, the Disk Cache has been successfully used with Linux' RAMFS file system. This file system simply consists of memory. Linux presents it as a file system. The Disk Cache treats it like a normal disk - it is just way faster. With this type of file system, object serialization becomes the limiting factor to performance.
          o Operation of a Cache where overflowToDisk is false and diskPersistent is true

            In this configuration case, the disk will be written on flush or shutdown.

            The next time the cache is started, the disk store will initialise but will not permit overflow from the MemoryStore. In all other respects it acts like a normal disk store.

            In practice this means that a persistent in-memory cache will start up with all of its elements on disk. As gets cause cache hits, they will be loaded up into the MemoryStore. The other thing that may happen is that the elements will expire, in which case the DiskStore expiry thread will reap them (or they will get removed on a get if they are expired).

            So, the Ehcache design does not load them all into memory on start up, but lazily loads them as required.
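The overflowToDisk="false" plus diskPersistent="true" case described above would look roughly like this in ehcache.xml. This is a sketch based on the quoted docs, reusing the cache name from the question; it is not the asker's actual configuration:

```xml
<!-- Sketch: in-memory cache that is persisted to disk only on cache.flush()
     or CacheManager shutdown, then lazily reloaded on the next startup. -->
<diskStore path="java.io.tmpdir"/>
<cache name="myCache"
       maxElementsInMemory="10000"
       eternal="true"
       overflowToDisk="false"
       diskPersistent="true"
       memoryStoreEvictionPolicy="LRU"/>
```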
 
objectsCommented:
> If i say  overflowToDisk="false" , what happens?

once the (memory) cache is full, the oldest elements will simply be evicted (LRU in your config) and removed from the cache.
overflowToDisk gives you the option of writing them to disk instead
 
royjaydAuthor Commented:
just for experimentation
when i did overflowToDisk="true" i encountered an out-of-memory exception
when my program count reached 16000.

but for the same configuration with overflowToDisk="false" i don't see an out-of-memory exception, and the count is 22000 and still going.

any comments?

thx for responses guys.
 
objectsCommented:
try decreasing diskSpoolBufferSizeMB; the disk spool is used to store objects before they are written to disk.
In your case it sounds like the spool is eating up too much memory
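The suggestion above would mean adding the diskSpoolBufferSizeMB attribute to your existing cache element, something like the sketch below. The value 2 is purely illustrative; you would need to experiment to find a setting that avoids the OutOfMemoryError without hurting throughput:

```xml
<!-- Sketch: the asker's config with a smaller disk spool buffer.
     The value 2 (MB) is illustrative, not a recommendation. -->
<diskStore path="java.io.tmpdir"/>
<cache name="myCache"
       maxElementsInMemory="10000"
       eternal="true"
       overflowToDisk="true"
       diskSpoolBufferSizeMB="2"
       memoryStoreEvictionPolicy="LRU"
       diskExpiryThreadIntervalSeconds="120"/>
```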
 
for_yanCommented:

That's what people write here comparing JCS and EH:
http://jakarta.apache.org/jcs/JCSvsEHCache.html


The ehcache disk version is very simple. It puts an unlimited number of items in a temporary store. You can easily fill this up and run out of memory. You can put items into JCS purgatory faster than they can be gc'd but it is much more difficult. The EH store is then flushed to disk every 200ms. While EH is flushing the entire disk cache blocks!

and they of course recommend using JCS instead of EHCache

These folks  
http://forums.terracotta.org/forums/posts/list/3993.page
tried to change diskSpoolBufferSizeMB to overcome the OutOfMemoryError (though it seems
they believed you need to increase it) but were not very successful;
still, it is probably something to play with


