 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes).  We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block.
 *
 * See also:  "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
 */
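To make the eviction-sizing idea concrete, here is a minimal userland sketch; it is not the actual arc_evict() code, and blk_t and evict_for() are made-up names:

#include <stddef.h>
#include <stdint.h>

typedef struct blk {
        struct blk      *b_next;
        uint64_t        b_size;         /* anywhere from 512 bytes to 128K */
} blk_t;

/*
 * Evict blocks from the head of 'list' until roughly 'needed'
 * bytes have been reclaimed; returns the number of bytes freed.
 */
static uint64_t
evict_for(blk_t *list, uint64_t needed, void (*evictfn)(blk_t *))
{
        uint64_t freed = 0;
        blk_t *b, *next;

        for (b = list; b != NULL && freed < needed; b = next) {
                next = b->b_next;
                freed += b->b_size;
                evictfn(b);
        }
        return (freed);
}

The real eviction path must also honor the locking rules described next.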

/*
 * The locking model:
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists.  The arc_read() interface
 * uses method 1, while the internal arc algorithms for
 * adjusting the cache use method 2.  We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * arc list locks.
 *
 * Buffers do not have their own mutexes; rather, they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table.  It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each arc state also has a mutex which is used to protect the
 * buffer list associated with the state.  When attempting to
 * obtain a hash table lock while holding an arc list lock you
 * must use mutex_tryenter() to avoid deadlock.  Also note that
 * the active state mutex must be held before the ghost state mutex.
 *
 * Arc buffers may have an associated eviction callback function.
 * This function will be invoked prior to removing the buffer (e.g.
 * in arc_do_user_evicts()).  Note however that the data associated
 * with the buffer may be evicted prior to the callback.  The callback
 * must be made with *no locks held* (to prevent deadlock).  Additionally,
 * the users of callbacks must ensure that their private data is

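The mutex_tryenter() rule above is the subtle part of the model. A hypothetical kernel-style fragment of that pattern follows; state->list_mtx, the list walk, and HDR_LOCK() are illustrative stand-ins for the real ARC internals:

        arc_buf_hdr_t *hdr, *hdr_next;
        kmutex_t *hash_lock;

        mutex_enter(&state->list_mtx);          /* arc list lock */
        for (hdr = list_head(&state->list); hdr != NULL; hdr = hdr_next) {
                hdr_next = list_next(&state->list, hdr);
                hash_lock = HDR_LOCK(hdr);      /* hash table lock */

                /* never block on a hash lock while a list lock is held */
                if (!mutex_tryenter(hash_lock))
                        continue;               /* contended: skip, don't block */

                /* ... the buffer may now be moved or evicted safely ... */
                mutex_exit(hash_lock);
        }
        mutex_exit(&state->list_mtx);

Skipping a contended buffer rather than blocking is what breaks the circular-wait condition with threads that take the locks in hash-then-list order.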


        /* execute each callback and free its structure */
        while ((acb = callback_list) != NULL) {
                if (acb->acb_done)
                        acb->acb_done(zio, acb->acb_buf, acb->acb_private);

                /* propagate the I/O status to any reader waiting on a dummy zio */
                if (acb->acb_zio_dummy != NULL) {
                        acb->acb_zio_dummy->io_error = zio->io_error;
                        zio_nowait(acb->acb_zio_dummy);
                }

                callback_list = acb->acb_next;
                kmem_free(acb, sizeof (arc_callback_t));
        }

        if (freeable)
                arc_hdr_destroy(hdr);
}
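For reference, a sketch of the callback record this loop walks; the field names match the code above, but the exact declaration is an assumption:

typedef struct arc_callback arc_callback_t;
struct arc_callback {
        arc_done_func_t *acb_done;      /* caller's "done" function */
        void            *acb_private;   /* caller's private state */
        arc_buf_t       *acb_buf;       /* buffer passed to acb_done */
        zio_t           *acb_zio_dummy; /* zio a waiting reader blocks on */
        arc_callback_t  *acb_next;      /* next reader of the same block */
};

Each in-progress read keeps a singly linked list of these records, one per reader, which the loop above drains when the read completes.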

/*
 * "Read" the block at the specified DVA (in bp) via the
 * cache.  If the block is found in the cache, invoke the provided
 * callback immediately and return.  Note that the `zio' parameter
 * in the callback will be NULL in this case, since no I/O was
 * required.  If the block is not in the cache, pass the read request
 * on to the spa with a substitute callback function, so that the
 * requested block will be added to the cache.
 *
 * If a read request arrives for a block that has a read in progress,
 * either wait for the in-progress read to complete (and return the
 * results); or, if this is a read with a "done" func, add a record
 * to the read to invoke the "done" func when the read completes,
 * and return; or just return.
 *
 * arc_read_done() will invoke all the requested "done" functions
 * for readers of this block.
 *
 * Normal callers should use arc_read() and pass the arc buffer and
 * offset for the bp.  But if you know you don't need locking, you
 * can use arc_read_bp().
 */
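A hedged sketch of a caller-supplied "done" function: the three-argument shape follows the acb_done invocation above, and the zio == NULL convention for cache hits comes from this comment. my_read_done() and my_state_t are hypothetical names:

typedef struct my_state {
        int             ms_error;
        arc_buf_t       *ms_buf;
} my_state_t;

static void
my_read_done(zio_t *zio, arc_buf_t *buf, void *private)
{
        my_state_t *ms = private;

        if (zio == NULL) {
                /* cache hit: invoked immediately, no I/O was issued */
                ms->ms_error = 0;
        } else {
                /* cache miss: pick up the result of the physical read */
                ms->ms_error = zio->io_error;
        }
        ms->ms_buf = buf;
}

Because the function may run either synchronously (cache hit) or from I/O completion context (cache miss), its private data must tolerate both, which is the same caveat the locking comment above makes for eviction callbacks.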