3742 zfs comments need cleaner, more consistent style
Submitted by:   Will Andrews <willa@spectralogic.com>
Submitted by:   Alan Somers <alans@spectralogic.com>
Reviewed by:    Matthew Ahrens <mahrens@delphix.com>
Reviewed by:    George Wilson <george.wilson@delphix.com>
Reviewed by:    Eric Schrock <eric.schrock@delphix.com>

@@ -55,15 +55,15 @@
  * high use, but also tries to react to memory pressure from the
  * operating system: decreasing its size when system memory is
  * tight.
  *
  * 3. The Megiddo and Modha model assumes a fixed page size. All
- * elements of the cache are therefor exactly the same size.  So
+ * elements of the cache are therefore exactly the same size.  So
  * when adjusting the cache size following a cache miss, its simply
  * a matter of choosing a single page to evict.  In our model, we
  * have variable sized cache blocks (rangeing from 512 bytes to
- * 128K bytes).  We therefor choose a set of blocks to evict to make
+ * 128K bytes).  We therefore choose a set of blocks to evict to make
  * space for a cache miss that approximates as closely as possible
  * the space used by the new block.
  *
  * See also:  "ARC: A Self-Tuning, Low Overhead Replacement Cache"
  * by N. Megiddo & D. Modha, FAST 2003
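The eviction policy described in this comment amounts to walking the
eviction end of a list and releasing buffers until the reclaimed bytes
cover the incoming block. A minimal sketch of that loop, using
hypothetical types and an assumed evict_buf() helper rather than the
actual arc.c code:

#include <stdint.h>

typedef struct buf {
	struct buf	*b_prev;	/* toward the MRU end of the list */
	uint64_t	 b_size;	/* variable: 512 bytes to 128K bytes */
} buf_t;

extern void evict_buf(buf_t *);		/* assumed: releases the buffer */

/*
 * Evict buffers from the LRU tail until at least "bytes" of space has
 * been recovered, approximating the space the new block needs.
 */
static uint64_t
evict_until(buf_t *tail, uint64_t bytes)
{
	uint64_t evicted = 0;

	while (tail != NULL && evicted < bytes) {
		buf_t *prev = tail->b_prev;	/* grab before freeing */

		evicted += tail->b_size;
		evict_buf(tail);
		tail = prev;
	}
	return (evicted);
}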

@@ -74,11 +74,11 @@
  *
  * A new reference to a cache buffer can be obtained in two
  * ways: 1) via a hash table lookup using the DVA as a key,
  * or 2) via one of the ARC lists.  The arc_read() interface
  * uses method 1, while the internal arc algorithms for
- * adjusting the cache use method 2.  We therefor provide two
+ * adjusting the cache use method 2.  We therefore provide two
  * types of locks: 1) the hash table lock array, and 2) the
  * arc list locks.
  *
  * Buffers do not have their own mutexes, rather they rely on the
  * hash table mutexes for the bulk of their protection (i.e. most

@@ -373,11 +373,11 @@
 };
 
 #define ARCSTAT(stat)   (arc_stats.stat.value.ui64)
 
 #define ARCSTAT_INCR(stat, val) \
-        atomic_add_64(&arc_stats.stat.value.ui64, (val));
+        atomic_add_64(&arc_stats.stat.value.ui64, (val))
 
 #define ARCSTAT_BUMP(stat)      ARCSTAT_INCR(stat, 1)
 #define ARCSTAT_BUMPDOWN(stat)  ARCSTAT_INCR(stat, -1)
 
 #define ARCSTAT_MAX(stat, val) {                                        \

@@ -593,13 +593,11 @@
 #define L2ARC_FEED_MIN_MS       200             /* min caching interval ms */
 
 #define l2arc_writes_sent       ARCSTAT(arcstat_l2_writes_sent)
 #define l2arc_writes_done       ARCSTAT(arcstat_l2_writes_done)
 
-/*
- * L2ARC Performance Tunables
- */
+/* L2ARC Performance Tunables */
 uint64_t l2arc_write_max = L2ARC_WRITE_SIZE;    /* default max write size */
 uint64_t l2arc_write_boost = L2ARC_WRITE_SIZE;  /* extra write during warmup */
 uint64_t l2arc_headroom = L2ARC_HEADROOM;       /* number of dev writes */
 uint64_t l2arc_feed_secs = L2ARC_FEED_SECS;     /* interval seconds */
 uint64_t l2arc_feed_min_ms = L2ARC_FEED_MIN_MS; /* min interval milliseconds */

@@ -3543,11 +3541,11 @@
          */
         anon_size = MAX((int64_t)(arc_anon->arcs_size - arc_loaned_bytes), 0);
 
         /*
          * Writes will, almost always, require additional memory allocations
-         * in order to compress/encrypt/etc the data.  We therefor need to
+         * in order to compress/encrypt/etc the data.  We therefore need to
          * make sure that there is sufficient available memory for this.
          */
         if (error = arc_memory_throttle(reserve, anon_size, txg))
                 return (error);
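The check in this hunk is a reserve-then-throttle pattern: because
compression or encryption will allocate scratch memory on top of the
dirty data itself, the reservation is validated against available
memory before the write is accepted. A hedged sketch of the shape of
that check (names and the limit parameter are illustrative, not
arc.c's):

#include <stdint.h>
#include <errno.h>

static int
memory_throttle(uint64_t reserve, uint64_t anon_size, uint64_t limit)
{
	/*
	 * Leave headroom for the extra compress/encrypt allocations:
	 * refuse the reservation if in-flight (anonymous) write data
	 * plus the new reservation would exceed the limit.
	 */
	if (anon_size + reserve > limit)
		return (EAGAIN);	/* caller backs off and retries */
	return (0);
}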